Oct 14 09:56:57 crc systemd[1]: Starting Kubernetes Kubelet...
Oct 14 09:56:57 crc restorecon[4672]: Relabeled /var/lib/kubelet/config.json from system_u:object_r:unlabeled_t:s0 to system_u:object_r:container_var_lib_t:s0
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/device-plugins not reset as customized by admin to system_u:object_r:container_file_t:s0
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/device-plugins/kubelet.sock not reset as customized by admin to system_u:object_r:container_file_t:s0
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/volumes/kubernetes.io~configmap/nginx-conf/..2025_02_23_05_40_35.4114275528/nginx.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/22e96971 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/21c98286 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/0f1869e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/46889d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/5b6a5969 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/6c7921f5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4804f443 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/2a46b283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/a6b5573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4f88ee5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/5a4eee4b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/cd87c521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/38602af4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/1483b002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/0346718b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/d3ed4ada not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/3bb473a5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/8cd075a9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/00ab4760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/54a21c09 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/70478888 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/43802770 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/955a0edc not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/bca2d009 not reset as customized by admin to system_u:object_r:container_file_t:s0:c140,c1009
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/b295f9bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/bc46ea27 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5731fc1b not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5e1b2a3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/943f0936 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/3f764ee4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/8695e3f9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/aed7aa86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/c64d7448 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/0ba16bd2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/207a939f not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/54aa8cdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/1f5fa595 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/bf9c8153 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/47fba4ea not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/7ae55ce9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7906a268 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/ce43fa69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7fc7ea3a not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/d8c38b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/9ef015fb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/b9db6a41 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/b1733d79 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/afccd338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/9df0a185 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/18938cf8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/7ab4eb23 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/56930be6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_35.630010865 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/0d8e3722 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/d22b2e76 not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/e036759f not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/2734c483 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/57878fe7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/3f3c2e58 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/375bec3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/7bc41e08 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/48c7a72d not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/4b66701f not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/a5a1c202 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_40.1388695756 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/26f3df5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/6d8fb21d not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/50e94777 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208473b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/ec9e08ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3b787c39 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208eaed5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/93aa3a2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3c697968 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/ba950ec9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/cb5cdb37 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/f2df9827 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/fedaa673 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/9ca2df95 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/b2d7460e not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2207853c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/241c1c29 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2d910eaf not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/c6c0f2e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/399edc97 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8049f7cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/0cec5484 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/312446d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c406,c828
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8e56a35d not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Oct 14 09:56:57 crc restorecon[4672]:
/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/2d30ddb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/eca8053d not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/c3a25c9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c168,c522 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/b9609c22 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/e8b0eca9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/b36a9c3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711 Oct 14 09:56:57 crc restorecon[4672]: 
/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/38af7b07 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/ae821620 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/baa23338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/2c534809 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c661,c999 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/59b29eae not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/c91a8e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/4d87494a not reset as customized by admin to system_u:object_r:container_file_t:s0:c442,c857 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/1e33ca63 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/8dea7be2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d0b04a99 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d84f01e7 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c12,c18 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/4109059b not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/a7258a3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/05bdf2b6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/f3261b51 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/315d045e not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/5fdcf278 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/d053f757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/c2850dc7 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fcfb0b2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c7ac9b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:57 crc 
restorecon[4672]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fa0c0d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c609b6ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/2be6c296 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/89a32653 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/4eb9afeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/13af6efa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/b03f9724 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/e3d105cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Oct 14 09:56:57 crc restorecon[4672]: 
/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/3aed4d83 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/0765fa6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/2cefc627 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c2,c18 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/3dcc6345 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/365af391 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Oct 14 09:56:57 crc restorecon[4672]: 
/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b1130c0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/236a5913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b9432e26 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/5ddb0e3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/986dc4fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/8a23ff9a not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c9,c12 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/9728ae68 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/665f31d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Oct 14 09:56:57 crc restorecon[4672]: 
/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Oct 14 09:56:57 crc 
restorecon[4672]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/136c9b42 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/98a1575b not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/cac69136 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/5deb77a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/2ae53400 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Oct 14 
09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/e46f2326 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/dc688d3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/3497c3cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/177eb008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c2,c13 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/af5a2afa not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/d780cb1f not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/49b0f374 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Oct 14 09:56:57 crc restorecon[4672]: 
/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/26fbb125 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/cf14125a not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/b7f86972 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c5,c11 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/e51d739c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/88ba6a69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/669a9acf not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/5cd51231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/75349ec7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/15c26839 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/45023dcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/2bb66a50 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/64d03bdd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Oct 14 09:56:57 crc 
restorecon[4672]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/ab8e7ca0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/bb9be25f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/9a0b61d3 not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/d471b9d2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/8cb76b8e not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/11a00840 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/ec355a92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/992f735e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797/config.yaml not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d59cdbbc not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/72133ff0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/c56c834c not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d13724c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/0a498258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Oct 14 09:56:57 crc restorecon[4672]: 
/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa471982 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fc900d92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa7d68da not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/4bacf9b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/424021b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/fc2e31a3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/f51eefac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/c8997f2f not reset as customized 
by admin to system_u:object_r:container_file_t:s0:c9,c22 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/7481f599 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/fdafea19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Oct 14 09:56:57 crc restorecon[4672]: 
/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/d0e1c571 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/ee398915 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/682bb6b8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a3e67855 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a989f289 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/915431bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/7796fdab not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/dcdb5f19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/a3aaa88c not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/5508e3e6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/160585de not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/e99f8da3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/8bc85570 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/a5861c91 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/84db1135 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/9e1a6043 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/c1aba1c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/d55ccd6d not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Oct 14 09:56:57 crc restorecon[4672]: 
/var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/971cc9f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/8f2e3dcf not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/ceb35e9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/1c192745 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/5209e501 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/f83de4df not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/e7b978ac not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/c64304a1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/5384386b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/etc-hosts not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c268,c620 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/cce3e3ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/8fb75465 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/740f573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/32fd1134 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/0a861bd3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/80363026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/bfa952a8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c129,c158 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..2025_02_23_05_33_31.333075221 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Oct 14 09:56:57 crc 
restorecon[4672]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/793bf43d not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/7db1bb6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/4f6a0368 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/c12c7d86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/36c4a773 not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/4c1e98ae not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/a4c8115c not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/setup/7db1802e not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Oct 14 09:56:57 crc restorecon[4672]: 
/var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver/a008a7ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-syncer/2c836bac not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-regeneration-controller/0ce62299 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-insecure-readyz/945d2457 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-check-endpoints/7d5c1dd8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:57 crc restorecon[4672]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/index.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/bundle-v1.15.0.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/channel.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/package.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 09:56:57
crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:57 crc restorecon[4672]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:57 crc restorecon[4672]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/bc8d0691 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/6b76097a not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/34d1af30 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/312ba61c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/645d5dd1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/16e825f0 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/4cf51fc9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/2a23d348 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/075dbd49 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c842,c986 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/dd585ddd not reset as customized by admin to system_u:object_r:container_file_t:s0:c377,c642 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/17ebd0ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c343 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/005579f4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c842,c986 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_23_11.1287037894 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c764,c897 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/bf5f3b9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/af276eb7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Oct 14 09:56:57 crc restorecon[4672]: 
/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/ea28e322 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/692e6683 not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/871746a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/4eb2e958 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/console-config.yaml not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Oct 14 09:56:57 crc restorecon[4672]: 
/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Oct 14 09:56:57 crc restorecon[4672]: 
/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/ca9b62da not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/0edd6fce not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Oct 14 09:56:57 crc restorecon[4672]: 
/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c14,c22 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/tls-ca-bundle.pem not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c14,c22 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/containers/controller-manager/89b4555f not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/655fcd71 not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/0d43c002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/e68efd17 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/9acf9b65 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/5ae3ff11 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/1e59206a not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/27af16d1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c304,c1017 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/7918e729 not reset as customized by admin to system_u:object_r:container_file_t:s0:c853,c893 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/5d976d0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c585,c981 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c5,c25 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c5,c25 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/d7f55cbb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/f0812073 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/1a56cbeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/7fdd437e not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/cdfb5652 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c336,c787 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Oct 14 
09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Oct 14 09:56:57 crc restorecon[4672]: 
/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/fix-audit-permissions/fb93119e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver/f1e8fc0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver-check-endpoints/218511f3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Oct 14 09:56:57 crc restorecon[4672]: 
/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server/serving-certs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/ca8af7b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/72cc8a75 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/6e8a3760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Oct 14 09:56:57 crc 
restorecon[4672]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4c3455c0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/2278acb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4b453e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/3ec09bda not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 09:56:57 crc restorecon[4672]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2/cacerts.bin not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java/cacerts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl/ca-bundle.trust.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 09:56:57 crc 
restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/email-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/objsign-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2ae6433e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fde84897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75680d2e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/openshift-service-serving-signer_1740288168.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/facfc4fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f5a969c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CFCA_EV_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9ef4a08a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ingress-operator_1740288202.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2f332aed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/248c8271.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d10a21f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ACCVRAIZ1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a94d09e5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c9a4d3b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40193066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd8c0d63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b936d1c6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CA_Disig_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4fd49c6c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM_SERVIDORES_SEGUROS.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b81b93f0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f9a69fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b30d5fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ANF_Secure_Server_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b433981b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93851c9e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9282e51c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7dd1bc4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Actalis_Authentication_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/930ac5d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f47b495.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e113c810.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5931b5bc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Commercial.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2b349938.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e48193cf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/302904dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a716d4ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Networking.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93bc0acc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/86212b19.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b727005e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbc54cab.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f51bb24c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c28a8a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9c8dfbd4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ccc52f49.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cb1c3204.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ce5e74ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd08c599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6d41d539.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb5fa911.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e35234b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8cb5ee0f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a7c655d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f8fc53da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/de6d66f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d41b5e2a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/41a3f684.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1df5a75f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_2011.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e36a6752.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b872f2b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9576d26b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/228f89db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_ECC_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb717492.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d21b73c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b1b94ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/595e996b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_RSA_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b46e03d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/128f4b91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_3_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81f2d2b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Autoridad_de_Certificacion_Firmaprofesional_CIF_A62634068.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3bde41ac.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d16a5865.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_EC-384_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0179095f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ffa7f1eb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9482e63a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4dae3dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e359ba6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7e067d03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/95aff9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7746a63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Baltimore_CyberTrust_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/653b494a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3ad48a91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_2_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/54657681.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/82223c44.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8de2f56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d9dafe4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d96b65e2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee64a828.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40547a79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5a3f0ff8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a780d93.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/34d996fb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/eed8c118.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/89c02a45.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b1159c4c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d6325660.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4c339cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8312c4c1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_E1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8508e720.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5fdd185d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48bec511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/69105f4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b9bc432.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/32888f65.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b03dec0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/219d9499.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5acf816d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbf06781.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc99f41e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AAA_Certificate_Services.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/985c1f52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8794b4e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_BR_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7c037b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 09:56:57 crc restorecon[4672]:
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ef954a4e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_EV_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2add47b6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/90c5a3c8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0f3e76e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/53a1b57a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 09:56:57 crc restorecon[4672]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_EV_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5ad8a5d6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/68dd7389.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d04f354.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 09:56:57 crc restorecon[4672]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d6437c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/062cdee6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bd43e1dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7f3d5d1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c491639e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 09:56:57 crc restorecon[4672]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3513523f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/399e7759.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/feffd413.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d18e9066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/607986c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c90bc37d.0 not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1b0f7e5c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e08bfd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dd8e9d41.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed39abd0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a3418fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bc3f2570.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 09:56:57 crc restorecon[4672]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_High_Assurance_EV_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/244b5494.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81b9768f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4be590e0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_ECC_P384_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9846683b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 09:56:57 crc restorecon[4672]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/252252d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e8e7201.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_RSA4096_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d52c538d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c44cc0c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 09:56:57 crc restorecon[4672]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Trusted_Root_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75d1b2ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a2c66da8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ecccd8db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust.net_Certification_Authority__2048_.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/aee5f10d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 09:56:57 crc restorecon[4672]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e7271e8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0e59380.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4c3982f2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b99d060.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf64f35b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0a775a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/002c0b4f.0 not reset 
as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cc450945.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_EC1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/106f3e4d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b3fb433b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4042bcee.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 09:56:57 crc 
restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/02265526.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/455f1b52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0d69c7e1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9f727ac7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5e98733a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0cd152c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 09:56:57 crc restorecon[4672]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc4d6a89.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6187b673.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/FIRMAPROFESIONAL_CA_ROOT-A_WEB.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ba8887ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/068570d1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f081611a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48a195d8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GDCA_TrustAUTH_R5_ROOT.pem 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f6fa695.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab59055e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b92fd57f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GLOBALTRUST_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fa5da96b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ec40989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7719f463.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 09:56:57 crc restorecon[4672]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1001acf7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f013ecaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/626dceaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c559d742.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1d3472b9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9479c8c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a81e292b.0 not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4bfab552.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e071171e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/57bcb2da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_ECC_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab5346f4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5046c355.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_RSA_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/865fbdf9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da0cfd1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/85cde254.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_ECC_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 09:56:57 crc restorecon[4672]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbb3f32b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureSign_RootCA11.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5860aaa6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/31188b5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HiPKI_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c7f1359b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 09:56:57 crc restorecon[4672]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f15c80c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hongkong_Post_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/09789157.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/18856ac4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e09d511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Commercial_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 09:56:57 crc restorecon[4672]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cf701eeb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d06393bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Public_Sector_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/10531352.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Izenpe.com.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureTrust_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0ed035a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 09:56:57 crc restorecon[4672]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsec_e-Szigno_Root_CA_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8160b96c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8651083.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2c63f966.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_ECC_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d89cda1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 09:56:57 crc restorecon[4672]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/01419da9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_RSA_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7a5b843.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_RSA_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf53fb88.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9591a472.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3afde786.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 09:56:57 crc restorecon[4672]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Gold_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NAVER_Global_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3fb36b73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d39b0a2c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a89d74c2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd58d51e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7db1890.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 09:56:57 crc restorecon[4672]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NetLock_Arany__Class_Gold__F__tan__s__tv__ny.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/988a38cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/60afe812.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f39fc864.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5443e9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GB_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e73d606e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 09:56:57 crc restorecon[4672]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dfc0fe80.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b66938e9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e1eab7c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GC_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/773e07ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c899c73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d59297b8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ddcda989.0 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_1_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/749e9e03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/52b525c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7e8dc79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a819ef2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 09:56:57 crc restorecon[4672]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/08063a00.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b483515.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/064e0aa9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1f58a078.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6f7454b3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7fa05551.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3.pem not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76faf6c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9339512a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f387163d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee37c333.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e18bfb83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e442e424.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 09:56:57 crc restorecon[4672]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fe8a2cd8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/23f4c490.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5cd81ad7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0c70a8d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7892ad52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SZAFIR_ROOT_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 09:56:57 crc restorecon[4672]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4f316efb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_RSA_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/06dc52d5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/583d0756.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0bf05006.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 09:56:57 crc restorecon[4672]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/88950faa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9046744a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c860d51.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_RSA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6fa5da56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/33ee480d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Secure_Global_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 09:56:57 crc restorecon[4672]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/63a2c897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_ECC_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bdacca6f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ff34af3f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbff3a01.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_ECC_RootCA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_C1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 09:56:57 crc restorecon[4672]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/406c9bb1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_C3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Services_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Silver_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/99e1b953.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 09:56:57 crc restorecon[4672]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/14bc7599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TUBITAK_Kamu_SM_SSL_Kok_Sertifikasi_-_Surum_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a3adc42.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 09:56:57 crc restorecon[4672]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f459871d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_ECC_Root_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_RSA_Root_2023.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TeliaSonera_Root_CA_v1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telia_Root_CA_v2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 09:56:57 crc restorecon[4672]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f103249.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f058632f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-certificates.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9bf03295.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/98aaf404.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 09:56:57 crc restorecon[4672]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1cef98f5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/073bfcc5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2923b3f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f249de83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/edcbddb5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 09:56:57 crc restorecon[4672]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P256_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b5697b0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ae85e5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b74d2bd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P384_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d887a5bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 09:56:57 crc restorecon[4672]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9aef356c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TunTrust_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd64f3fc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e13665f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Extended_Validation_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f5dc4f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da7377f6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 09:56:57 crc restorecon[4672]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Global_G2_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c01eb047.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/304d27c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed858448.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f30dd6ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/04f60c28.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 09:56:57 crc restorecon[4672]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_ECC_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fc5a8f99.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/35105088.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee532fd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/XRamp_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/706f604c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 09:56:57 crc restorecon[4672]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76579174.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d86cdd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/882de061.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f618aec.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a9d40e02.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e-Szigno_Root_CA_2017.pem 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e868b802.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/83e9984f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ePKI_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca6e4ad9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d6523ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4b718d9b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/869fbf79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 09:56:57 crc restorecon[4672]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/containers/registry/f8d22bdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/6e8bbfac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/54dd7996 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/a4f1bb05 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/207129da not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/c1df39e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/15b8f1cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Oct 14 09:56:57 crc restorecon[4672]: 
/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Oct 14 09:56:57 crc restorecon[4672]: 
/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/77bd6913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/2382c1b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/704ce128 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/70d16fe0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/bfb95535 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/57a8e8e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Oct 14 09:56:57 crc restorecon[4672]: 
/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/1b9d3e5e not reset as customized by admin to system_u:object_r:container_file_t:s0:c107,c917 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/fddb173c not reset as customized by admin to system_u:object_r:container_file_t:s0:c202,c983 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/95d3c6c4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c219,c404 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/bfb5fff5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/2aef40aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/c0391cad not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/1119e69d not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/660608b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/8220bd53 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/85f99d5c not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Oct 14 09:56:57 crc restorecon[4672]: 
/var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/4b0225f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/9c2a3394 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/e820b243 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/1ca52ea0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/e6988e45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/6655f00b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/98bc3986 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/08e3458a not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/2a191cb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/6c4eeefb not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/f61a549c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/24891863 not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/fbdfd89c not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/9b63b3bc not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/8acde6d6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/node-driver-registrar/59ecbba3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/csi-provisioner/685d4be3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/containers/route-controller-manager/feaea55e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:57 crc restorecon[4672]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:57 crc restorecon[4672]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:57 crc restorecon[4672]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:57 crc restorecon[4672]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:57 crc restorecon[4672]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:57 crc restorecon[4672]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:57 crc restorecon[4672]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:57 crc restorecon[4672]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:57 crc restorecon[4672]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:57 crc restorecon[4672]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:57 crc restorecon[4672]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:57 crc restorecon[4672]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:57 crc restorecon[4672]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:57 
crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:57 crc 
restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/63709497 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/d966b7fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/f5773757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/81c9edb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/57bf57ee not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/86f5e6aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/0aabe31d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/d2af85c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:57 crc restorecon[4672]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/09d157d9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:57 crc restorecon[4672]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 
Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:57 crc restorecon[4672]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:57 crc restorecon[4672]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:57 crc restorecon[4672]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller/catalog.json not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:57 crc restorecon[4672]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:57 crc restorecon[4672]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:57 crc restorecon[4672]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:57 crc restorecon[4672]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:57 crc restorecon[4672]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:57 crc restorecon[4672]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:57 crc restorecon[4672]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:57 crc restorecon[4672]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:57 crc restorecon[4672]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:57 crc restorecon[4672]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:57 crc restorecon[4672]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:57 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:58 crc restorecon[4672]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:58 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:58 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:58 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:58 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:58 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:58 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:58 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:58 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:58 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:58 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:58 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:58 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:58 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:58 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:58 crc restorecon[4672]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:58 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:58 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:58 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:58 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:58 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:58 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:58 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:58 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:58 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:58 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:58 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:58 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:58 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:58 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:58 crc restorecon[4672]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:58 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:58 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:58 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:58 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:58 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:58 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:58 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:58 crc 
restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:58 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:58 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:58 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:58 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:58 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:58 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:58 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:58 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:58 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:58 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:58 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:58 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:58 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:58 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:58 crc restorecon[4672]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:58 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:58 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:58 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:58 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:58 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:58 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:58 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:58 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:58 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:58 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:58 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:58 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:58 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:58 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:58 crc restorecon[4672]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:58 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:58 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:58 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:58 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:58 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:58 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:58 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:58 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:58 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:58 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:58 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:58 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:58 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:58 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:58 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:58 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:58 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:58 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:58 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:58 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:58 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:58 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:58 crc restorecon[4672]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:58 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:58 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:58 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:58 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:58 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:58 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:58 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13
Oct 14 09:56:58 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 09:56:58 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 09:56:58 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 09:56:58 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 09:56:58 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 09:56:58 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 09:56:58 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 09:56:58 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 09:56:58 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 09:56:58 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 09:56:58 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 09:56:58 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 09:56:58 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 09:56:58 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 09:56:58 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 09:56:58 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 09:56:58 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 09:56:58 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 09:56:58 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 09:56:58 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 09:56:58 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 09:56:58 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 09:56:58 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 09:56:58 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 09:56:58 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 09:56:58 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 09:56:58 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 09:56:58 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 09:56:58 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 09:56:58 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 09:56:58 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 09:56:58 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 09:56:58 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 09:56:58 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 09:56:58 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 09:56:58 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 09:56:58 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 09:56:58 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 09:56:58 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 09:56:58 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 09:56:58 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 09:56:58 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 09:56:58 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 09:56:58 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 09:56:58 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 09:56:58 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 09:56:58 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 09:56:58 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 09:56:58 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 09:56:58 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 09:56:58 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 09:56:58 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 09:56:58 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 09:56:58 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 09:56:58 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 09:56:58 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 09:56:58 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 09:56:58 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 09:56:58 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 09:56:58 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 09:56:58 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 09:56:58 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 09:56:58 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 09:56:58 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 09:56:58 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 09:56:58 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 09:56:58 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 09:56:58 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 09:56:58 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 09:56:58 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 09:56:58 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 09:56:58 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 09:56:58 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 09:56:58 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 09:56:58 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 09:56:58 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 09:56:58 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 09:56:58 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 09:56:58 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 09:56:58 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 09:56:58 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 09:56:58 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 09:56:58 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 09:56:58 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 09:56:58 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 09:56:58 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 09:56:58 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 09:56:58 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 09:56:58 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 09:56:58 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 09:56:58 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 09:56:58 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 09:56:58 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 09:56:58 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 09:56:58 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 09:56:58 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 09:56:58 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 09:56:58 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 09:56:58 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 09:56:58 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 09:56:58 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 09:56:58 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 09:56:58 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 09:56:58 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 09:56:58 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 09:56:58 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 09:56:58 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 09:56:58 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 09:56:58 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 09:56:58 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 09:56:58 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 09:56:58 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 09:56:58 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 09:56:58 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 09:56:58 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 09:56:58 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 09:56:58 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 09:56:58 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 09:56:58 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 09:56:58 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 09:56:58 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 09:56:58 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 09:56:58 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 09:56:58 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 09:56:58 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 09:56:58 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 09:56:58 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 09:56:58 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 09:56:58 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 09:56:58 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 09:56:58 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 09:56:58 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 09:56:58 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 09:56:58 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 09:56:58 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 09:56:58 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 09:56:58 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 09:56:58 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 09:56:58 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 09:56:58 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 09:56:58 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 09:56:58 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 09:56:58 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 09:56:58 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 09:56:58 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 09:56:58 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 09:56:58 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 09:56:58 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 09:56:58 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 09:56:58 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 09:56:58 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 09:56:58 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 09:56:58 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 09:56:58 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 09:56:58 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 09:56:58 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 09:56:58 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator/catalog.json not reset as
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:58 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:58 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:58 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:58 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:58 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:58 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:58 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:58 crc restorecon[4672]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:58 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:58 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:58 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:58 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:58 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:58 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:58 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:58 crc 
restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:58 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:58 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:58 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:58 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:58 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:58 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:58 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:58 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:58 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:58 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:58 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:58 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:58 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:58 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:58 crc restorecon[4672]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:58 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:58 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:58 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:58 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:58 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:58 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:58 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:58 crc 
restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:58 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:58 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:58 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:58 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:58 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:58 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:58 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator/catalog.json not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:58 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:58 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:58 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:58 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:58 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:58 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:58 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:58 crc restorecon[4672]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:58 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:58 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:58 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:58 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:58 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:58 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:58 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:58 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:58 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:58 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:58 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:58 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c0fe7256 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:58 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c30319e4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:58 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/e6b1dd45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:58 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/2bb643f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:58 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/920de426 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:58 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/70fa1e87 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:58 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/a1c12a2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:58 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/9442e6c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:58 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/5b45ec72 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:58 crc restorecon[4672]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:58 crc restorecon[4672]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:58 crc restorecon[4672]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:58 crc restorecon[4672]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:58 crc restorecon[4672]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:58 crc restorecon[4672]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:58 crc restorecon[4672]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:58 crc restorecon[4672]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:58 crc restorecon[4672]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:58 crc restorecon[4672]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:58 crc restorecon[4672]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:58 crc restorecon[4672]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:58 crc restorecon[4672]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:58 crc restorecon[4672]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:58 crc restorecon[4672]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:58 crc restorecon[4672]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:58 crc restorecon[4672]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:58 crc restorecon[4672]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:58 crc restorecon[4672]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:58 crc restorecon[4672]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:58 crc restorecon[4672]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:58 crc restorecon[4672]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:58 crc restorecon[4672]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:58 crc restorecon[4672]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:58 crc restorecon[4672]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:58 crc restorecon[4672]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:58 crc restorecon[4672]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:58 crc restorecon[4672]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:58 crc restorecon[4672]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:58 crc restorecon[4672]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:58 crc restorecon[4672]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:58 crc restorecon[4672]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:58 crc restorecon[4672]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:58 crc restorecon[4672]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:58 crc restorecon[4672]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:58 crc restorecon[4672]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:58 crc restorecon[4672]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:58 crc restorecon[4672]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:58 crc restorecon[4672]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:58 crc restorecon[4672]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:58 crc restorecon[4672]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:58 crc restorecon[4672]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:58 crc restorecon[4672]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:58 crc restorecon[4672]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:58 crc restorecon[4672]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:58 crc restorecon[4672]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:58 crc restorecon[4672]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:58 crc restorecon[4672]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:58 crc restorecon[4672]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:58 crc restorecon[4672]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:58 crc restorecon[4672]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:58 crc restorecon[4672]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:58 crc restorecon[4672]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:58 crc restorecon[4672]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:58 crc restorecon[4672]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:58 crc restorecon[4672]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:58 crc restorecon[4672]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:58 crc restorecon[4672]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:58 crc restorecon[4672]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:58 crc restorecon[4672]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:58 crc restorecon[4672]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:58 crc restorecon[4672]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:58 crc restorecon[4672]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:58 crc restorecon[4672]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:58 crc restorecon[4672]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:58 crc restorecon[4672]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:58 crc restorecon[4672]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:58 crc restorecon[4672]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:58 crc restorecon[4672]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:58 crc restorecon[4672]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:58 crc restorecon[4672]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:58 crc restorecon[4672]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:58 crc restorecon[4672]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:58 crc restorecon[4672]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:58 crc restorecon[4672]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:58 crc restorecon[4672]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:58 crc restorecon[4672]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:58 crc restorecon[4672]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:58 crc restorecon[4672]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:58 crc restorecon[4672]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:58 crc restorecon[4672]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:58 crc restorecon[4672]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:58 crc restorecon[4672]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:58 crc restorecon[4672]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:58 crc restorecon[4672]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:58 crc restorecon[4672]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:58 crc restorecon[4672]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:58 crc restorecon[4672]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:58 crc restorecon[4672]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:58 crc restorecon[4672]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:58 crc restorecon[4672]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:58 crc restorecon[4672]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/3c9f3a59 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:58 crc restorecon[4672]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/1091c11b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:58 crc restorecon[4672]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/9a6821c6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:58 crc restorecon[4672]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/ec0c35e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:58 crc restorecon[4672]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/517f37e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:58 crc restorecon[4672]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/6214fe78 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:58 crc restorecon[4672]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/ba189c8b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:58 crc restorecon[4672]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/351e4f31 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:58 crc restorecon[4672]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/c0f219ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 09:56:58 crc restorecon[4672]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/etc-hosts 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Oct 14 09:56:58 crc restorecon[4672]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/8069f607 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Oct 14 09:56:58 crc restorecon[4672]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/559c3d82 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Oct 14 09:56:58 crc restorecon[4672]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/605ad488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Oct 14 09:56:58 crc restorecon[4672]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/148df488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Oct 14 09:56:58 crc restorecon[4672]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/3bf6dcb4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Oct 14 09:56:58 crc restorecon[4672]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/022a2feb not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Oct 14 09:56:58 crc restorecon[4672]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/938c3924 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Oct 14 09:56:58 crc restorecon[4672]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/729fe23e not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Oct 14 09:56:58 crc restorecon[4672]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/1fd5cbd4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c247,c522 Oct 14 09:56:58 crc restorecon[4672]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/a96697e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Oct 14 09:56:58 crc restorecon[4672]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/e155ddca not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Oct 14 09:56:58 crc restorecon[4672]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/10dd0e0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Oct 14 09:56:58 crc restorecon[4672]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Oct 14 09:56:58 crc restorecon[4672]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Oct 14 09:56:58 crc restorecon[4672]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Oct 14 09:56:58 crc restorecon[4672]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Oct 14 09:56:58 crc restorecon[4672]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Oct 14 09:56:58 crc restorecon[4672]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Oct 14 09:56:58 crc restorecon[4672]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Oct 14 09:56:58 crc restorecon[4672]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Oct 14 09:56:58 crc restorecon[4672]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Oct 14 09:56:58 crc restorecon[4672]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Oct 14 09:56:58 crc restorecon[4672]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Oct 14 09:56:58 crc restorecon[4672]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Oct 14 09:56:58 crc 
restorecon[4672]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Oct 14 09:56:58 crc restorecon[4672]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Oct 14 09:56:58 crc restorecon[4672]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Oct 14 09:56:58 crc restorecon[4672]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Oct 14 09:56:58 crc restorecon[4672]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Oct 14 09:56:58 crc restorecon[4672]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Oct 14 09:56:58 crc restorecon[4672]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Oct 14 09:56:58 crc restorecon[4672]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Oct 14 09:56:58 crc restorecon[4672]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Oct 14 09:56:58 crc restorecon[4672]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/6f2c8392 not reset as customized by admin to system_u:object_r:container_file_t:s0:c267,c588 Oct 14 09:56:58 crc restorecon[4672]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/bd241ad9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Oct 14 09:56:58 crc restorecon[4672]: /var/lib/kubelet/plugins not reset as customized by admin to system_u:object_r:container_file_t:s0 Oct 14 09:56:58 crc restorecon[4672]: /var/lib/kubelet/plugins/csi-hostpath not reset as customized by admin to system_u:object_r:container_file_t:s0 Oct 14 09:56:58 crc restorecon[4672]: /var/lib/kubelet/plugins/csi-hostpath/csi.sock not reset as customized by admin to system_u:object_r:container_file_t:s0 Oct 14 09:56:58 crc restorecon[4672]: /var/lib/kubelet/plugins/kubernetes.io not reset as customized by admin to system_u:object_r:container_file_t:s0 Oct 14 09:56:58 crc restorecon[4672]: /var/lib/kubelet/plugins/kubernetes.io/csi not reset as customized by admin to system_u:object_r:container_file_t:s0 Oct 14 09:56:58 crc restorecon[4672]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0 Oct 14 09:56:58 crc restorecon[4672]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983 not reset as customized by admin to 
system_u:object_r:container_file_t:s0 Oct 14 09:56:58 crc restorecon[4672]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount not reset as customized by admin to system_u:object_r:container_file_t:s0 Oct 14 09:56:58 crc restorecon[4672]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/vol_data.json not reset as customized by admin to system_u:object_r:container_file_t:s0 Oct 14 09:56:58 crc restorecon[4672]: /var/lib/kubelet/plugins_registry not reset as customized by admin to system_u:object_r:container_file_t:s0 Oct 14 09:56:58 crc restorecon[4672]: Relabeled /var/usrlocal/bin/kubenswrapper from system_u:object_r:bin_t:s0 to system_u:object_r:kubelet_exec_t:s0 Oct 14 09:56:58 crc kubenswrapper[4698]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Oct 14 09:56:58 crc kubenswrapper[4698]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version. Oct 14 09:56:58 crc kubenswrapper[4698]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Oct 14 09:56:58 crc kubenswrapper[4698]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Oct 14 09:56:58 crc kubenswrapper[4698]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Oct 14 09:56:58 crc kubenswrapper[4698]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Oct 14 09:56:58 crc kubenswrapper[4698]: I1014 09:56:58.764803 4698 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Oct 14 09:56:58 crc kubenswrapper[4698]: W1014 09:56:58.767963 4698 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Oct 14 09:56:58 crc kubenswrapper[4698]: W1014 09:56:58.767978 4698 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Oct 14 09:56:58 crc kubenswrapper[4698]: W1014 09:56:58.767983 4698 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Oct 14 09:56:58 crc kubenswrapper[4698]: W1014 09:56:58.767987 4698 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Oct 14 09:56:58 crc kubenswrapper[4698]: W1014 09:56:58.767991 4698 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Oct 14 09:56:58 crc kubenswrapper[4698]: W1014 09:56:58.767995 4698 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Oct 14 09:56:58 crc kubenswrapper[4698]: W1014 09:56:58.767999 4698 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Oct 14 09:56:58 crc kubenswrapper[4698]: W1014 09:56:58.768002 4698 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Oct 14 09:56:58 crc kubenswrapper[4698]: W1014 09:56:58.768006 4698 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Oct 14 09:56:58 crc kubenswrapper[4698]: W1014 
09:56:58.768010 4698 feature_gate.go:330] unrecognized feature gate: PinnedImages Oct 14 09:56:58 crc kubenswrapper[4698]: W1014 09:56:58.768015 4698 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Oct 14 09:56:58 crc kubenswrapper[4698]: W1014 09:56:58.768019 4698 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Oct 14 09:56:58 crc kubenswrapper[4698]: W1014 09:56:58.768023 4698 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Oct 14 09:56:58 crc kubenswrapper[4698]: W1014 09:56:58.768027 4698 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Oct 14 09:56:58 crc kubenswrapper[4698]: W1014 09:56:58.768032 4698 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Oct 14 09:56:58 crc kubenswrapper[4698]: W1014 09:56:58.768039 4698 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Oct 14 09:56:58 crc kubenswrapper[4698]: W1014 09:56:58.768044 4698 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Oct 14 09:56:58 crc kubenswrapper[4698]: W1014 09:56:58.768048 4698 feature_gate.go:330] unrecognized feature gate: Example Oct 14 09:56:58 crc kubenswrapper[4698]: W1014 09:56:58.768052 4698 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Oct 14 09:56:58 crc kubenswrapper[4698]: W1014 09:56:58.768055 4698 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Oct 14 09:56:58 crc kubenswrapper[4698]: W1014 09:56:58.768060 4698 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Oct 14 09:56:58 crc kubenswrapper[4698]: W1014 09:56:58.768064 4698 feature_gate.go:330] unrecognized feature gate: GatewayAPI Oct 14 09:56:58 crc kubenswrapper[4698]: W1014 09:56:58.768068 4698 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Oct 14 09:56:58 crc kubenswrapper[4698]: W1014 09:56:58.768071 4698 feature_gate.go:330] unrecognized feature gate: 
MinimumKubeletVersion Oct 14 09:56:58 crc kubenswrapper[4698]: W1014 09:56:58.768076 4698 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Oct 14 09:56:58 crc kubenswrapper[4698]: W1014 09:56:58.768082 4698 feature_gate.go:330] unrecognized feature gate: OVNObservability Oct 14 09:56:58 crc kubenswrapper[4698]: W1014 09:56:58.768086 4698 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Oct 14 09:56:58 crc kubenswrapper[4698]: W1014 09:56:58.768089 4698 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Oct 14 09:56:58 crc kubenswrapper[4698]: W1014 09:56:58.768093 4698 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Oct 14 09:56:58 crc kubenswrapper[4698]: W1014 09:56:58.768096 4698 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Oct 14 09:56:58 crc kubenswrapper[4698]: W1014 09:56:58.768100 4698 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Oct 14 09:56:58 crc kubenswrapper[4698]: W1014 09:56:58.768103 4698 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Oct 14 09:56:58 crc kubenswrapper[4698]: W1014 09:56:58.768107 4698 feature_gate.go:330] unrecognized feature gate: NewOLM Oct 14 09:56:58 crc kubenswrapper[4698]: W1014 09:56:58.768110 4698 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Oct 14 09:56:58 crc kubenswrapper[4698]: W1014 09:56:58.768114 4698 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Oct 14 09:56:58 crc kubenswrapper[4698]: W1014 09:56:58.768117 4698 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Oct 14 09:56:58 crc kubenswrapper[4698]: W1014 09:56:58.768121 4698 feature_gate.go:330] unrecognized feature gate: InsightsConfig Oct 14 09:56:58 crc kubenswrapper[4698]: W1014 09:56:58.768124 4698 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud 
Oct 14 09:56:58 crc kubenswrapper[4698]: W1014 09:56:58.768129 4698 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Oct 14 09:56:58 crc kubenswrapper[4698]: W1014 09:56:58.768134 4698 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Oct 14 09:56:58 crc kubenswrapper[4698]: W1014 09:56:58.768137 4698 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Oct 14 09:56:58 crc kubenswrapper[4698]: W1014 09:56:58.768141 4698 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Oct 14 09:56:58 crc kubenswrapper[4698]: W1014 09:56:58.768145 4698 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Oct 14 09:56:58 crc kubenswrapper[4698]: W1014 09:56:58.768149 4698 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Oct 14 09:56:58 crc kubenswrapper[4698]: W1014 09:56:58.768152 4698 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Oct 14 09:56:58 crc kubenswrapper[4698]: W1014 09:56:58.768157 4698 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Oct 14 09:56:58 crc kubenswrapper[4698]: W1014 09:56:58.768160 4698 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Oct 14 09:56:58 crc kubenswrapper[4698]: W1014 09:56:58.768164 4698 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Oct 14 09:56:58 crc kubenswrapper[4698]: W1014 09:56:58.768168 4698 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Oct 14 09:56:58 crc kubenswrapper[4698]: W1014 09:56:58.768171 4698 feature_gate.go:330] unrecognized feature gate: SignatureStores
Oct 14 09:56:58 crc kubenswrapper[4698]: W1014 09:56:58.768175 4698 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Oct 14 09:56:58 crc kubenswrapper[4698]: W1014 09:56:58.768179 4698 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Oct 14 09:56:58 crc kubenswrapper[4698]: W1014 09:56:58.768182 4698 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Oct 14 09:56:58 crc kubenswrapper[4698]: W1014 09:56:58.768185 4698 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Oct 14 09:56:58 crc kubenswrapper[4698]: W1014 09:56:58.768189 4698 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Oct 14 09:56:58 crc kubenswrapper[4698]: W1014 09:56:58.768193 4698 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Oct 14 09:56:58 crc kubenswrapper[4698]: W1014 09:56:58.768196 4698 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Oct 14 09:56:58 crc kubenswrapper[4698]: W1014 09:56:58.768199 4698 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Oct 14 09:56:58 crc kubenswrapper[4698]: W1014 09:56:58.768203 4698 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Oct 14 09:56:58 crc kubenswrapper[4698]: W1014 09:56:58.768206 4698 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Oct 14 09:56:58 crc kubenswrapper[4698]: W1014 09:56:58.768209 4698 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Oct 14 09:56:58 crc kubenswrapper[4698]: W1014 09:56:58.768213 4698 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Oct 14 09:56:58 crc kubenswrapper[4698]: W1014 09:56:58.768216 4698 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Oct 14 09:56:58 crc kubenswrapper[4698]: W1014 09:56:58.768220 4698 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Oct 14 09:56:58 crc kubenswrapper[4698]: W1014 09:56:58.768223 4698 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Oct 14 09:56:58 crc kubenswrapper[4698]: W1014 09:56:58.768228 4698 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Oct 14 09:56:58 crc kubenswrapper[4698]: W1014 09:56:58.768232 4698 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Oct 14 09:56:58 crc kubenswrapper[4698]: W1014 09:56:58.768237 4698 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Oct 14 09:56:58 crc kubenswrapper[4698]: W1014 09:56:58.768241 4698 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Oct 14 09:56:58 crc kubenswrapper[4698]: W1014 09:56:58.768245 4698 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Oct 14 09:56:58 crc kubenswrapper[4698]: W1014 09:56:58.768249 4698 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Oct 14 09:56:58 crc kubenswrapper[4698]: I1014 09:56:58.768808 4698 flags.go:64] FLAG: --address="0.0.0.0"
Oct 14 09:56:58 crc kubenswrapper[4698]: I1014 09:56:58.768823 4698 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]"
Oct 14 09:56:58 crc kubenswrapper[4698]: I1014 09:56:58.768832 4698 flags.go:64] FLAG: --anonymous-auth="true"
Oct 14 09:56:58 crc kubenswrapper[4698]: I1014 09:56:58.768839 4698 flags.go:64] FLAG: --application-metrics-count-limit="100"
Oct 14 09:56:58 crc kubenswrapper[4698]: I1014 09:56:58.768844 4698 flags.go:64] FLAG: --authentication-token-webhook="false"
Oct 14 09:56:58 crc kubenswrapper[4698]: I1014 09:56:58.768848 4698 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s"
Oct 14 09:56:58 crc kubenswrapper[4698]: I1014 09:56:58.768854 4698 flags.go:64] FLAG: --authorization-mode="AlwaysAllow"
Oct 14 09:56:58 crc kubenswrapper[4698]: I1014 09:56:58.768859 4698 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s"
Oct 14 09:56:58 crc kubenswrapper[4698]: I1014 09:56:58.768863 4698 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s"
Oct 14 09:56:58 crc kubenswrapper[4698]: I1014 09:56:58.768868 4698 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id"
Oct 14 09:56:58 crc kubenswrapper[4698]: I1014 09:56:58.768872 4698 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig"
Oct 14 09:56:58 crc kubenswrapper[4698]: I1014 09:56:58.768876 4698 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki"
Oct 14 09:56:58 crc kubenswrapper[4698]: I1014 09:56:58.768880 4698 flags.go:64] FLAG: --cgroup-driver="cgroupfs"
Oct 14 09:56:58 crc kubenswrapper[4698]: I1014 09:56:58.768885 4698 flags.go:64] FLAG: --cgroup-root=""
Oct 14 09:56:58 crc kubenswrapper[4698]: I1014 09:56:58.768888 4698 flags.go:64] FLAG: --cgroups-per-qos="true"
Oct 14 09:56:58 crc kubenswrapper[4698]: I1014 09:56:58.768893 4698 flags.go:64] FLAG: --client-ca-file=""
Oct 14 09:56:58 crc kubenswrapper[4698]: I1014 09:56:58.768898 4698 flags.go:64] FLAG: --cloud-config=""
Oct 14 09:56:58 crc kubenswrapper[4698]: I1014 09:56:58.768902 4698 flags.go:64] FLAG: --cloud-provider=""
Oct 14 09:56:58 crc kubenswrapper[4698]: I1014 09:56:58.768906 4698 flags.go:64] FLAG: --cluster-dns="[]"
Oct 14 09:56:58 crc kubenswrapper[4698]: I1014 09:56:58.768910 4698 flags.go:64] FLAG: --cluster-domain=""
Oct 14 09:56:58 crc kubenswrapper[4698]: I1014 09:56:58.768914 4698 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf"
Oct 14 09:56:58 crc kubenswrapper[4698]: I1014 09:56:58.768918 4698 flags.go:64] FLAG: --config-dir=""
Oct 14 09:56:58 crc kubenswrapper[4698]: I1014 09:56:58.768922 4698 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json"
Oct 14 09:56:58 crc kubenswrapper[4698]: I1014 09:56:58.768926 4698 flags.go:64] FLAG: --container-log-max-files="5"
Oct 14 09:56:58 crc kubenswrapper[4698]: I1014 09:56:58.768931 4698 flags.go:64] FLAG: --container-log-max-size="10Mi"
Oct 14 09:56:58 crc kubenswrapper[4698]: I1014 09:56:58.768935 4698 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock"
Oct 14 09:56:58 crc kubenswrapper[4698]: I1014 09:56:58.768939 4698 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock"
Oct 14 09:56:58 crc kubenswrapper[4698]: I1014 09:56:58.768944 4698 flags.go:64] FLAG: --containerd-namespace="k8s.io"
Oct 14 09:56:58 crc kubenswrapper[4698]: I1014 09:56:58.768948 4698 flags.go:64] FLAG: --contention-profiling="false"
Oct 14 09:56:58 crc kubenswrapper[4698]: I1014 09:56:58.768952 4698 flags.go:64] FLAG: --cpu-cfs-quota="true"
Oct 14 09:56:58 crc kubenswrapper[4698]: I1014 09:56:58.768956 4698 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms"
Oct 14 09:56:58 crc kubenswrapper[4698]: I1014 09:56:58.768960 4698 flags.go:64] FLAG: --cpu-manager-policy="none"
Oct 14 09:56:58 crc kubenswrapper[4698]: I1014 09:56:58.768964 4698 flags.go:64] FLAG: --cpu-manager-policy-options=""
Oct 14 09:56:58 crc kubenswrapper[4698]: I1014 09:56:58.768969 4698 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s"
Oct 14 09:56:58 crc kubenswrapper[4698]: I1014 09:56:58.768973 4698 flags.go:64] FLAG: --enable-controller-attach-detach="true"
Oct 14 09:56:58 crc kubenswrapper[4698]: I1014 09:56:58.768977 4698 flags.go:64] FLAG: --enable-debugging-handlers="true"
Oct 14 09:56:58 crc kubenswrapper[4698]: I1014 09:56:58.768981 4698 flags.go:64] FLAG: --enable-load-reader="false"
Oct 14 09:56:58 crc kubenswrapper[4698]: I1014 09:56:58.768985 4698 flags.go:64] FLAG: --enable-server="true"
Oct 14 09:56:58 crc kubenswrapper[4698]: I1014 09:56:58.768989 4698 flags.go:64] FLAG: --enforce-node-allocatable="[pods]"
Oct 14 09:56:58 crc kubenswrapper[4698]: I1014 09:56:58.768995 4698 flags.go:64] FLAG: --event-burst="100"
Oct 14 09:56:58 crc kubenswrapper[4698]: I1014 09:56:58.768999 4698 flags.go:64] FLAG: --event-qps="50"
Oct 14 09:56:58 crc kubenswrapper[4698]: I1014 09:56:58.769003 4698 flags.go:64] FLAG: --event-storage-age-limit="default=0"
Oct 14 09:56:58 crc kubenswrapper[4698]: I1014 09:56:58.769007 4698 flags.go:64] FLAG: --event-storage-event-limit="default=0"
Oct 14 09:56:58 crc kubenswrapper[4698]: I1014 09:56:58.769011 4698 flags.go:64] FLAG: --eviction-hard=""
Oct 14 09:56:58 crc kubenswrapper[4698]: I1014 09:56:58.769016 4698 flags.go:64] FLAG: --eviction-max-pod-grace-period="0"
Oct 14 09:56:58 crc kubenswrapper[4698]: I1014 09:56:58.769020 4698 flags.go:64] FLAG: --eviction-minimum-reclaim=""
Oct 14 09:56:58 crc kubenswrapper[4698]: I1014 09:56:58.769024 4698 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s"
Oct 14 09:56:58 crc kubenswrapper[4698]: I1014 09:56:58.769028 4698 flags.go:64] FLAG: --eviction-soft=""
Oct 14 09:56:58 crc kubenswrapper[4698]: I1014 09:56:58.769032 4698 flags.go:64] FLAG: --eviction-soft-grace-period=""
Oct 14 09:56:58 crc kubenswrapper[4698]: I1014 09:56:58.769036 4698 flags.go:64] FLAG: --exit-on-lock-contention="false"
Oct 14 09:56:58 crc kubenswrapper[4698]: I1014 09:56:58.769040 4698 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false"
Oct 14 09:56:58 crc kubenswrapper[4698]: I1014 09:56:58.769045 4698 flags.go:64] FLAG: --experimental-mounter-path=""
Oct 14 09:56:58 crc kubenswrapper[4698]: I1014 09:56:58.769051 4698 flags.go:64] FLAG: --fail-cgroupv1="false"
Oct 14 09:56:58 crc kubenswrapper[4698]: I1014 09:56:58.769055 4698 flags.go:64] FLAG: --fail-swap-on="true"
Oct 14 09:56:58 crc kubenswrapper[4698]: I1014 09:56:58.769059 4698 flags.go:64] FLAG: --feature-gates=""
Oct 14 09:56:58 crc kubenswrapper[4698]: I1014 09:56:58.769065 4698 flags.go:64] FLAG: --file-check-frequency="20s"
Oct 14 09:56:58 crc kubenswrapper[4698]: I1014 09:56:58.769070 4698 flags.go:64] FLAG: --global-housekeeping-interval="1m0s"
Oct 14 09:56:58 crc kubenswrapper[4698]: I1014 09:56:58.769074 4698 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge"
Oct 14 09:56:58 crc kubenswrapper[4698]: I1014 09:56:58.769078 4698 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1"
Oct 14 09:56:58 crc kubenswrapper[4698]: I1014 09:56:58.769082 4698 flags.go:64] FLAG: --healthz-port="10248"
Oct 14 09:56:58 crc kubenswrapper[4698]: I1014 09:56:58.769086 4698 flags.go:64] FLAG: --help="false"
Oct 14 09:56:58 crc kubenswrapper[4698]: I1014 09:56:58.769090 4698 flags.go:64] FLAG: --hostname-override=""
Oct 14 09:56:58 crc kubenswrapper[4698]: I1014 09:56:58.769094 4698 flags.go:64] FLAG: --housekeeping-interval="10s"
Oct 14 09:56:58 crc kubenswrapper[4698]: I1014 09:56:58.769098 4698 flags.go:64] FLAG: --http-check-frequency="20s"
Oct 14 09:56:58 crc kubenswrapper[4698]: I1014 09:56:58.769102 4698 flags.go:64] FLAG: --image-credential-provider-bin-dir=""
Oct 14 09:56:58 crc kubenswrapper[4698]: I1014 09:56:58.769106 4698 flags.go:64] FLAG: --image-credential-provider-config=""
Oct 14 09:56:58 crc kubenswrapper[4698]: I1014 09:56:58.769110 4698 flags.go:64] FLAG: --image-gc-high-threshold="85"
Oct 14 09:56:58 crc kubenswrapper[4698]: I1014 09:56:58.769114 4698 flags.go:64] FLAG: --image-gc-low-threshold="80"
Oct 14 09:56:58 crc kubenswrapper[4698]: I1014 09:56:58.769118 4698 flags.go:64] FLAG: --image-service-endpoint=""
Oct 14 09:56:58 crc kubenswrapper[4698]: I1014 09:56:58.769122 4698 flags.go:64] FLAG: --kernel-memcg-notification="false"
Oct 14 09:56:58 crc kubenswrapper[4698]: I1014 09:56:58.769126 4698 flags.go:64] FLAG: --kube-api-burst="100"
Oct 14 09:56:58 crc kubenswrapper[4698]: I1014 09:56:58.769130 4698 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf"
Oct 14 09:56:58 crc kubenswrapper[4698]: I1014 09:56:58.769134 4698 flags.go:64] FLAG: --kube-api-qps="50"
Oct 14 09:56:58 crc kubenswrapper[4698]: I1014 09:56:58.769139 4698 flags.go:64] FLAG: --kube-reserved=""
Oct 14 09:56:58 crc kubenswrapper[4698]: I1014 09:56:58.769143 4698 flags.go:64] FLAG: --kube-reserved-cgroup=""
Oct 14 09:56:58 crc kubenswrapper[4698]: I1014 09:56:58.769147 4698 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig"
Oct 14 09:56:58 crc kubenswrapper[4698]: I1014 09:56:58.769151 4698 flags.go:64] FLAG: --kubelet-cgroups=""
Oct 14 09:56:58 crc kubenswrapper[4698]: I1014 09:56:58.769155 4698 flags.go:64] FLAG: --local-storage-capacity-isolation="true"
Oct 14 09:56:58 crc kubenswrapper[4698]: I1014 09:56:58.769159 4698 flags.go:64] FLAG: --lock-file=""
Oct 14 09:56:58 crc kubenswrapper[4698]: I1014 09:56:58.769162 4698 flags.go:64] FLAG: --log-cadvisor-usage="false"
Oct 14 09:56:58 crc kubenswrapper[4698]: I1014 09:56:58.769167 4698 flags.go:64] FLAG: --log-flush-frequency="5s"
Oct 14 09:56:58 crc kubenswrapper[4698]: I1014 09:56:58.769170 4698 flags.go:64] FLAG: --log-json-info-buffer-size="0"
Oct 14 09:56:58 crc kubenswrapper[4698]: I1014 09:56:58.769176 4698 flags.go:64] FLAG: --log-json-split-stream="false"
Oct 14 09:56:58 crc kubenswrapper[4698]: I1014 09:56:58.769180 4698 flags.go:64] FLAG: --log-text-info-buffer-size="0"
Oct 14 09:56:58 crc kubenswrapper[4698]: I1014 09:56:58.769184 4698 flags.go:64] FLAG: --log-text-split-stream="false"
Oct 14 09:56:58 crc kubenswrapper[4698]: I1014 09:56:58.769188 4698 flags.go:64] FLAG: --logging-format="text"
Oct 14 09:56:58 crc kubenswrapper[4698]: I1014 09:56:58.769192 4698 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id"
Oct 14 09:56:58 crc kubenswrapper[4698]: I1014 09:56:58.769196 4698 flags.go:64] FLAG: --make-iptables-util-chains="true"
Oct 14 09:56:58 crc kubenswrapper[4698]: I1014 09:56:58.769201 4698 flags.go:64] FLAG: --manifest-url=""
Oct 14 09:56:58 crc kubenswrapper[4698]: I1014 09:56:58.769205 4698 flags.go:64] FLAG: --manifest-url-header=""
Oct 14 09:56:58 crc kubenswrapper[4698]: I1014 09:56:58.769210 4698 flags.go:64] FLAG: --max-housekeeping-interval="15s"
Oct 14 09:56:58 crc kubenswrapper[4698]: I1014 09:56:58.769214 4698 flags.go:64] FLAG: --max-open-files="1000000"
Oct 14 09:56:58 crc kubenswrapper[4698]: I1014 09:56:58.769219 4698 flags.go:64] FLAG: --max-pods="110"
Oct 14 09:56:58 crc kubenswrapper[4698]: I1014 09:56:58.769223 4698 flags.go:64] FLAG: --maximum-dead-containers="-1"
Oct 14 09:56:58 crc kubenswrapper[4698]: I1014 09:56:58.769227 4698 flags.go:64] FLAG: --maximum-dead-containers-per-container="1"
Oct 14 09:56:58 crc kubenswrapper[4698]: I1014 09:56:58.769231 4698 flags.go:64] FLAG: --memory-manager-policy="None"
Oct 14 09:56:58 crc kubenswrapper[4698]: I1014 09:56:58.769235 4698 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s"
Oct 14 09:56:58 crc kubenswrapper[4698]: I1014 09:56:58.769239 4698 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s"
Oct 14 09:56:58 crc kubenswrapper[4698]: I1014 09:56:58.769245 4698 flags.go:64] FLAG: --node-ip="192.168.126.11"
Oct 14 09:56:58 crc kubenswrapper[4698]: I1014 09:56:58.769249 4698 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos"
Oct 14 09:56:58 crc kubenswrapper[4698]: I1014 09:56:58.769260 4698 flags.go:64] FLAG: --node-status-max-images="50"
Oct 14 09:56:58 crc kubenswrapper[4698]: I1014 09:56:58.769264 4698 flags.go:64] FLAG: --node-status-update-frequency="10s"
Oct 14 09:56:58 crc kubenswrapper[4698]: I1014 09:56:58.769268 4698 flags.go:64] FLAG: --oom-score-adj="-999"
Oct 14 09:56:58 crc kubenswrapper[4698]: I1014 09:56:58.769272 4698 flags.go:64] FLAG: --pod-cidr=""
Oct 14 09:56:58 crc kubenswrapper[4698]: I1014 09:56:58.769276 4698 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:33549946e22a9ffa738fd94b1345f90921bc8f92fa6137784cb33c77ad806f9d"
Oct 14 09:56:58 crc kubenswrapper[4698]: I1014 09:56:58.769282 4698 flags.go:64] FLAG: --pod-manifest-path=""
Oct 14 09:56:58 crc kubenswrapper[4698]: I1014 09:56:58.769286 4698 flags.go:64] FLAG: --pod-max-pids="-1"
Oct 14 09:56:58 crc kubenswrapper[4698]: I1014 09:56:58.769290 4698 flags.go:64] FLAG: --pods-per-core="0"
Oct 14 09:56:58 crc kubenswrapper[4698]: I1014 09:56:58.769294 4698 flags.go:64] FLAG: --port="10250"
Oct 14 09:56:58 crc kubenswrapper[4698]: I1014 09:56:58.769298 4698 flags.go:64] FLAG: --protect-kernel-defaults="false"
Oct 14 09:56:58 crc kubenswrapper[4698]: I1014 09:56:58.769302 4698 flags.go:64] FLAG: --provider-id=""
Oct 14 09:56:58 crc kubenswrapper[4698]: I1014 09:56:58.769305 4698 flags.go:64] FLAG: --qos-reserved=""
Oct 14 09:56:58 crc kubenswrapper[4698]: I1014 09:56:58.769309 4698 flags.go:64] FLAG: --read-only-port="10255"
Oct 14 09:56:58 crc kubenswrapper[4698]: I1014 09:56:58.769313 4698 flags.go:64] FLAG: --register-node="true"
Oct 14 09:56:58 crc kubenswrapper[4698]: I1014 09:56:58.769317 4698 flags.go:64] FLAG: --register-schedulable="true"
Oct 14 09:56:58 crc kubenswrapper[4698]: I1014 09:56:58.769321 4698 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule"
Oct 14 09:56:58 crc kubenswrapper[4698]: I1014 09:56:58.769328 4698 flags.go:64] FLAG: --registry-burst="10"
Oct 14 09:56:58 crc kubenswrapper[4698]: I1014 09:56:58.769332 4698 flags.go:64] FLAG: --registry-qps="5"
Oct 14 09:56:58 crc kubenswrapper[4698]: I1014 09:56:58.769336 4698 flags.go:64] FLAG: --reserved-cpus=""
Oct 14 09:56:58 crc kubenswrapper[4698]: I1014 09:56:58.769339 4698 flags.go:64] FLAG: --reserved-memory=""
Oct 14 09:56:58 crc kubenswrapper[4698]: I1014 09:56:58.769344 4698 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf"
Oct 14 09:56:58 crc kubenswrapper[4698]: I1014 09:56:58.769348 4698 flags.go:64] FLAG: --root-dir="/var/lib/kubelet"
Oct 14 09:56:58 crc kubenswrapper[4698]: I1014 09:56:58.769352 4698 flags.go:64] FLAG: --rotate-certificates="false"
Oct 14 09:56:58 crc kubenswrapper[4698]: I1014 09:56:58.769356 4698 flags.go:64] FLAG: --rotate-server-certificates="false"
Oct 14 09:56:58 crc kubenswrapper[4698]: I1014 09:56:58.769360 4698 flags.go:64] FLAG: --runonce="false"
Oct 14 09:56:58 crc kubenswrapper[4698]: I1014 09:56:58.769364 4698 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service"
Oct 14 09:56:58 crc kubenswrapper[4698]: I1014 09:56:58.769369 4698 flags.go:64] FLAG: --runtime-request-timeout="2m0s"
Oct 14 09:56:58 crc kubenswrapper[4698]: I1014 09:56:58.769373 4698 flags.go:64] FLAG: --seccomp-default="false"
Oct 14 09:56:58 crc kubenswrapper[4698]: I1014 09:56:58.769376 4698 flags.go:64] FLAG: --serialize-image-pulls="true"
Oct 14 09:56:58 crc kubenswrapper[4698]: I1014 09:56:58.769380 4698 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s"
Oct 14 09:56:58 crc kubenswrapper[4698]: I1014 09:56:58.769384 4698 flags.go:64] FLAG: --storage-driver-db="cadvisor"
Oct 14 09:56:58 crc kubenswrapper[4698]: I1014 09:56:58.769389 4698 flags.go:64] FLAG: --storage-driver-host="localhost:8086"
Oct 14 09:56:58 crc kubenswrapper[4698]: I1014 09:56:58.769395 4698 flags.go:64] FLAG: --storage-driver-password="root"
Oct 14 09:56:58 crc kubenswrapper[4698]: I1014 09:56:58.769399 4698 flags.go:64] FLAG: --storage-driver-secure="false"
Oct 14 09:56:58 crc kubenswrapper[4698]: I1014 09:56:58.769403 4698 flags.go:64] FLAG: --storage-driver-table="stats"
Oct 14 09:56:58 crc kubenswrapper[4698]: I1014 09:56:58.769407 4698 flags.go:64] FLAG: --storage-driver-user="root"
Oct 14 09:56:58 crc kubenswrapper[4698]: I1014 09:56:58.769411 4698 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s"
Oct 14 09:56:58 crc kubenswrapper[4698]: I1014 09:56:58.769415 4698 flags.go:64] FLAG: --sync-frequency="1m0s"
Oct 14 09:56:58 crc kubenswrapper[4698]: I1014 09:56:58.769419 4698 flags.go:64] FLAG: --system-cgroups=""
Oct 14 09:56:58 crc kubenswrapper[4698]: I1014 09:56:58.769423 4698 flags.go:64] FLAG: --system-reserved="cpu=200m,ephemeral-storage=350Mi,memory=350Mi"
Oct 14 09:56:58 crc kubenswrapper[4698]: I1014 09:56:58.769429 4698 flags.go:64] FLAG: --system-reserved-cgroup=""
Oct 14 09:56:58 crc kubenswrapper[4698]: I1014 09:56:58.769432 4698 flags.go:64] FLAG: --tls-cert-file=""
Oct 14 09:56:58 crc kubenswrapper[4698]: I1014 09:56:58.769436 4698 flags.go:64] FLAG: --tls-cipher-suites="[]"
Oct 14 09:56:58 crc kubenswrapper[4698]: I1014 09:56:58.769441 4698 flags.go:64] FLAG: --tls-min-version=""
Oct 14 09:56:58 crc kubenswrapper[4698]: I1014 09:56:58.769445 4698 flags.go:64] FLAG: --tls-private-key-file=""
Oct 14 09:56:58 crc kubenswrapper[4698]: I1014 09:56:58.769449 4698 flags.go:64] FLAG: --topology-manager-policy="none"
Oct 14 09:56:58 crc kubenswrapper[4698]: I1014 09:56:58.769453 4698 flags.go:64] FLAG: --topology-manager-policy-options=""
Oct 14 09:56:58 crc kubenswrapper[4698]: I1014 09:56:58.769457 4698 flags.go:64] FLAG: --topology-manager-scope="container"
Oct 14 09:56:58 crc kubenswrapper[4698]: I1014 09:56:58.769461 4698 flags.go:64] FLAG: --v="2"
Oct 14 09:56:58 crc kubenswrapper[4698]: I1014 09:56:58.769466 4698 flags.go:64] FLAG: --version="false"
Oct 14 09:56:58 crc kubenswrapper[4698]: I1014 09:56:58.769471 4698 flags.go:64] FLAG: --vmodule=""
Oct 14 09:56:58 crc kubenswrapper[4698]: I1014 09:56:58.769476 4698 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec"
Oct 14 09:56:58 crc kubenswrapper[4698]: I1014 09:56:58.769481 4698 flags.go:64] FLAG: --volume-stats-agg-period="1m0s"
Oct 14 09:56:58 crc kubenswrapper[4698]: W1014 09:56:58.769569 4698 feature_gate.go:330] unrecognized feature gate: NewOLM
Oct 14 09:56:58 crc kubenswrapper[4698]: W1014 09:56:58.769574 4698 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Oct 14 09:56:58 crc kubenswrapper[4698]: W1014 09:56:58.769578 4698 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Oct 14 09:56:58 crc kubenswrapper[4698]: W1014 09:56:58.769581 4698 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Oct 14 09:56:58 crc kubenswrapper[4698]: W1014 09:56:58.769586 4698 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Oct 14 09:56:58 crc kubenswrapper[4698]: W1014 09:56:58.769590 4698 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Oct 14 09:56:58 crc kubenswrapper[4698]: W1014 09:56:58.769594 4698 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Oct 14 09:56:58 crc kubenswrapper[4698]: W1014 09:56:58.769598 4698 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Oct 14 09:56:58 crc kubenswrapper[4698]: W1014 09:56:58.769601 4698 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Oct 14 09:56:58 crc kubenswrapper[4698]: W1014 09:56:58.769605 4698 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Oct 14 09:56:58 crc kubenswrapper[4698]: W1014 09:56:58.769611 4698 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Oct 14 09:56:58 crc kubenswrapper[4698]: W1014 09:56:58.769617 4698 feature_gate.go:330] unrecognized feature gate: OVNObservability
Oct 14 09:56:58 crc kubenswrapper[4698]: W1014 09:56:58.769621 4698 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Oct 14 09:56:58 crc kubenswrapper[4698]: W1014 09:56:58.769625 4698 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Oct 14 09:56:58 crc kubenswrapper[4698]: W1014 09:56:58.769628 4698 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Oct 14 09:56:58 crc kubenswrapper[4698]: W1014 09:56:58.769632 4698 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Oct 14 09:56:58 crc kubenswrapper[4698]: W1014 09:56:58.769635 4698 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Oct 14 09:56:58 crc kubenswrapper[4698]: W1014 09:56:58.769639 4698 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Oct 14 09:56:58 crc kubenswrapper[4698]: W1014 09:56:58.769642 4698 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Oct 14 09:56:58 crc kubenswrapper[4698]: W1014 09:56:58.769645 4698 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Oct 14 09:56:58 crc kubenswrapper[4698]: W1014 09:56:58.769649 4698 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Oct 14 09:56:58 crc kubenswrapper[4698]: W1014 09:56:58.769652 4698 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Oct 14 09:56:58 crc kubenswrapper[4698]: W1014 09:56:58.769656 4698 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Oct 14 09:56:58 crc kubenswrapper[4698]: W1014 09:56:58.769659 4698 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Oct 14 09:56:58 crc kubenswrapper[4698]: W1014 09:56:58.769662 4698 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Oct 14 09:56:58 crc kubenswrapper[4698]: W1014 09:56:58.769666 4698 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Oct 14 09:56:58 crc kubenswrapper[4698]: W1014 09:56:58.769669 4698 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Oct 14 09:56:58 crc kubenswrapper[4698]: W1014 09:56:58.769674 4698 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Oct 14 09:56:58 crc kubenswrapper[4698]: W1014 09:56:58.769678 4698 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Oct 14 09:56:58 crc kubenswrapper[4698]: W1014 09:56:58.769682 4698 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Oct 14 09:56:58 crc kubenswrapper[4698]: W1014 09:56:58.769686 4698 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Oct 14 09:56:58 crc kubenswrapper[4698]: W1014 09:56:58.769689 4698 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Oct 14 09:56:58 crc kubenswrapper[4698]: W1014 09:56:58.769694 4698 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Oct 14 09:56:58 crc kubenswrapper[4698]: W1014 09:56:58.769698 4698 feature_gate.go:330] unrecognized feature gate: Example
Oct 14 09:56:58 crc kubenswrapper[4698]: W1014 09:56:58.769702 4698 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Oct 14 09:56:58 crc kubenswrapper[4698]: W1014 09:56:58.769706 4698 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Oct 14 09:56:58 crc kubenswrapper[4698]: W1014 09:56:58.769710 4698 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Oct 14 09:56:58 crc kubenswrapper[4698]: W1014 09:56:58.769714 4698 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Oct 14 09:56:58 crc kubenswrapper[4698]: W1014 09:56:58.769717 4698 feature_gate.go:330] unrecognized feature gate: SignatureStores
Oct 14 09:56:58 crc kubenswrapper[4698]: W1014 09:56:58.769721 4698 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Oct 14 09:56:58 crc kubenswrapper[4698]: W1014 09:56:58.769725 4698 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Oct 14 09:56:58 crc kubenswrapper[4698]: W1014 09:56:58.769728 4698 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Oct 14 09:56:58 crc kubenswrapper[4698]: W1014 09:56:58.769734 4698 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Oct 14 09:56:58 crc kubenswrapper[4698]: W1014 09:56:58.769740 4698 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Oct 14 09:56:58 crc kubenswrapper[4698]: W1014 09:56:58.769744 4698 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Oct 14 09:56:58 crc kubenswrapper[4698]: W1014 09:56:58.769747 4698 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Oct 14 09:56:58 crc kubenswrapper[4698]: W1014 09:56:58.769752 4698 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Oct 14 09:56:58 crc kubenswrapper[4698]: W1014 09:56:58.769756 4698 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Oct 14 09:56:58 crc kubenswrapper[4698]: W1014 09:56:58.769774 4698 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Oct 14 09:56:58 crc kubenswrapper[4698]: W1014 09:56:58.769779 4698 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Oct 14 09:56:58 crc kubenswrapper[4698]: W1014 09:56:58.769785 4698 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Oct 14 09:56:58 crc kubenswrapper[4698]: W1014 09:56:58.769790 4698 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Oct 14 09:56:58 crc kubenswrapper[4698]: W1014 09:56:58.769794 4698 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Oct 14 09:56:58 crc kubenswrapper[4698]: W1014 09:56:58.769800 4698 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Oct 14 09:56:58 crc kubenswrapper[4698]: W1014 09:56:58.769805 4698 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Oct 14 09:56:58 crc kubenswrapper[4698]: W1014 09:56:58.769809 4698 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Oct 14 09:56:58 crc kubenswrapper[4698]: W1014 09:56:58.769813 4698 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Oct 14 09:56:58 crc kubenswrapper[4698]: W1014 09:56:58.769817 4698 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Oct 14 09:56:58 crc kubenswrapper[4698]: W1014 09:56:58.769822 4698 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Oct 14 09:56:58 crc kubenswrapper[4698]: W1014 09:56:58.769826 4698 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Oct 14 09:56:58 crc kubenswrapper[4698]: W1014 09:56:58.769830 4698 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Oct 14 09:56:58 crc kubenswrapper[4698]: W1014 09:56:58.769834 4698 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Oct 14 09:56:58 crc kubenswrapper[4698]: W1014 09:56:58.769838 4698 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Oct 14 09:56:58 crc kubenswrapper[4698]: W1014 09:56:58.769842 4698 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Oct 14 09:56:58 crc kubenswrapper[4698]: W1014 09:56:58.769846 4698 feature_gate.go:330] unrecognized feature gate: PinnedImages
Oct 14 09:56:58 crc kubenswrapper[4698]: W1014 09:56:58.769850 4698 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Oct 14 09:56:58 crc kubenswrapper[4698]: W1014 09:56:58.769854 4698 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Oct 14 09:56:58 crc kubenswrapper[4698]: W1014 09:56:58.769859 4698 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Oct 14 09:56:58 crc kubenswrapper[4698]: W1014 09:56:58.769863 4698 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Oct 14 09:56:58 crc kubenswrapper[4698]: W1014 09:56:58.769868 4698 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Oct 14 09:56:58 crc kubenswrapper[4698]: W1014 09:56:58.769872 4698 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Oct 14 09:56:58 crc kubenswrapper[4698]: I1014 09:56:58.769879 4698 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Oct 14 09:56:58 crc kubenswrapper[4698]: I1014 09:56:58.780481 4698 server.go:491] "Kubelet version" kubeletVersion="v1.31.5"
Oct 14 09:56:58 crc kubenswrapper[4698]: I1014 09:56:58.780522 4698 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Oct 14 09:56:58 crc kubenswrapper[4698]: W1014 09:56:58.780608 4698 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Oct 14 09:56:58 crc kubenswrapper[4698]: W1014 09:56:58.780618 4698 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Oct 14 09:56:58 crc kubenswrapper[4698]: W1014 09:56:58.780624 4698 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Oct 14 09:56:58 crc kubenswrapper[4698]: W1014 09:56:58.780630 4698 feature_gate.go:330] unrecognized feature gate: NewOLM
Oct 14 09:56:58 crc kubenswrapper[4698]: W1014 09:56:58.780637 4698 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Oct 14 09:56:58 crc kubenswrapper[4698]: W1014 09:56:58.780643 4698 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Oct 14 09:56:58 crc kubenswrapper[4698]: W1014 09:56:58.780649 4698 feature_gate.go:330] unrecognized feature gate: OVNObservability
Oct 14 09:56:58 crc kubenswrapper[4698]: W1014 09:56:58.780655 4698 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Oct 14 09:56:58 crc kubenswrapper[4698]: W1014 09:56:58.780660 4698 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Oct 14 09:56:58 crc kubenswrapper[4698]: W1014 09:56:58.780666 4698 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Oct 14 09:56:58 crc kubenswrapper[4698]: W1014 09:56:58.780672 4698 feature_gate.go:330] unrecognized feature gate: PinnedImages
Oct 14 09:56:58 crc kubenswrapper[4698]: W1014 09:56:58.780677 4698 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Oct 14 09:56:58 crc kubenswrapper[4698]: W1014 09:56:58.780683 4698 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Oct 14 09:56:58 crc kubenswrapper[4698]: W1014 09:56:58.780689 4698 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Oct 14 09:56:58 crc kubenswrapper[4698]: W1014 09:56:58.780694 4698 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Oct 14 09:56:58 crc kubenswrapper[4698]: W1014 09:56:58.780699 4698 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Oct 14 09:56:58 crc kubenswrapper[4698]: W1014 09:56:58.780704 4698 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Oct 14 09:56:58 crc kubenswrapper[4698]: W1014 09:56:58.780709 4698 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Oct 14 09:56:58 crc kubenswrapper[4698]: W1014 09:56:58.780714 4698 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Oct 14 09:56:58 crc kubenswrapper[4698]: W1014 09:56:58.780719 4698 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Oct 14 09:56:58 crc kubenswrapper[4698]: W1014 09:56:58.780725 4698 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Oct 14 09:56:58 crc kubenswrapper[4698]: W1014 09:56:58.780730 4698 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Oct 14 09:56:58 crc kubenswrapper[4698]: W1014 09:56:58.780736 4698 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Oct 14 09:56:58 crc kubenswrapper[4698]: W1014 09:56:58.780741 4698 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Oct 14 09:56:58 crc kubenswrapper[4698]: W1014 09:56:58.780746 4698 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Oct 14 09:56:58 crc kubenswrapper[4698]: W1014 09:56:58.780751 4698 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Oct 14 09:56:58 crc kubenswrapper[4698]: W1014 09:56:58.780756 4698 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Oct 14 09:56:58 crc kubenswrapper[4698]: W1014 09:56:58.780776 4698 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Oct 14
09:56:58 crc kubenswrapper[4698]: W1014 09:56:58.780782 4698 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Oct 14 09:56:58 crc kubenswrapper[4698]: W1014 09:56:58.780788 4698 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Oct 14 09:56:58 crc kubenswrapper[4698]: W1014 09:56:58.780796 4698 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Oct 14 09:56:58 crc kubenswrapper[4698]: W1014 09:56:58.780805 4698 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Oct 14 09:56:58 crc kubenswrapper[4698]: W1014 09:56:58.780811 4698 feature_gate.go:330] unrecognized feature gate: Example Oct 14 09:56:58 crc kubenswrapper[4698]: W1014 09:56:58.780817 4698 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Oct 14 09:56:58 crc kubenswrapper[4698]: W1014 09:56:58.780822 4698 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Oct 14 09:56:58 crc kubenswrapper[4698]: W1014 09:56:58.780828 4698 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Oct 14 09:56:58 crc kubenswrapper[4698]: W1014 09:56:58.780833 4698 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Oct 14 09:56:58 crc kubenswrapper[4698]: W1014 09:56:58.780838 4698 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Oct 14 09:56:58 crc kubenswrapper[4698]: W1014 09:56:58.780843 4698 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Oct 14 09:56:58 crc kubenswrapper[4698]: W1014 09:56:58.780848 4698 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Oct 14 09:56:58 crc kubenswrapper[4698]: W1014 09:56:58.780853 4698 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Oct 14 09:56:58 crc kubenswrapper[4698]: W1014 09:56:58.780858 4698 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Oct 
14 09:56:58 crc kubenswrapper[4698]: W1014 09:56:58.780863 4698 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Oct 14 09:56:58 crc kubenswrapper[4698]: W1014 09:56:58.780868 4698 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Oct 14 09:56:58 crc kubenswrapper[4698]: W1014 09:56:58.780873 4698 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Oct 14 09:56:58 crc kubenswrapper[4698]: W1014 09:56:58.780878 4698 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Oct 14 09:56:58 crc kubenswrapper[4698]: W1014 09:56:58.780884 4698 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Oct 14 09:56:58 crc kubenswrapper[4698]: W1014 09:56:58.780889 4698 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Oct 14 09:56:58 crc kubenswrapper[4698]: W1014 09:56:58.780895 4698 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Oct 14 09:56:58 crc kubenswrapper[4698]: W1014 09:56:58.780900 4698 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Oct 14 09:56:58 crc kubenswrapper[4698]: W1014 09:56:58.780905 4698 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Oct 14 09:56:58 crc kubenswrapper[4698]: W1014 09:56:58.780911 4698 feature_gate.go:330] unrecognized feature gate: PlatformOperators Oct 14 09:56:58 crc kubenswrapper[4698]: W1014 09:56:58.780916 4698 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Oct 14 09:56:58 crc kubenswrapper[4698]: W1014 09:56:58.780921 4698 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Oct 14 09:56:58 crc kubenswrapper[4698]: W1014 09:56:58.780928 4698 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
Oct 14 09:56:58 crc kubenswrapper[4698]: W1014 09:56:58.780934 4698 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Oct 14 09:56:58 crc kubenswrapper[4698]: W1014 09:56:58.780939 4698 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Oct 14 09:56:58 crc kubenswrapper[4698]: W1014 09:56:58.780945 4698 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Oct 14 09:56:58 crc kubenswrapper[4698]: W1014 09:56:58.780951 4698 feature_gate.go:330] unrecognized feature gate: SignatureStores
Oct 14 09:56:58 crc kubenswrapper[4698]: W1014 09:56:58.780956 4698 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Oct 14 09:56:58 crc kubenswrapper[4698]: W1014 09:56:58.780961 4698 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Oct 14 09:56:58 crc kubenswrapper[4698]: W1014 09:56:58.780967 4698 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Oct 14 09:56:58 crc kubenswrapper[4698]: W1014 09:56:58.780975 4698 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Oct 14 09:56:58 crc kubenswrapper[4698]: W1014 09:56:58.780981 4698 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Oct 14 09:56:58 crc kubenswrapper[4698]: W1014 09:56:58.780987 4698 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Oct 14 09:56:58 crc kubenswrapper[4698]: W1014 09:56:58.780993 4698 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Oct 14 09:56:58 crc kubenswrapper[4698]: W1014 09:56:58.781000 4698 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Oct 14 09:56:58 crc kubenswrapper[4698]: W1014 09:56:58.781006 4698 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Oct 14 09:56:58 crc kubenswrapper[4698]: W1014 09:56:58.781012 4698 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Oct 14 09:56:58 crc kubenswrapper[4698]: W1014 09:56:58.781018 4698 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Oct 14 09:56:58 crc kubenswrapper[4698]: W1014 09:56:58.781023 4698 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Oct 14 09:56:58 crc kubenswrapper[4698]: I1014 09:56:58.781032 4698 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Oct 14 09:56:58 crc kubenswrapper[4698]: W1014 09:56:58.781180 4698 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Oct 14 09:56:58 crc kubenswrapper[4698]: W1014 09:56:58.781188 4698 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Oct 14 09:56:58 crc kubenswrapper[4698]: W1014 09:56:58.781194 4698 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Oct 14 09:56:58 crc kubenswrapper[4698]: W1014 09:56:58.781200 4698 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Oct 14 09:56:58 crc kubenswrapper[4698]: W1014 09:56:58.781205 4698 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Oct 14 09:56:58 crc kubenswrapper[4698]: W1014 09:56:58.781212 4698 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Oct 14 09:56:58 crc kubenswrapper[4698]: W1014 09:56:58.781219 4698 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Oct 14 09:56:58 crc kubenswrapper[4698]: W1014 09:56:58.781225 4698 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Oct 14 09:56:58 crc kubenswrapper[4698]: W1014 09:56:58.781230 4698 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Oct 14 09:56:58 crc kubenswrapper[4698]: W1014 09:56:58.781235 4698 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Oct 14 09:56:58 crc kubenswrapper[4698]: W1014 09:56:58.781240 4698 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Oct 14 09:56:58 crc kubenswrapper[4698]: W1014 09:56:58.781245 4698 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Oct 14 09:56:58 crc kubenswrapper[4698]: W1014 09:56:58.781251 4698 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Oct 14 09:56:58 crc kubenswrapper[4698]: W1014 09:56:58.781256 4698 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Oct 14 09:56:58 crc kubenswrapper[4698]: W1014 09:56:58.781261 4698 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Oct 14 09:56:58 crc kubenswrapper[4698]: W1014 09:56:58.781267 4698 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Oct 14 09:56:58 crc kubenswrapper[4698]: W1014 09:56:58.781272 4698 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Oct 14 09:56:58 crc kubenswrapper[4698]: W1014 09:56:58.781278 4698 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Oct 14 09:56:58 crc kubenswrapper[4698]: W1014 09:56:58.781283 4698 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Oct 14 09:56:58 crc kubenswrapper[4698]: W1014 09:56:58.781289 4698 feature_gate.go:330] unrecognized feature gate: OVNObservability
Oct 14 09:56:58 crc kubenswrapper[4698]: W1014 09:56:58.781294 4698 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Oct 14 09:56:58 crc kubenswrapper[4698]: W1014 09:56:58.781299 4698 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Oct 14 09:56:58 crc kubenswrapper[4698]: W1014 09:56:58.781305 4698 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Oct 14 09:56:58 crc kubenswrapper[4698]: W1014 09:56:58.781310 4698 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Oct 14 09:56:58 crc kubenswrapper[4698]: W1014 09:56:58.781315 4698 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Oct 14 09:56:58 crc kubenswrapper[4698]: W1014 09:56:58.781320 4698 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Oct 14 09:56:58 crc kubenswrapper[4698]: W1014 09:56:58.781326 4698 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Oct 14 09:56:58 crc kubenswrapper[4698]: W1014 09:56:58.781330 4698 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Oct 14 09:56:58 crc kubenswrapper[4698]: W1014 09:56:58.781335 4698 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Oct 14 09:56:58 crc kubenswrapper[4698]: W1014 09:56:58.781340 4698 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Oct 14 09:56:58 crc kubenswrapper[4698]: W1014 09:56:58.781346 4698 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Oct 14 09:56:58 crc kubenswrapper[4698]: W1014 09:56:58.781351 4698 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Oct 14 09:56:58 crc kubenswrapper[4698]: W1014 09:56:58.781356 4698 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Oct 14 09:56:58 crc kubenswrapper[4698]: W1014 09:56:58.781361 4698 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Oct 14 09:56:58 crc kubenswrapper[4698]: W1014 09:56:58.781366 4698 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Oct 14 09:56:58 crc kubenswrapper[4698]: W1014 09:56:58.781371 4698 feature_gate.go:330] unrecognized feature gate: SignatureStores
Oct 14 09:56:58 crc kubenswrapper[4698]: W1014 09:56:58.781376 4698 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Oct 14 09:56:58 crc kubenswrapper[4698]: W1014 09:56:58.781381 4698 feature_gate.go:330] unrecognized feature gate: NewOLM
Oct 14 09:56:58 crc kubenswrapper[4698]: W1014 09:56:58.781386 4698 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Oct 14 09:56:58 crc kubenswrapper[4698]: W1014 09:56:58.781391 4698 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Oct 14 09:56:58 crc kubenswrapper[4698]: W1014 09:56:58.781396 4698 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Oct 14 09:56:58 crc kubenswrapper[4698]: W1014 09:56:58.781401 4698 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Oct 14 09:56:58 crc kubenswrapper[4698]: W1014 09:56:58.781407 4698 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Oct 14 09:56:58 crc kubenswrapper[4698]: W1014 09:56:58.781413 4698 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Oct 14 09:56:58 crc kubenswrapper[4698]: W1014 09:56:58.781419 4698 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Oct 14 09:56:58 crc kubenswrapper[4698]: W1014 09:56:58.781425 4698 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Oct 14 09:56:58 crc kubenswrapper[4698]: W1014 09:56:58.781430 4698 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Oct 14 09:56:58 crc kubenswrapper[4698]: W1014 09:56:58.781436 4698 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Oct 14 09:56:58 crc kubenswrapper[4698]: W1014 09:56:58.781442 4698 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Oct 14 09:56:58 crc kubenswrapper[4698]: W1014 09:56:58.781447 4698 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Oct 14 09:56:58 crc kubenswrapper[4698]: W1014 09:56:58.781453 4698 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Oct 14 09:56:58 crc kubenswrapper[4698]: W1014 09:56:58.781459 4698 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Oct 14 09:56:58 crc kubenswrapper[4698]: W1014 09:56:58.781466 4698 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Oct 14 09:56:58 crc kubenswrapper[4698]: W1014 09:56:58.781471 4698 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Oct 14 09:56:58 crc kubenswrapper[4698]: W1014 09:56:58.781477 4698 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Oct 14 09:56:58 crc kubenswrapper[4698]: W1014 09:56:58.781482 4698 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Oct 14 09:56:58 crc kubenswrapper[4698]: W1014 09:56:58.781489 4698 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Oct 14 09:56:58 crc kubenswrapper[4698]: W1014 09:56:58.781494 4698 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Oct 14 09:56:58 crc kubenswrapper[4698]: W1014 09:56:58.781501 4698 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Oct 14 09:56:58 crc kubenswrapper[4698]: W1014 09:56:58.781506 4698 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Oct 14 09:56:58 crc kubenswrapper[4698]: W1014 09:56:58.781511 4698 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Oct 14 09:56:58 crc kubenswrapper[4698]: W1014 09:56:58.781517 4698 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Oct 14 09:56:58 crc kubenswrapper[4698]: W1014 09:56:58.781522 4698 feature_gate.go:330] unrecognized feature gate: Example
Oct 14 09:56:58 crc kubenswrapper[4698]: W1014 09:56:58.781527 4698 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Oct 14 09:56:58 crc kubenswrapper[4698]: W1014 09:56:58.781533 4698 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Oct 14 09:56:58 crc kubenswrapper[4698]: W1014 09:56:58.781538 4698 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Oct 14 09:56:58 crc kubenswrapper[4698]: W1014 09:56:58.781543 4698 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Oct 14 09:56:58 crc kubenswrapper[4698]: W1014 09:56:58.781549 4698 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Oct 14 09:56:58 crc kubenswrapper[4698]: W1014 09:56:58.781556 4698 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Oct 14 09:56:58 crc kubenswrapper[4698]: W1014 09:56:58.781561 4698 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Oct 14 09:56:58 crc kubenswrapper[4698]: W1014 09:56:58.781567 4698 feature_gate.go:330] unrecognized feature gate: PinnedImages
Oct 14 09:56:58 crc kubenswrapper[4698]: I1014 09:56:58.781575 4698 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Oct 14 09:56:58 crc kubenswrapper[4698]: I1014 09:56:58.782428 4698 server.go:940] "Client rotation is on, will bootstrap in background"
Oct 14 09:56:58 crc kubenswrapper[4698]: I1014 09:56:58.788366 4698 bootstrap.go:85] "Current kubeconfig file contents are still valid, no bootstrap necessary"
Oct 14 09:56:58 crc kubenswrapper[4698]: I1014 09:56:58.788468 4698 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Oct 14 09:56:58 crc kubenswrapper[4698]: I1014 09:56:58.791893 4698 server.go:997] "Starting client certificate rotation"
Oct 14 09:56:58 crc kubenswrapper[4698]: I1014 09:56:58.791913 4698 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled
Oct 14 09:56:58 crc kubenswrapper[4698]: I1014 09:56:58.793023 4698 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2026-02-24 05:52:08 +0000 UTC, rotation deadline is 2025-11-12 11:33:23.342509686 +0000 UTC
Oct 14 09:56:58 crc kubenswrapper[4698]: I1014 09:56:58.793174 4698 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 697h36m24.549344281s for next certificate rotation
Oct 14 09:56:58 crc kubenswrapper[4698]: I1014 09:56:58.823090 4698 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Oct 14 09:56:58 crc kubenswrapper[4698]: I1014 09:56:58.825935 4698 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Oct 14 09:56:58 crc kubenswrapper[4698]: I1014 09:56:58.848827 4698 log.go:25] "Validated CRI v1 runtime API"
Oct 14 09:56:58 crc kubenswrapper[4698]: I1014 09:56:58.888458 4698 log.go:25] "Validated CRI v1 image API"
Oct 14 09:56:58 crc kubenswrapper[4698]: I1014 09:56:58.892358 4698 server.go:1437] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Oct 14 09:56:58 crc kubenswrapper[4698]: I1014 09:56:58.898304 4698 fs.go:133] Filesystem UUIDs: map[0b076daa-c26a-46d2-b3a6-72a8dbc6e257:/dev/vda4 2025-10-14-09-52-40-00:/dev/sr0 7B77-95E7:/dev/vda2 de0497b0-db1b-465a-b278-03db02455c71:/dev/vda3]
Oct 14 09:56:58 crc kubenswrapper[4698]: I1014 09:56:58.898349 4698 fs.go:134] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/user/1000:{mountpoint:/run/user/1000 major:0 minor:42 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:30 fsType:tmpfs blockSize:0} /var/lib/etcd:{mountpoint:/var/lib/etcd major:0 minor:43 fsType:tmpfs blockSize:0}]
Oct 14 09:56:58 crc kubenswrapper[4698]: I1014 09:56:58.915901 4698 manager.go:217] Machine: {Timestamp:2025-10-14 09:56:58.913248297 +0000 UTC m=+0.610547743 CPUVendorID:AuthenticAMD NumCores:12 NumPhysicalCores:1 NumSockets:12 CpuFrequency:2800000 MemoryCapacity:33654128640 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:21801e6708c44f15b81395eb736a7cec SystemUUID:8e872109-adee-4b6d-91bf-d9ced28af93f BootID:ec98e803-8937-4fdb-8662-3488c6a305f2 Filesystems:[{Device:/tmp DeviceMajor:0 DeviceMinor:30 Capacity:16827064320 Type:vfs Inodes:1048576 HasInodes:true} {Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true} {Device:/run/user/1000 DeviceMajor:0 DeviceMinor:42 Capacity:3365412864 Type:vfs Inodes:821634 HasInodes:true} {Device:/var/lib/etcd DeviceMajor:0 DeviceMinor:43 Capacity:1073741824 Type:vfs Inodes:4108170 HasInodes:true} {Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:16827064320 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:6730825728 Type:vfs Inodes:819200 HasInodes:true} {Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:85292941312 Type:vfs Inodes:41679680 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:214748364800 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:fa:16:3e:1b:f7:9e Speed:0 Mtu:1500} {Name:br-int MacAddress:d6:39:55:2e:22:71 Speed:0 Mtu:1400} {Name:ens3 MacAddress:fa:16:3e:1b:f7:9e Speed:-1 Mtu:1500} {Name:ens7 MacAddress:fa:16:3e:76:d1:96 Speed:-1 Mtu:1500} {Name:ens7.20 MacAddress:52:54:00:98:39:69 Speed:-1 Mtu:1496} {Name:ens7.21 MacAddress:52:54:00:f6:8c:95 Speed:-1 Mtu:1496} {Name:ens7.22 MacAddress:52:54:00:a9:81:56 Speed:-1 Mtu:1496} {Name:eth10 MacAddress:7e:a5:92:b5:83:5a Speed:0 Mtu:1500} {Name:ovn-k8s-mp0 MacAddress:0a:58:0a:d9:00:02 Speed:0 Mtu:1400} {Name:ovs-system MacAddress:ae:b5:b8:a1:58:45 Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:33654128640 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[10] Caches:[{Id:10 Size:32768 Type:Data Level:1} {Id:10 Size:32768 Type:Instruction Level:1} {Id:10 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:10 Size:16777216 Type:Unified Level:3}] SocketID:10 BookID: DrawerID:} {Id:0 Threads:[11] Caches:[{Id:11 Size:32768 Type:Data Level:1} {Id:11 Size:32768 Type:Instruction Level:1} {Id:11 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:11 Size:16777216 Type:Unified Level:3}] SocketID:11 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:} {Id:0 Threads:[8] Caches:[{Id:8 Size:32768 Type:Data Level:1} {Id:8 Size:32768 Type:Instruction Level:1} {Id:8 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:8 Size:16777216 Type:Unified Level:3}] SocketID:8 BookID: DrawerID:} {Id:0 Threads:[9] Caches:[{Id:9 Size:32768 Type:Data Level:1} {Id:9 Size:32768 Type:Instruction Level:1} {Id:9 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:9 Size:16777216 Type:Unified Level:3}] SocketID:9 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None}
Oct 14 09:56:58 crc kubenswrapper[4698]: I1014 09:56:58.916234 4698 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available.
Oct 14 09:56:58 crc kubenswrapper[4698]: I1014 09:56:58.916354 4698 manager.go:233] Version: {KernelVersion:5.14.0-427.50.2.el9_4.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 418.94.202502100215-0 DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:}
Oct 14 09:56:58 crc kubenswrapper[4698]: I1014 09:56:58.918967 4698 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
Oct 14 09:56:58 crc kubenswrapper[4698]: I1014 09:56:58.919194 4698 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Oct 14 09:56:58 crc kubenswrapper[4698]: I1014 09:56:58.919232 4698 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"crc","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"200m","ephemeral-storage":"350Mi","memory":"350Mi"},"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Oct 14 09:56:58 crc kubenswrapper[4698]: I1014 09:56:58.919447 4698 topology_manager.go:138] "Creating topology manager with none policy"
Oct 14 09:56:58 crc kubenswrapper[4698]: I1014 09:56:58.919460 4698 container_manager_linux.go:303] "Creating device plugin manager"
Oct 14 09:56:58 crc kubenswrapper[4698]: I1014 09:56:58.919932 4698 manager.go:142] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock"
Oct 14 09:56:58 crc kubenswrapper[4698]: I1014 09:56:58.919970 4698 server.go:66] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock"
Oct 14 09:56:58 crc kubenswrapper[4698]: I1014 09:56:58.920125 4698 state_mem.go:36] "Initialized new in-memory state store"
Oct 14 09:56:58 crc kubenswrapper[4698]: I1014 09:56:58.920227 4698 server.go:1245] "Using root directory" path="/var/lib/kubelet"
Oct 14 09:56:58 crc kubenswrapper[4698]: I1014 09:56:58.925547 4698 kubelet.go:418] "Attempting to sync node with API server"
Oct 14 09:56:58 crc kubenswrapper[4698]: I1014 09:56:58.925571 4698 kubelet.go:313] "Adding static pod path" path="/etc/kubernetes/manifests"
Oct 14 09:56:58 crc kubenswrapper[4698]: I1014 09:56:58.925588 4698 file.go:69] "Watching path" path="/etc/kubernetes/manifests"
Oct 14 09:56:58 crc kubenswrapper[4698]: I1014 09:56:58.925606 4698 kubelet.go:324] "Adding apiserver pod source"
Oct 14 09:56:58 crc kubenswrapper[4698]: I1014 09:56:58.925621 4698 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Oct 14 09:56:58 crc kubenswrapper[4698]: I1014 09:56:58.931689 4698 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="cri-o" version="1.31.5-4.rhaos4.18.gitdad78d5.el9" apiVersion="v1"
Oct 14 09:56:58 crc kubenswrapper[4698]: W1014 09:56:58.933273 4698 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.188:6443: connect: connection refused
Oct 14 09:56:58 crc kubenswrapper[4698]: E1014 09:56:58.933417 4698 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.188:6443: connect: connection refused" logger="UnhandledError"
Oct 14 09:56:58 crc kubenswrapper[4698]: I1014 09:56:58.933523 4698 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-server-current.pem".
Oct 14 09:56:58 crc kubenswrapper[4698]: W1014 09:56:58.933594 4698 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.188:6443: connect: connection refused Oct 14 09:56:58 crc kubenswrapper[4698]: E1014 09:56:58.933808 4698 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.188:6443: connect: connection refused" logger="UnhandledError" Oct 14 09:56:58 crc kubenswrapper[4698]: I1014 09:56:58.935379 4698 kubelet.go:854] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Oct 14 09:56:58 crc kubenswrapper[4698]: I1014 09:56:58.937199 4698 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume" Oct 14 09:56:58 crc kubenswrapper[4698]: I1014 09:56:58.937243 4698 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir" Oct 14 09:56:58 crc kubenswrapper[4698]: I1014 09:56:58.937258 4698 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/git-repo" Oct 14 09:56:58 crc kubenswrapper[4698]: I1014 09:56:58.937273 4698 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/host-path" Oct 14 09:56:58 crc kubenswrapper[4698]: I1014 09:56:58.937295 4698 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/nfs" Oct 14 09:56:58 crc kubenswrapper[4698]: I1014 09:56:58.937308 4698 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/secret" Oct 14 09:56:58 crc kubenswrapper[4698]: I1014 09:56:58.937322 4698 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/iscsi" Oct 14 09:56:58 crc kubenswrapper[4698]: I1014 09:56:58.937344 4698 plugins.go:603] "Loaded volume plugin" 
pluginName="kubernetes.io/downward-api" Oct 14 09:56:58 crc kubenswrapper[4698]: I1014 09:56:58.937361 4698 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/fc" Oct 14 09:56:58 crc kubenswrapper[4698]: I1014 09:56:58.937375 4698 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/configmap" Oct 14 09:56:58 crc kubenswrapper[4698]: I1014 09:56:58.937403 4698 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/projected" Oct 14 09:56:58 crc kubenswrapper[4698]: I1014 09:56:58.937422 4698 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/local-volume" Oct 14 09:56:58 crc kubenswrapper[4698]: I1014 09:56:58.940213 4698 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/csi" Oct 14 09:56:58 crc kubenswrapper[4698]: I1014 09:56:58.940868 4698 server.go:1280] "Started kubelet" Oct 14 09:56:58 crc kubenswrapper[4698]: I1014 09:56:58.941486 4698 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.188:6443: connect: connection refused Oct 14 09:56:58 crc kubenswrapper[4698]: I1014 09:56:58.941964 4698 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Oct 14 09:56:58 crc kubenswrapper[4698]: I1014 09:56:58.941992 4698 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Oct 14 09:56:58 crc systemd[1]: Started Kubernetes Kubelet. 
Oct 14 09:56:58 crc kubenswrapper[4698]: I1014 09:56:58.942589 4698 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate rotation is enabled Oct 14 09:56:58 crc kubenswrapper[4698]: I1014 09:56:58.942616 4698 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Oct 14 09:56:58 crc kubenswrapper[4698]: I1014 09:56:58.942829 4698 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-15 20:18:48.892600422 +0000 UTC Oct 14 09:56:58 crc kubenswrapper[4698]: I1014 09:56:58.942863 4698 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 778h21m49.949739739s for next certificate rotation Oct 14 09:56:58 crc kubenswrapper[4698]: E1014 09:56:58.943000 4698 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Oct 14 09:56:58 crc kubenswrapper[4698]: I1014 09:56:58.943060 4698 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Oct 14 09:56:58 crc kubenswrapper[4698]: I1014 09:56:58.943041 4698 volume_manager.go:287] "The desired_state_of_world populator starts" Oct 14 09:56:58 crc kubenswrapper[4698]: I1014 09:56:58.943097 4698 volume_manager.go:289] "Starting Kubelet Volume Manager" Oct 14 09:56:58 crc kubenswrapper[4698]: W1014 09:56:58.943658 4698 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.188:6443: connect: connection refused Oct 14 09:56:58 crc kubenswrapper[4698]: E1014 09:56:58.943728 4698 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.188:6443: connect: connection refused" 
logger="UnhandledError" Oct 14 09:56:58 crc kubenswrapper[4698]: E1014 09:56:58.943826 4698 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.188:6443: connect: connection refused" interval="200ms" Oct 14 09:56:58 crc kubenswrapper[4698]: I1014 09:56:58.943884 4698 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Oct 14 09:56:58 crc kubenswrapper[4698]: I1014 09:56:58.946335 4698 server.go:460] "Adding debug handlers to kubelet server" Oct 14 09:56:58 crc kubenswrapper[4698]: I1014 09:56:58.946578 4698 factory.go:55] Registering systemd factory Oct 14 09:56:58 crc kubenswrapper[4698]: I1014 09:56:58.946603 4698 factory.go:221] Registration of the systemd container factory successfully Oct 14 09:56:58 crc kubenswrapper[4698]: I1014 09:56:58.948917 4698 factory.go:153] Registering CRI-O factory Oct 14 09:56:58 crc kubenswrapper[4698]: I1014 09:56:58.948936 4698 factory.go:221] Registration of the crio container factory successfully Oct 14 09:56:58 crc kubenswrapper[4698]: I1014 09:56:58.948997 4698 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory Oct 14 09:56:58 crc kubenswrapper[4698]: I1014 09:56:58.949021 4698 factory.go:103] Registering Raw factory Oct 14 09:56:58 crc kubenswrapper[4698]: I1014 09:56:58.949033 4698 manager.go:1196] Started watching for new ooms in manager Oct 14 09:56:58 crc kubenswrapper[4698]: I1014 09:56:58.949460 4698 manager.go:319] Starting recovery of all containers Oct 14 09:56:58 crc kubenswrapper[4698]: E1014 09:56:58.946732 4698 event.go:368] "Unable to write event (may retry after sleeping)" err="Post 
\"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.188:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.186e530cd5d79e23 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-10-14 09:56:58.940833315 +0000 UTC m=+0.638132742,LastTimestamp:2025-10-14 09:56:58.940833315 +0000 UTC m=+0.638132742,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Oct 14 09:56:58 crc kubenswrapper[4698]: I1014 09:56:58.959185 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" seLinuxMountContext="" Oct 14 09:56:58 crc kubenswrapper[4698]: I1014 09:56:58.959266 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" seLinuxMountContext="" Oct 14 09:56:58 crc kubenswrapper[4698]: I1014 09:56:58.959285 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" seLinuxMountContext="" Oct 14 09:56:58 crc kubenswrapper[4698]: I1014 09:56:58.959303 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" seLinuxMountContext="" Oct 14 09:56:58 crc 
kubenswrapper[4698]: I1014 09:56:58.959319 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" seLinuxMountContext="" Oct 14 09:56:58 crc kubenswrapper[4698]: I1014 09:56:58.960958 4698 reconstruct.go:144] "Volume is marked device as uncertain and added into the actual state" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" deviceMountPath="/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount" Oct 14 09:56:58 crc kubenswrapper[4698]: I1014 09:56:58.961000 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" seLinuxMountContext="" Oct 14 09:56:58 crc kubenswrapper[4698]: I1014 09:56:58.961048 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" seLinuxMountContext="" Oct 14 09:56:58 crc kubenswrapper[4698]: I1014 09:56:58.961067 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" seLinuxMountContext="" Oct 14 09:56:58 crc kubenswrapper[4698]: I1014 09:56:58.961087 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" 
seLinuxMountContext="" Oct 14 09:56:58 crc kubenswrapper[4698]: I1014 09:56:58.961104 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf" seLinuxMountContext="" Oct 14 09:56:58 crc kubenswrapper[4698]: I1014 09:56:58.961156 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" seLinuxMountContext="" Oct 14 09:56:58 crc kubenswrapper[4698]: I1014 09:56:58.961174 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" seLinuxMountContext="" Oct 14 09:56:58 crc kubenswrapper[4698]: I1014 09:56:58.961192 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" seLinuxMountContext="" Oct 14 09:56:58 crc kubenswrapper[4698]: I1014 09:56:58.961213 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" seLinuxMountContext="" Oct 14 09:56:58 crc kubenswrapper[4698]: I1014 09:56:58.961230 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls" seLinuxMountContext="" Oct 14 09:56:58 crc kubenswrapper[4698]: I1014 09:56:58.961246 4698 
reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" seLinuxMountContext="" Oct 14 09:56:58 crc kubenswrapper[4698]: I1014 09:56:58.961265 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" seLinuxMountContext="" Oct 14 09:56:58 crc kubenswrapper[4698]: I1014 09:56:58.961320 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" seLinuxMountContext="" Oct 14 09:56:58 crc kubenswrapper[4698]: I1014 09:56:58.961338 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" seLinuxMountContext="" Oct 14 09:56:58 crc kubenswrapper[4698]: I1014 09:56:58.961355 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" seLinuxMountContext="" Oct 14 09:56:58 crc kubenswrapper[4698]: I1014 09:56:58.961370 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" seLinuxMountContext="" Oct 14 09:56:58 crc kubenswrapper[4698]: I1014 09:56:58.961404 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" seLinuxMountContext="" Oct 14 09:56:58 crc kubenswrapper[4698]: I1014 09:56:58.961420 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" seLinuxMountContext="" Oct 14 09:56:58 crc kubenswrapper[4698]: I1014 09:56:58.961436 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" seLinuxMountContext="" Oct 14 09:56:58 crc kubenswrapper[4698]: I1014 09:56:58.961454 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" seLinuxMountContext="" Oct 14 09:56:58 crc kubenswrapper[4698]: I1014 09:56:58.961470 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" seLinuxMountContext="" Oct 14 09:56:58 crc kubenswrapper[4698]: I1014 09:56:58.961491 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" seLinuxMountContext="" Oct 14 09:56:58 crc kubenswrapper[4698]: I1014 09:56:58.961510 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" 
volumeName="kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" seLinuxMountContext="" Oct 14 09:56:58 crc kubenswrapper[4698]: I1014 09:56:58.961526 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" seLinuxMountContext="" Oct 14 09:56:58 crc kubenswrapper[4698]: I1014 09:56:58.961541 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" seLinuxMountContext="" Oct 14 09:56:58 crc kubenswrapper[4698]: I1014 09:56:58.961555 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script" seLinuxMountContext="" Oct 14 09:56:58 crc kubenswrapper[4698]: I1014 09:56:58.961572 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" seLinuxMountContext="" Oct 14 09:56:58 crc kubenswrapper[4698]: I1014 09:56:58.961591 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" seLinuxMountContext="" Oct 14 09:56:58 crc kubenswrapper[4698]: I1014 09:56:58.961609 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" 
volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" seLinuxMountContext="" Oct 14 09:56:58 crc kubenswrapper[4698]: I1014 09:56:58.961624 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" seLinuxMountContext="" Oct 14 09:56:58 crc kubenswrapper[4698]: I1014 09:56:58.961639 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" seLinuxMountContext="" Oct 14 09:56:58 crc kubenswrapper[4698]: I1014 09:56:58.961658 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" seLinuxMountContext="" Oct 14 09:56:58 crc kubenswrapper[4698]: I1014 09:56:58.961675 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" seLinuxMountContext="" Oct 14 09:56:58 crc kubenswrapper[4698]: I1014 09:56:58.961690 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" seLinuxMountContext="" Oct 14 09:56:58 crc kubenswrapper[4698]: I1014 09:56:58.961705 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" 
volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" seLinuxMountContext="" Oct 14 09:56:58 crc kubenswrapper[4698]: I1014 09:56:58.961721 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" seLinuxMountContext="" Oct 14 09:56:58 crc kubenswrapper[4698]: I1014 09:56:58.961736 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" seLinuxMountContext="" Oct 14 09:56:58 crc kubenswrapper[4698]: I1014 09:56:58.961753 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm" seLinuxMountContext="" Oct 14 09:56:58 crc kubenswrapper[4698]: I1014 09:56:58.961789 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" seLinuxMountContext="" Oct 14 09:56:58 crc kubenswrapper[4698]: I1014 09:56:58.961809 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" seLinuxMountContext="" Oct 14 09:56:58 crc kubenswrapper[4698]: I1014 09:56:58.961825 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" 
volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" seLinuxMountContext="" Oct 14 09:56:58 crc kubenswrapper[4698]: I1014 09:56:58.961840 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" seLinuxMountContext="" Oct 14 09:56:58 crc kubenswrapper[4698]: I1014 09:56:58.961855 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" seLinuxMountContext="" Oct 14 09:56:58 crc kubenswrapper[4698]: I1014 09:56:58.961872 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" seLinuxMountContext="" Oct 14 09:56:58 crc kubenswrapper[4698]: I1014 09:56:58.961888 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert" seLinuxMountContext="" Oct 14 09:56:58 crc kubenswrapper[4698]: I1014 09:56:58.961905 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" seLinuxMountContext="" Oct 14 09:56:58 crc kubenswrapper[4698]: I1014 09:56:58.961919 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" 
volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" seLinuxMountContext="" Oct 14 09:56:58 crc kubenswrapper[4698]: I1014 09:56:58.961972 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" seLinuxMountContext="" Oct 14 09:56:58 crc kubenswrapper[4698]: I1014 09:56:58.961992 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" seLinuxMountContext="" Oct 14 09:56:58 crc kubenswrapper[4698]: I1014 09:56:58.962008 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" seLinuxMountContext="" Oct 14 09:56:58 crc kubenswrapper[4698]: I1014 09:56:58.962025 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" seLinuxMountContext="" Oct 14 09:56:58 crc kubenswrapper[4698]: I1014 09:56:58.962043 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" seLinuxMountContext="" Oct 14 09:56:58 crc kubenswrapper[4698]: I1014 09:56:58.962059 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" 
seLinuxMountContext="" Oct 14 09:56:58 crc kubenswrapper[4698]: I1014 09:56:58.962076 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert" seLinuxMountContext="" Oct 14 09:56:58 crc kubenswrapper[4698]: I1014 09:56:58.962092 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" seLinuxMountContext="" Oct 14 09:56:58 crc kubenswrapper[4698]: I1014 09:56:58.962108 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" seLinuxMountContext="" Oct 14 09:56:58 crc kubenswrapper[4698]: I1014 09:56:58.962123 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" seLinuxMountContext="" Oct 14 09:56:58 crc kubenswrapper[4698]: I1014 09:56:58.962140 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" seLinuxMountContext="" Oct 14 09:56:58 crc kubenswrapper[4698]: I1014 09:56:58.962158 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" seLinuxMountContext="" Oct 14 09:56:58 crc kubenswrapper[4698]: I1014 09:56:58.962176 4698 
reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" seLinuxMountContext="" Oct 14 09:56:58 crc kubenswrapper[4698]: I1014 09:56:58.962197 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf" seLinuxMountContext="" Oct 14 09:56:58 crc kubenswrapper[4698]: I1014 09:56:58.962217 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" seLinuxMountContext="" Oct 14 09:56:58 crc kubenswrapper[4698]: I1014 09:56:58.962233 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" seLinuxMountContext="" Oct 14 09:56:58 crc kubenswrapper[4698]: I1014 09:56:58.962249 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" volumeName="kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" seLinuxMountContext="" Oct 14 09:56:58 crc kubenswrapper[4698]: I1014 09:56:58.962267 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" seLinuxMountContext="" Oct 14 09:56:58 crc kubenswrapper[4698]: I1014 09:56:58.962284 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the 
actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" seLinuxMountContext="" Oct 14 09:56:58 crc kubenswrapper[4698]: I1014 09:56:58.962300 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" seLinuxMountContext="" Oct 14 09:56:58 crc kubenswrapper[4698]: I1014 09:56:58.962317 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" seLinuxMountContext="" Oct 14 09:56:58 crc kubenswrapper[4698]: I1014 09:56:58.962334 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" seLinuxMountContext="" Oct 14 09:56:58 crc kubenswrapper[4698]: I1014 09:56:58.962350 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" seLinuxMountContext="" Oct 14 09:56:58 crc kubenswrapper[4698]: I1014 09:56:58.962367 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" seLinuxMountContext="" Oct 14 09:56:58 crc kubenswrapper[4698]: I1014 09:56:58.962382 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" 
volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" seLinuxMountContext="" Oct 14 09:56:58 crc kubenswrapper[4698]: I1014 09:56:58.962398 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" seLinuxMountContext="" Oct 14 09:56:58 crc kubenswrapper[4698]: I1014 09:56:58.962415 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" seLinuxMountContext="" Oct 14 09:56:58 crc kubenswrapper[4698]: I1014 09:56:58.962431 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" seLinuxMountContext="" Oct 14 09:56:58 crc kubenswrapper[4698]: I1014 09:56:58.962448 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" seLinuxMountContext="" Oct 14 09:56:58 crc kubenswrapper[4698]: I1014 09:56:58.962464 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" seLinuxMountContext="" Oct 14 09:56:58 crc kubenswrapper[4698]: I1014 09:56:58.962478 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" 
seLinuxMountContext="" Oct 14 09:56:58 crc kubenswrapper[4698]: I1014 09:56:58.962494 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" seLinuxMountContext="" Oct 14 09:56:58 crc kubenswrapper[4698]: I1014 09:56:58.962509 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" seLinuxMountContext="" Oct 14 09:56:58 crc kubenswrapper[4698]: I1014 09:56:58.962523 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" seLinuxMountContext="" Oct 14 09:56:58 crc kubenswrapper[4698]: I1014 09:56:58.962538 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" seLinuxMountContext="" Oct 14 09:56:58 crc kubenswrapper[4698]: I1014 09:56:58.962553 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" seLinuxMountContext="" Oct 14 09:56:58 crc kubenswrapper[4698]: I1014 09:56:58.962571 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" seLinuxMountContext="" Oct 14 09:56:58 crc kubenswrapper[4698]: I1014 
09:56:58.962586 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" seLinuxMountContext="" Oct 14 09:56:58 crc kubenswrapper[4698]: I1014 09:56:58.962602 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" seLinuxMountContext="" Oct 14 09:56:58 crc kubenswrapper[4698]: I1014 09:56:58.962619 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" seLinuxMountContext="" Oct 14 09:56:58 crc kubenswrapper[4698]: I1014 09:56:58.962637 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" seLinuxMountContext="" Oct 14 09:56:58 crc kubenswrapper[4698]: I1014 09:56:58.962653 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" seLinuxMountContext="" Oct 14 09:56:58 crc kubenswrapper[4698]: I1014 09:56:58.962667 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" seLinuxMountContext="" Oct 14 09:56:58 crc kubenswrapper[4698]: I1014 09:56:58.962684 4698 reconstruct.go:130] "Volume is marked as uncertain and added into 
the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5" seLinuxMountContext="" Oct 14 09:56:58 crc kubenswrapper[4698]: I1014 09:56:58.962700 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" seLinuxMountContext="" Oct 14 09:56:58 crc kubenswrapper[4698]: I1014 09:56:58.962720 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" seLinuxMountContext="" Oct 14 09:56:58 crc kubenswrapper[4698]: I1014 09:56:58.962736 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" seLinuxMountContext="" Oct 14 09:56:58 crc kubenswrapper[4698]: I1014 09:56:58.962751 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" seLinuxMountContext="" Oct 14 09:56:58 crc kubenswrapper[4698]: I1014 09:56:58.962811 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" seLinuxMountContext="" Oct 14 09:56:58 crc kubenswrapper[4698]: I1014 09:56:58.962831 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" 
volumeName="kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" seLinuxMountContext="" Oct 14 09:56:58 crc kubenswrapper[4698]: I1014 09:56:58.962848 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" seLinuxMountContext="" Oct 14 09:56:58 crc kubenswrapper[4698]: I1014 09:56:58.962863 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" seLinuxMountContext="" Oct 14 09:56:58 crc kubenswrapper[4698]: I1014 09:56:58.962886 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" seLinuxMountContext="" Oct 14 09:56:58 crc kubenswrapper[4698]: I1014 09:56:58.962904 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49ef4625-1d3a-4a9f-b595-c2433d32326d" volumeName="kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" seLinuxMountContext="" Oct 14 09:56:58 crc kubenswrapper[4698]: I1014 09:56:58.962922 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" seLinuxMountContext="" Oct 14 09:56:58 crc kubenswrapper[4698]: I1014 09:56:58.962942 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" 
volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" seLinuxMountContext="" Oct 14 09:56:58 crc kubenswrapper[4698]: I1014 09:56:58.962990 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" seLinuxMountContext="" Oct 14 09:56:58 crc kubenswrapper[4698]: I1014 09:56:58.963010 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" seLinuxMountContext="" Oct 14 09:56:58 crc kubenswrapper[4698]: I1014 09:56:58.963029 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" seLinuxMountContext="" Oct 14 09:56:58 crc kubenswrapper[4698]: I1014 09:56:58.963047 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" seLinuxMountContext="" Oct 14 09:56:58 crc kubenswrapper[4698]: I1014 09:56:58.963066 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" seLinuxMountContext="" Oct 14 09:56:58 crc kubenswrapper[4698]: I1014 09:56:58.963084 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" 
seLinuxMountContext="" Oct 14 09:56:58 crc kubenswrapper[4698]: I1014 09:56:58.963134 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" seLinuxMountContext="" Oct 14 09:56:58 crc kubenswrapper[4698]: I1014 09:56:58.963151 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" seLinuxMountContext="" Oct 14 09:56:58 crc kubenswrapper[4698]: I1014 09:56:58.963169 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" seLinuxMountContext="" Oct 14 09:56:58 crc kubenswrapper[4698]: I1014 09:56:58.963189 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" seLinuxMountContext="" Oct 14 09:56:58 crc kubenswrapper[4698]: I1014 09:56:58.963204 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" seLinuxMountContext="" Oct 14 09:56:58 crc kubenswrapper[4698]: I1014 09:56:58.963221 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" seLinuxMountContext="" Oct 14 09:56:58 crc kubenswrapper[4698]: I1014 
09:56:58.963237 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" seLinuxMountContext="" Oct 14 09:56:58 crc kubenswrapper[4698]: I1014 09:56:58.963254 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d751cbb-f2e2-430d-9754-c882a5e924a5" volumeName="kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl" seLinuxMountContext="" Oct 14 09:56:58 crc kubenswrapper[4698]: I1014 09:56:58.963268 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="44663579-783b-4372-86d6-acf235a62d72" volumeName="kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" seLinuxMountContext="" Oct 14 09:56:58 crc kubenswrapper[4698]: I1014 09:56:58.963283 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" seLinuxMountContext="" Oct 14 09:56:58 crc kubenswrapper[4698]: I1014 09:56:58.963299 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" seLinuxMountContext="" Oct 14 09:56:58 crc kubenswrapper[4698]: I1014 09:56:58.963315 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" seLinuxMountContext="" Oct 14 09:56:58 crc kubenswrapper[4698]: I1014 09:56:58.963330 4698 reconstruct.go:130] "Volume is 
marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" seLinuxMountContext="" Oct 14 09:56:58 crc kubenswrapper[4698]: I1014 09:56:58.963345 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" seLinuxMountContext="" Oct 14 09:56:58 crc kubenswrapper[4698]: I1014 09:56:58.963363 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" seLinuxMountContext="" Oct 14 09:56:58 crc kubenswrapper[4698]: I1014 09:56:58.963382 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" seLinuxMountContext="" Oct 14 09:56:58 crc kubenswrapper[4698]: I1014 09:56:58.963398 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" seLinuxMountContext="" Oct 14 09:56:58 crc kubenswrapper[4698]: I1014 09:56:58.963414 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" seLinuxMountContext="" Oct 14 09:56:58 crc kubenswrapper[4698]: I1014 09:56:58.963429 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" seLinuxMountContext="" Oct 14 09:56:58 crc kubenswrapper[4698]: I1014 09:56:58.963446 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" seLinuxMountContext="" Oct 14 09:56:58 crc kubenswrapper[4698]: I1014 09:56:58.963460 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" seLinuxMountContext="" Oct 14 09:56:58 crc kubenswrapper[4698]: I1014 09:56:58.963476 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" seLinuxMountContext="" Oct 14 09:56:58 crc kubenswrapper[4698]: I1014 09:56:58.963494 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" seLinuxMountContext="" Oct 14 09:56:58 crc kubenswrapper[4698]: I1014 09:56:58.963509 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" seLinuxMountContext="" Oct 14 09:56:58 crc kubenswrapper[4698]: I1014 09:56:58.963525 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" 
volumeName="kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" seLinuxMountContext="" Oct 14 09:56:58 crc kubenswrapper[4698]: I1014 09:56:58.963541 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" seLinuxMountContext="" Oct 14 09:56:58 crc kubenswrapper[4698]: I1014 09:56:58.963557 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" seLinuxMountContext="" Oct 14 09:56:58 crc kubenswrapper[4698]: I1014 09:56:58.963571 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" seLinuxMountContext="" Oct 14 09:56:58 crc kubenswrapper[4698]: I1014 09:56:58.963586 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" seLinuxMountContext="" Oct 14 09:56:58 crc kubenswrapper[4698]: I1014 09:56:58.963601 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" seLinuxMountContext="" Oct 14 09:56:58 crc kubenswrapper[4698]: I1014 09:56:58.963618 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" 
volumeName="kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" seLinuxMountContext="" Oct 14 09:56:58 crc kubenswrapper[4698]: I1014 09:56:58.963634 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" seLinuxMountContext="" Oct 14 09:56:58 crc kubenswrapper[4698]: I1014 09:56:58.963649 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" seLinuxMountContext="" Oct 14 09:56:58 crc kubenswrapper[4698]: I1014 09:56:58.963665 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" seLinuxMountContext="" Oct 14 09:56:58 crc kubenswrapper[4698]: I1014 09:56:58.963679 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" seLinuxMountContext="" Oct 14 09:56:58 crc kubenswrapper[4698]: I1014 09:56:58.963694 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" seLinuxMountContext="" Oct 14 09:56:58 crc kubenswrapper[4698]: I1014 09:56:58.963708 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" 
volumeName="kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" seLinuxMountContext="" Oct 14 09:56:58 crc kubenswrapper[4698]: I1014 09:56:58.963723 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" seLinuxMountContext="" Oct 14 09:56:58 crc kubenswrapper[4698]: I1014 09:56:58.963739 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" seLinuxMountContext="" Oct 14 09:56:58 crc kubenswrapper[4698]: I1014 09:56:58.963753 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" seLinuxMountContext="" Oct 14 09:56:58 crc kubenswrapper[4698]: I1014 09:56:58.963789 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" seLinuxMountContext="" Oct 14 09:56:58 crc kubenswrapper[4698]: I1014 09:56:58.963808 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" seLinuxMountContext="" Oct 14 09:56:58 crc kubenswrapper[4698]: I1014 09:56:58.963825 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" 
volumeName="kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" seLinuxMountContext="" Oct 14 09:56:58 crc kubenswrapper[4698]: I1014 09:56:58.963841 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" seLinuxMountContext="" Oct 14 09:56:58 crc kubenswrapper[4698]: I1014 09:56:58.963857 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" seLinuxMountContext="" Oct 14 09:56:58 crc kubenswrapper[4698]: I1014 09:56:58.963873 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" seLinuxMountContext="" Oct 14 09:56:58 crc kubenswrapper[4698]: I1014 09:56:58.963891 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" seLinuxMountContext="" Oct 14 09:56:58 crc kubenswrapper[4698]: I1014 09:56:58.963908 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" seLinuxMountContext="" Oct 14 09:56:58 crc kubenswrapper[4698]: I1014 09:56:58.963924 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" 
volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" seLinuxMountContext=""
Oct 14 09:56:58 crc kubenswrapper[4698]: I1014 09:56:58.963940 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" seLinuxMountContext=""
Oct 14 09:56:58 crc kubenswrapper[4698]: I1014 09:56:58.963956 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" seLinuxMountContext=""
Oct 14 09:56:58 crc kubenswrapper[4698]: I1014 09:56:58.963972 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" seLinuxMountContext=""
Oct 14 09:56:58 crc kubenswrapper[4698]: I1014 09:56:58.963989 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" seLinuxMountContext=""
Oct 14 09:56:58 crc kubenswrapper[4698]: I1014 09:56:58.964005 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" seLinuxMountContext=""
Oct 14 09:56:58 crc kubenswrapper[4698]: I1014 09:56:58.964022 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" seLinuxMountContext=""
Oct 14 09:56:58 crc kubenswrapper[4698]: I1014 09:56:58.964038 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" seLinuxMountContext=""
Oct 14 09:56:58 crc kubenswrapper[4698]: I1014 09:56:58.964054 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" seLinuxMountContext=""
Oct 14 09:56:58 crc kubenswrapper[4698]: I1014 09:56:58.964072 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" seLinuxMountContext=""
Oct 14 09:56:58 crc kubenswrapper[4698]: I1014 09:56:58.964087 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" seLinuxMountContext=""
Oct 14 09:56:58 crc kubenswrapper[4698]: I1014 09:56:58.964103 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" seLinuxMountContext=""
Oct 14 09:56:58 crc kubenswrapper[4698]: I1014 09:56:58.964118 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" seLinuxMountContext=""
Oct 14 09:56:58 crc kubenswrapper[4698]: I1014 09:56:58.964133 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" seLinuxMountContext=""
Oct 14 09:56:58 crc kubenswrapper[4698]: I1014 09:56:58.964151 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" seLinuxMountContext=""
Oct 14 09:56:58 crc kubenswrapper[4698]: I1014 09:56:58.964168 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" seLinuxMountContext=""
Oct 14 09:56:58 crc kubenswrapper[4698]: I1014 09:56:58.964183 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" seLinuxMountContext=""
Oct 14 09:56:58 crc kubenswrapper[4698]: I1014 09:56:58.964201 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" seLinuxMountContext=""
Oct 14 09:56:58 crc kubenswrapper[4698]: I1014 09:56:58.964218 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides" seLinuxMountContext=""
Oct 14 09:56:58 crc kubenswrapper[4698]: I1014 09:56:58.964235 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" seLinuxMountContext=""
Oct 14 09:56:58 crc kubenswrapper[4698]: I1014 09:56:58.964251 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" seLinuxMountContext=""
Oct 14 09:56:58 crc kubenswrapper[4698]: I1014 09:56:58.964268 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3b6479f0-333b-4a96-9adf-2099afdc2447" volumeName="kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr" seLinuxMountContext=""
Oct 14 09:56:58 crc kubenswrapper[4698]: I1014 09:56:58.964285 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" seLinuxMountContext=""
Oct 14 09:56:58 crc kubenswrapper[4698]: I1014 09:56:58.964301 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" seLinuxMountContext=""
Oct 14 09:56:58 crc kubenswrapper[4698]: I1014 09:56:58.964317 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" seLinuxMountContext=""
Oct 14 09:56:58 crc kubenswrapper[4698]: I1014 09:56:58.964334 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" seLinuxMountContext=""
Oct 14 09:56:58 crc kubenswrapper[4698]: I1014 09:56:58.964350 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" seLinuxMountContext=""
Oct 14 09:56:58 crc kubenswrapper[4698]: I1014 09:56:58.964365 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" seLinuxMountContext=""
Oct 14 09:56:58 crc kubenswrapper[4698]: I1014 09:56:58.964381 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" volumeName="kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" seLinuxMountContext=""
Oct 14 09:56:58 crc kubenswrapper[4698]: I1014 09:56:58.964397 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" seLinuxMountContext=""
Oct 14 09:56:58 crc kubenswrapper[4698]: I1014 09:56:58.964416 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" seLinuxMountContext=""
Oct 14 09:56:58 crc kubenswrapper[4698]: I1014 09:56:58.964440 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" seLinuxMountContext=""
Oct 14 09:56:58 crc kubenswrapper[4698]: I1014 09:56:58.964454 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" seLinuxMountContext=""
Oct 14 09:56:58 crc kubenswrapper[4698]: I1014 09:56:58.964472 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" seLinuxMountContext=""
Oct 14 09:56:58 crc kubenswrapper[4698]: I1014 09:56:58.964488 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" seLinuxMountContext=""
Oct 14 09:56:58 crc kubenswrapper[4698]: I1014 09:56:58.964505 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" seLinuxMountContext=""
Oct 14 09:56:58 crc kubenswrapper[4698]: I1014 09:56:58.964530 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" seLinuxMountContext=""
Oct 14 09:56:58 crc kubenswrapper[4698]: I1014 09:56:58.964547 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb" seLinuxMountContext=""
Oct 14 09:56:58 crc kubenswrapper[4698]: I1014 09:56:58.964563 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" seLinuxMountContext=""
Oct 14 09:56:58 crc kubenswrapper[4698]: I1014 09:56:58.964578 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" seLinuxMountContext=""
Oct 14 09:56:58 crc kubenswrapper[4698]: I1014 09:56:58.964593 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" seLinuxMountContext=""
Oct 14 09:56:58 crc kubenswrapper[4698]: I1014 09:56:58.964607 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" seLinuxMountContext=""
Oct 14 09:56:58 crc kubenswrapper[4698]: I1014 09:56:58.964623 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" seLinuxMountContext=""
Oct 14 09:56:58 crc kubenswrapper[4698]: I1014 09:56:58.964638 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" seLinuxMountContext=""
Oct 14 09:56:58 crc kubenswrapper[4698]: I1014 09:56:58.964654 4698 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" seLinuxMountContext=""
Oct 14 09:56:58 crc kubenswrapper[4698]: I1014 09:56:58.964671 4698 reconstruct.go:97] "Volume reconstruction finished"
Oct 14 09:56:58 crc kubenswrapper[4698]: I1014 09:56:58.964682 4698 reconciler.go:26] "Reconciler: start to sync state"
Oct 14 09:56:58 crc kubenswrapper[4698]: I1014 09:56:58.973087 4698 manager.go:324] Recovery completed
Oct 14 09:56:58 crc kubenswrapper[4698]: I1014 09:56:58.985398 4698 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Oct 14 09:56:58 crc kubenswrapper[4698]: I1014 09:56:58.987562 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 14 09:56:58 crc kubenswrapper[4698]: I1014 09:56:58.987601 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 14 09:56:58 crc kubenswrapper[4698]: I1014 09:56:58.987615 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 14 09:56:58 crc kubenswrapper[4698]: I1014 09:56:58.989007 4698 cpu_manager.go:225] "Starting CPU manager" policy="none"
Oct 14 09:56:58 crc kubenswrapper[4698]: I1014 09:56:58.989119 4698 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s"
Oct 14 09:56:58 crc kubenswrapper[4698]: I1014 09:56:58.989203 4698 state_mem.go:36] "Initialized new in-memory state store"
Oct 14 09:56:59 crc kubenswrapper[4698]: I1014 09:56:59.001838 4698 policy_none.go:49] "None policy: Start"
Oct 14 09:56:59 crc kubenswrapper[4698]: I1014 09:56:59.002925 4698 memory_manager.go:170] "Starting memorymanager" policy="None"
Oct 14 09:56:59 crc kubenswrapper[4698]: I1014 09:56:59.002953 4698 state_mem.go:35] "Initializing new in-memory state store"
Oct 14 09:56:59 crc kubenswrapper[4698]: I1014 09:56:59.012867 4698 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Oct 14 09:56:59 crc kubenswrapper[4698]: I1014 09:56:59.015380 4698 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Oct 14 09:56:59 crc kubenswrapper[4698]: I1014 09:56:59.015514 4698 status_manager.go:217] "Starting to sync pod status with apiserver"
Oct 14 09:56:59 crc kubenswrapper[4698]: I1014 09:56:59.015623 4698 kubelet.go:2335] "Starting kubelet main sync loop"
Oct 14 09:56:59 crc kubenswrapper[4698]: E1014 09:56:59.015728 4698 kubelet.go:2359] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Oct 14 09:56:59 crc kubenswrapper[4698]: W1014 09:56:59.017534 4698 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.188:6443: connect: connection refused
Oct 14 09:56:59 crc kubenswrapper[4698]: E1014 09:56:59.017609 4698 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.188:6443: connect: connection refused" logger="UnhandledError"
Oct 14 09:56:59 crc kubenswrapper[4698]: E1014 09:56:59.043867 4698 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Oct 14 09:56:59 crc kubenswrapper[4698]: I1014 09:56:59.067413 4698 manager.go:334] "Starting Device Plugin manager"
Oct 14 09:56:59 crc kubenswrapper[4698]: I1014 09:56:59.067692 4698 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Oct 14 09:56:59 crc kubenswrapper[4698]: I1014 09:56:59.067710 4698 server.go:79] "Starting device plugin registration server"
Oct 14 09:56:59 crc kubenswrapper[4698]: I1014 09:56:59.068207 4698 eviction_manager.go:189] "Eviction manager: starting control loop"
Oct 14 09:56:59 crc kubenswrapper[4698]: I1014 09:56:59.068245 4698 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Oct 14 09:56:59 crc kubenswrapper[4698]: I1014 09:56:59.068488 4698 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry"
Oct 14 09:56:59 crc kubenswrapper[4698]: I1014 09:56:59.068618 4698 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts"
Oct 14 09:56:59 crc kubenswrapper[4698]: I1014 09:56:59.068627 4698 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Oct 14 09:56:59 crc kubenswrapper[4698]: E1014 09:56:59.076445 4698 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found"
Oct 14 09:56:59 crc kubenswrapper[4698]: I1014 09:56:59.116796 4698 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc","openshift-etcd/etcd-crc","openshift-kube-apiserver/kube-apiserver-crc","openshift-kube-controller-manager/kube-controller-manager-crc","openshift-kube-scheduler/openshift-kube-scheduler-crc"]
Oct 14 09:56:59 crc kubenswrapper[4698]: I1014 09:56:59.116898 4698 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Oct 14 09:56:59 crc kubenswrapper[4698]: I1014 09:56:59.118156 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 14 09:56:59 crc kubenswrapper[4698]: I1014 09:56:59.118185 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 14 09:56:59 crc kubenswrapper[4698]: I1014 09:56:59.118195 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 14 09:56:59 crc kubenswrapper[4698]: I1014 09:56:59.118712 4698 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Oct 14 09:56:59 crc kubenswrapper[4698]: I1014 09:56:59.119060 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Oct 14 09:56:59 crc kubenswrapper[4698]: I1014 09:56:59.119145 4698 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Oct 14 09:56:59 crc kubenswrapper[4698]: I1014 09:56:59.120250 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 14 09:56:59 crc kubenswrapper[4698]: I1014 09:56:59.120287 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 14 09:56:59 crc kubenswrapper[4698]: I1014 09:56:59.120303 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 14 09:56:59 crc kubenswrapper[4698]: I1014 09:56:59.120540 4698 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Oct 14 09:56:59 crc kubenswrapper[4698]: I1014 09:56:59.120696 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc"
Oct 14 09:56:59 crc kubenswrapper[4698]: I1014 09:56:59.120748 4698 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Oct 14 09:56:59 crc kubenswrapper[4698]: I1014 09:56:59.120889 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 14 09:56:59 crc kubenswrapper[4698]: I1014 09:56:59.120913 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 14 09:56:59 crc kubenswrapper[4698]: I1014 09:56:59.120930 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 14 09:56:59 crc kubenswrapper[4698]: I1014 09:56:59.121975 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 14 09:56:59 crc kubenswrapper[4698]: I1014 09:56:59.121991 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 14 09:56:59 crc kubenswrapper[4698]: I1014 09:56:59.122024 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 14 09:56:59 crc kubenswrapper[4698]: I1014 09:56:59.122043 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 14 09:56:59 crc kubenswrapper[4698]: I1014 09:56:59.122088 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 14 09:56:59 crc kubenswrapper[4698]: I1014 09:56:59.122142 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 14 09:56:59 crc kubenswrapper[4698]: I1014 09:56:59.122325 4698 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Oct 14 09:56:59 crc kubenswrapper[4698]: I1014 09:56:59.122487 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc"
Oct 14 09:56:59 crc kubenswrapper[4698]: I1014 09:56:59.122536 4698 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Oct 14 09:56:59 crc kubenswrapper[4698]: I1014 09:56:59.123935 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 14 09:56:59 crc kubenswrapper[4698]: I1014 09:56:59.123983 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 14 09:56:59 crc kubenswrapper[4698]: I1014 09:56:59.123996 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 14 09:56:59 crc kubenswrapper[4698]: I1014 09:56:59.124156 4698 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Oct 14 09:56:59 crc kubenswrapper[4698]: I1014 09:56:59.124165 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 14 09:56:59 crc kubenswrapper[4698]: I1014 09:56:59.124221 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 14 09:56:59 crc kubenswrapper[4698]: I1014 09:56:59.124244 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 14 09:56:59 crc kubenswrapper[4698]: I1014 09:56:59.124247 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Oct 14 09:56:59 crc kubenswrapper[4698]: I1014 09:56:59.124292 4698 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Oct 14 09:56:59 crc kubenswrapper[4698]: I1014 09:56:59.125747 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 14 09:56:59 crc kubenswrapper[4698]: I1014 09:56:59.125787 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 14 09:56:59 crc kubenswrapper[4698]: I1014 09:56:59.125801 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 14 09:56:59 crc kubenswrapper[4698]: I1014 09:56:59.125804 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 14 09:56:59 crc kubenswrapper[4698]: I1014 09:56:59.125840 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 14 09:56:59 crc kubenswrapper[4698]: I1014 09:56:59.125898 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 14 09:56:59 crc kubenswrapper[4698]: I1014 09:56:59.126141 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Oct 14 09:56:59 crc kubenswrapper[4698]: I1014 09:56:59.126206 4698 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Oct 14 09:56:59 crc kubenswrapper[4698]: I1014 09:56:59.127190 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 14 09:56:59 crc kubenswrapper[4698]: I1014 09:56:59.127219 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 14 09:56:59 crc kubenswrapper[4698]: I1014 09:56:59.127231 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 14 09:56:59 crc kubenswrapper[4698]: E1014 09:56:59.144527 4698 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.188:6443: connect: connection refused" interval="400ms"
Oct 14 09:56:59 crc kubenswrapper[4698]: I1014 09:56:59.167881 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Oct 14 09:56:59 crc kubenswrapper[4698]: I1014 09:56:59.168153 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Oct 14 09:56:59 crc kubenswrapper[4698]: I1014 09:56:59.168173 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Oct 14 09:56:59 crc kubenswrapper[4698]: I1014 09:56:59.168195 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Oct 14 09:56:59 crc kubenswrapper[4698]: I1014 09:56:59.168215 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Oct 14 09:56:59 crc kubenswrapper[4698]: I1014 09:56:59.168235 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Oct 14 09:56:59 crc kubenswrapper[4698]: I1014 09:56:59.168256 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Oct 14 09:56:59 crc kubenswrapper[4698]: I1014 09:56:59.168273 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Oct 14 09:56:59 crc kubenswrapper[4698]: I1014 09:56:59.168290 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Oct 14 09:56:59 crc kubenswrapper[4698]: I1014 09:56:59.168309 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Oct 14 09:56:59 crc kubenswrapper[4698]: I1014 09:56:59.168329 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Oct 14 09:56:59 crc kubenswrapper[4698]: I1014 09:56:59.168346 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Oct 14 09:56:59 crc kubenswrapper[4698]: I1014 09:56:59.168362 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Oct 14 09:56:59 crc kubenswrapper[4698]: I1014 09:56:59.168381 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Oct 14 09:56:59 crc kubenswrapper[4698]: I1014 09:56:59.168399 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Oct 14 09:56:59 crc kubenswrapper[4698]: I1014 09:56:59.168503 4698 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Oct 14 09:56:59 crc kubenswrapper[4698]: I1014 09:56:59.169395 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 14 09:56:59 crc kubenswrapper[4698]: I1014 09:56:59.169424 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 14 09:56:59 crc kubenswrapper[4698]: I1014 09:56:59.169435 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 14 09:56:59 crc kubenswrapper[4698]: I1014 09:56:59.169459 4698 kubelet_node_status.go:76] "Attempting to register node" node="crc"
Oct 14 09:56:59 crc kubenswrapper[4698]: E1014 09:56:59.169690 4698 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.188:6443: connect: connection refused" node="crc"
Oct 14 09:56:59 crc kubenswrapper[4698]: I1014 09:56:59.269608 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Oct 14 09:56:59 crc kubenswrapper[4698]: I1014 09:56:59.269644 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Oct 14 09:56:59 crc kubenswrapper[4698]: I1014 09:56:59.269667 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Oct 14 09:56:59 crc kubenswrapper[4698]: I1014 09:56:59.269687 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Oct 14 09:56:59 crc kubenswrapper[4698]: I1014 09:56:59.269706 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Oct 14 09:56:59 crc kubenswrapper[4698]: I1014 09:56:59.269756 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Oct 14 09:56:59 crc kubenswrapper[4698]: I1014 09:56:59.269759 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Oct 14 09:56:59 crc kubenswrapper[4698]: I1014 09:56:59.269842 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Oct 14 09:56:59 crc kubenswrapper[4698]: I1014 09:56:59.269854 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Oct 14 09:56:59 crc kubenswrapper[4698]: I1014 09:56:59.269890 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Oct 14 09:56:59 crc kubenswrapper[4698]: I1014 09:56:59.269889 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Oct 14 09:56:59 crc kubenswrapper[4698]: I1014 09:56:59.269875 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Oct 14 09:56:59 crc kubenswrapper[4698]: I1014 09:56:59.269815 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Oct 14 09:56:59 crc kubenswrapper[4698]: I1014 09:56:59.269938 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Oct 14 09:56:59 crc kubenswrapper[4698]: I1014 09:56:59.269940 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Oct 14 09:56:59 crc kubenswrapper[4698]: I1014 09:56:59.269956 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Oct 14 09:56:59 crc kubenswrapper[4698]: I1014 09:56:59.269977 4698 reconciler_common.go:218]
"operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Oct 14 09:56:59 crc kubenswrapper[4698]: I1014 09:56:59.269983 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Oct 14 09:56:59 crc kubenswrapper[4698]: I1014 09:56:59.270000 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Oct 14 09:56:59 crc kubenswrapper[4698]: I1014 09:56:59.270024 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Oct 14 09:56:59 crc kubenswrapper[4698]: I1014 09:56:59.270045 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Oct 14 09:56:59 crc kubenswrapper[4698]: I1014 09:56:59.270023 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " 
pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Oct 14 09:56:59 crc kubenswrapper[4698]: I1014 09:56:59.270046 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Oct 14 09:56:59 crc kubenswrapper[4698]: I1014 09:56:59.270069 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Oct 14 09:56:59 crc kubenswrapper[4698]: I1014 09:56:59.270044 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Oct 14 09:56:59 crc kubenswrapper[4698]: I1014 09:56:59.270107 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Oct 14 09:56:59 crc kubenswrapper[4698]: I1014 09:56:59.270121 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Oct 14 09:56:59 crc kubenswrapper[4698]: I1014 09:56:59.270066 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: 
\"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Oct 14 09:56:59 crc kubenswrapper[4698]: I1014 09:56:59.270173 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Oct 14 09:56:59 crc kubenswrapper[4698]: I1014 09:56:59.270298 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Oct 14 09:56:59 crc kubenswrapper[4698]: I1014 09:56:59.281107 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Oct 14 09:56:59 crc kubenswrapper[4698]: W1014 09:56:59.324611 4698 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3dcd261975c3d6b9a6ad6367fd4facd3.slice/crio-2fdd243e5671380059d191e640c76135746d29a987772065547da913f4f62ebe WatchSource:0}: Error finding container 2fdd243e5671380059d191e640c76135746d29a987772065547da913f4f62ebe: Status 404 returned error can't find the container with id 2fdd243e5671380059d191e640c76135746d29a987772065547da913f4f62ebe Oct 14 09:56:59 crc kubenswrapper[4698]: I1014 09:56:59.370687 4698 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 14 09:56:59 crc kubenswrapper[4698]: I1014 09:56:59.372891 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:56:59 crc kubenswrapper[4698]: I1014 09:56:59.372984 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:56:59 crc kubenswrapper[4698]: I1014 09:56:59.373009 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:56:59 crc kubenswrapper[4698]: I1014 09:56:59.373101 4698 kubelet_node_status.go:76] "Attempting to register node" node="crc" Oct 14 09:56:59 crc kubenswrapper[4698]: E1014 09:56:59.373944 4698 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.188:6443: connect: connection refused" node="crc" Oct 14 09:56:59 crc kubenswrapper[4698]: I1014 09:56:59.463701 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Oct 14 09:56:59 crc kubenswrapper[4698]: I1014 09:56:59.476291 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc" Oct 14 09:56:59 crc kubenswrapper[4698]: W1014 09:56:59.481185 4698 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd1b160f5dda77d281dd8e69ec8d817f9.slice/crio-ebe6f496e4495409632147db59468e77b9a1530c141c6a69ad5fcb35fff09b6a WatchSource:0}: Error finding container ebe6f496e4495409632147db59468e77b9a1530c141c6a69ad5fcb35fff09b6a: Status 404 returned error can't find the container with id ebe6f496e4495409632147db59468e77b9a1530c141c6a69ad5fcb35fff09b6a Oct 14 09:56:59 crc kubenswrapper[4698]: W1014 09:56:59.493135 4698 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2139d3e2895fc6797b9c76a1b4c9886d.slice/crio-a340160dc5e6326652683d3839909773d5f603f5a200d67924b074882fc09546 WatchSource:0}: Error finding container a340160dc5e6326652683d3839909773d5f603f5a200d67924b074882fc09546: Status 404 returned error can't find the container with id a340160dc5e6326652683d3839909773d5f603f5a200d67924b074882fc09546 Oct 14 09:56:59 crc kubenswrapper[4698]: I1014 09:56:59.528303 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Oct 14 09:56:59 crc kubenswrapper[4698]: W1014 09:56:59.540958 4698 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf4b27818a5e8e43d0dc095d08835c792.slice/crio-2e39e40a8afe751254d97e1ccc458d3e90daf0859dd6621e59816fcd8cc64bc7 WatchSource:0}: Error finding container 2e39e40a8afe751254d97e1ccc458d3e90daf0859dd6621e59816fcd8cc64bc7: Status 404 returned error can't find the container with id 2e39e40a8afe751254d97e1ccc458d3e90daf0859dd6621e59816fcd8cc64bc7 Oct 14 09:56:59 crc kubenswrapper[4698]: E1014 09:56:59.545302 4698 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.188:6443: connect: connection refused" interval="800ms" Oct 14 09:56:59 crc kubenswrapper[4698]: I1014 09:56:59.568325 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Oct 14 09:56:59 crc kubenswrapper[4698]: W1014 09:56:59.582125 4698 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf614b9022728cf315e60c057852e563e.slice/crio-6a72c993acb77da952fc7681ba4fbb4a23dd877c17d8b95ee56c2aabcece139a WatchSource:0}: Error finding container 6a72c993acb77da952fc7681ba4fbb4a23dd877c17d8b95ee56c2aabcece139a: Status 404 returned error can't find the container with id 6a72c993acb77da952fc7681ba4fbb4a23dd877c17d8b95ee56c2aabcece139a Oct 14 09:56:59 crc kubenswrapper[4698]: I1014 09:56:59.774296 4698 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 14 09:56:59 crc kubenswrapper[4698]: I1014 09:56:59.775966 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:56:59 crc kubenswrapper[4698]: I1014 09:56:59.776031 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:56:59 crc kubenswrapper[4698]: I1014 09:56:59.776057 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:56:59 crc kubenswrapper[4698]: I1014 09:56:59.776098 4698 kubelet_node_status.go:76] "Attempting to register node" node="crc" Oct 14 09:56:59 crc kubenswrapper[4698]: E1014 09:56:59.776580 4698 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.188:6443: connect: connection refused" node="crc" Oct 14 09:56:59 crc kubenswrapper[4698]: W1014 09:56:59.800115 4698 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 
38.102.83.188:6443: connect: connection refused Oct 14 09:56:59 crc kubenswrapper[4698]: E1014 09:56:59.800252 4698 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.188:6443: connect: connection refused" logger="UnhandledError" Oct 14 09:56:59 crc kubenswrapper[4698]: W1014 09:56:59.832600 4698 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.188:6443: connect: connection refused Oct 14 09:56:59 crc kubenswrapper[4698]: E1014 09:56:59.832700 4698 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.188:6443: connect: connection refused" logger="UnhandledError" Oct 14 09:56:59 crc kubenswrapper[4698]: W1014 09:56:59.922521 4698 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.188:6443: connect: connection refused Oct 14 09:56:59 crc kubenswrapper[4698]: E1014 09:56:59.922633 4698 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.188:6443: connect: connection refused" logger="UnhandledError" Oct 14 09:56:59 crc kubenswrapper[4698]: I1014 09:56:59.942389 4698 
csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.188:6443: connect: connection refused Oct 14 09:57:00 crc kubenswrapper[4698]: I1014 09:57:00.020972 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"e9a23f4949446d47321258b172e07f6c592cd9921650920f8af93a5d749a61dd"} Oct 14 09:57:00 crc kubenswrapper[4698]: I1014 09:57:00.021078 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"6a72c993acb77da952fc7681ba4fbb4a23dd877c17d8b95ee56c2aabcece139a"} Oct 14 09:57:00 crc kubenswrapper[4698]: I1014 09:57:00.022286 4698 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="cf7a250bbbc63f0ce78ae1c27f82643a02dc264c3b83555a4cc72a788eda52d5" exitCode=0 Oct 14 09:57:00 crc kubenswrapper[4698]: I1014 09:57:00.022336 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"cf7a250bbbc63f0ce78ae1c27f82643a02dc264c3b83555a4cc72a788eda52d5"} Oct 14 09:57:00 crc kubenswrapper[4698]: I1014 09:57:00.022357 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"2e39e40a8afe751254d97e1ccc458d3e90daf0859dd6621e59816fcd8cc64bc7"} Oct 14 09:57:00 crc kubenswrapper[4698]: I1014 09:57:00.022456 4698 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 14 09:57:00 crc kubenswrapper[4698]: I1014 
09:57:00.023431 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:57:00 crc kubenswrapper[4698]: I1014 09:57:00.023456 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:57:00 crc kubenswrapper[4698]: I1014 09:57:00.023468 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:57:00 crc kubenswrapper[4698]: I1014 09:57:00.024630 4698 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="404ded82352fdad25893ceb43ac9099d5539d7f216cb26dd648d765790891367" exitCode=0 Oct 14 09:57:00 crc kubenswrapper[4698]: I1014 09:57:00.024670 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"404ded82352fdad25893ceb43ac9099d5539d7f216cb26dd648d765790891367"} Oct 14 09:57:00 crc kubenswrapper[4698]: I1014 09:57:00.024683 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"a340160dc5e6326652683d3839909773d5f603f5a200d67924b074882fc09546"} Oct 14 09:57:00 crc kubenswrapper[4698]: I1014 09:57:00.024752 4698 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 14 09:57:00 crc kubenswrapper[4698]: I1014 09:57:00.025962 4698 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 14 09:57:00 crc kubenswrapper[4698]: I1014 09:57:00.027966 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:57:00 crc kubenswrapper[4698]: I1014 09:57:00.027976 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 
09:57:00 crc kubenswrapper[4698]: I1014 09:57:00.027985 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:57:00 crc kubenswrapper[4698]: I1014 09:57:00.027995 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:57:00 crc kubenswrapper[4698]: I1014 09:57:00.027999 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:57:00 crc kubenswrapper[4698]: I1014 09:57:00.028004 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:57:00 crc kubenswrapper[4698]: I1014 09:57:00.029807 4698 generic.go:334] "Generic (PLEG): container finished" podID="d1b160f5dda77d281dd8e69ec8d817f9" containerID="2827359bbda868834d7af92972382d98a93c2acfc73571418bf244abcd47c2c8" exitCode=0 Oct 14 09:57:00 crc kubenswrapper[4698]: I1014 09:57:00.029864 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerDied","Data":"2827359bbda868834d7af92972382d98a93c2acfc73571418bf244abcd47c2c8"} Oct 14 09:57:00 crc kubenswrapper[4698]: I1014 09:57:00.029888 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"ebe6f496e4495409632147db59468e77b9a1530c141c6a69ad5fcb35fff09b6a"} Oct 14 09:57:00 crc kubenswrapper[4698]: I1014 09:57:00.030011 4698 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 14 09:57:00 crc kubenswrapper[4698]: I1014 09:57:00.030887 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:57:00 crc kubenswrapper[4698]: I1014 09:57:00.030915 4698 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:57:00 crc kubenswrapper[4698]: I1014 09:57:00.030924 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:57:00 crc kubenswrapper[4698]: I1014 09:57:00.031212 4698 generic.go:334] "Generic (PLEG): container finished" podID="3dcd261975c3d6b9a6ad6367fd4facd3" containerID="2fceda73199ee36e7cfe0b0bc827b42f7cc77db6b2c987828ffb57cacd1b8a6b" exitCode=0 Oct 14 09:57:00 crc kubenswrapper[4698]: I1014 09:57:00.031237 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerDied","Data":"2fceda73199ee36e7cfe0b0bc827b42f7cc77db6b2c987828ffb57cacd1b8a6b"} Oct 14 09:57:00 crc kubenswrapper[4698]: I1014 09:57:00.031256 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"2fdd243e5671380059d191e640c76135746d29a987772065547da913f4f62ebe"} Oct 14 09:57:00 crc kubenswrapper[4698]: I1014 09:57:00.031329 4698 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 14 09:57:00 crc kubenswrapper[4698]: I1014 09:57:00.031949 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:57:00 crc kubenswrapper[4698]: I1014 09:57:00.031975 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:57:00 crc kubenswrapper[4698]: I1014 09:57:00.031984 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:57:00 crc kubenswrapper[4698]: W1014 09:57:00.059869 4698 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to 
list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.188:6443: connect: connection refused Oct 14 09:57:00 crc kubenswrapper[4698]: E1014 09:57:00.059977 4698 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.188:6443: connect: connection refused" logger="UnhandledError" Oct 14 09:57:00 crc kubenswrapper[4698]: E1014 09:57:00.346233 4698 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.188:6443: connect: connection refused" interval="1.6s" Oct 14 09:57:00 crc kubenswrapper[4698]: I1014 09:57:00.577124 4698 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 14 09:57:00 crc kubenswrapper[4698]: I1014 09:57:00.579801 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:57:00 crc kubenswrapper[4698]: I1014 09:57:00.579841 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:57:00 crc kubenswrapper[4698]: I1014 09:57:00.579853 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:57:00 crc kubenswrapper[4698]: I1014 09:57:00.579881 4698 kubelet_node_status.go:76] "Attempting to register node" node="crc" Oct 14 09:57:00 crc kubenswrapper[4698]: E1014 09:57:00.580299 4698 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.188:6443: connect: connection refused" node="crc" Oct 14 
09:57:00 crc kubenswrapper[4698]: I1014 09:57:00.942646 4698 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.188:6443: connect: connection refused Oct 14 09:57:01 crc kubenswrapper[4698]: I1014 09:57:01.041414 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"f2d12c9c114016d1302fa97515b0a59a82a4524b77e5d22cbec155beb6a52bce"} Oct 14 09:57:01 crc kubenswrapper[4698]: I1014 09:57:01.041459 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"d32efa8c63456d23b3824db4a4debb51fa4d13d1fd5eb6a488b9392d96073435"} Oct 14 09:57:01 crc kubenswrapper[4698]: I1014 09:57:01.041468 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"a78e7d6532fda47f50da010b729a9108961a3d5c79191517a86f00714ccb1163"} Oct 14 09:57:01 crc kubenswrapper[4698]: I1014 09:57:01.041476 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"0fcdc7699fee72d9dbf3ad7df185ac8c884125781080739e4245128af2e41040"} Oct 14 09:57:01 crc kubenswrapper[4698]: I1014 09:57:01.043541 4698 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="109096eccaa25562122783d0c9496ba173af1e5050308752a1153755f264abe0" exitCode=0 Oct 14 09:57:01 crc kubenswrapper[4698]: I1014 09:57:01.043598 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" 
event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"109096eccaa25562122783d0c9496ba173af1e5050308752a1153755f264abe0"}
Oct 14 09:57:01 crc kubenswrapper[4698]: I1014 09:57:01.043715 4698 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Oct 14 09:57:01 crc kubenswrapper[4698]: I1014 09:57:01.045025 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 14 09:57:01 crc kubenswrapper[4698]: I1014 09:57:01.045067 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 14 09:57:01 crc kubenswrapper[4698]: I1014 09:57:01.045079 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 14 09:57:01 crc kubenswrapper[4698]: I1014 09:57:01.046853 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"d7fbbcdbffb44a782f00ba7a7f2c25834a8c584a58be76e6452d90107abd977c"}
Oct 14 09:57:01 crc kubenswrapper[4698]: I1014 09:57:01.046965 4698 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Oct 14 09:57:01 crc kubenswrapper[4698]: I1014 09:57:01.047978 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 14 09:57:01 crc kubenswrapper[4698]: I1014 09:57:01.048025 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 14 09:57:01 crc kubenswrapper[4698]: I1014 09:57:01.048041 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 14 09:57:01 crc kubenswrapper[4698]: I1014 09:57:01.048964 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"57d68496a42ff5fb8359e4fd222a0e18e79e6ca885844d544d172e324325ee02"}
Oct 14 09:57:01 crc kubenswrapper[4698]: I1014 09:57:01.048989 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"9a8b1dd92b7f979c51da79a23ddffeb82d73f170ded78b0995eca65ea14abbba"}
Oct 14 09:57:01 crc kubenswrapper[4698]: I1014 09:57:01.048999 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"d39574739cd1bd3498e575ad5961a3f656110963427ec6a5862160167dad9f1d"}
Oct 14 09:57:01 crc kubenswrapper[4698]: I1014 09:57:01.049076 4698 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Oct 14 09:57:01 crc kubenswrapper[4698]: I1014 09:57:01.049823 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 14 09:57:01 crc kubenswrapper[4698]: I1014 09:57:01.049860 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 14 09:57:01 crc kubenswrapper[4698]: I1014 09:57:01.049873 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 14 09:57:01 crc kubenswrapper[4698]: I1014 09:57:01.052659 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"71c3764056ab55aa9f75e0bf62c0c7eed5440e9d580087ca9ce266611dd8c292"}
Oct 14 09:57:01 crc kubenswrapper[4698]: I1014 09:57:01.052684 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"1fc70d614848f0b4bc342659099f9c617db052ebb995716f9cdb1113f4f78901"}
Oct 14 09:57:01 crc kubenswrapper[4698]: I1014 09:57:01.052696 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"45beb2bbd344fbd265eca554410f31c493b55815bcac8dbb8c30c592b40dcfe2"}
Oct 14 09:57:01 crc kubenswrapper[4698]: I1014 09:57:01.052856 4698 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Oct 14 09:57:01 crc kubenswrapper[4698]: I1014 09:57:01.054626 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 14 09:57:01 crc kubenswrapper[4698]: I1014 09:57:01.054670 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 14 09:57:01 crc kubenswrapper[4698]: I1014 09:57:01.054696 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 14 09:57:02 crc kubenswrapper[4698]: I1014 09:57:02.059642 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"516fd9f00404acd0654586207f87d6ee2b5b9e1b655f7ba11582034da0295950"}
Oct 14 09:57:02 crc kubenswrapper[4698]: I1014 09:57:02.059744 4698 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Oct 14 09:57:02 crc kubenswrapper[4698]: I1014 09:57:02.060747 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 14 09:57:02 crc kubenswrapper[4698]: I1014 09:57:02.060889 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 14 09:57:02 crc kubenswrapper[4698]: I1014 09:57:02.060997 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 14 09:57:02 crc kubenswrapper[4698]: I1014 09:57:02.062587 4698 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="9416761c72574fe792091fc2055a53e5b2ac6c0718936ec4429cfff902ffa7c4" exitCode=0
Oct 14 09:57:02 crc kubenswrapper[4698]: I1014 09:57:02.062696 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"9416761c72574fe792091fc2055a53e5b2ac6c0718936ec4429cfff902ffa7c4"}
Oct 14 09:57:02 crc kubenswrapper[4698]: I1014 09:57:02.062959 4698 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Oct 14 09:57:02 crc kubenswrapper[4698]: I1014 09:57:02.063143 4698 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Oct 14 09:57:02 crc kubenswrapper[4698]: I1014 09:57:02.063171 4698 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Oct 14 09:57:02 crc kubenswrapper[4698]: I1014 09:57:02.064107 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 14 09:57:02 crc kubenswrapper[4698]: I1014 09:57:02.064164 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 14 09:57:02 crc kubenswrapper[4698]: I1014 09:57:02.064187 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 14 09:57:02 crc kubenswrapper[4698]: I1014 09:57:02.065067 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 14 09:57:02 crc kubenswrapper[4698]: I1014 09:57:02.065256 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 14 09:57:02 crc kubenswrapper[4698]: I1014 09:57:02.065256 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 14 09:57:02 crc kubenswrapper[4698]: I1014 09:57:02.065438 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 14 09:57:02 crc kubenswrapper[4698]: I1014 09:57:02.065463 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 14 09:57:02 crc kubenswrapper[4698]: I1014 09:57:02.065390 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 14 09:57:02 crc kubenswrapper[4698]: I1014 09:57:02.181247 4698 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Oct 14 09:57:02 crc kubenswrapper[4698]: I1014 09:57:02.182932 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 14 09:57:02 crc kubenswrapper[4698]: I1014 09:57:02.182968 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 14 09:57:02 crc kubenswrapper[4698]: I1014 09:57:02.182978 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 14 09:57:02 crc kubenswrapper[4698]: I1014 09:57:02.183002 4698 kubelet_node_status.go:76] "Attempting to register node" node="crc"
Oct 14 09:57:02 crc kubenswrapper[4698]: I1014 09:57:02.214648 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Oct 14 09:57:03 crc kubenswrapper[4698]: I1014 09:57:03.070719 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"bf4cc5e663b2603ef122c34fd9ef5407a2a9b394fa1d56ad1731f48f35a5fbcb"}
Oct 14 09:57:03 crc kubenswrapper[4698]: I1014 09:57:03.070803 4698 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Oct 14 09:57:03 crc kubenswrapper[4698]: I1014 09:57:03.070835 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"bcad14c11c691da3e128c018c0f56e25374cae878d0baa71cf05d0ecd6706202"}
Oct 14 09:57:03 crc kubenswrapper[4698]: I1014 09:57:03.070870 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"72b7df3096e2ed4b22b5ac1e71f3c073458fa9512b2b42842a445024f865ccb8"}
Oct 14 09:57:03 crc kubenswrapper[4698]: I1014 09:57:03.070787 4698 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Oct 14 09:57:03 crc kubenswrapper[4698]: I1014 09:57:03.070954 4698 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Oct 14 09:57:03 crc kubenswrapper[4698]: I1014 09:57:03.071809 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 14 09:57:03 crc kubenswrapper[4698]: I1014 09:57:03.071868 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 14 09:57:03 crc kubenswrapper[4698]: I1014 09:57:03.071887 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 14 09:57:03 crc kubenswrapper[4698]: I1014 09:57:03.072307 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 14 09:57:03 crc kubenswrapper[4698]: I1014 09:57:03.072344 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 14 09:57:03 crc kubenswrapper[4698]: I1014 09:57:03.072355 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 14 09:57:04 crc kubenswrapper[4698]: I1014 09:57:04.002842 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc"
Oct 14 09:57:04 crc kubenswrapper[4698]: I1014 09:57:04.015992 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc"
Oct 14 09:57:04 crc kubenswrapper[4698]: I1014 09:57:04.078748 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"6ee0d5ffc3cd2cd2a9861ccd00a2b976cd647c5f2afb286bb97043feb742959a"}
Oct 14 09:57:04 crc kubenswrapper[4698]: I1014 09:57:04.078860 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"5303a556c9bfa83d2709deef5346c0ff27c9ff2c2117220182e70603555e6dd7"}
Oct 14 09:57:04 crc kubenswrapper[4698]: I1014 09:57:04.078864 4698 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Oct 14 09:57:04 crc kubenswrapper[4698]: I1014 09:57:04.078951 4698 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Oct 14 09:57:04 crc kubenswrapper[4698]: I1014 09:57:04.078965 4698 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Oct 14 09:57:04 crc kubenswrapper[4698]: I1014 09:57:04.080507 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 14 09:57:04 crc kubenswrapper[4698]: I1014 09:57:04.080566 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 14 09:57:04 crc kubenswrapper[4698]: I1014 09:57:04.080568 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 14 09:57:04 crc kubenswrapper[4698]: I1014 09:57:04.080620 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 14 09:57:04 crc kubenswrapper[4698]: I1014 09:57:04.080661 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 14 09:57:04 crc kubenswrapper[4698]: I1014 09:57:04.080589 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 14 09:57:04 crc kubenswrapper[4698]: I1014 09:57:04.256043 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Oct 14 09:57:04 crc kubenswrapper[4698]: I1014 09:57:04.256259 4698 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Oct 14 09:57:04 crc kubenswrapper[4698]: I1014 09:57:04.258203 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 14 09:57:04 crc kubenswrapper[4698]: I1014 09:57:04.258261 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 14 09:57:04 crc kubenswrapper[4698]: I1014 09:57:04.258302 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 14 09:57:04 crc kubenswrapper[4698]: I1014 09:57:04.266683 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Oct 14 09:57:04 crc kubenswrapper[4698]: I1014 09:57:04.331878 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-crc"
Oct 14 09:57:05 crc kubenswrapper[4698]: I1014 09:57:05.081292 4698 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Oct 14 09:57:05 crc kubenswrapper[4698]: I1014 09:57:05.081385 4698 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Oct 14 09:57:05 crc kubenswrapper[4698]: I1014 09:57:05.081441 4698 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Oct 14 09:57:05 crc kubenswrapper[4698]: I1014 09:57:05.081477 4698 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Oct 14 09:57:05 crc kubenswrapper[4698]: I1014 09:57:05.082606 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 14 09:57:05 crc kubenswrapper[4698]: I1014 09:57:05.082682 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 14 09:57:05 crc kubenswrapper[4698]: I1014 09:57:05.082708 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 14 09:57:05 crc kubenswrapper[4698]: I1014 09:57:05.083275 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 14 09:57:05 crc kubenswrapper[4698]: I1014 09:57:05.083318 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 14 09:57:05 crc kubenswrapper[4698]: I1014 09:57:05.083331 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 14 09:57:05 crc kubenswrapper[4698]: I1014 09:57:05.083788 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 14 09:57:05 crc kubenswrapper[4698]: I1014 09:57:05.083841 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 14 09:57:05 crc kubenswrapper[4698]: I1014 09:57:05.083855 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 14 09:57:06 crc kubenswrapper[4698]: I1014 09:57:06.084052 4698 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Oct 14 09:57:06 crc kubenswrapper[4698]: I1014 09:57:06.085558 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 14 09:57:06 crc kubenswrapper[4698]: I1014 09:57:06.085616 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 14 09:57:06 crc kubenswrapper[4698]: I1014 09:57:06.085627 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 14 09:57:07 crc kubenswrapper[4698]: I1014 09:57:07.493531 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc"
Oct 14 09:57:07 crc kubenswrapper[4698]: I1014 09:57:07.493810 4698 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Oct 14 09:57:07 crc kubenswrapper[4698]: I1014 09:57:07.495514 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 14 09:57:07 crc kubenswrapper[4698]: I1014 09:57:07.495575 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 14 09:57:07 crc kubenswrapper[4698]: I1014 09:57:07.495594 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 14 09:57:08 crc kubenswrapper[4698]: I1014 09:57:08.643077 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Oct 14 09:57:08 crc kubenswrapper[4698]: I1014 09:57:08.643289 4698 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Oct 14 09:57:08 crc kubenswrapper[4698]: I1014 09:57:08.644534 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 14 09:57:08 crc kubenswrapper[4698]: I1014 09:57:08.644607 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 14 09:57:08 crc kubenswrapper[4698]: I1014 09:57:08.644634 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 14 09:57:09 crc kubenswrapper[4698]: I1014 09:57:09.018400 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Oct 14 09:57:09 crc kubenswrapper[4698]: I1014 09:57:09.018621 4698 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Oct 14 09:57:09 crc kubenswrapper[4698]: I1014 09:57:09.020157 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 14 09:57:09 crc kubenswrapper[4698]: I1014 09:57:09.020220 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 14 09:57:09 crc kubenswrapper[4698]: I1014 09:57:09.020238 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 14 09:57:09 crc kubenswrapper[4698]: I1014 09:57:09.026118 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Oct 14 09:57:09 crc kubenswrapper[4698]: E1014 09:57:09.076549 4698 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found"
Oct 14 09:57:09 crc kubenswrapper[4698]: I1014 09:57:09.094889 4698 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Oct 14 09:57:09 crc kubenswrapper[4698]: I1014 09:57:09.097342 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 14 09:57:09 crc kubenswrapper[4698]: I1014 09:57:09.097382 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 14 09:57:09 crc kubenswrapper[4698]: I1014 09:57:09.097391 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 14 09:57:10 crc kubenswrapper[4698]: I1014 09:57:10.403319 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Oct 14 09:57:10 crc kubenswrapper[4698]: I1014 09:57:10.403553 4698 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Oct 14 09:57:10 crc kubenswrapper[4698]: I1014 09:57:10.405385 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 14 09:57:10 crc kubenswrapper[4698]: I1014 09:57:10.405437 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 14 09:57:10 crc kubenswrapper[4698]: I1014 09:57:10.405456 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 14 09:57:11 crc kubenswrapper[4698]: I1014 09:57:11.943241 4698 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": net/http: TLS handshake timeout
Oct 14 09:57:11 crc kubenswrapper[4698]: E1014 09:57:11.947375 4698 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" interval="3.2s"
Oct 14 09:57:12 crc kubenswrapper[4698]: I1014 09:57:12.107809 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log"
Oct 14 09:57:12 crc kubenswrapper[4698]: I1014 09:57:12.109722 4698 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="516fd9f00404acd0654586207f87d6ee2b5b9e1b655f7ba11582034da0295950" exitCode=255
Oct 14 09:57:12 crc kubenswrapper[4698]: I1014 09:57:12.109820 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"516fd9f00404acd0654586207f87d6ee2b5b9e1b655f7ba11582034da0295950"}
Oct 14 09:57:12 crc kubenswrapper[4698]: I1014 09:57:12.110039 4698 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Oct 14 09:57:12 crc kubenswrapper[4698]: I1014 09:57:12.111050 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 14 09:57:12 crc kubenswrapper[4698]: I1014 09:57:12.111115 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 14 09:57:12 crc kubenswrapper[4698]: I1014 09:57:12.111130 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 14 09:57:12 crc kubenswrapper[4698]: I1014 09:57:12.111819 4698 scope.go:117] "RemoveContainer" containerID="516fd9f00404acd0654586207f87d6ee2b5b9e1b655f7ba11582034da0295950"
Oct 14 09:57:12 crc kubenswrapper[4698]: E1014 09:57:12.184204 4698 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": net/http: TLS handshake timeout" node="crc"
Oct 14 09:57:12 crc kubenswrapper[4698]: W1014 09:57:12.413441 4698 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": net/http: TLS handshake timeout
Oct 14 09:57:12 crc kubenswrapper[4698]: I1014 09:57:12.413533 4698 trace.go:236] Trace[106532340]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (14-Oct-2025 09:57:02.411) (total time: 10001ms):
Oct 14 09:57:12 crc kubenswrapper[4698]: Trace[106532340]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (09:57:12.413)
Oct 14 09:57:12 crc kubenswrapper[4698]: Trace[106532340]: [10.00182751s] [10.00182751s] END
Oct 14 09:57:12 crc kubenswrapper[4698]: E1014 09:57:12.413555 4698 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError"
Oct 14 09:57:12 crc kubenswrapper[4698]: W1014 09:57:12.589573 4698 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": net/http: TLS handshake timeout
Oct 14 09:57:12 crc kubenswrapper[4698]: I1014 09:57:12.589718 4698 trace.go:236] Trace[697358832]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (14-Oct-2025 09:57:02.587) (total time: 10002ms):
Oct 14 09:57:12 crc kubenswrapper[4698]: Trace[697358832]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (09:57:12.589)
Oct 14 09:57:12 crc kubenswrapper[4698]: Trace[697358832]: [10.002008092s] [10.002008092s] END
Oct 14 09:57:12 crc kubenswrapper[4698]: E1014 09:57:12.589749 4698 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError"
Oct 14 09:57:12 crc kubenswrapper[4698]: W1014 09:57:12.724699 4698 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": net/http: TLS handshake timeout
Oct 14 09:57:12 crc kubenswrapper[4698]: I1014 09:57:12.724882 4698 trace.go:236] Trace[937585561]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (14-Oct-2025 09:57:02.722) (total time: 10002ms):
Oct 14 09:57:12 crc kubenswrapper[4698]: Trace[937585561]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (09:57:12.724)
Oct 14 09:57:12 crc kubenswrapper[4698]: Trace[937585561]: [10.002015557s] [10.002015557s] END
Oct 14 09:57:12 crc kubenswrapper[4698]: E1014 09:57:12.724925 4698 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError"
Oct 14 09:57:12 crc kubenswrapper[4698]: I1014 09:57:12.757705 4698 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403}
Oct 14 09:57:12 crc kubenswrapper[4698]: I1014 09:57:12.757838 4698 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403"
Oct 14 09:57:12 crc kubenswrapper[4698]: I1014 09:57:12.763879 4698 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403}
Oct 14 09:57:12 crc kubenswrapper[4698]: I1014 09:57:12.763945 4698 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403"
Oct 14 09:57:13 crc kubenswrapper[4698]: I1014 09:57:13.114982 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log"
Oct 14 09:57:13 crc kubenswrapper[4698]: I1014 09:57:13.116825 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"a8cea8367d4864478508221648a52582ee794ca4eb7b48f112ca0d78d97c2f7e"}
Oct 14 09:57:13 crc kubenswrapper[4698]: I1014 09:57:13.116989 4698 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Oct 14 09:57:13 crc kubenswrapper[4698]: I1014 09:57:13.117889 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 14 09:57:13 crc kubenswrapper[4698]: I1014 09:57:13.117923 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 14 09:57:13 crc kubenswrapper[4698]: I1014 09:57:13.117935 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 14 09:57:13 crc kubenswrapper[4698]: I1014 09:57:13.404150 4698 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Oct 14 09:57:13 crc kubenswrapper[4698]: I1014 09:57:13.404322 4698 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Oct 14 09:57:13 crc kubenswrapper[4698]: I1014 09:57:13.405506 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-crc"
Oct 14 09:57:13 crc kubenswrapper[4698]: I1014 09:57:13.405807 4698 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Oct 14 09:57:13 crc kubenswrapper[4698]: I1014 09:57:13.407194 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 14 09:57:13 crc kubenswrapper[4698]: I1014 09:57:13.407227 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 14 09:57:13 crc kubenswrapper[4698]: I1014 09:57:13.407238 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 14 09:57:13 crc kubenswrapper[4698]: I1014 09:57:13.467347 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-crc"
Oct 14 09:57:14 crc kubenswrapper[4698]: I1014 09:57:14.009160 4698 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok
Oct 14 09:57:14 crc kubenswrapper[4698]: [+]log ok
Oct 14 09:57:14 crc kubenswrapper[4698]: [+]etcd ok
Oct 14 09:57:14 crc kubenswrapper[4698]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok
Oct 14 09:57:14 crc kubenswrapper[4698]: [+]poststarthook/openshift.io-api-request-count-filter ok
Oct 14 09:57:14 crc kubenswrapper[4698]: [+]poststarthook/openshift.io-startkubeinformers ok
Oct 14 09:57:14 crc kubenswrapper[4698]: [+]poststarthook/openshift.io-openshift-apiserver-reachable ok
Oct 14 09:57:14 crc kubenswrapper[4698]: [+]poststarthook/openshift.io-oauth-apiserver-reachable ok
Oct 14 09:57:14 crc kubenswrapper[4698]: [+]poststarthook/start-apiserver-admission-initializer ok
Oct 14 09:57:14 crc kubenswrapper[4698]: [+]poststarthook/generic-apiserver-start-informers ok
Oct 14 09:57:14 crc kubenswrapper[4698]: [+]poststarthook/priority-and-fairness-config-consumer ok
Oct 14 09:57:14 crc kubenswrapper[4698]: [+]poststarthook/priority-and-fairness-filter ok
Oct 14 09:57:14 crc kubenswrapper[4698]: [+]poststarthook/storage-object-count-tracker-hook ok
Oct 14 09:57:14 crc kubenswrapper[4698]: [+]poststarthook/start-apiextensions-informers ok
Oct 14 09:57:14 crc kubenswrapper[4698]: [+]poststarthook/start-apiextensions-controllers ok
Oct 14 09:57:14 crc kubenswrapper[4698]: [+]poststarthook/crd-informer-synced ok
Oct 14 09:57:14 crc kubenswrapper[4698]: [+]poststarthook/start-system-namespaces-controller ok
Oct 14 09:57:14 crc kubenswrapper[4698]: [+]poststarthook/start-cluster-authentication-info-controller ok
Oct 14 09:57:14 crc kubenswrapper[4698]: [+]poststarthook/start-kube-apiserver-identity-lease-controller ok
Oct 14 09:57:14 crc kubenswrapper[4698]: [+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
Oct 14 09:57:14 crc kubenswrapper[4698]: [+]poststarthook/start-legacy-token-tracking-controller ok
Oct 14 09:57:14 crc kubenswrapper[4698]: [+]poststarthook/start-service-ip-repair-controllers ok
Oct 14 09:57:14 crc kubenswrapper[4698]: [-]poststarthook/rbac/bootstrap-roles failed: reason withheld
Oct 14 09:57:14 crc kubenswrapper[4698]: [+]poststarthook/scheduling/bootstrap-system-priority-classes ok
Oct 14 09:57:14 crc kubenswrapper[4698]: [+]poststarthook/priority-and-fairness-config-producer ok
Oct 14 09:57:14 crc kubenswrapper[4698]: [+]poststarthook/bootstrap-controller ok
Oct 14 09:57:14 crc kubenswrapper[4698]: [+]poststarthook/aggregator-reload-proxy-client-cert ok
Oct 14 09:57:14 crc kubenswrapper[4698]: [+]poststarthook/start-kube-aggregator-informers ok
Oct 14 09:57:14 crc kubenswrapper[4698]: [+]poststarthook/apiservice-status-local-available-controller ok
Oct 14 09:57:14 crc kubenswrapper[4698]: [+]poststarthook/apiservice-status-remote-available-controller ok
Oct 14 09:57:14 crc kubenswrapper[4698]: [+]poststarthook/apiservice-registration-controller ok
Oct 14 09:57:14 crc kubenswrapper[4698]: [+]poststarthook/apiservice-wait-for-first-sync ok
Oct 14 09:57:14 crc kubenswrapper[4698]: [+]poststarthook/apiservice-discovery-controller ok
Oct 14 09:57:14 crc kubenswrapper[4698]: [+]poststarthook/kube-apiserver-autoregistration ok
Oct 14 09:57:14 crc kubenswrapper[4698]: [+]autoregister-completion ok
Oct 14 09:57:14 crc kubenswrapper[4698]: [+]poststarthook/apiservice-openapi-controller ok
Oct 14 09:57:14 crc kubenswrapper[4698]: [+]poststarthook/apiservice-openapiv3-controller ok
Oct 14 09:57:14 crc kubenswrapper[4698]: livez check failed
Oct 14 09:57:14 crc kubenswrapper[4698]: I1014 09:57:14.009223 4698 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 14 09:57:14 crc kubenswrapper[4698]: I1014 09:57:14.119049 4698 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Oct 14 09:57:14 crc kubenswrapper[4698]: I1014 09:57:14.120755 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 14 09:57:14 crc kubenswrapper[4698]: I1014 09:57:14.120853 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 14 09:57:14 crc kubenswrapper[4698]: I1014 09:57:14.120882 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 14 09:57:14 crc kubenswrapper[4698]: I1014 09:57:14.132625 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-crc"
Oct 14 09:57:15 crc kubenswrapper[4698]: I1014 09:57:15.121107 4698 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Oct 14 09:57:15 crc kubenswrapper[4698]: I1014 09:57:15.122055 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 14 09:57:15 crc kubenswrapper[4698]: I1014 09:57:15.122128 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 14 09:57:15 crc kubenswrapper[4698]: I1014 09:57:15.122148 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 14 09:57:15 crc kubenswrapper[4698]: I1014 09:57:15.384509 4698 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Oct 14 09:57:15 crc kubenswrapper[4698]: I1014 09:57:15.386515 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 14 09:57:15 crc kubenswrapper[4698]: I1014 09:57:15.386571 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 14 09:57:15 crc kubenswrapper[4698]: I1014 09:57:15.386589 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 14 09:57:15 crc kubenswrapper[4698]: I1014 09:57:15.386624 4698 kubelet_node_status.go:76] "Attempting to register node" node="crc"
Oct 14 09:57:15 crc kubenswrapper[4698]: E1014 09:57:15.391097 4698 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes \"crc\" is forbidden: autoscaling.openshift.io/ManagedNode infra config cache not synchronized" node="crc"
Oct 14 09:57:16 crc kubenswrapper[4698]: I1014 09:57:16.742422 4698 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160
Oct 14 09:57:16 crc kubenswrapper[4698]: I1014 09:57:16.834904 4698 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160
Oct 14 09:57:17 crc kubenswrapper[4698]: I1014 09:57:17.494361 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc"
Oct 14 09:57:17 crc kubenswrapper[4698]: I1014 09:57:17.494525 4698 kubelet_node_status.go:401] "Setting
node annotation to enable volume controller attach/detach" Oct 14 09:57:17 crc kubenswrapper[4698]: I1014 09:57:17.495606 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:57:17 crc kubenswrapper[4698]: I1014 09:57:17.495654 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:57:17 crc kubenswrapper[4698]: I1014 09:57:17.495668 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:57:17 crc kubenswrapper[4698]: I1014 09:57:17.760059 4698 trace.go:236] Trace[885988607]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (14-Oct-2025 09:57:02.763) (total time: 14996ms): Oct 14 09:57:17 crc kubenswrapper[4698]: Trace[885988607]: ---"Objects listed" error: 14996ms (09:57:17.759) Oct 14 09:57:17 crc kubenswrapper[4698]: Trace[885988607]: [14.996867511s] [14.996867511s] END Oct 14 09:57:17 crc kubenswrapper[4698]: I1014 09:57:17.760105 4698 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Oct 14 09:57:17 crc kubenswrapper[4698]: I1014 09:57:17.760222 4698 reconstruct.go:205] "DevicePaths of reconstructed volumes updated" Oct 14 09:57:18 crc kubenswrapper[4698]: I1014 09:57:18.321202 4698 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Oct 14 09:57:18 crc kubenswrapper[4698]: I1014 09:57:18.939011 4698 apiserver.go:52] "Watching apiserver" Oct 14 09:57:18 crc kubenswrapper[4698]: I1014 09:57:18.947897 4698 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Oct 14 09:57:18 crc kubenswrapper[4698]: I1014 09:57:18.948297 4698 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-dns/node-resolver-8rj7q","openshift-image-registry/node-ca-pfxrp","openshift-machine-config-operator/machine-config-daemon-lp4sk","openshift-multus/multus-b7cbk","openshift-network-console/networking-console-plugin-85b44fc459-gdk6g","openshift-network-diagnostics/network-check-source-55646444c4-trplf","openshift-network-node-identity/network-node-identity-vrzqb","openshift-network-operator/iptables-alerter-4ln5h","openshift-network-operator/network-operator-58b4c7f79c-55gtf","openshift-multus/multus-additional-cni-plugins-5twvn","openshift-network-diagnostics/network-check-target-xd92c","openshift-ovn-kubernetes/ovnkube-node-hspfz"] Oct 14 09:57:18 crc kubenswrapper[4698]: I1014 09:57:18.948650 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Oct 14 09:57:18 crc kubenswrapper[4698]: I1014 09:57:18.948732 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 14 09:57:18 crc kubenswrapper[4698]: E1014 09:57:18.948812 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 14 09:57:18 crc kubenswrapper[4698]: I1014 09:57:18.948941 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 14 09:57:18 crc kubenswrapper[4698]: I1014 09:57:18.949051 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Oct 14 09:57:18 crc kubenswrapper[4698]: E1014 09:57:18.949082 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 14 09:57:18 crc kubenswrapper[4698]: I1014 09:57:18.949167 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Oct 14 09:57:18 crc kubenswrapper[4698]: I1014 09:57:18.949236 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 14 09:57:18 crc kubenswrapper[4698]: E1014 09:57:18.949278 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 14 09:57:18 crc kubenswrapper[4698]: I1014 09:57:18.949242 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-pfxrp" Oct 14 09:57:18 crc kubenswrapper[4698]: I1014 09:57:18.949320 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-8rj7q" Oct 14 09:57:18 crc kubenswrapper[4698]: I1014 09:57:18.949853 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-lp4sk" Oct 14 09:57:18 crc kubenswrapper[4698]: I1014 09:57:18.949929 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-b7cbk" Oct 14 09:57:18 crc kubenswrapper[4698]: I1014 09:57:18.950071 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-5twvn" Oct 14 09:57:18 crc kubenswrapper[4698]: I1014 09:57:18.950655 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-hspfz" Oct 14 09:57:18 crc kubenswrapper[4698]: I1014 09:57:18.953652 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Oct 14 09:57:18 crc kubenswrapper[4698]: I1014 09:57:18.953974 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Oct 14 09:57:18 crc kubenswrapper[4698]: I1014 09:57:18.954001 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Oct 14 09:57:18 crc kubenswrapper[4698]: I1014 09:57:18.954002 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Oct 14 09:57:18 crc kubenswrapper[4698]: I1014 09:57:18.954185 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Oct 14 09:57:18 crc kubenswrapper[4698]: I1014 09:57:18.954245 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Oct 14 09:57:18 crc kubenswrapper[4698]: I1014 09:57:18.954298 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Oct 14 09:57:18 crc 
kubenswrapper[4698]: I1014 09:57:18.954349 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Oct 14 09:57:18 crc kubenswrapper[4698]: I1014 09:57:18.954370 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Oct 14 09:57:18 crc kubenswrapper[4698]: I1014 09:57:18.954415 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Oct 14 09:57:18 crc kubenswrapper[4698]: I1014 09:57:18.954427 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Oct 14 09:57:18 crc kubenswrapper[4698]: I1014 09:57:18.954963 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Oct 14 09:57:18 crc kubenswrapper[4698]: I1014 09:57:18.955789 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Oct 14 09:57:18 crc kubenswrapper[4698]: I1014 09:57:18.955899 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Oct 14 09:57:18 crc kubenswrapper[4698]: I1014 09:57:18.955939 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Oct 14 09:57:18 crc kubenswrapper[4698]: I1014 09:57:18.955945 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Oct 14 09:57:18 crc kubenswrapper[4698]: I1014 09:57:18.955957 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Oct 14 09:57:18 crc kubenswrapper[4698]: I1014 09:57:18.955956 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Oct 14 09:57:18 
crc kubenswrapper[4698]: I1014 09:57:18.956020 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Oct 14 09:57:18 crc kubenswrapper[4698]: I1014 09:57:18.956050 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Oct 14 09:57:18 crc kubenswrapper[4698]: I1014 09:57:18.956085 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Oct 14 09:57:18 crc kubenswrapper[4698]: I1014 09:57:18.956297 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Oct 14 09:57:18 crc kubenswrapper[4698]: I1014 09:57:18.956624 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Oct 14 09:57:18 crc kubenswrapper[4698]: I1014 09:57:18.956905 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Oct 14 09:57:18 crc kubenswrapper[4698]: I1014 09:57:18.956928 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Oct 14 09:57:18 crc kubenswrapper[4698]: I1014 09:57:18.956940 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Oct 14 09:57:18 crc kubenswrapper[4698]: I1014 09:57:18.956908 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Oct 14 09:57:18 crc kubenswrapper[4698]: I1014 09:57:18.956984 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Oct 14 09:57:18 crc kubenswrapper[4698]: I1014 09:57:18.956983 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" 
Oct 14 09:57:18 crc kubenswrapper[4698]: I1014 09:57:18.956904 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Oct 14 09:57:18 crc kubenswrapper[4698]: I1014 09:57:18.957110 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Oct 14 09:57:18 crc kubenswrapper[4698]: I1014 09:57:18.959232 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Oct 14 09:57:18 crc kubenswrapper[4698]: I1014 09:57:18.959261 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Oct 14 09:57:18 crc kubenswrapper[4698]: I1014 09:57:18.960327 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Oct 14 09:57:18 crc kubenswrapper[4698]: I1014 09:57:18.960600 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Oct 14 09:57:18 crc kubenswrapper[4698]: I1014 09:57:18.968663 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 14 09:57:18 crc kubenswrapper[4698]: I1014 09:57:18.986644 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.014498 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.028547 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.042936 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-b7cbk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fbf10bbc-318d-4f46-83a0-fdbad9888201\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tkghj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T09:57:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-b7cbk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.043935 4698 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.057434 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-lp4sk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c359a8fc-1e2f-49af-8da2-719d52bd969a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lzl92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lzl92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],
\\\"startTime\\\":\\\"2025-10-14T09:57:18Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-lp4sk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.068716 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.068777 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.068803 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.068825 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.068848 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fqsjt\" (UniqueName: 
\"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.068873 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.068897 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.068918 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.068939 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.068961 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") 
" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.068982 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.069004 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.069026 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.069047 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.069069 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.069091 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.069133 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.069141 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" (OuterVolumeSpecName: "stats-auth") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "stats-auth". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.069159 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.069162 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" (OuterVolumeSpecName: "control-plane-machine-set-operator-tls") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "control-plane-machine-set-operator-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.069189 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.069256 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.069281 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.069309 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.069328 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.069328 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" (OuterVolumeSpecName: "kube-api-access-2d4wz") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "kube-api-access-2d4wz". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.069349 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.069344 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" (OuterVolumeSpecName: "kube-api-access-fqsjt") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "kube-api-access-fqsjt". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.069375 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" (OuterVolumeSpecName: "kube-api-access-kfwg7") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "kube-api-access-kfwg7". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.069392 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). 
InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.069372 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.069415 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.069438 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.069518 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.069546 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Oct 14 09:57:19 crc 
kubenswrapper[4698]: I1014 09:57:19.069557 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.069571 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.069593 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") pod \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\" (UID: \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\") " Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.069593 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.069599 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" (OuterVolumeSpecName: "kube-api-access-mnrrd") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). 
InnerVolumeSpecName "kube-api-access-mnrrd". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.069609 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" (OuterVolumeSpecName: "kube-api-access-qg5z5") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "kube-api-access-qg5z5". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.069619 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.069625 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" (OuterVolumeSpecName: "kube-api-access-d4lsv") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "kube-api-access-d4lsv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.069646 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.069671 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.069694 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.069716 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.069735 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.069759 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4d4hj\" (UniqueName: 
\"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.069807 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.069831 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.069854 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.069875 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.069899 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 14 09:57:19 crc 
kubenswrapper[4698]: I1014 09:57:19.069921 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.069943 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.069967 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.069989 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") pod \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\" (UID: \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\") " Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.070055 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.070078 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: 
\"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.070103 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.070124 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.070146 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.070166 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.070188 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 
09:57:19.070242 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.070285 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") pod \"49ef4625-1d3a-4a9f-b595-c2433d32326d\" (UID: \"49ef4625-1d3a-4a9f-b595-c2433d32326d\") " Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.070308 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.070337 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.070360 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.070380 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: 
\"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.070404 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.070425 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.070447 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.070468 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.070491 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.070513 4698 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.070536 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") pod \"44663579-783b-4372-86d6-acf235a62d72\" (UID: \"44663579-783b-4372-86d6-acf235a62d72\") " Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.070558 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.070580 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.070606 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.070628 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: 
\"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.070653 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.070675 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.070802 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.070836 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.070861 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: 
\"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.070886 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.070956 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.069777 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" (OuterVolumeSpecName: "node-bootstrap-token") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "node-bootstrap-token". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.069829 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" (OuterVolumeSpecName: "kube-api-access-wxkg8") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "kube-api-access-wxkg8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.069857 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.069886 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" (OuterVolumeSpecName: "utilities") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.069953 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" (OuterVolumeSpecName: "kube-api-access-zgdk5") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "kube-api-access-zgdk5". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.070010 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" (OuterVolumeSpecName: "kube-api-access-pj782") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "kube-api-access-pj782". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.070134 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" (OuterVolumeSpecName: "kube-api-access-jhbk2") pod "bd23aa5c-e532-4e53-bccf-e79f130c5ae8" (UID: "bd23aa5c-e532-4e53-bccf-e79f130c5ae8"). InnerVolumeSpecName "kube-api-access-jhbk2". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.070189 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.070256 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.070258 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.070291 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" (OuterVolumeSpecName: "audit") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "audit". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.070299 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" (OuterVolumeSpecName: "config") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.070316 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" (OuterVolumeSpecName: "kube-api-access-nzwt7") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "kube-api-access-nzwt7". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.070366 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" (OuterVolumeSpecName: "kube-api-access-w4xd4") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "kube-api-access-w4xd4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.070405 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" (OuterVolumeSpecName: "config") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.070445 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" (OuterVolumeSpecName: "config") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.070486 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" (OuterVolumeSpecName: "machine-approver-tls") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "machine-approver-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.070496 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.070504 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" (OuterVolumeSpecName: "default-certificate") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "default-certificate". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.071136 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.070507 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.070605 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.070710 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" (OuterVolumeSpecName: "kube-api-access-w9rds") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "kube-api-access-w9rds". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.070719 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.070735 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" (OuterVolumeSpecName: "kube-api-access-cfbct") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "kube-api-access-cfbct". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.070784 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "encryption-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.070790 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" (OuterVolumeSpecName: "kube-api-access-4d4hj") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "kube-api-access-4d4hj". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.070814 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.070825 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" (OuterVolumeSpecName: "certs") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.070848 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "bound-sa-token". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.071211 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" (OuterVolumeSpecName: "utilities") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.070959 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.071105 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "auth-proxy-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.070987 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.071282 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.071295 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "env-overrides". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.071309 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.071333 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.071359 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.071384 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.071407 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.071428 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" 
(UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.071454 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.071486 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.071514 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.071538 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.071560 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 
09:57:19.071585 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.071613 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.071637 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.071658 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.071686 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.071712 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") pod 
\"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.071734 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.071755 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.071794 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.071815 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.071835 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.071856 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.071877 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.071907 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.071927 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.071947 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.071969 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") pod 
\"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.071990 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.072054 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.072076 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.072097 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.072121 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.072141 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.072161 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.072185 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.072208 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.072234 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.072255 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Oct 14 09:57:19 crc 
kubenswrapper[4698]: I1014 09:57:19.072279 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.072301 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.071294 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" (OuterVolumeSpecName: "package-server-manager-serving-cert") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "package-server-manager-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.071411 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" (OuterVolumeSpecName: "kube-api-access-x2m85") pod "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" (UID: "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d"). InnerVolumeSpecName "kube-api-access-x2m85". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.071503 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" (OuterVolumeSpecName: "kube-api-access-qs4fp") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "kube-api-access-qs4fp". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.071523 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" (OuterVolumeSpecName: "kube-api-access-pjr6v") pod "49ef4625-1d3a-4a9f-b595-c2433d32326d" (UID: "49ef4625-1d3a-4a9f-b595-c2433d32326d"). InnerVolumeSpecName "kube-api-access-pjr6v". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.071556 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-sysctl-allowlist". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.071589 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.071600 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.071660 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.071657 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.071675 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" (OuterVolumeSpecName: "kube-api-access-7c4vf") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "kube-api-access-7c4vf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.071860 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.072073 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" (OuterVolumeSpecName: "config-volume") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.072105 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.072123 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "auth-proxy-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.072265 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.072307 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" (OuterVolumeSpecName: "kube-api-access-lz9wn") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "kube-api-access-lz9wn". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.072462 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" (OuterVolumeSpecName: "mcd-auth-proxy-config") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "mcd-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.072471 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-error". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.072715 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.072843 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" (OuterVolumeSpecName: "client-ca") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.073046 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.073112 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" (OuterVolumeSpecName: "kube-api-access-ngvvp") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "kube-api-access-ngvvp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.072378 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.073161 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.073179 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.073209 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.073236 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.073256 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"samples-operator-tls\" (UniqueName: 
\"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.073273 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.073291 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.073308 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.073327 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.073345 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Oct 14 
09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.073366 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.073371 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" (OuterVolumeSpecName: "config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.073392 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.073416 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.073488 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" (OuterVolumeSpecName: "config") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.073500 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" (OuterVolumeSpecName: "config") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.073604 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.073622 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.073627 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.073648 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.073712 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.073737 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 09:57:19 crc kubenswrapper[4698]: E1014 09:57:19.073752 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-14 09:57:19.573724418 +0000 UTC m=+21.271023914 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.073793 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.073833 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.074261 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.074287 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.074310 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started 
for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.074335 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.074359 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.074381 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.074407 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.074432 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: 
\"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.074455 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.074477 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.074499 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.074523 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.074675 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.074700 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.074726 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.074754 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.074799 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.074820 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.074848 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: 
\"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.074871 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.074898 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.074921 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.074943 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.074966 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.074994 4698 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.075018 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.075040 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.075064 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.075089 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.075112 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") pod 
\"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.075136 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.075165 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.075194 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.075214 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.075237 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.075258 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.075382 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.075412 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.075436 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.075459 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.075485 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: 
\"1d611f23-29be-4491-8495-bee1670e935f\") " Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.075509 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.075536 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.075558 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.075584 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.075604 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.075625 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: 
\"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.075654 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.075676 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.075698 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.075732 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.075755 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " 
Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.075800 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.075822 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.075847 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.075873 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.075898 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.076013 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: 
\"kubernetes.io/secret/c359a8fc-1e2f-49af-8da2-719d52bd969a-proxy-tls\") pod \"machine-config-daemon-lp4sk\" (UID: \"c359a8fc-1e2f-49af-8da2-719d52bd969a\") " pod="openshift-machine-config-operator/machine-config-daemon-lp4sk" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.076050 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.076074 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/c359a8fc-1e2f-49af-8da2-719d52bd969a-rootfs\") pod \"machine-config-daemon-lp4sk\" (UID: \"c359a8fc-1e2f-49af-8da2-719d52bd969a\") " pod="openshift-machine-config-operator/machine-config-daemon-lp4sk" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.076287 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d02f5359-81fc-4261-b995-e58c78bcec0e-etc-openvswitch\") pod \"ovnkube-node-hspfz\" (UID: \"d02f5359-81fc-4261-b995-e58c78bcec0e\") " pod="openshift-ovn-kubernetes/ovnkube-node-hspfz" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.076318 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.076342 4698 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/d02f5359-81fc-4261-b995-e58c78bcec0e-host-cni-bin\") pod \"ovnkube-node-hspfz\" (UID: \"d02f5359-81fc-4261-b995-e58c78bcec0e\") " pod="openshift-ovn-kubernetes/ovnkube-node-hspfz" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.076392 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.076419 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/e00aa977-8736-4b4d-8d58-c3d13879c49a-serviceca\") pod \"node-ca-pfxrp\" (UID: \"e00aa977-8736-4b4d-8d58-c3d13879c49a\") " pod="openshift-image-registry/node-ca-pfxrp" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.076444 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.076467 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/fbf10bbc-318d-4f46-83a0-fdbad9888201-os-release\") pod \"multus-b7cbk\" (UID: \"fbf10bbc-318d-4f46-83a0-fdbad9888201\") " pod="openshift-multus/multus-b7cbk" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.076493 4698 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.076516 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/fbf10bbc-318d-4f46-83a0-fdbad9888201-multus-socket-dir-parent\") pod \"multus-b7cbk\" (UID: \"fbf10bbc-318d-4f46-83a0-fdbad9888201\") " pod="openshift-multus/multus-b7cbk" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.076542 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/fbf10bbc-318d-4f46-83a0-fdbad9888201-host-var-lib-kubelet\") pod \"multus-b7cbk\" (UID: \"fbf10bbc-318d-4f46-83a0-fdbad9888201\") " pod="openshift-multus/multus-b7cbk" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.076566 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/5e9f983f-10a0-43b7-8590-346577a561ef-cni-binary-copy\") pod \"multus-additional-cni-plugins-5twvn\" (UID: \"5e9f983f-10a0-43b7-8590-346577a561ef\") " pod="openshift-multus/multus-additional-cni-plugins-5twvn" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.076590 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d02f5359-81fc-4261-b995-e58c78bcec0e-host-slash\") pod \"ovnkube-node-hspfz\" (UID: \"d02f5359-81fc-4261-b995-e58c78bcec0e\") " pod="openshift-ovn-kubernetes/ovnkube-node-hspfz" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 
09:57:19.076709 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/d02f5359-81fc-4261-b995-e58c78bcec0e-ovn-node-metrics-cert\") pod \"ovnkube-node-hspfz\" (UID: \"d02f5359-81fc-4261-b995-e58c78bcec0e\") " pod="openshift-ovn-kubernetes/ovnkube-node-hspfz" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.076732 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.076797 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/fbf10bbc-318d-4f46-83a0-fdbad9888201-hostroot\") pod \"multus-b7cbk\" (UID: \"fbf10bbc-318d-4f46-83a0-fdbad9888201\") " pod="openshift-multus/multus-b7cbk" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.076833 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.077001 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/fbf10bbc-318d-4f46-83a0-fdbad9888201-host-var-lib-cni-multus\") pod \"multus-b7cbk\" (UID: \"fbf10bbc-318d-4f46-83a0-fdbad9888201\") " 
pod="openshift-multus/multus-b7cbk" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.077029 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/fbf10bbc-318d-4f46-83a0-fdbad9888201-multus-conf-dir\") pod \"multus-b7cbk\" (UID: \"fbf10bbc-318d-4f46-83a0-fdbad9888201\") " pod="openshift-multus/multus-b7cbk" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.077057 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/c359a8fc-1e2f-49af-8da2-719d52bd969a-mcd-auth-proxy-config\") pod \"machine-config-daemon-lp4sk\" (UID: \"c359a8fc-1e2f-49af-8da2-719d52bd969a\") " pod="openshift-machine-config-operator/machine-config-daemon-lp4sk" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.077077 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/d02f5359-81fc-4261-b995-e58c78bcec0e-host-run-netns\") pod \"ovnkube-node-hspfz\" (UID: \"d02f5359-81fc-4261-b995-e58c78bcec0e\") " pod="openshift-ovn-kubernetes/ovnkube-node-hspfz" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.077102 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d02f5359-81fc-4261-b995-e58c78bcec0e-run-openvswitch\") pod \"ovnkube-node-hspfz\" (UID: \"d02f5359-81fc-4261-b995-e58c78bcec0e\") " pod="openshift-ovn-kubernetes/ovnkube-node-hspfz" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.077124 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/d02f5359-81fc-4261-b995-e58c78bcec0e-log-socket\") pod \"ovnkube-node-hspfz\" (UID: 
\"d02f5359-81fc-4261-b995-e58c78bcec0e\") " pod="openshift-ovn-kubernetes/ovnkube-node-hspfz" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.077147 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c9lxj\" (UniqueName: \"kubernetes.io/projected/e00aa977-8736-4b4d-8d58-c3d13879c49a-kube-api-access-c9lxj\") pod \"node-ca-pfxrp\" (UID: \"e00aa977-8736-4b4d-8d58-c3d13879c49a\") " pod="openshift-image-registry/node-ca-pfxrp" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.077172 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tkghj\" (UniqueName: \"kubernetes.io/projected/fbf10bbc-318d-4f46-83a0-fdbad9888201-kube-api-access-tkghj\") pod \"multus-b7cbk\" (UID: \"fbf10bbc-318d-4f46-83a0-fdbad9888201\") " pod="openshift-multus/multus-b7cbk" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.077203 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.077251 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/d02f5359-81fc-4261-b995-e58c78bcec0e-host-kubelet\") pod \"ovnkube-node-hspfz\" (UID: \"d02f5359-81fc-4261-b995-e58c78bcec0e\") " pod="openshift-ovn-kubernetes/ovnkube-node-hspfz" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.077277 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: 
\"kubernetes.io/host-path/b3d7ebe7-24ac-4bb6-be80-db147dc1c604-hosts-file\") pod \"node-resolver-8rj7q\" (UID: \"b3d7ebe7-24ac-4bb6-be80-db147dc1c604\") " pod="openshift-dns/node-resolver-8rj7q" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.077298 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-95sjn\" (UniqueName: \"kubernetes.io/projected/b3d7ebe7-24ac-4bb6-be80-db147dc1c604-kube-api-access-95sjn\") pod \"node-resolver-8rj7q\" (UID: \"b3d7ebe7-24ac-4bb6-be80-db147dc1c604\") " pod="openshift-dns/node-resolver-8rj7q" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.077319 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/fbf10bbc-318d-4f46-83a0-fdbad9888201-multus-cni-dir\") pod \"multus-b7cbk\" (UID: \"fbf10bbc-318d-4f46-83a0-fdbad9888201\") " pod="openshift-multus/multus-b7cbk" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.077342 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d02f5359-81fc-4261-b995-e58c78bcec0e-var-lib-openvswitch\") pod \"ovnkube-node-hspfz\" (UID: \"d02f5359-81fc-4261-b995-e58c78bcec0e\") " pod="openshift-ovn-kubernetes/ovnkube-node-hspfz" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.077368 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/d02f5359-81fc-4261-b995-e58c78bcec0e-host-cni-netd\") pod \"ovnkube-node-hspfz\" (UID: \"d02f5359-81fc-4261-b995-e58c78bcec0e\") " pod="openshift-ovn-kubernetes/ovnkube-node-hspfz" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.077391 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: 
\"kubernetes.io/host-path/fbf10bbc-318d-4f46-83a0-fdbad9888201-system-cni-dir\") pod \"multus-b7cbk\" (UID: \"fbf10bbc-318d-4f46-83a0-fdbad9888201\") " pod="openshift-multus/multus-b7cbk" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.077557 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.077581 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/d02f5359-81fc-4261-b995-e58c78bcec0e-run-systemd\") pod \"ovnkube-node-hspfz\" (UID: \"d02f5359-81fc-4261-b995-e58c78bcec0e\") " pod="openshift-ovn-kubernetes/ovnkube-node-hspfz" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.077605 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.077623 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/fbf10bbc-318d-4f46-83a0-fdbad9888201-cni-binary-copy\") pod \"multus-b7cbk\" (UID: \"fbf10bbc-318d-4f46-83a0-fdbad9888201\") " pod="openshift-multus/multus-b7cbk" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.077642 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: 
\"kubernetes.io/host-path/5e9f983f-10a0-43b7-8590-346577a561ef-os-release\") pod \"multus-additional-cni-plugins-5twvn\" (UID: \"5e9f983f-10a0-43b7-8590-346577a561ef\") " pod="openshift-multus/multus-additional-cni-plugins-5twvn" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.077669 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h8x7t\" (UniqueName: \"kubernetes.io/projected/5e9f983f-10a0-43b7-8590-346577a561ef-kube-api-access-h8x7t\") pod \"multus-additional-cni-plugins-5twvn\" (UID: \"5e9f983f-10a0-43b7-8590-346577a561ef\") " pod="openshift-multus/multus-additional-cni-plugins-5twvn" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.077694 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/fbf10bbc-318d-4f46-83a0-fdbad9888201-cnibin\") pod \"multus-b7cbk\" (UID: \"fbf10bbc-318d-4f46-83a0-fdbad9888201\") " pod="openshift-multus/multus-b7cbk" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.091818 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/fbf10bbc-318d-4f46-83a0-fdbad9888201-host-var-lib-cni-bin\") pod \"multus-b7cbk\" (UID: \"fbf10bbc-318d-4f46-83a0-fdbad9888201\") " pod="openshift-multus/multus-b7cbk" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.091874 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/fbf10bbc-318d-4f46-83a0-fdbad9888201-etc-kubernetes\") pod \"multus-b7cbk\" (UID: \"fbf10bbc-318d-4f46-83a0-fdbad9888201\") " pod="openshift-multus/multus-b7cbk" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.091900 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" 
(UniqueName: \"kubernetes.io/host-path/5e9f983f-10a0-43b7-8590-346577a561ef-system-cni-dir\") pod \"multus-additional-cni-plugins-5twvn\" (UID: \"5e9f983f-10a0-43b7-8590-346577a561ef\") " pod="openshift-multus/multus-additional-cni-plugins-5twvn" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.091930 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/5e9f983f-10a0-43b7-8590-346577a561ef-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-5twvn\" (UID: \"5e9f983f-10a0-43b7-8590-346577a561ef\") " pod="openshift-multus/multus-additional-cni-plugins-5twvn" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.091976 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.092008 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/d02f5359-81fc-4261-b995-e58c78bcec0e-node-log\") pod \"ovnkube-node-hspfz\" (UID: \"d02f5359-81fc-4261-b995-e58c78bcec0e\") " pod="openshift-ovn-kubernetes/ovnkube-node-hspfz" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.092035 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/d02f5359-81fc-4261-b995-e58c78bcec0e-ovnkube-script-lib\") pod \"ovnkube-node-hspfz\" (UID: \"d02f5359-81fc-4261-b995-e58c78bcec0e\") " pod="openshift-ovn-kubernetes/ovnkube-node-hspfz" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 
09:57:19.092068 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.092097 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/d02f5359-81fc-4261-b995-e58c78bcec0e-systemd-units\") pod \"ovnkube-node-hspfz\" (UID: \"d02f5359-81fc-4261-b995-e58c78bcec0e\") " pod="openshift-ovn-kubernetes/ovnkube-node-hspfz" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.092123 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/d02f5359-81fc-4261-b995-e58c78bcec0e-ovnkube-config\") pod \"ovnkube-node-hspfz\" (UID: \"d02f5359-81fc-4261-b995-e58c78bcec0e\") " pod="openshift-ovn-kubernetes/ovnkube-node-hspfz" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.092146 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/fbf10bbc-318d-4f46-83a0-fdbad9888201-host-run-netns\") pod \"multus-b7cbk\" (UID: \"fbf10bbc-318d-4f46-83a0-fdbad9888201\") " pod="openshift-multus/multus-b7cbk" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.092184 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Oct 14 09:57:19 crc 
kubenswrapper[4698]: I1014 09:57:19.092208 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/e00aa977-8736-4b4d-8d58-c3d13879c49a-host\") pod \"node-ca-pfxrp\" (UID: \"e00aa977-8736-4b4d-8d58-c3d13879c49a\") " pod="openshift-image-registry/node-ca-pfxrp" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.092238 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/5e9f983f-10a0-43b7-8590-346577a561ef-cnibin\") pod \"multus-additional-cni-plugins-5twvn\" (UID: \"5e9f983f-10a0-43b7-8590-346577a561ef\") " pod="openshift-multus/multus-additional-cni-plugins-5twvn" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.092263 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/d02f5359-81fc-4261-b995-e58c78bcec0e-run-ovn\") pod \"ovnkube-node-hspfz\" (UID: \"d02f5359-81fc-4261-b995-e58c78bcec0e\") " pod="openshift-ovn-kubernetes/ovnkube-node-hspfz" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.092286 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/d02f5359-81fc-4261-b995-e58c78bcec0e-env-overrides\") pod \"ovnkube-node-hspfz\" (UID: \"d02f5359-81fc-4261-b995-e58c78bcec0e\") " pod="openshift-ovn-kubernetes/ovnkube-node-hspfz" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.092314 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pjlwb\" (UniqueName: \"kubernetes.io/projected/d02f5359-81fc-4261-b995-e58c78bcec0e-kube-api-access-pjlwb\") pod \"ovnkube-node-hspfz\" (UID: \"d02f5359-81fc-4261-b995-e58c78bcec0e\") " pod="openshift-ovn-kubernetes/ovnkube-node-hspfz" Oct 14 09:57:19 crc 
kubenswrapper[4698]: I1014 09:57:19.092344 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.093490 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/fbf10bbc-318d-4f46-83a0-fdbad9888201-multus-daemon-config\") pod \"multus-b7cbk\" (UID: \"fbf10bbc-318d-4f46-83a0-fdbad9888201\") " pod="openshift-multus/multus-b7cbk" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.093533 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/fbf10bbc-318d-4f46-83a0-fdbad9888201-host-run-multus-certs\") pod \"multus-b7cbk\" (UID: \"fbf10bbc-318d-4f46-83a0-fdbad9888201\") " pod="openshift-multus/multus-b7cbk" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.093561 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/5e9f983f-10a0-43b7-8590-346577a561ef-tuning-conf-dir\") pod \"multus-additional-cni-plugins-5twvn\" (UID: \"5e9f983f-10a0-43b7-8590-346577a561ef\") " pod="openshift-multus/multus-additional-cni-plugins-5twvn" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.093588 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lzl92\" (UniqueName: \"kubernetes.io/projected/c359a8fc-1e2f-49af-8da2-719d52bd969a-kube-api-access-lzl92\") pod \"machine-config-daemon-lp4sk\" (UID: \"c359a8fc-1e2f-49af-8da2-719d52bd969a\") " 
pod="openshift-machine-config-operator/machine-config-daemon-lp4sk" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.093615 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/d02f5359-81fc-4261-b995-e58c78bcec0e-host-run-ovn-kubernetes\") pod \"ovnkube-node-hspfz\" (UID: \"d02f5359-81fc-4261-b995-e58c78bcec0e\") " pod="openshift-ovn-kubernetes/ovnkube-node-hspfz" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.093639 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/d02f5359-81fc-4261-b995-e58c78bcec0e-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-hspfz\" (UID: \"d02f5359-81fc-4261-b995-e58c78bcec0e\") " pod="openshift-ovn-kubernetes/ovnkube-node-hspfz" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.093666 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/fbf10bbc-318d-4f46-83a0-fdbad9888201-host-run-k8s-cni-cncf-io\") pod \"multus-b7cbk\" (UID: \"fbf10bbc-318d-4f46-83a0-fdbad9888201\") " pod="openshift-multus/multus-b7cbk" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.093797 4698 reconciler_common.go:293] "Volume detached for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") on node \"crc\" DevicePath \"\"" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.093815 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") on node \"crc\" DevicePath \"\"" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.093830 4698 
reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") on node \"crc\" DevicePath \"\"" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.093844 4698 reconciler_common.go:293] "Volume detached for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") on node \"crc\" DevicePath \"\"" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.093860 4698 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") on node \"crc\" DevicePath \"\"" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.093873 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") on node \"crc\" DevicePath \"\"" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.093886 4698 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") on node \"crc\" DevicePath \"\"" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.093900 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") on node \"crc\" DevicePath \"\"" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.093913 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") on node \"crc\" DevicePath \"\"" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.093927 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2d4wz\" (UniqueName: 
\"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") on node \"crc\" DevicePath \"\"" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.093942 4698 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.093958 4698 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") on node \"crc\" DevicePath \"\"" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.093972 4698 reconciler_common.go:293] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") on node \"crc\" DevicePath \"\"" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.093985 4698 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") on node \"crc\" DevicePath \"\"" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.093998 4698 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.094011 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") on node \"crc\" DevicePath \"\"" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.094024 4698 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") on node \"crc\" DevicePath \"\"" Oct 14 09:57:19 crc 
kubenswrapper[4698]: I1014 09:57:19.094038 4698 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.094053 4698 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.094068 4698 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") on node \"crc\" DevicePath \"\"" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.095314 4698 reconciler_common.go:293] "Volume detached for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") on node \"crc\" DevicePath \"\"" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.095353 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") on node \"crc\" DevicePath \"\"" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.095369 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") on node \"crc\" DevicePath \"\"" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.095383 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") on node \"crc\" DevicePath \"\"" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.095397 
4698 reconciler_common.go:293] "Volume detached for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") on node \"crc\" DevicePath \"\"" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.095418 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") on node \"crc\" DevicePath \"\"" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.095435 4698 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") on node \"crc\" DevicePath \"\"" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.073850 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.074145 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.074327 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" (OuterVolumeSpecName: "config") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). 
InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.074626 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.074721 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" (OuterVolumeSpecName: "kube-api-access-pcxfs") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "kube-api-access-pcxfs". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.075007 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.075020 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "srv-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.075283 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.075313 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.075395 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" (OuterVolumeSpecName: "available-featuregates") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "available-featuregates". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.075472 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" (OuterVolumeSpecName: "config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.100017 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" (OuterVolumeSpecName: "config") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.075467 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" (OuterVolumeSpecName: "config") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.075525 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" (OuterVolumeSpecName: "kube-api-access-mg5zb") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "kube-api-access-mg5zb". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.075895 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "apiservice-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.076018 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" (OuterVolumeSpecName: "images") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.076509 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.076853 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.076931 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.077301 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.077328 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.077536 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" (OuterVolumeSpecName: "signing-cabundle") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-cabundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.077788 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" (OuterVolumeSpecName: "kube-api-access-zkvpv") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "kube-api-access-zkvpv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.079905 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" (OuterVolumeSpecName: "kube-api-access-gf66m") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "kube-api-access-gf66m". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.080208 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" (OuterVolumeSpecName: "kube-api-access-rnphk") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "kube-api-access-rnphk". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.080568 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" (OuterVolumeSpecName: "kube-api-access-vt5rc") pod "44663579-783b-4372-86d6-acf235a62d72" (UID: "44663579-783b-4372-86d6-acf235a62d72"). InnerVolumeSpecName "kube-api-access-vt5rc". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.077853 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "srv-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.081219 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.081362 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" (OuterVolumeSpecName: "kube-api-access-6g6sz") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "kube-api-access-6g6sz". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.081381 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" (OuterVolumeSpecName: "kube-api-access-2w9zh") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "kube-api-access-2w9zh". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.081468 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-binary-copy". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.081473 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.083470 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" (OuterVolumeSpecName: "kube-api-access-s4n52") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "kube-api-access-s4n52". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.084021 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "tmpfs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.084280 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" (OuterVolumeSpecName: "kube-api-access-lzf88") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "kube-api-access-lzf88". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.084520 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" (OuterVolumeSpecName: "kube-api-access-w7l8j") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "kube-api-access-w7l8j". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.084599 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" (OuterVolumeSpecName: "etcd-service-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.084712 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" (OuterVolumeSpecName: "kube-api-access-8tdtz") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "kube-api-access-8tdtz". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.085013 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" (OuterVolumeSpecName: "kube-api-access-x4zgh") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "kube-api-access-x4zgh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.085308 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" (OuterVolumeSpecName: "signing-key") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.086237 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.086312 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" (OuterVolumeSpecName: "client-ca") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.086391 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.086646 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.086799 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "image-import-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.086897 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.086899 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.086923 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.086957 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" (OuterVolumeSpecName: "service-ca") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.087269 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.087392 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" (OuterVolumeSpecName: "images") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "images". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.087441 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" (OuterVolumeSpecName: "cert") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.087464 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" (OuterVolumeSpecName: "kube-api-access-v47cf") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "kube-api-access-v47cf". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.087755 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.088163 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.088753 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.088977 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.089016 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.089509 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" (OuterVolumeSpecName: "kube-api-access-249nr") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "kube-api-access-249nr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.091286 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.091423 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" (OuterVolumeSpecName: "kube-api-access-279lb") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "kube-api-access-279lb". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.092786 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.093108 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" (OuterVolumeSpecName: "config") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.093221 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" (OuterVolumeSpecName: "kube-api-access-6ccd8") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "kube-api-access-6ccd8". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.093299 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.093434 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.092691 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.095156 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" (OuterVolumeSpecName: "serviceca") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "serviceca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.095399 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" (OuterVolumeSpecName: "kube-api-access-x7zkh") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "kube-api-access-x7zkh". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.095503 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.095586 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" (OuterVolumeSpecName: "kube-api-access-d6qdx") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "kube-api-access-d6qdx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.095595 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" (OuterVolumeSpecName: "kube-api-access-xcphl") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "kube-api-access-xcphl". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.096078 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" (OuterVolumeSpecName: "utilities") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.096152 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.096441 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.096696 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" (OuterVolumeSpecName: "config") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 09:57:19 crc kubenswrapper[4698]: E1014 09:57:19.097497 4698 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.098012 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" (OuterVolumeSpecName: "kube-api-access-dbsvg") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "kube-api-access-dbsvg". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.098293 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.098514 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" (OuterVolumeSpecName: "samples-operator-tls") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). 
InnerVolumeSpecName "samples-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.098794 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" (OuterVolumeSpecName: "utilities") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.099162 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" (OuterVolumeSpecName: "config") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.099263 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.099312 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "service-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 09:57:19 crc kubenswrapper[4698]: E1014 09:57:19.100202 4698 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Oct 14 09:57:19 crc kubenswrapper[4698]: E1014 09:57:19.101022 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-10-14 09:57:19.600999161 +0000 UTC m=+21.298298577 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.101406 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 09:57:19 crc kubenswrapper[4698]: E1014 09:57:19.101581 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-10-14 09:57:19.601572347 +0000 UTC m=+21.298871763 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.100877 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.101665 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") on node \"crc\" DevicePath \"\"" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.101683 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") on node \"crc\" DevicePath \"\"" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.101693 4698 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") on node \"crc\" DevicePath \"\"" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.101704 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") on node \"crc\" DevicePath \"\"" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.101742 4698 reconciler_common.go:293] "Volume detached for volume 
\"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") on node \"crc\" DevicePath \"\"" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.101751 4698 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") on node \"crc\" DevicePath \"\"" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.101774 4698 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") on node \"crc\" DevicePath \"\"" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.101782 4698 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") on node \"crc\" DevicePath \"\"" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.101791 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") on node \"crc\" DevicePath \"\"" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.101800 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") on node \"crc\" DevicePath \"\"" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.101810 4698 reconciler_common.go:293] "Volume detached for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") on node \"crc\" DevicePath \"\"" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.101819 4698 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: 
\"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.101829 4698 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") on node \"crc\" DevicePath \"\"" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.101837 4698 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") on node \"crc\" DevicePath \"\"" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.101846 4698 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") on node \"crc\" DevicePath \"\"" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.101855 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") on node \"crc\" DevicePath \"\"" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.101869 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") on node \"crc\" DevicePath \"\"" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.101878 4698 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.101889 4698 reconciler_common.go:293] "Volume detached for volume \"package-server-manager-serving-cert\" (UniqueName: 
\"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") on node \"crc\" DevicePath \"\"" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.101900 4698 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.101959 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") on node \"crc\" DevicePath \"\"" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.101989 4698 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") on node \"crc\" DevicePath \"\"" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.102001 4698 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") on node \"crc\" DevicePath \"\"" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.102016 4698 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") on node \"crc\" DevicePath \"\"" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.102029 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") on node \"crc\" DevicePath \"\"" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.102041 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") on 
node \"crc\" DevicePath \"\"" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.102052 4698 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") on node \"crc\" DevicePath \"\"" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.102063 4698 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.102075 4698 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") on node \"crc\" DevicePath \"\"" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.102085 4698 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.102097 4698 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") on node \"crc\" DevicePath \"\"" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.102106 4698 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") on node \"crc\" DevicePath \"\"" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.102743 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") on node \"crc\" DevicePath \"\"" Oct 14 09:57:19 crc 
kubenswrapper[4698]: I1014 09:57:19.102776 4698 reconciler_common.go:293] "Volume detached for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.102051 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.103230 4698 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.103868 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.106133 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "audit-policies". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.106530 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.107737 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" (OuterVolumeSpecName: "kube-api-access-fcqwp") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "kube-api-access-fcqwp". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.107967 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" (OuterVolumeSpecName: "kube-api-access-bf2bz") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "kube-api-access-bf2bz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.108297 4698 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.108446 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.108457 4698 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.108660 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" (OuterVolumeSpecName: "image-registry-operator-tls") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "image-registry-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.108749 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" (OuterVolumeSpecName: "console-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.108937 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.109176 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" (OuterVolumeSpecName: "service-ca") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.109682 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" (OuterVolumeSpecName: "kube-api-access-htfz6") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "kube-api-access-htfz6". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.109959 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" (OuterVolumeSpecName: "multus-daemon-config") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "multus-daemon-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.110182 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.110642 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" (OuterVolumeSpecName: "config") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.110946 4698 reconciler_common.go:293] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\"" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.110972 4698 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") on node \"crc\" DevicePath \"\"" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.111009 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") on node \"crc\" DevicePath \"\"" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.111022 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") on node \"crc\" DevicePath 
\"\"" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.111032 4698 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") on node \"crc\" DevicePath \"\"" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.111044 4698 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") on node \"crc\" DevicePath \"\"" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.111054 4698 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") on node \"crc\" DevicePath \"\"" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.111087 4698 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") on node \"crc\" DevicePath \"\"" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.111099 4698 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") on node \"crc\" DevicePath \"\"" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.111095 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" (OuterVolumeSpecName: "machine-api-operator-tls") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "machine-api-operator-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.111110 4698 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.111125 4698 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.111159 4698 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") on node \"crc\" DevicePath \"\"" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.111979 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.112122 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" (OuterVolumeSpecName: "kube-api-access-xcgwh") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "kube-api-access-xcgwh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.112664 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" (OuterVolumeSpecName: "config") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.112782 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.112839 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.112892 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" (OuterVolumeSpecName: "kube-api-access-9xfj7") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "kube-api-access-9xfj7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.112918 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.113621 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.114658 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" (OuterVolumeSpecName: "kube-api-access-jkwtn") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "kube-api-access-jkwtn". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.115353 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" (OuterVolumeSpecName: "etcd-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.122058 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.122978 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" (OuterVolumeSpecName: "mcc-auth-proxy-config") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "mcc-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.125594 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.125943 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "encryption-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.126353 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" (OuterVolumeSpecName: "kube-api-access-sb6h7") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "kube-api-access-sb6h7". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.126740 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "webhook-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.126804 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-serving-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.126832 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Oct 14 09:57:19 crc kubenswrapper[4698]: E1014 09:57:19.135270 4698 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Oct 14 09:57:19 crc kubenswrapper[4698]: E1014 09:57:19.135318 4698 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Oct 14 09:57:19 crc kubenswrapper[4698]: E1014 09:57:19.135334 4698 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 14 09:57:19 crc kubenswrapper[4698]: E1014 09:57:19.135457 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-10-14 09:57:19.635408329 +0000 UTC m=+21.332707755 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.136049 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.143956 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.144442 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.147344 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.147875 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.149101 4698 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="a8cea8367d4864478508221648a52582ee794ca4eb7b48f112ca0d78d97c2f7e" exitCode=255 Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.149137 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"a8cea8367d4864478508221648a52582ee794ca4eb7b48f112ca0d78d97c2f7e"} Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.149179 4698 scope.go:117] "RemoveContainer" containerID="516fd9f00404acd0654586207f87d6ee2b5b9e1b655f7ba11582034da0295950" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.149636 4698 scope.go:117] "RemoveContainer" containerID="a8cea8367d4864478508221648a52582ee794ca4eb7b48f112ca0d78d97c2f7e" Oct 14 09:57:19 crc kubenswrapper[4698]: E1014 09:57:19.149815 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.151283 4698 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.152191 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.152550 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.152669 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" (OuterVolumeSpecName: "kube-api-access-tk88c") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "kube-api-access-tk88c". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.158939 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Oct 14 09:57:19 crc kubenswrapper[4698]: E1014 09:57:19.160050 4698 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Oct 14 09:57:19 crc kubenswrapper[4698]: E1014 09:57:19.160147 4698 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Oct 14 09:57:19 crc kubenswrapper[4698]: E1014 09:57:19.160208 4698 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 14 09:57:19 crc kubenswrapper[4698]: E1014 09:57:19.160335 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-10-14 09:57:19.660295093 +0000 UTC m=+21.357594509 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.163674 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: 
connect: connection refused" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.167548 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.168464 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.176489 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.176788 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "ca-trust-extracted". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.187088 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.202136 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8rj7q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b3d7ebe7-24ac-4bb6-be80-db147dc1c604\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-95sjn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T09:57:18Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8rj7q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.212246 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/5e9f983f-10a0-43b7-8590-346577a561ef-cni-binary-copy\") pod \"multus-additional-cni-plugins-5twvn\" (UID: \"5e9f983f-10a0-43b7-8590-346577a561ef\") " pod="openshift-multus/multus-additional-cni-plugins-5twvn" Oct 14 09:57:19 crc 
kubenswrapper[4698]: I1014 09:57:19.212326 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/fbf10bbc-318d-4f46-83a0-fdbad9888201-multus-socket-dir-parent\") pod \"multus-b7cbk\" (UID: \"fbf10bbc-318d-4f46-83a0-fdbad9888201\") " pod="openshift-multus/multus-b7cbk" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.212349 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/fbf10bbc-318d-4f46-83a0-fdbad9888201-host-var-lib-kubelet\") pod \"multus-b7cbk\" (UID: \"fbf10bbc-318d-4f46-83a0-fdbad9888201\") " pod="openshift-multus/multus-b7cbk" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.212370 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.212390 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/fbf10bbc-318d-4f46-83a0-fdbad9888201-hostroot\") pod \"multus-b7cbk\" (UID: \"fbf10bbc-318d-4f46-83a0-fdbad9888201\") " pod="openshift-multus/multus-b7cbk" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.212413 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d02f5359-81fc-4261-b995-e58c78bcec0e-host-slash\") pod \"ovnkube-node-hspfz\" (UID: \"d02f5359-81fc-4261-b995-e58c78bcec0e\") " pod="openshift-ovn-kubernetes/ovnkube-node-hspfz" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.212434 4698 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/d02f5359-81fc-4261-b995-e58c78bcec0e-ovn-node-metrics-cert\") pod \"ovnkube-node-hspfz\" (UID: \"d02f5359-81fc-4261-b995-e58c78bcec0e\") " pod="openshift-ovn-kubernetes/ovnkube-node-hspfz" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.212457 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d02f5359-81fc-4261-b995-e58c78bcec0e-run-openvswitch\") pod \"ovnkube-node-hspfz\" (UID: \"d02f5359-81fc-4261-b995-e58c78bcec0e\") " pod="openshift-ovn-kubernetes/ovnkube-node-hspfz" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.212478 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/d02f5359-81fc-4261-b995-e58c78bcec0e-log-socket\") pod \"ovnkube-node-hspfz\" (UID: \"d02f5359-81fc-4261-b995-e58c78bcec0e\") " pod="openshift-ovn-kubernetes/ovnkube-node-hspfz" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.212499 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/fbf10bbc-318d-4f46-83a0-fdbad9888201-host-var-lib-cni-multus\") pod \"multus-b7cbk\" (UID: \"fbf10bbc-318d-4f46-83a0-fdbad9888201\") " pod="openshift-multus/multus-b7cbk" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.212519 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/fbf10bbc-318d-4f46-83a0-fdbad9888201-multus-conf-dir\") pod \"multus-b7cbk\" (UID: \"fbf10bbc-318d-4f46-83a0-fdbad9888201\") " pod="openshift-multus/multus-b7cbk" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.212555 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: 
\"kubernetes.io/configmap/c359a8fc-1e2f-49af-8da2-719d52bd969a-mcd-auth-proxy-config\") pod \"machine-config-daemon-lp4sk\" (UID: \"c359a8fc-1e2f-49af-8da2-719d52bd969a\") " pod="openshift-machine-config-operator/machine-config-daemon-lp4sk" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.212583 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/d02f5359-81fc-4261-b995-e58c78bcec0e-host-run-netns\") pod \"ovnkube-node-hspfz\" (UID: \"d02f5359-81fc-4261-b995-e58c78bcec0e\") " pod="openshift-ovn-kubernetes/ovnkube-node-hspfz" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.212606 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c9lxj\" (UniqueName: \"kubernetes.io/projected/e00aa977-8736-4b4d-8d58-c3d13879c49a-kube-api-access-c9lxj\") pod \"node-ca-pfxrp\" (UID: \"e00aa977-8736-4b4d-8d58-c3d13879c49a\") " pod="openshift-image-registry/node-ca-pfxrp" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.212626 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-95sjn\" (UniqueName: \"kubernetes.io/projected/b3d7ebe7-24ac-4bb6-be80-db147dc1c604-kube-api-access-95sjn\") pod \"node-resolver-8rj7q\" (UID: \"b3d7ebe7-24ac-4bb6-be80-db147dc1c604\") " pod="openshift-dns/node-resolver-8rj7q" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.212647 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/fbf10bbc-318d-4f46-83a0-fdbad9888201-multus-cni-dir\") pod \"multus-b7cbk\" (UID: \"fbf10bbc-318d-4f46-83a0-fdbad9888201\") " pod="openshift-multus/multus-b7cbk" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.212668 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tkghj\" (UniqueName: 
\"kubernetes.io/projected/fbf10bbc-318d-4f46-83a0-fdbad9888201-kube-api-access-tkghj\") pod \"multus-b7cbk\" (UID: \"fbf10bbc-318d-4f46-83a0-fdbad9888201\") " pod="openshift-multus/multus-b7cbk" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.212700 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/d02f5359-81fc-4261-b995-e58c78bcec0e-host-kubelet\") pod \"ovnkube-node-hspfz\" (UID: \"d02f5359-81fc-4261-b995-e58c78bcec0e\") " pod="openshift-ovn-kubernetes/ovnkube-node-hspfz" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.212728 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/b3d7ebe7-24ac-4bb6-be80-db147dc1c604-hosts-file\") pod \"node-resolver-8rj7q\" (UID: \"b3d7ebe7-24ac-4bb6-be80-db147dc1c604\") " pod="openshift-dns/node-resolver-8rj7q" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.212749 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/d02f5359-81fc-4261-b995-e58c78bcec0e-host-cni-netd\") pod \"ovnkube-node-hspfz\" (UID: \"d02f5359-81fc-4261-b995-e58c78bcec0e\") " pod="openshift-ovn-kubernetes/ovnkube-node-hspfz" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.212790 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/fbf10bbc-318d-4f46-83a0-fdbad9888201-system-cni-dir\") pod \"multus-b7cbk\" (UID: \"fbf10bbc-318d-4f46-83a0-fdbad9888201\") " pod="openshift-multus/multus-b7cbk" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.212813 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d02f5359-81fc-4261-b995-e58c78bcec0e-var-lib-openvswitch\") pod \"ovnkube-node-hspfz\" (UID: 
\"d02f5359-81fc-4261-b995-e58c78bcec0e\") " pod="openshift-ovn-kubernetes/ovnkube-node-hspfz" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.212840 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/d02f5359-81fc-4261-b995-e58c78bcec0e-run-systemd\") pod \"ovnkube-node-hspfz\" (UID: \"d02f5359-81fc-4261-b995-e58c78bcec0e\") " pod="openshift-ovn-kubernetes/ovnkube-node-hspfz" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.212866 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.212885 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/fbf10bbc-318d-4f46-83a0-fdbad9888201-cni-binary-copy\") pod \"multus-b7cbk\" (UID: \"fbf10bbc-318d-4f46-83a0-fdbad9888201\") " pod="openshift-multus/multus-b7cbk" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.212905 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/fbf10bbc-318d-4f46-83a0-fdbad9888201-etc-kubernetes\") pod \"multus-b7cbk\" (UID: \"fbf10bbc-318d-4f46-83a0-fdbad9888201\") " pod="openshift-multus/multus-b7cbk" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.212927 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/5e9f983f-10a0-43b7-8590-346577a561ef-system-cni-dir\") pod \"multus-additional-cni-plugins-5twvn\" (UID: \"5e9f983f-10a0-43b7-8590-346577a561ef\") " pod="openshift-multus/multus-additional-cni-plugins-5twvn" 
Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.212958 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/5e9f983f-10a0-43b7-8590-346577a561ef-os-release\") pod \"multus-additional-cni-plugins-5twvn\" (UID: \"5e9f983f-10a0-43b7-8590-346577a561ef\") " pod="openshift-multus/multus-additional-cni-plugins-5twvn" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.212982 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h8x7t\" (UniqueName: \"kubernetes.io/projected/5e9f983f-10a0-43b7-8590-346577a561ef-kube-api-access-h8x7t\") pod \"multus-additional-cni-plugins-5twvn\" (UID: \"5e9f983f-10a0-43b7-8590-346577a561ef\") " pod="openshift-multus/multus-additional-cni-plugins-5twvn" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.213004 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/fbf10bbc-318d-4f46-83a0-fdbad9888201-cnibin\") pod \"multus-b7cbk\" (UID: \"fbf10bbc-318d-4f46-83a0-fdbad9888201\") " pod="openshift-multus/multus-b7cbk" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.213023 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/fbf10bbc-318d-4f46-83a0-fdbad9888201-host-var-lib-cni-bin\") pod \"multus-b7cbk\" (UID: \"fbf10bbc-318d-4f46-83a0-fdbad9888201\") " pod="openshift-multus/multus-b7cbk" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.213043 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/d02f5359-81fc-4261-b995-e58c78bcec0e-ovnkube-script-lib\") pod \"ovnkube-node-hspfz\" (UID: \"d02f5359-81fc-4261-b995-e58c78bcec0e\") " pod="openshift-ovn-kubernetes/ovnkube-node-hspfz" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 
09:57:19.213063 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/5e9f983f-10a0-43b7-8590-346577a561ef-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-5twvn\" (UID: \"5e9f983f-10a0-43b7-8590-346577a561ef\") " pod="openshift-multus/multus-additional-cni-plugins-5twvn" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.213096 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/d02f5359-81fc-4261-b995-e58c78bcec0e-node-log\") pod \"ovnkube-node-hspfz\" (UID: \"d02f5359-81fc-4261-b995-e58c78bcec0e\") " pod="openshift-ovn-kubernetes/ovnkube-node-hspfz" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.213120 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/fbf10bbc-318d-4f46-83a0-fdbad9888201-host-run-netns\") pod \"multus-b7cbk\" (UID: \"fbf10bbc-318d-4f46-83a0-fdbad9888201\") " pod="openshift-multus/multus-b7cbk" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.213142 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/d02f5359-81fc-4261-b995-e58c78bcec0e-systemd-units\") pod \"ovnkube-node-hspfz\" (UID: \"d02f5359-81fc-4261-b995-e58c78bcec0e\") " pod="openshift-ovn-kubernetes/ovnkube-node-hspfz" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.213165 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/d02f5359-81fc-4261-b995-e58c78bcec0e-ovnkube-config\") pod \"ovnkube-node-hspfz\" (UID: \"d02f5359-81fc-4261-b995-e58c78bcec0e\") " pod="openshift-ovn-kubernetes/ovnkube-node-hspfz" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.213197 4698 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-pjlwb\" (UniqueName: \"kubernetes.io/projected/d02f5359-81fc-4261-b995-e58c78bcec0e-kube-api-access-pjlwb\") pod \"ovnkube-node-hspfz\" (UID: \"d02f5359-81fc-4261-b995-e58c78bcec0e\") " pod="openshift-ovn-kubernetes/ovnkube-node-hspfz" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.213219 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/e00aa977-8736-4b4d-8d58-c3d13879c49a-host\") pod \"node-ca-pfxrp\" (UID: \"e00aa977-8736-4b4d-8d58-c3d13879c49a\") " pod="openshift-image-registry/node-ca-pfxrp" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.213240 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/5e9f983f-10a0-43b7-8590-346577a561ef-cnibin\") pod \"multus-additional-cni-plugins-5twvn\" (UID: \"5e9f983f-10a0-43b7-8590-346577a561ef\") " pod="openshift-multus/multus-additional-cni-plugins-5twvn" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.213260 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/d02f5359-81fc-4261-b995-e58c78bcec0e-run-ovn\") pod \"ovnkube-node-hspfz\" (UID: \"d02f5359-81fc-4261-b995-e58c78bcec0e\") " pod="openshift-ovn-kubernetes/ovnkube-node-hspfz" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.213280 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/d02f5359-81fc-4261-b995-e58c78bcec0e-env-overrides\") pod \"ovnkube-node-hspfz\" (UID: \"d02f5359-81fc-4261-b995-e58c78bcec0e\") " pod="openshift-ovn-kubernetes/ovnkube-node-hspfz" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.213304 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" 
(UniqueName: \"kubernetes.io/host-path/d02f5359-81fc-4261-b995-e58c78bcec0e-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-hspfz\" (UID: \"d02f5359-81fc-4261-b995-e58c78bcec0e\") " pod="openshift-ovn-kubernetes/ovnkube-node-hspfz" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.213327 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/fbf10bbc-318d-4f46-83a0-fdbad9888201-host-run-k8s-cni-cncf-io\") pod \"multus-b7cbk\" (UID: \"fbf10bbc-318d-4f46-83a0-fdbad9888201\") " pod="openshift-multus/multus-b7cbk" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.213347 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/fbf10bbc-318d-4f46-83a0-fdbad9888201-multus-daemon-config\") pod \"multus-b7cbk\" (UID: \"fbf10bbc-318d-4f46-83a0-fdbad9888201\") " pod="openshift-multus/multus-b7cbk" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.213370 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/fbf10bbc-318d-4f46-83a0-fdbad9888201-host-run-multus-certs\") pod \"multus-b7cbk\" (UID: \"fbf10bbc-318d-4f46-83a0-fdbad9888201\") " pod="openshift-multus/multus-b7cbk" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.213393 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/5e9f983f-10a0-43b7-8590-346577a561ef-tuning-conf-dir\") pod \"multus-additional-cni-plugins-5twvn\" (UID: \"5e9f983f-10a0-43b7-8590-346577a561ef\") " pod="openshift-multus/multus-additional-cni-plugins-5twvn" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.213414 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lzl92\" (UniqueName: 
\"kubernetes.io/projected/c359a8fc-1e2f-49af-8da2-719d52bd969a-kube-api-access-lzl92\") pod \"machine-config-daemon-lp4sk\" (UID: \"c359a8fc-1e2f-49af-8da2-719d52bd969a\") " pod="openshift-machine-config-operator/machine-config-daemon-lp4sk" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.213435 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/d02f5359-81fc-4261-b995-e58c78bcec0e-host-run-ovn-kubernetes\") pod \"ovnkube-node-hspfz\" (UID: \"d02f5359-81fc-4261-b995-e58c78bcec0e\") " pod="openshift-ovn-kubernetes/ovnkube-node-hspfz" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.213459 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/c359a8fc-1e2f-49af-8da2-719d52bd969a-proxy-tls\") pod \"machine-config-daemon-lp4sk\" (UID: \"c359a8fc-1e2f-49af-8da2-719d52bd969a\") " pod="openshift-machine-config-operator/machine-config-daemon-lp4sk" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.213490 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/c359a8fc-1e2f-49af-8da2-719d52bd969a-rootfs\") pod \"machine-config-daemon-lp4sk\" (UID: \"c359a8fc-1e2f-49af-8da2-719d52bd969a\") " pod="openshift-machine-config-operator/machine-config-daemon-lp4sk" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.213511 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d02f5359-81fc-4261-b995-e58c78bcec0e-etc-openvswitch\") pod \"ovnkube-node-hspfz\" (UID: \"d02f5359-81fc-4261-b995-e58c78bcec0e\") " pod="openshift-ovn-kubernetes/ovnkube-node-hspfz" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.213532 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: 
\"kubernetes.io/host-path/d02f5359-81fc-4261-b995-e58c78bcec0e-host-cni-bin\") pod \"ovnkube-node-hspfz\" (UID: \"d02f5359-81fc-4261-b995-e58c78bcec0e\") " pod="openshift-ovn-kubernetes/ovnkube-node-hspfz" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.213555 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/e00aa977-8736-4b4d-8d58-c3d13879c49a-serviceca\") pod \"node-ca-pfxrp\" (UID: \"e00aa977-8736-4b4d-8d58-c3d13879c49a\") " pod="openshift-image-registry/node-ca-pfxrp" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.213598 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/fbf10bbc-318d-4f46-83a0-fdbad9888201-os-release\") pod \"multus-b7cbk\" (UID: \"fbf10bbc-318d-4f46-83a0-fdbad9888201\") " pod="openshift-multus/multus-b7cbk" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.213656 4698 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.213672 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") on node \"crc\" DevicePath \"\"" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.213685 4698 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.213698 4698 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") on node \"crc\" DevicePath 
\"\"" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.213710 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") on node \"crc\" DevicePath \"\"" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.213723 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") on node \"crc\" DevicePath \"\"" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.213736 4698 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.213748 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") on node \"crc\" DevicePath \"\"" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.213760 4698 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") on node \"crc\" DevicePath \"\"" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.213791 4698 reconciler_common.go:293] "Volume detached for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") on node \"crc\" DevicePath \"\"" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.213803 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") on node \"crc\" DevicePath \"\"" Oct 14 09:57:19 crc 
kubenswrapper[4698]: I1014 09:57:19.213815 4698 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") on node \"crc\" DevicePath \"\"" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.213827 4698 reconciler_common.go:293] "Volume detached for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") on node \"crc\" DevicePath \"\"" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.213839 4698 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") on node \"crc\" DevicePath \"\"" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.213850 4698 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") on node \"crc\" DevicePath \"\"" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.213862 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") on node \"crc\" DevicePath \"\"" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.213874 4698 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") on node \"crc\" DevicePath \"\"" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.213886 4698 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.213898 4698 reconciler_common.go:293] "Volume detached for 
volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.213909 4698 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.213921 4698 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") on node \"crc\" DevicePath \"\"" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.213933 4698 reconciler_common.go:293] "Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\"" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.213945 4698 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.213958 4698 reconciler_common.go:293] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") on node \"crc\" DevicePath \"\"" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.213969 4698 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.213982 4698 reconciler_common.go:293] "Volume detached for volume \"image-import-ca\" (UniqueName: 
\"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") on node \"crc\" DevicePath \"\"" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.213994 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") on node \"crc\" DevicePath \"\"" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.214006 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") on node \"crc\" DevicePath \"\"" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.214018 4698 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.214030 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") on node \"crc\" DevicePath \"\"" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.214042 4698 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.214053 4698 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") on node \"crc\" DevicePath \"\"" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.214064 4698 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") on node 
\"crc\" DevicePath \"\"" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.214076 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") on node \"crc\" DevicePath \"\"" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.214088 4698 reconciler_common.go:293] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") on node \"crc\" DevicePath \"\"" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.214099 4698 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") on node \"crc\" DevicePath \"\"" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.214113 4698 reconciler_common.go:293] "Volume detached for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.214125 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") on node \"crc\" DevicePath \"\"" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.214136 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") on node \"crc\" DevicePath \"\"" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.214150 4698 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") on node \"crc\" DevicePath \"\"" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.214162 4698 
reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") on node \"crc\" DevicePath \"\"" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.214173 4698 reconciler_common.go:293] "Volume detached for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") on node \"crc\" DevicePath \"\"" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.214183 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") on node \"crc\" DevicePath \"\"" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.214195 4698 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") on node \"crc\" DevicePath \"\"" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.214209 4698 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") on node \"crc\" DevicePath \"\"" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.214221 4698 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") on node \"crc\" DevicePath \"\"" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.214233 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") on node \"crc\" DevicePath \"\"" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.214244 4698 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") on node \"crc\" DevicePath \"\"" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.214256 4698 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.214279 4698 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") on node \"crc\" DevicePath \"\"" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.214292 4698 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.214305 4698 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") on node \"crc\" DevicePath \"\"" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.214316 4698 reconciler_common.go:293] "Volume detached for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") on node \"crc\" DevicePath \"\"" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.214328 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") on node \"crc\" DevicePath \"\"" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.214339 4698 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: 
\"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") on node \"crc\" DevicePath \"\"" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.214351 4698 reconciler_common.go:293] "Volume detached for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") on node \"crc\" DevicePath \"\"" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.214366 4698 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") on node \"crc\" DevicePath \"\"" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.214378 4698 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") on node \"crc\" DevicePath \"\"" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.214389 4698 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.214402 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") on node \"crc\" DevicePath \"\"" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.214415 4698 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") on node \"crc\" DevicePath \"\"" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.214426 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") on node \"crc\" 
DevicePath \"\"" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.214438 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") on node \"crc\" DevicePath \"\"" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.214450 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") on node \"crc\" DevicePath \"\"" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.214461 4698 reconciler_common.go:293] "Volume detached for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") on node \"crc\" DevicePath \"\"" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.214475 4698 reconciler_common.go:293] "Volume detached for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") on node \"crc\" DevicePath \"\"" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.214485 4698 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") on node \"crc\" DevicePath \"\"" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.214496 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") on node \"crc\" DevicePath \"\"" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.214507 4698 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 
09:57:19.214520 4698 reconciler_common.go:293] "Volume detached for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") on node \"crc\" DevicePath \"\"" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.214531 4698 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") on node \"crc\" DevicePath \"\"" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.214542 4698 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.214553 4698 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.214563 4698 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.214575 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") on node \"crc\" DevicePath \"\"" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.214586 4698 reconciler_common.go:293] "Volume detached for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") on node \"crc\" DevicePath \"\"" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.214597 4698 reconciler_common.go:293] "Volume detached for volume 
\"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") on node \"crc\" DevicePath \"\"" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.214623 4698 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") on node \"crc\" DevicePath \"\"" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.214636 4698 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") on node \"crc\" DevicePath \"\"" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.214648 4698 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") on node \"crc\" DevicePath \"\"" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.214659 4698 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") on node \"crc\" DevicePath \"\"" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.214670 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") on node \"crc\" DevicePath \"\"" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.214680 4698 reconciler_common.go:293] "Volume detached for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") on node \"crc\" DevicePath \"\"" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.214691 4698 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") on node \"crc\" DevicePath \"\"" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 
09:57:19.214702 4698 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.214715 4698 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.214728 4698 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") on node \"crc\" DevicePath \"\"" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.214739 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") on node \"crc\" DevicePath \"\"" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.214750 4698 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") on node \"crc\" DevicePath \"\"" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.214801 4698 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") on node \"crc\" DevicePath \"\"" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.214817 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") on node \"crc\" DevicePath \"\"" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.214829 4698 reconciler_common.go:293] "Volume detached for volume 
\"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") on node \"crc\" DevicePath \"\"" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.214841 4698 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") on node \"crc\" DevicePath \"\"" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.214900 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/5e9f983f-10a0-43b7-8590-346577a561ef-system-cni-dir\") pod \"multus-additional-cni-plugins-5twvn\" (UID: \"5e9f983f-10a0-43b7-8590-346577a561ef\") " pod="openshift-multus/multus-additional-cni-plugins-5twvn" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.214979 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/d02f5359-81fc-4261-b995-e58c78bcec0e-host-cni-bin\") pod \"ovnkube-node-hspfz\" (UID: \"d02f5359-81fc-4261-b995-e58c78bcec0e\") " pod="openshift-ovn-kubernetes/ovnkube-node-hspfz" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.214979 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") on node \"crc\" DevicePath \"\"" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.215755 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/d02f5359-81fc-4261-b995-e58c78bcec0e-ovnkube-config\") pod \"ovnkube-node-hspfz\" (UID: \"d02f5359-81fc-4261-b995-e58c78bcec0e\") " pod="openshift-ovn-kubernetes/ovnkube-node-hspfz" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.215806 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: 
\"kubernetes.io/configmap/5e9f983f-10a0-43b7-8590-346577a561ef-cni-binary-copy\") pod \"multus-additional-cni-plugins-5twvn\" (UID: \"5e9f983f-10a0-43b7-8590-346577a561ef\") " pod="openshift-multus/multus-additional-cni-plugins-5twvn" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.215866 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/fbf10bbc-318d-4f46-83a0-fdbad9888201-multus-socket-dir-parent\") pod \"multus-b7cbk\" (UID: \"fbf10bbc-318d-4f46-83a0-fdbad9888201\") " pod="openshift-multus/multus-b7cbk" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.215901 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/fbf10bbc-318d-4f46-83a0-fdbad9888201-host-var-lib-kubelet\") pod \"multus-b7cbk\" (UID: \"fbf10bbc-318d-4f46-83a0-fdbad9888201\") " pod="openshift-multus/multus-b7cbk" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.216030 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.216065 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/fbf10bbc-318d-4f46-83a0-fdbad9888201-hostroot\") pod \"multus-b7cbk\" (UID: \"fbf10bbc-318d-4f46-83a0-fdbad9888201\") " pod="openshift-multus/multus-b7cbk" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.216087 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/e00aa977-8736-4b4d-8d58-c3d13879c49a-host\") pod \"node-ca-pfxrp\" (UID: 
\"e00aa977-8736-4b4d-8d58-c3d13879c49a\") " pod="openshift-image-registry/node-ca-pfxrp" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.216098 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d02f5359-81fc-4261-b995-e58c78bcec0e-host-slash\") pod \"ovnkube-node-hspfz\" (UID: \"d02f5359-81fc-4261-b995-e58c78bcec0e\") " pod="openshift-ovn-kubernetes/ovnkube-node-hspfz" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.216122 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/5e9f983f-10a0-43b7-8590-346577a561ef-cnibin\") pod \"multus-additional-cni-plugins-5twvn\" (UID: \"5e9f983f-10a0-43b7-8590-346577a561ef\") " pod="openshift-multus/multus-additional-cni-plugins-5twvn" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.216173 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/e00aa977-8736-4b4d-8d58-c3d13879c49a-serviceca\") pod \"node-ca-pfxrp\" (UID: \"e00aa977-8736-4b4d-8d58-c3d13879c49a\") " pod="openshift-image-registry/node-ca-pfxrp" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.216183 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/fbf10bbc-318d-4f46-83a0-fdbad9888201-os-release\") pod \"multus-b7cbk\" (UID: \"fbf10bbc-318d-4f46-83a0-fdbad9888201\") " pod="openshift-multus/multus-b7cbk" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.216205 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d02f5359-81fc-4261-b995-e58c78bcec0e-run-openvswitch\") pod \"ovnkube-node-hspfz\" (UID: \"d02f5359-81fc-4261-b995-e58c78bcec0e\") " pod="openshift-ovn-kubernetes/ovnkube-node-hspfz" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.216226 4698 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/d02f5359-81fc-4261-b995-e58c78bcec0e-log-socket\") pod \"ovnkube-node-hspfz\" (UID: \"d02f5359-81fc-4261-b995-e58c78bcec0e\") " pod="openshift-ovn-kubernetes/ovnkube-node-hspfz" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.216247 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/fbf10bbc-318d-4f46-83a0-fdbad9888201-host-var-lib-cni-multus\") pod \"multus-b7cbk\" (UID: \"fbf10bbc-318d-4f46-83a0-fdbad9888201\") " pod="openshift-multus/multus-b7cbk" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.216264 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/fbf10bbc-318d-4f46-83a0-fdbad9888201-multus-conf-dir\") pod \"multus-b7cbk\" (UID: \"fbf10bbc-318d-4f46-83a0-fdbad9888201\") " pod="openshift-multus/multus-b7cbk" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.216863 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/c359a8fc-1e2f-49af-8da2-719d52bd969a-mcd-auth-proxy-config\") pod \"machine-config-daemon-lp4sk\" (UID: \"c359a8fc-1e2f-49af-8da2-719d52bd969a\") " pod="openshift-machine-config-operator/machine-config-daemon-lp4sk" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.216903 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/d02f5359-81fc-4261-b995-e58c78bcec0e-host-run-netns\") pod \"ovnkube-node-hspfz\" (UID: \"d02f5359-81fc-4261-b995-e58c78bcec0e\") " pod="openshift-ovn-kubernetes/ovnkube-node-hspfz" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.217317 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: 
\"kubernetes.io/host-path/fbf10bbc-318d-4f46-83a0-fdbad9888201-multus-cni-dir\") pod \"multus-b7cbk\" (UID: \"fbf10bbc-318d-4f46-83a0-fdbad9888201\") " pod="openshift-multus/multus-b7cbk" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.217468 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/d02f5359-81fc-4261-b995-e58c78bcec0e-host-kubelet\") pod \"ovnkube-node-hspfz\" (UID: \"d02f5359-81fc-4261-b995-e58c78bcec0e\") " pod="openshift-ovn-kubernetes/ovnkube-node-hspfz" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.217585 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/b3d7ebe7-24ac-4bb6-be80-db147dc1c604-hosts-file\") pod \"node-resolver-8rj7q\" (UID: \"b3d7ebe7-24ac-4bb6-be80-db147dc1c604\") " pod="openshift-dns/node-resolver-8rj7q" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.217605 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/d02f5359-81fc-4261-b995-e58c78bcec0e-host-cni-netd\") pod \"ovnkube-node-hspfz\" (UID: \"d02f5359-81fc-4261-b995-e58c78bcec0e\") " pod="openshift-ovn-kubernetes/ovnkube-node-hspfz" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.217630 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/fbf10bbc-318d-4f46-83a0-fdbad9888201-system-cni-dir\") pod \"multus-b7cbk\" (UID: \"fbf10bbc-318d-4f46-83a0-fdbad9888201\") " pod="openshift-multus/multus-b7cbk" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.217647 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d02f5359-81fc-4261-b995-e58c78bcec0e-var-lib-openvswitch\") pod \"ovnkube-node-hspfz\" (UID: \"d02f5359-81fc-4261-b995-e58c78bcec0e\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-hspfz" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.217664 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/d02f5359-81fc-4261-b995-e58c78bcec0e-run-systemd\") pod \"ovnkube-node-hspfz\" (UID: \"d02f5359-81fc-4261-b995-e58c78bcec0e\") " pod="openshift-ovn-kubernetes/ovnkube-node-hspfz" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.217682 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.218282 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/fbf10bbc-318d-4f46-83a0-fdbad9888201-cni-binary-copy\") pod \"multus-b7cbk\" (UID: \"fbf10bbc-318d-4f46-83a0-fdbad9888201\") " pod="openshift-multus/multus-b7cbk" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.218334 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/fbf10bbc-318d-4f46-83a0-fdbad9888201-etc-kubernetes\") pod \"multus-b7cbk\" (UID: \"fbf10bbc-318d-4f46-83a0-fdbad9888201\") " pod="openshift-multus/multus-b7cbk" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.218355 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/d02f5359-81fc-4261-b995-e58c78bcec0e-run-ovn\") pod \"ovnkube-node-hspfz\" (UID: \"d02f5359-81fc-4261-b995-e58c78bcec0e\") " pod="openshift-ovn-kubernetes/ovnkube-node-hspfz" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.218384 4698 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/fbf10bbc-318d-4f46-83a0-fdbad9888201-host-run-k8s-cni-cncf-io\") pod \"multus-b7cbk\" (UID: \"fbf10bbc-318d-4f46-83a0-fdbad9888201\") " pod="openshift-multus/multus-b7cbk" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.218428 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/5e9f983f-10a0-43b7-8590-346577a561ef-os-release\") pod \"multus-additional-cni-plugins-5twvn\" (UID: \"5e9f983f-10a0-43b7-8590-346577a561ef\") " pod="openshift-multus/multus-additional-cni-plugins-5twvn" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.218585 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/fbf10bbc-318d-4f46-83a0-fdbad9888201-cnibin\") pod \"multus-b7cbk\" (UID: \"fbf10bbc-318d-4f46-83a0-fdbad9888201\") " pod="openshift-multus/multus-b7cbk" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.218869 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/d02f5359-81fc-4261-b995-e58c78bcec0e-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-hspfz\" (UID: \"d02f5359-81fc-4261-b995-e58c78bcec0e\") " pod="openshift-ovn-kubernetes/ovnkube-node-hspfz" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.219671 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/5e9f983f-10a0-43b7-8590-346577a561ef-tuning-conf-dir\") pod \"multus-additional-cni-plugins-5twvn\" (UID: \"5e9f983f-10a0-43b7-8590-346577a561ef\") " pod="openshift-multus/multus-additional-cni-plugins-5twvn" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.219733 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: 
\"kubernetes.io/host-path/fbf10bbc-318d-4f46-83a0-fdbad9888201-host-run-multus-certs\") pod \"multus-b7cbk\" (UID: \"fbf10bbc-318d-4f46-83a0-fdbad9888201\") " pod="openshift-multus/multus-b7cbk" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.219793 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d02f5359-81fc-4261-b995-e58c78bcec0e-etc-openvswitch\") pod \"ovnkube-node-hspfz\" (UID: \"d02f5359-81fc-4261-b995-e58c78bcec0e\") " pod="openshift-ovn-kubernetes/ovnkube-node-hspfz" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.219827 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/d02f5359-81fc-4261-b995-e58c78bcec0e-node-log\") pod \"ovnkube-node-hspfz\" (UID: \"d02f5359-81fc-4261-b995-e58c78bcec0e\") " pod="openshift-ovn-kubernetes/ovnkube-node-hspfz" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.219877 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/fbf10bbc-318d-4f46-83a0-fdbad9888201-host-var-lib-cni-bin\") pod \"multus-b7cbk\" (UID: \"fbf10bbc-318d-4f46-83a0-fdbad9888201\") " pod="openshift-multus/multus-b7cbk" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.220154 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/d02f5359-81fc-4261-b995-e58c78bcec0e-host-run-ovn-kubernetes\") pod \"ovnkube-node-hspfz\" (UID: \"d02f5359-81fc-4261-b995-e58c78bcec0e\") " pod="openshift-ovn-kubernetes/ovnkube-node-hspfz" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.220200 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/c359a8fc-1e2f-49af-8da2-719d52bd969a-rootfs\") pod \"machine-config-daemon-lp4sk\" (UID: 
\"c359a8fc-1e2f-49af-8da2-719d52bd969a\") " pod="openshift-machine-config-operator/machine-config-daemon-lp4sk" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.220238 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/d02f5359-81fc-4261-b995-e58c78bcec0e-systemd-units\") pod \"ovnkube-node-hspfz\" (UID: \"d02f5359-81fc-4261-b995-e58c78bcec0e\") " pod="openshift-ovn-kubernetes/ovnkube-node-hspfz" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.220485 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/fbf10bbc-318d-4f46-83a0-fdbad9888201-host-run-netns\") pod \"multus-b7cbk\" (UID: \"fbf10bbc-318d-4f46-83a0-fdbad9888201\") " pod="openshift-multus/multus-b7cbk" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.220540 4698 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.220556 4698 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") on node \"crc\" DevicePath \"\"" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.220565 4698 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") on node \"crc\" DevicePath \"\"" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.220576 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") on node \"crc\" DevicePath \"\"" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.220586 4698 
reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") on node \"crc\" DevicePath \"\"" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.220596 4698 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") on node \"crc\" DevicePath \"\"" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.220605 4698 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") on node \"crc\" DevicePath \"\"" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.220614 4698 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") on node \"crc\" DevicePath \"\"" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.220623 4698 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") on node \"crc\" DevicePath \"\"" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.220632 4698 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.220642 4698 reconciler_common.go:293] "Volume detached for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") on node \"crc\" DevicePath \"\"" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.220650 4698 reconciler_common.go:293] "Volume detached for volume \"serviceca\" (UniqueName: 
\"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") on node \"crc\" DevicePath \"\"" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.220659 4698 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") on node \"crc\" DevicePath \"\"" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.220668 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") on node \"crc\" DevicePath \"\"" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.220679 4698 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.220688 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") on node \"crc\" DevicePath \"\"" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.220698 4698 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.220707 4698 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") on node \"crc\" DevicePath \"\"" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.220716 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dbsvg\" (UniqueName: 
\"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") on node \"crc\" DevicePath \"\"" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.220728 4698 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") on node \"crc\" DevicePath \"\"" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.220743 4698 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") on node \"crc\" DevicePath \"\"" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.220753 4698 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") on node \"crc\" DevicePath \"\"" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.220775 4698 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") on node \"crc\" DevicePath \"\"" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.220786 4698 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.220795 4698 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") on node \"crc\" DevicePath \"\"" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.221748 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/d02f5359-81fc-4261-b995-e58c78bcec0e-ovn-node-metrics-cert\") pod \"ovnkube-node-hspfz\" (UID: 
\"d02f5359-81fc-4261-b995-e58c78bcec0e\") " pod="openshift-ovn-kubernetes/ovnkube-node-hspfz" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.221906 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/fbf10bbc-318d-4f46-83a0-fdbad9888201-multus-daemon-config\") pod \"multus-b7cbk\" (UID: \"fbf10bbc-318d-4f46-83a0-fdbad9888201\") " pod="openshift-multus/multus-b7cbk" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.222005 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/5e9f983f-10a0-43b7-8590-346577a561ef-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-5twvn\" (UID: \"5e9f983f-10a0-43b7-8590-346577a561ef\") " pod="openshift-multus/multus-additional-cni-plugins-5twvn" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.222065 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/d02f5359-81fc-4261-b995-e58c78bcec0e-env-overrides\") pod \"ovnkube-node-hspfz\" (UID: \"d02f5359-81fc-4261-b995-e58c78bcec0e\") " pod="openshift-ovn-kubernetes/ovnkube-node-hspfz" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.222384 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/d02f5359-81fc-4261-b995-e58c78bcec0e-ovnkube-script-lib\") pod \"ovnkube-node-hspfz\" (UID: \"d02f5359-81fc-4261-b995-e58c78bcec0e\") " pod="openshift-ovn-kubernetes/ovnkube-node-hspfz" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.223901 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/c359a8fc-1e2f-49af-8da2-719d52bd969a-proxy-tls\") pod \"machine-config-daemon-lp4sk\" (UID: \"c359a8fc-1e2f-49af-8da2-719d52bd969a\") " 
pod="openshift-machine-config-operator/machine-config-daemon-lp4sk" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.226549 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hspfz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d02f5359-81fc-4261-b995-e58c78bcec0e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjlwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjlwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjlwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjlwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjlwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjlwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjlwb\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjlwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjlwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T09:57:18Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hspfz\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.234909 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-pfxrp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e00aa977-8736-4b4d-8d58-c3d13879c49a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c9lxj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T09:57:18Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-pfxrp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.238165 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-95sjn\" (UniqueName: \"kubernetes.io/projected/b3d7ebe7-24ac-4bb6-be80-db147dc1c604-kube-api-access-95sjn\") pod \"node-resolver-8rj7q\" (UID: \"b3d7ebe7-24ac-4bb6-be80-db147dc1c604\") " pod="openshift-dns/node-resolver-8rj7q" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.248531 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lzl92\" (UniqueName: 
\"kubernetes.io/projected/c359a8fc-1e2f-49af-8da2-719d52bd969a-kube-api-access-lzl92\") pod \"machine-config-daemon-lp4sk\" (UID: \"c359a8fc-1e2f-49af-8da2-719d52bd969a\") " pod="openshift-machine-config-operator/machine-config-daemon-lp4sk" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.249467 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pjlwb\" (UniqueName: \"kubernetes.io/projected/d02f5359-81fc-4261-b995-e58c78bcec0e-kube-api-access-pjlwb\") pod \"ovnkube-node-hspfz\" (UID: \"d02f5359-81fc-4261-b995-e58c78bcec0e\") " pod="openshift-ovn-kubernetes/ovnkube-node-hspfz" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.250127 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c9lxj\" (UniqueName: \"kubernetes.io/projected/e00aa977-8736-4b4d-8d58-c3d13879c49a-kube-api-access-c9lxj\") pod \"node-ca-pfxrp\" (UID: \"e00aa977-8736-4b4d-8d58-c3d13879c49a\") " pod="openshift-image-registry/node-ca-pfxrp" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.250422 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tkghj\" (UniqueName: \"kubernetes.io/projected/fbf10bbc-318d-4f46-83a0-fdbad9888201-kube-api-access-tkghj\") pod \"multus-b7cbk\" (UID: \"fbf10bbc-318d-4f46-83a0-fdbad9888201\") " pod="openshift-multus/multus-b7cbk" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.253177 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h8x7t\" (UniqueName: \"kubernetes.io/projected/5e9f983f-10a0-43b7-8590-346577a561ef-kube-api-access-h8x7t\") pod \"multus-additional-cni-plugins-5twvn\" (UID: \"5e9f983f-10a0-43b7-8590-346577a561ef\") " pod="openshift-multus/multus-additional-cni-plugins-5twvn" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.254158 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5twvn" err="failed to 
patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e9f983f-10a0-43b7-8590-346577a561ef\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8x7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8x7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\"
:false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8x7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8x7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\
\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8x7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8x7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/s
erviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8x7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T09:57:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5twvn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.261881 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.265099 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.266867 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.273559 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/node-ca-pfxrp" Oct 14 09:57:19 crc kubenswrapper[4698]: W1014 09:57:19.273952 4698 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd75a4c96_2883_4a0b_bab2_0fab2b6c0b49.slice/crio-f6137556875c0bead5a2d501c9b49820c9a2ce40c3f12ee78b255634dcd0cd4c WatchSource:0}: Error finding container f6137556875c0bead5a2d501c9b49820c9a2ce40c3f12ee78b255634dcd0cd4c: Status 404 returned error can't find the container with id f6137556875c0bead5a2d501c9b49820c9a2ce40c3f12ee78b255634dcd0cd4c Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.275785 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8rj7q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b3d7ebe7-24ac-4bb6-be80-db147dc1c604\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-95sjn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T09:57:18Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8rj7q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 14 09:57:19 crc kubenswrapper[4698]: W1014 09:57:19.281556 4698 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podef543e1b_8068_4ea3_b32a_61027b32e95d.slice/crio-905e19baad0c8434c9eafc70ebf9e5f1f55fdf092f5dcd8a60f57b9ceec4fc67 WatchSource:0}: Error finding container 905e19baad0c8434c9eafc70ebf9e5f1f55fdf092f5dcd8a60f57b9ceec4fc67: Status 404 returned error can't find the container with id 905e19baad0c8434c9eafc70ebf9e5f1f55fdf092f5dcd8a60f57b9ceec4fc67 Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.284830 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/node-resolver-8rj7q" Oct 14 09:57:19 crc kubenswrapper[4698]: W1014 09:57:19.285371 4698 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode00aa977_8736_4b4d_8d58_c3d13879c49a.slice/crio-b6646e19462ae384ce0828e7d67a6310c7fe4fc6f6f1c53a47ce6617880af282 WatchSource:0}: Error finding container b6646e19462ae384ce0828e7d67a6310c7fe4fc6f6f1c53a47ce6617880af282: Status 404 returned error can't find the container with id b6646e19462ae384ce0828e7d67a6310c7fe4fc6f6f1c53a47ce6617880af282 Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.292003 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hspfz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d02f5359-81fc-4261-b995-e58c78bcec0e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjlwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/v
ar/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjlwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjlwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjlwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},
{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjlwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjlwb\\\",\\\"readOnly\\\":true,\\\
"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mount
Path\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjlwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjlwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-pjlwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T09:57:18Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hspfz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.296915 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.301570 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-pfxrp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e00aa977-8736-4b4d-8d58-c3d13879c49a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c9lxj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T09:57:18Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-pfxrp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.307539 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-5twvn" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.312271 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5twvn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e9f983f-10a0-43b7-8590-346577a561ef\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8x7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8x7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\"
:false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8x7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8x7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\
\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8x7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8x7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/s
erviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8x7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T09:57:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5twvn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.314116 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-b7cbk" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.321477 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.326686 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-lp4sk" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.332489 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-hspfz" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.337182 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-b7cbk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fbf10bbc-318d-4f46-83a0-fdbad9888201\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tkghj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T09:57:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-b7cbk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.347366 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-lp4sk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c359a8fc-1e2f-49af-8da2-719d52bd969a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lzl92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lzl92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T09:57:18Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-lp4sk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.358377 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 14 09:57:19 crc kubenswrapper[4698]: W1014 09:57:19.360400 4698 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc359a8fc_1e2f_49af_8da2_719d52bd969a.slice/crio-37bf68d12f7f22ae724ed807db7b751668b4c9427883d448045f5af520439c70 WatchSource:0}: Error finding container 37bf68d12f7f22ae724ed807db7b751668b4c9427883d448045f5af520439c70: Status 404 returned error can't find the container with id 37bf68d12f7f22ae724ed807db7b751668b4c9427883d448045f5af520439c70 Oct 14 09:57:19 crc kubenswrapper[4698]: W1014 09:57:19.368454 4698 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd02f5359_81fc_4261_b995_e58c78bcec0e.slice/crio-c889fa6542bed3a81090fc56d086523fb2ad50d74105cfa09a0abf5ecfe4b185 WatchSource:0}: Error finding container c889fa6542bed3a81090fc56d086523fb2ad50d74105cfa09a0abf5ecfe4b185: Status 404 
returned error can't find the container with id c889fa6542bed3a81090fc56d086523fb2ad50d74105cfa09a0abf5ecfe4b185 Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.376315 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.389408 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.398068 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.408829 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"32240a01-1692-4f43-aebd-2b04d9b2435e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:56:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:56:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:56:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0fcdc7699fee72d9dbf3ad7df185ac8c884125781080739e4245128af2e41040\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d32efa8c63456d23b3824db4a4debb51fa4d13d1fd5eb6a488b9392d96073435\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a78e7d6532fda47f50da010b729a9108961a3d5c79191517a86f00714ccb1163\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8cea8367d4864478508221648a52582ee794ca4eb7b48f112ca0d78d97c2f7e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://516fd9f00404acd0654586207f87d6ee2b5b9e1b655f7ba11582034da0295950\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-14T09:57:11Z\\\",\\\"message\\\":\\\"W1014 09:57:01.327604 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1014 09:57:01.329252 1 crypto.go:601] Generating new CA for check-endpoints-signer@1760435821 cert, and key in /tmp/serving-cert-654485267/serving-signer.crt, /tmp/serving-cert-654485267/serving-signer.key\\\\nI1014 09:57:01.620283 1 observer_polling.go:159] Starting file observer\\\\nW1014 09:57:01.626194 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1014 09:57:01.626447 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1014 09:57:01.627624 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-654485267/tls.crt::/tmp/serving-cert-654485267/tls.key\\\\\\\"\\\\nF1014 09:57:11.960038 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-14T09:57:01Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a8cea8367d4864478508221648a52582ee794ca4eb7b48f112ca0d78d97c2f7e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"message\\\":\\\"file observer\\\\nW1014 09:57:17.768150 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1014 09:57:17.768243 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1014 09:57:17.769073 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-133410875/tls.crt::/tmp/serving-cert-133410875/tls.key\\\\\\\"\\\\nI1014 09:57:18.101848 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1014 09:57:18.107433 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1014 09:57:18.107456 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1014 09:57:18.107479 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1014 09:57:18.107484 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1014 09:57:18.112858 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1014 09:57:18.112875 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1014 09:57:18.112883 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1014 09:57:18.112889 1 secure_serving.go:69] Use of insecure cipher 
'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1014 09:57:18.112893 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1014 09:57:18.112896 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1014 09:57:18.112900 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1014 09:57:18.112903 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1014 09:57:18.114133 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-14T09:57:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2d12c9c114016d1302fa97515b0a59a82a4524b77e5d22cbec155beb6a52bce\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:00Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf7a250bbbc63f0ce78ae1c27f82643a02dc264c3b83555a4cc72a788eda52d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d3472
0243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cf7a250bbbc63f0ce78ae1c27f82643a02dc264c3b83555a4cc72a788eda52d5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T09:56:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T09:56:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T09:56:59Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.419230 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.428342 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.438639 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.447960 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"message\\\":\\\"containers with unready 
status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.457384 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-b7cbk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fbf10bbc-318d-4f46-83a0-fdbad9888201\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tkghj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T09:57:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-b7cbk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.465020 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-lp4sk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c359a8fc-1e2f-49af-8da2-719d52bd969a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lzl92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lzl92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T09:57:18Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-lp4sk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.476411 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"32240a01-1692-4f43-aebd-2b04d9b2435e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:56:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:56:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:56:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0fcdc7699fee72d9dbf3ad7df185ac8c884125781080739e4245128af2e41040\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d32efa8c63456d23b3824db4a4debb51fa4d13d1fd5eb6a488b9392d96073435\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://a78e7d6532fda47f50da010b729a9108961a3d5c79191517a86f00714ccb1163\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8cea8367d4864478508221648a52582ee794ca4eb7b48f112ca0d78d97c2f7e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://516fd9f00404acd0654586207f87d6ee2b5b9e1b655f7ba11582034da0295950\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-14T09:57:11Z\\\",\\\"message\\\":\\\"W1014 09:57:01.327604 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1014 09:57:01.329252 1 crypto.go:601] Generating new CA for check-endpoints-signer@1760435821 cert, and key in /tmp/serving-cert-654485267/serving-signer.crt, /tmp/serving-cert-654485267/serving-signer.key\\\\nI1014 09:57:01.620283 1 observer_polling.go:159] Starting file observer\\\\nW1014 09:57:01.626194 1 builder.go:272] unable to get owner reference (falling back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1014 09:57:01.626447 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1014 09:57:01.627624 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-654485267/tls.crt::/tmp/serving-cert-654485267/tls.key\\\\\\\"\\\\nF1014 09:57:11.960038 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-14T09:57:01Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a8cea8367d4864478508221648a52582ee794ca4eb7b48f112ca0d78d97c2f7e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"message\\\":\\\"file observer\\\\nW1014 09:57:17.768150 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1014 09:57:17.768243 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1014 09:57:17.769073 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-133410875/tls.crt::/tmp/serving-cert-133410875/tls.key\\\\\\\"\\\\nI1014 09:57:18.101848 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1014 09:57:18.107433 1 maxinflight.go:139] 
\\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1014 09:57:18.107456 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1014 09:57:18.107479 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1014 09:57:18.107484 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1014 09:57:18.112858 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1014 09:57:18.112875 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1014 09:57:18.112883 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1014 09:57:18.112889 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1014 09:57:18.112893 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1014 09:57:18.112896 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1014 09:57:18.112900 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1014 09:57:18.112903 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1014 09:57:18.114133 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-14T09:57:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2d12c9c114016d1302fa97515b0a59a82a4524b77e5d22cbec155beb6a52bce\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:00Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf7a250bbbc63f0ce78ae1c27f82643a02dc264c3b83555a4cc72a788eda52d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cf7a250bbbc63f0ce78ae1c27f82643a02dc264c3b83555a4cc72a788eda52d5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T09:56:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T09:56:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\
\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T09:56:59Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.486195 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.496421 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.506817 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8rj7q" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b3d7ebe7-24ac-4bb6-be80-db147dc1c604\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-95sjn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T09:57:18Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8rj7q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.515542 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-pfxrp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e00aa977-8736-4b4d-8d58-c3d13879c49a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c9lxj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T09:57:18Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-pfxrp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.526483 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5twvn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e9f983f-10a0-43b7-8590-346577a561ef\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8x7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8x7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\"
:false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8x7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8x7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\
\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8x7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8x7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/s
erviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8x7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T09:57:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5twvn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.541785 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hspfz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d02f5359-81fc-4261-b995-e58c78bcec0e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjlwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/v
ar/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjlwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjlwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjlwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},
{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjlwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjlwb\\\",\\\"readOnly\\\":true,\\\
"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mount
Path\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjlwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjlwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-pjlwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T09:57:18Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hspfz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.624440 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.624585 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.624637 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 14 09:57:19 crc kubenswrapper[4698]: E1014 09:57:19.624821 4698 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object 
"openshift-network-console"/"networking-console-plugin-cert" not registered Oct 14 09:57:19 crc kubenswrapper[4698]: E1014 09:57:19.624845 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-14 09:57:20.624818408 +0000 UTC m=+22.322117824 (durationBeforeRetry 1s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 14 09:57:19 crc kubenswrapper[4698]: E1014 09:57:19.624887 4698 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Oct 14 09:57:19 crc kubenswrapper[4698]: E1014 09:57:19.624906 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-10-14 09:57:20.62488548 +0000 UTC m=+22.322184976 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Oct 14 09:57:19 crc kubenswrapper[4698]: E1014 09:57:19.624999 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-10-14 09:57:20.624977123 +0000 UTC m=+22.322276539 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.725857 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 14 09:57:19 crc kubenswrapper[4698]: I1014 09:57:19.725957 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 14 09:57:19 crc kubenswrapper[4698]: E1014 09:57:19.726112 4698 projected.go:288] 
Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Oct 14 09:57:19 crc kubenswrapper[4698]: E1014 09:57:19.726131 4698 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Oct 14 09:57:19 crc kubenswrapper[4698]: E1014 09:57:19.726131 4698 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Oct 14 09:57:19 crc kubenswrapper[4698]: E1014 09:57:19.726365 4698 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Oct 14 09:57:19 crc kubenswrapper[4698]: E1014 09:57:19.726378 4698 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 14 09:57:19 crc kubenswrapper[4698]: E1014 09:57:19.726432 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-10-14 09:57:20.726414015 +0000 UTC m=+22.423713431 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 14 09:57:19 crc kubenswrapper[4698]: E1014 09:57:19.726144 4698 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 14 09:57:19 crc kubenswrapper[4698]: E1014 09:57:19.726507 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-10-14 09:57:20.726487807 +0000 UTC m=+22.423787223 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 14 09:57:20 crc kubenswrapper[4698]: I1014 09:57:20.154346 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-b7cbk" event={"ID":"fbf10bbc-318d-4f46-83a0-fdbad9888201","Type":"ContainerStarted","Data":"4a0b9fe188f41927a342dc4928eaa5f27d151d9901af15f33ce252bdbfbb3be6"} Oct 14 09:57:20 crc kubenswrapper[4698]: I1014 09:57:20.154441 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-b7cbk" event={"ID":"fbf10bbc-318d-4f46-83a0-fdbad9888201","Type":"ContainerStarted","Data":"456d85d509c52813e3b323ef19d5650f229bbe848f439ed21f085619f372f505"} Oct 14 09:57:20 crc kubenswrapper[4698]: I1014 09:57:20.156477 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"6bbd953b34291d62c5b40ab6339c4709be89d1983748419cabf1adff245af77f"} Oct 14 09:57:20 crc kubenswrapper[4698]: I1014 09:57:20.156521 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"6b5dcb74bf1caafb12296355d176dd1cda1c06ecea8293c0cc36d517213fe6bd"} Oct 14 09:57:20 crc kubenswrapper[4698]: I1014 09:57:20.156541 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" 
event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"905e19baad0c8434c9eafc70ebf9e5f1f55fdf092f5dcd8a60f57b9ceec4fc67"} Oct 14 09:57:20 crc kubenswrapper[4698]: I1014 09:57:20.159035 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Oct 14 09:57:20 crc kubenswrapper[4698]: I1014 09:57:20.161918 4698 scope.go:117] "RemoveContainer" containerID="a8cea8367d4864478508221648a52582ee794ca4eb7b48f112ca0d78d97c2f7e" Oct 14 09:57:20 crc kubenswrapper[4698]: E1014 09:57:20.162179 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Oct 14 09:57:20 crc kubenswrapper[4698]: I1014 09:57:20.162732 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-lp4sk" event={"ID":"c359a8fc-1e2f-49af-8da2-719d52bd969a","Type":"ContainerStarted","Data":"082e0edeedb2cabe07e14812f2406a98f39fdf9923f562c33ccc9a761d0e2472"} Oct 14 09:57:20 crc kubenswrapper[4698]: I1014 09:57:20.162811 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-lp4sk" event={"ID":"c359a8fc-1e2f-49af-8da2-719d52bd969a","Type":"ContainerStarted","Data":"8068aecfbfc156636e404dba3daa36e0bebb92ecf3c63158eaebf9a03cdc96b1"} Oct 14 09:57:20 crc kubenswrapper[4698]: I1014 09:57:20.162830 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-lp4sk" 
event={"ID":"c359a8fc-1e2f-49af-8da2-719d52bd969a","Type":"ContainerStarted","Data":"37bf68d12f7f22ae724ed807db7b751668b4c9427883d448045f5af520439c70"} Oct 14 09:57:20 crc kubenswrapper[4698]: I1014 09:57:20.164290 4698 generic.go:334] "Generic (PLEG): container finished" podID="5e9f983f-10a0-43b7-8590-346577a561ef" containerID="9090d92cdf10a884e39938f3d7908d63cb91ee3ea7832cb788aecaa47c70d753" exitCode=0 Oct 14 09:57:20 crc kubenswrapper[4698]: I1014 09:57:20.164373 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-5twvn" event={"ID":"5e9f983f-10a0-43b7-8590-346577a561ef","Type":"ContainerDied","Data":"9090d92cdf10a884e39938f3d7908d63cb91ee3ea7832cb788aecaa47c70d753"} Oct 14 09:57:20 crc kubenswrapper[4698]: I1014 09:57:20.164405 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-5twvn" event={"ID":"5e9f983f-10a0-43b7-8590-346577a561ef","Type":"ContainerStarted","Data":"ecebfba02a1a3313be021ab98e5f32bcc9868536dcb3993838d2b035956a5e5d"} Oct 14 09:57:20 crc kubenswrapper[4698]: I1014 09:57:20.165409 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-pfxrp" event={"ID":"e00aa977-8736-4b4d-8d58-c3d13879c49a","Type":"ContainerStarted","Data":"4adf04967efc193a93227043cfb6bb56f0ec1a7af90e351b7a618e582ddd8c41"} Oct 14 09:57:20 crc kubenswrapper[4698]: I1014 09:57:20.165440 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-pfxrp" event={"ID":"e00aa977-8736-4b4d-8d58-c3d13879c49a","Type":"ContainerStarted","Data":"b6646e19462ae384ce0828e7d67a6310c7fe4fc6f6f1c53a47ce6617880af282"} Oct 14 09:57:20 crc kubenswrapper[4698]: I1014 09:57:20.166914 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" 
event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"046b339570690752cfbe63e358065d3b58ef84c6708729abc1f590ff4c880521"} Oct 14 09:57:20 crc kubenswrapper[4698]: I1014 09:57:20.166946 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"d81a7a14a3d7ff03fe3a47d14bb4ed9811df7ec42e64e5f51833529da7f0b3eb"} Oct 14 09:57:20 crc kubenswrapper[4698]: I1014 09:57:20.168665 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-8rj7q" event={"ID":"b3d7ebe7-24ac-4bb6-be80-db147dc1c604","Type":"ContainerStarted","Data":"5e44a1413e71cab92d1b5ee4c81cc65edc9925fa1e888d5d5670e55e1b60ee81"} Oct 14 09:57:20 crc kubenswrapper[4698]: I1014 09:57:20.168726 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-8rj7q" event={"ID":"b3d7ebe7-24ac-4bb6-be80-db147dc1c604","Type":"ContainerStarted","Data":"12165b2f200e99ab05ea999a01b79c244665859278704188265c2a690d866b13"} Oct 14 09:57:20 crc kubenswrapper[4698]: I1014 09:57:20.174022 4698 generic.go:334] "Generic (PLEG): container finished" podID="d02f5359-81fc-4261-b995-e58c78bcec0e" containerID="0f7d7f89348e04b617ba4ec3cf76f4fefdd36492c656aed8141f3775a7ca8e3b" exitCode=0 Oct 14 09:57:20 crc kubenswrapper[4698]: I1014 09:57:20.174114 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hspfz" event={"ID":"d02f5359-81fc-4261-b995-e58c78bcec0e","Type":"ContainerDied","Data":"0f7d7f89348e04b617ba4ec3cf76f4fefdd36492c656aed8141f3775a7ca8e3b"} Oct 14 09:57:20 crc kubenswrapper[4698]: I1014 09:57:20.174147 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hspfz" 
event={"ID":"d02f5359-81fc-4261-b995-e58c78bcec0e","Type":"ContainerStarted","Data":"c889fa6542bed3a81090fc56d086523fb2ad50d74105cfa09a0abf5ecfe4b185"} Oct 14 09:57:20 crc kubenswrapper[4698]: I1014 09:57:20.174147 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-pfxrp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e00aa977-8736-4b4d-8d58-c3d13879c49a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c9lxj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T09:57:18Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-pfxrp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T09:57:20Z is after 2025-08-24T17:21:41Z" Oct 14 09:57:20 crc kubenswrapper[4698]: I1014 09:57:20.176468 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"f6137556875c0bead5a2d501c9b49820c9a2ce40c3f12ee78b255634dcd0cd4c"} Oct 14 09:57:20 crc kubenswrapper[4698]: I1014 09:57:20.197296 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5twvn" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e9f983f-10a0-43b7-8590-346577a561ef\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8x7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8x7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\"
:false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8x7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8x7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\
\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8x7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8x7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/s
erviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8x7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T09:57:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5twvn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T09:57:20Z is after 2025-08-24T17:21:41Z" Oct 14 09:57:20 crc kubenswrapper[4698]: I1014 09:57:20.218544 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hspfz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d02f5359-81fc-4261-b995-e58c78bcec0e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjlwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/v
ar/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjlwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjlwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjlwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},
{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjlwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjlwb\\\",\\\"readOnly\\\":true,\\\
"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mount
Path\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjlwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjlwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-pjlwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T09:57:18Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hspfz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T09:57:20Z is after 2025-08-24T17:21:41Z" Oct 14 09:57:20 crc kubenswrapper[4698]: I1014 09:57:20.233221 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-b7cbk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fbf10bbc-318d-4f46-83a0-fdbad9888201\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a0b9fe188f41927a342dc4928eaa5f27d151d9901af15f33ce252bdbfbb3be6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1a
fba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tkghj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\
\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T09:57:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-b7cbk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T09:57:20Z is after 2025-08-24T17:21:41Z" Oct 14 09:57:20 crc kubenswrapper[4698]: I1014 09:57:20.244050 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-lp4sk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c359a8fc-1e2f-49af-8da2-719d52bd969a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lzl92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lzl92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T09:57:18Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-lp4sk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T09:57:20Z is after 2025-08-24T17:21:41Z" Oct 14 09:57:20 crc kubenswrapper[4698]: I1014 09:57:20.259129 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T09:57:20Z is after 2025-08-24T17:21:41Z" Oct 14 09:57:20 crc kubenswrapper[4698]: I1014 09:57:20.271324 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T09:57:20Z is after 2025-08-24T17:21:41Z" Oct 14 09:57:20 crc kubenswrapper[4698]: I1014 09:57:20.289481 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T09:57:20Z is after 2025-08-24T17:21:41Z" Oct 14 09:57:20 crc kubenswrapper[4698]: I1014 09:57:20.306532 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T09:57:20Z is after 2025-08-24T17:21:41Z" Oct 14 09:57:20 crc kubenswrapper[4698]: I1014 09:57:20.333284 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"32240a01-1692-4f43-aebd-2b04d9b2435e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:56:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:56:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:56:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0fcdc7699fee72d9dbf3ad7df185ac8c884125781080739e4245128af2e41040\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d32efa8c63456d23b3824db4a4debb51fa4d13d1fd5eb6a488b9392d96073435\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a78e7d6532fda47f50da010b729a9108961a3d5c79191517a86f00714ccb1163\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8cea8367d4864478508221648a52582ee794ca4eb7b48f112ca0d78d97c2f7e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://516fd9f00404acd0654586207f87d6ee2b5b9e1b655f7ba11582034da0295950\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-14T09:57:11Z\\\",\\\"message\\\":\\\"W1014 09:57:01.327604 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1014 09:57:01.329252 1 crypto.go:601] Generating new CA for check-endpoints-signer@1760435821 cert, and key in /tmp/serving-cert-654485267/serving-signer.crt, 
/tmp/serving-cert-654485267/serving-signer.key\\\\nI1014 09:57:01.620283 1 observer_polling.go:159] Starting file observer\\\\nW1014 09:57:01.626194 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1014 09:57:01.626447 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1014 09:57:01.627624 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-654485267/tls.crt::/tmp/serving-cert-654485267/tls.key\\\\\\\"\\\\nF1014 09:57:11.960038 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-14T09:57:01Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a8cea8367d4864478508221648a52582ee794ca4eb7b48f112ca0d78d97c2f7e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"message\\\":\\\"file observer\\\\nW1014 09:57:17.768150 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1014 09:57:17.768243 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1014 09:57:17.769073 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-133410875/tls.crt::/tmp/serving-cert-133410875/tls.key\\\\\\\"\\\\nI1014 09:57:18.101848 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1014 09:57:18.107433 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1014 09:57:18.107456 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1014 09:57:18.107479 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1014 09:57:18.107484 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1014 09:57:18.112858 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1014 09:57:18.112875 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1014 09:57:18.112883 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1014 09:57:18.112889 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1014 09:57:18.112893 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1014 09:57:18.112896 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1014 09:57:18.112900 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1014 09:57:18.112903 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1014 09:57:18.114133 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-14T09:57:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2d12c9c114016d1302fa97515b0a59a82a4524b77e5d22cbec155beb6a52bce\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:00Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf7a250bbbc63f0ce78ae1c27f82643a02dc264c3b83555a4cc72a788eda52d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cf7a250bbbc63f0ce78ae1c27f82643a02dc264c3b83555a4cc72a788eda52d5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T09:56:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T09:56:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\
\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T09:56:59Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T09:57:20Z is after 2025-08-24T17:21:41Z" Oct 14 09:57:20 crc kubenswrapper[4698]: I1014 09:57:20.353001 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T09:57:20Z is after 2025-08-24T17:21:41Z" Oct 14 09:57:20 crc kubenswrapper[4698]: I1014 09:57:20.369132 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T09:57:20Z is after 2025-08-24T17:21:41Z" Oct 14 09:57:20 crc kubenswrapper[4698]: I1014 09:57:20.382597 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8rj7q" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b3d7ebe7-24ac-4bb6-be80-db147dc1c604\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-95sjn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T09:57:18Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8rj7q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T09:57:20Z is after 2025-08-24T17:21:41Z" Oct 14 09:57:20 crc kubenswrapper[4698]: I1014 09:57:20.395707 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T09:57:20Z is after 2025-08-24T17:21:41Z" Oct 14 09:57:20 crc kubenswrapper[4698]: I1014 09:57:20.408544 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Oct 14 09:57:20 crc kubenswrapper[4698]: I1014 09:57:20.413209 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Oct 14 09:57:20 crc kubenswrapper[4698]: I1014 09:57:20.420059 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-crc"] Oct 14 09:57:20 crc kubenswrapper[4698]: I1014 09:57:20.432861 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6bbd953b34291d62c5b40ab6339c4709be89d1983748419cabf1adff245af77f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b5dcb74bf1caafb12296355d176dd1cda1c06ecea8293c0cc36d517213fe6bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T09:57:20Z is after 2025-08-24T17:21:41Z" Oct 14 09:57:20 crc kubenswrapper[4698]: I1014 09:57:20.445428 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T09:57:20Z is after 2025-08-24T17:21:41Z" Oct 14 09:57:20 crc kubenswrapper[4698]: I1014 09:57:20.460991 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-b7cbk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fbf10bbc-318d-4f46-83a0-fdbad9888201\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a0b9fe188f41927a342dc4928eaa5f27d151d9901af15f33ce252bdbfbb3be6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tkghj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T09:57:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-b7cbk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T09:57:20Z is after 2025-08-24T17:21:41Z" Oct 14 09:57:20 crc kubenswrapper[4698]: I1014 09:57:20.475354 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-lp4sk" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c359a8fc-1e2f-49af-8da2-719d52bd969a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://082e0edeedb2cabe07e14812f2406a98f39fdf9923f562c33ccc9a761d0e2472\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lzl92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8068aecfbfc1
56636e404dba3daa36e0bebb92ecf3c63158eaebf9a03cdc96b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lzl92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T09:57:18Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-lp4sk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T09:57:20Z is after 2025-08-24T17:21:41Z" Oct 14 09:57:20 crc kubenswrapper[4698]: I1014 09:57:20.564860 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://046b339570690752cfbe63e358065d3b58ef84c6708729abc1f590ff4c880521\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-10-14T09:57:20Z is after 2025-08-24T17:21:41Z" Oct 14 09:57:20 crc kubenswrapper[4698]: I1014 09:57:20.581001 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"32240a01-1692-4f43-aebd-2b04d9b2435e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:56:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:56:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:56:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0fcdc7699fee72d9dbf3ad7df185ac8c884125781080739e4245128af2e41040\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d32efa8c63456d23b3824db4a4debb51fa4d13d1fd5eb6a488b9392d96073435\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://a78e7d6532fda47f50da010b729a9108961a3d5c79191517a86f00714ccb1163\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8cea8367d4864478508221648a52582ee794ca4eb7b48f112ca0d78d97c2f7e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a8cea8367d4864478508221648a52582ee794ca4eb7b48f112ca0d78d97c2f7e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"message\\\":\\\"file observer\\\\nW1014 09:57:17.768150 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1014 09:57:17.768243 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1014 09:57:17.769073 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-133410875/tls.crt::/tmp/serving-cert-133410875/tls.key\\\\\\\"\\\\nI1014 09:57:18.101848 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1014 09:57:18.107433 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1014 09:57:18.107456 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1014 09:57:18.107479 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1014 09:57:18.107484 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1014 09:57:18.112858 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1014 09:57:18.112875 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1014 09:57:18.112883 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1014 09:57:18.112889 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1014 09:57:18.112893 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1014 09:57:18.112896 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1014 09:57:18.112900 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1014 09:57:18.112903 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1014 09:57:18.114133 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-14T09:57:12Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2d12c9c114016d1302fa97515b0a59a82a4524b77e5d22cbec155beb6a52bce\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:00Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf7a250bbbc63f0ce78ae1c27f82643a02dc264c3b83555a4cc72a788eda52d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cf7a250bbbc63f0ce78ae1c27f82643a02dc264c3b83555a4cc72a788eda52d5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T09:56:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T09:56:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T09:56:59Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T09:57:20Z is after 2025-08-24T17:21:41Z" Oct 14 09:57:20 crc kubenswrapper[4698]: I1014 09:57:20.593991 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container 
could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T09:57:20Z is after 2025-08-24T17:21:41Z" Oct 14 09:57:20 crc kubenswrapper[4698]: I1014 09:57:20.612043 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T09:57:20Z is after 2025-08-24T17:21:41Z" Oct 14 09:57:20 crc kubenswrapper[4698]: I1014 09:57:20.624688 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8rj7q" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b3d7ebe7-24ac-4bb6-be80-db147dc1c604\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e44a1413e71cab92d1b5ee4c81cc65edc9925fa1e888d5d5670e55e1b60ee81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-95sjn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T09:57:18Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8rj7q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T09:57:20Z is after 2025-08-24T17:21:41Z" Oct 14 09:57:20 crc kubenswrapper[4698]: I1014 09:57:20.637546 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-pfxrp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e00aa977-8736-4b4d-8d58-c3d13879c49a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4adf04967efc193a93227043cfb6bb56f0ec1a7af90e351b7a618e582ddd8c41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a695
20ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c9lxj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T09:57:18Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-pfxrp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T09:57:20Z is after 2025-08-24T17:21:41Z" Oct 14 09:57:20 crc kubenswrapper[4698]: I1014 09:57:20.639908 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 14 09:57:20 crc kubenswrapper[4698]: E1014 09:57:20.639995 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-14 09:57:22.639975679 +0000 UTC m=+24.337275095 (durationBeforeRetry 2s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 14 09:57:20 crc kubenswrapper[4698]: I1014 09:57:20.640053 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 14 09:57:20 crc kubenswrapper[4698]: I1014 09:57:20.640094 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 14 09:57:20 crc kubenswrapper[4698]: E1014 09:57:20.640165 4698 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Oct 14 09:57:20 crc kubenswrapper[4698]: E1014 09:57:20.640190 4698 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Oct 14 09:57:20 crc kubenswrapper[4698]: E1014 09:57:20.640204 4698 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-10-14 09:57:22.640197455 +0000 UTC m=+24.337496871 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Oct 14 09:57:20 crc kubenswrapper[4698]: E1014 09:57:20.640223 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-10-14 09:57:22.640215536 +0000 UTC m=+24.337514952 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Oct 14 09:57:20 crc kubenswrapper[4698]: I1014 09:57:20.653174 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5twvn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e9f983f-10a0-43b7-8590-346577a561ef\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8x7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9090d92cdf10a884e39938f3d7908d63cb91ee3ea7832cb788aecaa47c70d753\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9090d92cdf10a884e39938f3d7908d63cb91ee3ea7832cb788aecaa47c70d753\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T09:57:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T09:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8x7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8x7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube
-api-access-h8x7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8x7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8x7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\
\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8x7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T09:57:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5twvn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T09:57:20Z is after 2025-08-24T17:21:41Z" Oct 14 09:57:20 crc kubenswrapper[4698]: I1014 09:57:20.670197 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hspfz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d02f5359-81fc-4261-b995-e58c78bcec0e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjlwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjlwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjlwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjlwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjlwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjlwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjlwb\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjlwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f7d7f89348e04b617ba4ec3cf76f4fefdd36492c656aed8141f3775a7ca8e3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0f7d7f89348e04b617ba4ec3cf76f4fefdd36492c656aed8141f3775a7ca8e3b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T09:57:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T09:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjlwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T09:57:18Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hspfz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T09:57:20Z is after 2025-08-24T17:21:41Z" Oct 14 09:57:20 crc kubenswrapper[4698]: I1014 09:57:20.684257 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6bbd953b34291d62c5b40ab6339c4709be89d1983748419cabf1adff245af77f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":
{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b5dcb74bf1caafb12296355d176dd1cda1c06ecea8293c0cc36d517213fe6bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T09:57:20Z is after 2025-08-24T17:21:41Z" Oct 14 
09:57:20 crc kubenswrapper[4698]: I1014 09:57:20.705801 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T09:57:20Z is after 2025-08-24T17:21:41Z" Oct 14 09:57:20 crc kubenswrapper[4698]: I1014 09:57:20.721357 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-b7cbk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fbf10bbc-318d-4f46-83a0-fdbad9888201\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a0b9fe188f41927a342dc4928eaa5f27d151d9901af15f33ce252bdbfbb3be6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tkghj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T09:57:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-b7cbk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T09:57:20Z is after 2025-08-24T17:21:41Z" Oct 14 09:57:20 crc kubenswrapper[4698]: I1014 09:57:20.736518 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-lp4sk" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c359a8fc-1e2f-49af-8da2-719d52bd969a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://082e0edeedb2cabe07e14812f2406a98f39fdf9923f562c33ccc9a761d0e2472\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lzl92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8068aecfbfc1
56636e404dba3daa36e0bebb92ecf3c63158eaebf9a03cdc96b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lzl92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T09:57:18Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-lp4sk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T09:57:20Z is after 2025-08-24T17:21:41Z" Oct 14 09:57:20 crc kubenswrapper[4698]: I1014 09:57:20.740656 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 14 09:57:20 crc kubenswrapper[4698]: I1014 09:57:20.740719 4698 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 14 09:57:20 crc kubenswrapper[4698]: E1014 09:57:20.740846 4698 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Oct 14 09:57:20 crc kubenswrapper[4698]: E1014 09:57:20.740861 4698 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Oct 14 09:57:20 crc kubenswrapper[4698]: E1014 09:57:20.740871 4698 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 14 09:57:20 crc kubenswrapper[4698]: E1014 09:57:20.740896 4698 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Oct 14 09:57:20 crc kubenswrapper[4698]: E1014 09:57:20.740925 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-10-14 09:57:22.740909457 +0000 UTC m=+24.438208873 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 14 09:57:20 crc kubenswrapper[4698]: E1014 09:57:20.740928 4698 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Oct 14 09:57:20 crc kubenswrapper[4698]: E1014 09:57:20.740949 4698 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 14 09:57:20 crc kubenswrapper[4698]: E1014 09:57:20.741011 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-10-14 09:57:22.740986709 +0000 UTC m=+24.438286185 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 14 09:57:20 crc kubenswrapper[4698]: I1014 09:57:20.750864 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://046b339570690752cfbe63e358065d3b58ef84c6708729abc1f590ff4c880521\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kub
e\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T09:57:20Z is after 2025-08-24T17:21:41Z" Oct 14 09:57:20 crc kubenswrapper[4698]: I1014 09:57:20.768872 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T09:57:20Z is after 2025-08-24T17:21:41Z" Oct 14 09:57:20 crc kubenswrapper[4698]: I1014 09:57:20.814060 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"32a7378b-ad21-49a8-9382-e7e5fbffb2f5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:56:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:56:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45beb2bbd344fbd265eca554410f31c493b55815bcac8dbb8c30c592b40dcfe2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9a23f4949446d47321258b172e07f6c592cd9921650920f8af93a5d749a61dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:56:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1fc70d614848f0b4bc342659099f9c617db052ebb995716f9cdb1113f4f78901\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://71c3764056ab55aa9f75e0bf62c0c7eed5440e9d580087ca9ce266611dd8c292\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-10-14T09:57:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T09:56:59Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T09:57:20Z is after 2025-08-24T17:21:41Z" Oct 14 09:57:20 crc kubenswrapper[4698]: I1014 09:57:20.852286 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T09:57:20Z is after 2025-08-24T17:21:41Z" Oct 14 09:57:20 crc kubenswrapper[4698]: I1014 09:57:20.891368 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"32240a01-1692-4f43-aebd-2b04d9b2435e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:56:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:56:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:56:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0fcdc7699fee72d9dbf3ad7df185ac8c884125781080739e4245128af2e41040\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d32efa8c63456d23b3824db4a4debb51fa4d13d1fd5eb6a488b9392d96073435\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a78e7d6532fda47f50da010b729a9108961a3d5c79191517a86f00714ccb1163\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8cea8367d4864478508221648a52582ee794ca4eb7b48f112ca0d78d97c2f7e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a8cea8367d4864478508221648a52582ee794ca4eb7b48f112ca0d78d97c2f7e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"message\\\":\\\"file observer\\\\nW1014 09:57:17.768150 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1014 09:57:17.768243 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1014 09:57:17.769073 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-133410875/tls.crt::/tmp/serving-cert-133410875/tls.key\\\\\\\"\\\\nI1014 09:57:18.101848 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1014 09:57:18.107433 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1014 09:57:18.107456 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1014 09:57:18.107479 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1014 09:57:18.107484 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1014 09:57:18.112858 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1014 09:57:18.112875 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1014 09:57:18.112883 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1014 09:57:18.112889 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1014 09:57:18.112893 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1014 
09:57:18.112896 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1014 09:57:18.112900 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1014 09:57:18.112903 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1014 09:57:18.114133 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-14T09:57:12Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2d12c9c114016d1302fa97515b0a59a82a4524b77e5d22cbec155beb6a52bce\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:00Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf7a250bbbc63f0ce78ae1c27f82643a02dc264c3b83555a4cc72a788eda52d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev
/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cf7a250bbbc63f0ce78ae1c27f82643a02dc264c3b83555a4cc72a788eda52d5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T09:56:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T09:56:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T09:56:59Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T09:57:20Z is after 2025-08-24T17:21:41Z" Oct 14 09:57:20 crc kubenswrapper[4698]: I1014 09:57:20.927860 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8rj7q" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b3d7ebe7-24ac-4bb6-be80-db147dc1c604\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e44a1413e71cab92d1b5ee4c81cc65edc9925fa1e888d5d5670e55e1b60ee81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-95sjn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T09:57:18Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8rj7q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T09:57:20Z is after 2025-08-24T17:21:41Z" Oct 14 09:57:20 crc kubenswrapper[4698]: I1014 09:57:20.972470 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T09:57:20Z is after 2025-08-24T17:21:41Z" Oct 14 09:57:21 crc kubenswrapper[4698]: I1014 09:57:21.015331 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5twvn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e9f983f-10a0-43b7-8590-346577a561ef\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin 
routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8x7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9090d92cdf10a884e39938f3d7908d63cb91ee3ea7832cb788aecaa47c70d753\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"st
ate\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9090d92cdf10a884e39938f3d7908d63cb91ee3ea7832cb788aecaa47c70d753\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T09:57:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T09:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8x7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8x7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\
\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8x7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8x7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-bin
ary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8x7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8x7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T09:57:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5twvn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T09:57:21Z is after 2025-08-24T17:21:41Z" Oct 14 09:57:21 crc kubenswrapper[4698]: I1014 09:57:21.016437 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 14 09:57:21 crc kubenswrapper[4698]: I1014 09:57:21.016438 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 14 09:57:21 crc kubenswrapper[4698]: E1014 09:57:21.016548 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 14 09:57:21 crc kubenswrapper[4698]: I1014 09:57:21.016592 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 14 09:57:21 crc kubenswrapper[4698]: E1014 09:57:21.016672 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 14 09:57:21 crc kubenswrapper[4698]: E1014 09:57:21.016729 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 14 09:57:21 crc kubenswrapper[4698]: I1014 09:57:21.022974 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01ab3dd5-8196-46d0-ad33-122e2ca51def" path="/var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes" Oct 14 09:57:21 crc kubenswrapper[4698]: I1014 09:57:21.023650 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" path="/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes" Oct 14 09:57:21 crc kubenswrapper[4698]: I1014 09:57:21.024468 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09efc573-dbb6-4249-bd59-9b87aba8dd28" path="/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes" Oct 14 09:57:21 crc kubenswrapper[4698]: I1014 09:57:21.025332 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b574797-001e-440a-8f4e-c0be86edad0f" path="/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes" Oct 14 09:57:21 crc kubenswrapper[4698]: I1014 09:57:21.025977 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b78653f-4ff9-4508-8672-245ed9b561e3" path="/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes" Oct 14 09:57:21 crc kubenswrapper[4698]: I1014 09:57:21.026576 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1386a44e-36a2-460c-96d0-0359d2b6f0f5" path="/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes" Oct 14 09:57:21 crc kubenswrapper[4698]: I1014 09:57:21.028317 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1bf7eb37-55a3-4c65-b768-a94c82151e69" path="/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes" Oct 14 09:57:21 crc kubenswrapper[4698]: I1014 09:57:21.028892 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="1d611f23-29be-4491-8495-bee1670e935f" path="/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes" Oct 14 09:57:21 crc kubenswrapper[4698]: I1014 09:57:21.029607 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20b0d48f-5fd6-431c-a545-e3c800c7b866" path="/var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/volumes" Oct 14 09:57:21 crc kubenswrapper[4698]: I1014 09:57:21.030124 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" path="/var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes" Oct 14 09:57:21 crc kubenswrapper[4698]: I1014 09:57:21.030739 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22c825df-677d-4ca6-82db-3454ed06e783" path="/var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes" Oct 14 09:57:21 crc kubenswrapper[4698]: I1014 09:57:21.031495 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25e176fe-21b4-4974-b1ed-c8b94f112a7f" path="/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes" Oct 14 09:57:21 crc kubenswrapper[4698]: I1014 09:57:21.033278 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" path="/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes" Oct 14 09:57:21 crc kubenswrapper[4698]: I1014 09:57:21.033792 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31d8b7a1-420e-4252-a5b7-eebe8a111292" path="/var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes" Oct 14 09:57:21 crc kubenswrapper[4698]: I1014 09:57:21.034329 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ab1a177-2de0-46d9-b765-d0d0649bb42e" path="/var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/volumes" Oct 14 09:57:21 crc kubenswrapper[4698]: I1014 09:57:21.035383 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" path="/var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes" Oct 14 09:57:21 crc kubenswrapper[4698]: I1014 09:57:21.036010 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43509403-f426-496e-be36-56cef71462f5" path="/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes" Oct 14 09:57:21 crc kubenswrapper[4698]: I1014 09:57:21.036427 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44663579-783b-4372-86d6-acf235a62d72" path="/var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/volumes" Oct 14 09:57:21 crc kubenswrapper[4698]: I1014 09:57:21.037386 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="496e6271-fb68-4057-954e-a0d97a4afa3f" path="/var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes" Oct 14 09:57:21 crc kubenswrapper[4698]: I1014 09:57:21.037972 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" path="/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes" Oct 14 09:57:21 crc kubenswrapper[4698]: I1014 09:57:21.038468 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49ef4625-1d3a-4a9f-b595-c2433d32326d" path="/var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/volumes" Oct 14 09:57:21 crc kubenswrapper[4698]: I1014 09:57:21.039531 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4bb40260-dbaa-4fb0-84df-5e680505d512" path="/var/lib/kubelet/pods/4bb40260-dbaa-4fb0-84df-5e680505d512/volumes" Oct 14 09:57:21 crc kubenswrapper[4698]: I1014 09:57:21.040019 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5225d0e4-402f-4861-b410-819f433b1803" path="/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes" Oct 14 09:57:21 crc kubenswrapper[4698]: I1014 09:57:21.041120 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="5441d097-087c-4d9a-baa8-b210afa90fc9" path="/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes" Oct 14 09:57:21 crc kubenswrapper[4698]: I1014 09:57:21.041858 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57a731c4-ef35-47a8-b875-bfb08a7f8011" path="/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes" Oct 14 09:57:21 crc kubenswrapper[4698]: I1014 09:57:21.042424 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b88f790-22fa-440e-b583-365168c0b23d" path="/var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/volumes" Oct 14 09:57:21 crc kubenswrapper[4698]: I1014 09:57:21.043549 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fe579f8-e8a6-4643-bce5-a661393c4dde" path="/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/volumes" Oct 14 09:57:21 crc kubenswrapper[4698]: I1014 09:57:21.044311 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6402fda4-df10-493c-b4e5-d0569419652d" path="/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes" Oct 14 09:57:21 crc kubenswrapper[4698]: I1014 09:57:21.045403 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6509e943-70c6-444c-bc41-48a544e36fbd" path="/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes" Oct 14 09:57:21 crc kubenswrapper[4698]: I1014 09:57:21.046111 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6731426b-95fe-49ff-bb5f-40441049fde2" path="/var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/volumes" Oct 14 09:57:21 crc kubenswrapper[4698]: I1014 09:57:21.046672 4698 kubelet_volumes.go:152] "Cleaned up orphaned volume subpath from pod" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volume-subpaths/run-systemd/ovnkube-controller/6" Oct 14 09:57:21 crc kubenswrapper[4698]: I1014 09:57:21.046791 4698 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volumes" Oct 14 09:57:21 crc kubenswrapper[4698]: I1014 09:57:21.048709 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7539238d-5fe0-46ed-884e-1c3b566537ec" path="/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes" Oct 14 09:57:21 crc kubenswrapper[4698]: I1014 09:57:21.049804 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7583ce53-e0fe-4a16-9e4d-50516596a136" path="/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes" Oct 14 09:57:21 crc kubenswrapper[4698]: I1014 09:57:21.050334 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7bb08738-c794-4ee8-9972-3a62ca171029" path="/var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes" Oct 14 09:57:21 crc kubenswrapper[4698]: I1014 09:57:21.052396 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87cf06ed-a83f-41a7-828d-70653580a8cb" path="/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes" Oct 14 09:57:21 crc kubenswrapper[4698]: I1014 09:57:21.053543 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" path="/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes" Oct 14 09:57:21 crc kubenswrapper[4698]: I1014 09:57:21.054147 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="925f1c65-6136-48ba-85aa-3a3b50560753" path="/var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes" Oct 14 09:57:21 crc kubenswrapper[4698]: I1014 09:57:21.055150 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" path="/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/volumes" Oct 14 09:57:21 crc kubenswrapper[4698]: I1014 09:57:21.055874 4698 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d4552c7-cd75-42dd-8880-30dd377c49a4" path="/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes" Oct 14 09:57:21 crc kubenswrapper[4698]: I1014 09:57:21.056713 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" path="/var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/volumes" Oct 14 09:57:21 crc kubenswrapper[4698]: I1014 09:57:21.057318 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a31745f5-9847-4afe-82a5-3161cc66ca93" path="/var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes" Oct 14 09:57:21 crc kubenswrapper[4698]: I1014 09:57:21.058310 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" path="/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes" Oct 14 09:57:21 crc kubenswrapper[4698]: I1014 09:57:21.059380 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6312bbd-5731-4ea0-a20f-81d5a57df44a" path="/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/volumes" Oct 14 09:57:21 crc kubenswrapper[4698]: I1014 09:57:21.059921 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" path="/var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes" Oct 14 09:57:21 crc kubenswrapper[4698]: I1014 09:57:21.060779 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" path="/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes" Oct 14 09:57:21 crc kubenswrapper[4698]: I1014 09:57:21.061333 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" path="/var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/volumes" Oct 14 09:57:21 crc kubenswrapper[4698]: I1014 09:57:21.062416 4698 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf126b07-da06-4140-9a57-dfd54fc6b486" path="/var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes" Oct 14 09:57:21 crc kubenswrapper[4698]: I1014 09:57:21.063159 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c03ee662-fb2f-4fc4-a2c1-af487c19d254" path="/var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes" Oct 14 09:57:21 crc kubenswrapper[4698]: I1014 09:57:21.063750 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" path="/var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/volumes" Oct 14 09:57:21 crc kubenswrapper[4698]: I1014 09:57:21.064647 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7e6199b-1264-4501-8953-767f51328d08" path="/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes" Oct 14 09:57:21 crc kubenswrapper[4698]: I1014 09:57:21.065275 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="efdd0498-1daa-4136-9a4a-3b948c2293fc" path="/var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/volumes" Oct 14 09:57:21 crc kubenswrapper[4698]: I1014 09:57:21.066343 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" path="/var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/volumes" Oct 14 09:57:21 crc kubenswrapper[4698]: I1014 09:57:21.066820 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fda69060-fa79-4696-b1a6-7980f124bf7c" path="/var/lib/kubelet/pods/fda69060-fa79-4696-b1a6-7980f124bf7c/volumes" Oct 14 09:57:21 crc kubenswrapper[4698]: I1014 09:57:21.068001 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hspfz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d02f5359-81fc-4261-b995-e58c78bcec0e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjlwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjlwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjlwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjlwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjlwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjlwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjlwb\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjlwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f7d7f89348e04b617ba4ec3cf76f4fefdd36492c656aed8141f3775a7ca8e3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0f7d7f89348e04b617ba4ec3cf76f4fefdd36492c656aed8141f3775a7ca8e3b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T09:57:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T09:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjlwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T09:57:18Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hspfz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T09:57:21Z is after 2025-08-24T17:21:41Z" Oct 14 09:57:21 crc kubenswrapper[4698]: I1014 09:57:21.089138 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-pfxrp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e00aa977-8736-4b4d-8d58-c3d13879c49a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4adf04967efc193a93227043cfb6bb56f0ec1a7af90e351b7a618e582ddd8c41\\\",\\\"image\\\":\\\"quay.io/openshift-rele
ase-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c9lxj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T09:57:18Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-pfxrp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T09:57:21Z is after 2025-08-24T17:21:41Z" Oct 14 09:57:21 crc kubenswrapper[4698]: I1014 09:57:21.184798 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hspfz" event={"ID":"d02f5359-81fc-4261-b995-e58c78bcec0e","Type":"ContainerStarted","Data":"876998f4dcb79a6bc0faa35871730f5d2f9a07f94baf6da29ed9a9e794705ee5"} Oct 14 09:57:21 crc kubenswrapper[4698]: I1014 09:57:21.184869 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hspfz" 
event={"ID":"d02f5359-81fc-4261-b995-e58c78bcec0e","Type":"ContainerStarted","Data":"baefa979aee6f8fbe41df9542964f3c7b2a4331ec5111d2f622ec7ca41a0d96e"} Oct 14 09:57:21 crc kubenswrapper[4698]: I1014 09:57:21.184882 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hspfz" event={"ID":"d02f5359-81fc-4261-b995-e58c78bcec0e","Type":"ContainerStarted","Data":"90b4b75613b321be60e549ab6ab2eaaab57fd6b49aa01f5254a0e83ac2e0167e"} Oct 14 09:57:21 crc kubenswrapper[4698]: I1014 09:57:21.184893 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hspfz" event={"ID":"d02f5359-81fc-4261-b995-e58c78bcec0e","Type":"ContainerStarted","Data":"11274a26e2292cdf2b793e6290657ed3cc737d1454b39f33b8f0272f67f70f3e"} Oct 14 09:57:21 crc kubenswrapper[4698]: I1014 09:57:21.188029 4698 generic.go:334] "Generic (PLEG): container finished" podID="5e9f983f-10a0-43b7-8590-346577a561ef" containerID="20e7809019d6fa43f65e27abe934ed5dfc51a3afb0a2d9391fa4edd67fc03ac9" exitCode=0 Oct 14 09:57:21 crc kubenswrapper[4698]: I1014 09:57:21.188668 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-5twvn" event={"ID":"5e9f983f-10a0-43b7-8590-346577a561ef","Type":"ContainerDied","Data":"20e7809019d6fa43f65e27abe934ed5dfc51a3afb0a2d9391fa4edd67fc03ac9"} Oct 14 09:57:21 crc kubenswrapper[4698]: I1014 09:57:21.204901 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-pfxrp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e00aa977-8736-4b4d-8d58-c3d13879c49a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4adf04967efc193a93227043cfb6bb56f0ec1a7af90e351b7a618e582ddd8c41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c9lxj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T09:57:18Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-pfxrp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T09:57:21Z is after 2025-08-24T17:21:41Z" Oct 14 09:57:21 crc kubenswrapper[4698]: I1014 09:57:21.226310 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5twvn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e9f983f-10a0-43b7-8590-346577a561ef\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8x7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9090d92cdf10a884e39938f3d7908d63cb91ee3ea7832cb788aecaa47c70d753\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9090d92cdf10a884e39938f3d7908d63cb91ee3ea7832cb788aecaa47c70d753\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T09:57:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T09:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8x7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://20e7809019d6fa43f65e27abe934ed5dfc51a3afb0a2d9391fa4edd67fc03ac9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://20e7809019d6fa43f65e27abe934ed5dfc51a3afb0a2d9391fa4edd67fc03ac9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T09:57:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8x7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodIn
itializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8x7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8x7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-
release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8x7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8x7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T09:57:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5twvn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T09:57:21Z is after 2025-08-24T17:21:41Z" Oct 14 09:57:21 crc kubenswrapper[4698]: I1014 09:57:21.258471 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hspfz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d02f5359-81fc-4261-b995-e58c78bcec0e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjlwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjlwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjlwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjlwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjlwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjlwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjlwb\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjlwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f7d7f89348e04b617ba4ec3cf76f4fefdd36492c656aed8141f3775a7ca8e3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0f7d7f89348e04b617ba4ec3cf76f4fefdd36492c656aed8141f3775a7ca8e3b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T09:57:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T09:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjlwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T09:57:18Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hspfz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T09:57:21Z is after 2025-08-24T17:21:41Z" Oct 14 09:57:21 crc kubenswrapper[4698]: I1014 09:57:21.271426 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-lp4sk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c359a8fc-1e2f-49af-8da2-719d52bd969a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://082e0edeedb2cabe07e14812f2406a98f39fdf9923f562c33ccc9a761d0e2472\\\",\\\"image\\\":\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lzl92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8068aecfbfc156636e404dba3daa36e0bebb92ecf3c63158eaebf9a03cdc96b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lzl92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T09:57:18Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-lp4sk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T09:57:21Z is after 2025-08-24T17:21:41Z" Oct 14 09:57:21 crc kubenswrapper[4698]: I1014 09:57:21.294745 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://046b339570690752cfbe63e358065d3b58ef84c6708729abc1f590ff4c880521\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,
\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T09:57:21Z is after 2025-08-24T17:21:41Z" Oct 14 09:57:21 crc kubenswrapper[4698]: I1014 09:57:21.329501 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T09:57:21Z is after 2025-08-24T17:21:41Z" Oct 14 09:57:21 crc kubenswrapper[4698]: I1014 09:57:21.374735 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6bbd953b34291d62c5b40ab6339c4709be89d1983748419cabf1adff245af77f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b5dcb74bf1caafb12296355d176dd1cda1c06ecea8293c0cc36d517213fe6bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T09:57:21Z is after 2025-08-24T17:21:41Z" Oct 14 09:57:21 crc kubenswrapper[4698]: I1014 09:57:21.408475 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T09:57:21Z is after 2025-08-24T17:21:41Z" Oct 14 09:57:21 crc kubenswrapper[4698]: I1014 09:57:21.454458 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-b7cbk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fbf10bbc-318d-4f46-83a0-fdbad9888201\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a0b9fe188f41927a342dc4928eaa5f27d151d9901af15f33ce252bdbfbb3be6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tkghj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T09:57:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-b7cbk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T09:57:21Z is after 2025-08-24T17:21:41Z" Oct 14 09:57:21 crc kubenswrapper[4698]: I1014 09:57:21.491420 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"32240a01-1692-4f43-aebd-2b04d9b2435e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:56:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:56:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:56:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0fcdc7699fee72d9dbf3ad7df185ac8c884125781080739e4245128af2e41040\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserv
er\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d32efa8c63456d23b3824db4a4debb51fa4d13d1fd5eb6a488b9392d96073435\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a78e7d6532fda47f50da010b729a9108961a3d5c79191517a86f00714ccb1163\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8cea8367d4864478508221648a52582ee794ca4eb7b48f112ca0d78d97c2f7e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc
276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a8cea8367d4864478508221648a52582ee794ca4eb7b48f112ca0d78d97c2f7e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"message\\\":\\\"file observer\\\\nW1014 09:57:17.768150 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1014 09:57:17.768243 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1014 09:57:17.769073 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-133410875/tls.crt::/tmp/serving-cert-133410875/tls.key\\\\\\\"\\\\nI1014 09:57:18.101848 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1014 09:57:18.107433 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1014 09:57:18.107456 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1014 09:57:18.107479 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1014 09:57:18.107484 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1014 09:57:18.112858 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1014 09:57:18.112875 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1014 09:57:18.112883 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1014 09:57:18.112889 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1014 09:57:18.112893 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1014 
09:57:18.112896 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1014 09:57:18.112900 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1014 09:57:18.112903 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1014 09:57:18.114133 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-14T09:57:12Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2d12c9c114016d1302fa97515b0a59a82a4524b77e5d22cbec155beb6a52bce\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:00Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf7a250bbbc63f0ce78ae1c27f82643a02dc264c3b83555a4cc72a788eda52d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev
/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cf7a250bbbc63f0ce78ae1c27f82643a02dc264c3b83555a4cc72a788eda52d5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T09:56:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T09:56:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T09:56:59Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T09:57:21Z is after 2025-08-24T17:21:41Z" Oct 14 09:57:21 crc kubenswrapper[4698]: I1014 09:57:21.529542 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"32a7378b-ad21-49a8-9382-e7e5fbffb2f5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:56:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:56:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45beb2bbd344fbd265eca554410f31c493b55815bcac8dbb8c30c592b40dcfe2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9a23f4949446d47321258b172e07f6c592cd9921650920f8af93a5d749a61dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:56:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1fc70d614848f0b4bc342659099f9c617db052ebb995716f9cdb1113f4f78901\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://71c3764056ab55aa9f75e0bf62c0c7eed5440e9d580087ca9ce266611dd8c292\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-10-14T09:57:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T09:56:59Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T09:57:21Z is after 2025-08-24T17:21:41Z" Oct 14 09:57:21 crc kubenswrapper[4698]: I1014 09:57:21.570200 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T09:57:21Z is after 2025-08-24T17:21:41Z" Oct 14 09:57:21 crc kubenswrapper[4698]: I1014 09:57:21.609958 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T09:57:21Z is after 2025-08-24T17:21:41Z" Oct 14 09:57:21 crc kubenswrapper[4698]: I1014 09:57:21.648344 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8rj7q" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b3d7ebe7-24ac-4bb6-be80-db147dc1c604\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e44a1413e71cab92d1b5ee4c81cc65edc9925fa1e888d5d5670e55e1b60ee81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-95sjn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T09:57:18Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8rj7q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T09:57:21Z is after 2025-08-24T17:21:41Z" Oct 14 09:57:21 crc kubenswrapper[4698]: I1014 09:57:21.792135 4698 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 14 09:57:21 crc kubenswrapper[4698]: I1014 09:57:21.793927 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:57:21 crc kubenswrapper[4698]: I1014 09:57:21.793978 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:57:21 crc kubenswrapper[4698]: I1014 09:57:21.793995 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:57:21 crc kubenswrapper[4698]: I1014 09:57:21.794125 4698 kubelet_node_status.go:76] "Attempting to register node" node="crc" Oct 14 09:57:21 crc kubenswrapper[4698]: I1014 09:57:21.800883 4698 kubelet_node_status.go:115] "Node was previously registered" node="crc" Oct 14 09:57:21 crc kubenswrapper[4698]: I1014 09:57:21.801150 4698 kubelet_node_status.go:79] "Successfully registered node" node="crc" Oct 14 09:57:21 crc kubenswrapper[4698]: I1014 09:57:21.802393 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:57:21 crc kubenswrapper[4698]: I1014 09:57:21.802446 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:57:21 crc kubenswrapper[4698]: I1014 09:57:21.802462 4698 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:57:21 crc kubenswrapper[4698]: I1014 09:57:21.802496 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:57:21 crc kubenswrapper[4698]: I1014 09:57:21.802515 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:57:21Z","lastTransitionTime":"2025-10-14T09:57:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 09:57:21 crc kubenswrapper[4698]: E1014 09:57:21.821939 4698 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-14T09:57:21Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-14T09:57:21Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:21Z\\\",\\\"message\\\":\\\"kubelet has no disk 
pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-14T09:57:21Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-14T09:57:21Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:21Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2
ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9810067
4616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.
io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a07
2c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa73
83b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ec98e803-8937-4fdb-8662-3488c6a305f2\\\",\\\"systemUUID\\\":\\\"8e872109-adee-4b6d-91bf-d9ced28af93f\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T09:57:21Z is after 2025-08-24T17:21:41Z" Oct 14 09:57:21 crc kubenswrapper[4698]: I1014 09:57:21.825653 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:57:21 crc kubenswrapper[4698]: I1014 09:57:21.825710 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:57:21 crc kubenswrapper[4698]: I1014 09:57:21.825727 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:57:21 crc kubenswrapper[4698]: I1014 09:57:21.825754 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:57:21 crc kubenswrapper[4698]: I1014 09:57:21.825808 4698 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:57:21Z","lastTransitionTime":"2025-10-14T09:57:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 09:57:21 crc kubenswrapper[4698]: E1014 09:57:21.849842 4698 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-14T09:57:21Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-14T09:57:21Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:21Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-14T09:57:21Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-14T09:57:21Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:21Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"
registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb617
3ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"reg
istry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ec98e803-8937-4fdb-8662-3488c6a305f2\\\",\\\"systemUUID\\\":\\\"8e872109-adee-4b6d-91bf-d9ced28af93f\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T09:57:21Z is after 2025-08-24T17:21:41Z" Oct 14 09:57:21 crc kubenswrapper[4698]: I1014 09:57:21.853501 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:57:21 crc kubenswrapper[4698]: I1014 09:57:21.853528 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:57:21 crc kubenswrapper[4698]: I1014 09:57:21.853552 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:57:21 crc kubenswrapper[4698]: I1014 09:57:21.853567 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:57:21 crc kubenswrapper[4698]: I1014 09:57:21.853576 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:57:21Z","lastTransitionTime":"2025-10-14T09:57:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:57:21 crc kubenswrapper[4698]: E1014 09:57:21.874437 4698 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-14T09:57:21Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-14T09:57:21Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:21Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-14T09:57:21Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-14T09:57:21Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:21Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ec98e803-8937-4fdb-8662-3488c6a305f2\\\",\\\"systemUUID\\\":\\\"8e872109-adee-4b6d-91bf-d9ced28af93f\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T09:57:21Z is after 2025-08-24T17:21:41Z" Oct 14 09:57:21 crc kubenswrapper[4698]: I1014 09:57:21.878838 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:57:21 crc kubenswrapper[4698]: I1014 09:57:21.878898 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:57:21 crc kubenswrapper[4698]: I1014 09:57:21.878920 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:57:21 crc kubenswrapper[4698]: I1014 09:57:21.878950 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:57:21 crc kubenswrapper[4698]: I1014 09:57:21.878974 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:57:21Z","lastTransitionTime":"2025-10-14T09:57:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:57:21 crc kubenswrapper[4698]: E1014 09:57:21.898813 4698 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-14T09:57:21Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-14T09:57:21Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:21Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-14T09:57:21Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-14T09:57:21Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:21Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ec98e803-8937-4fdb-8662-3488c6a305f2\\\",\\\"systemUUID\\\":\\\"8e872109-adee-4b6d-91bf-d9ced28af93f\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T09:57:21Z is after 2025-08-24T17:21:41Z" Oct 14 09:57:21 crc kubenswrapper[4698]: I1014 09:57:21.903085 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:57:21 crc kubenswrapper[4698]: I1014 09:57:21.903125 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:57:21 crc kubenswrapper[4698]: I1014 09:57:21.903138 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:57:21 crc kubenswrapper[4698]: I1014 09:57:21.903157 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:57:21 crc kubenswrapper[4698]: I1014 09:57:21.903169 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:57:21Z","lastTransitionTime":"2025-10-14T09:57:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:57:21 crc kubenswrapper[4698]: E1014 09:57:21.919596 4698 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-14T09:57:21Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-14T09:57:21Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:21Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-14T09:57:21Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-14T09:57:21Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:21Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ec98e803-8937-4fdb-8662-3488c6a305f2\\\",\\\"systemUUID\\\":\\\"8e872109-adee-4b6d-91bf-d9ced28af93f\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T09:57:21Z is after 2025-08-24T17:21:41Z" Oct 14 09:57:21 crc kubenswrapper[4698]: E1014 09:57:21.919779 4698 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Oct 14 09:57:21 crc kubenswrapper[4698]: I1014 09:57:21.921600 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:57:21 crc kubenswrapper[4698]: I1014 09:57:21.921634 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:57:21 crc kubenswrapper[4698]: I1014 09:57:21.921645 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:57:21 crc kubenswrapper[4698]: I1014 09:57:21.921659 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:57:21 crc kubenswrapper[4698]: I1014 09:57:21.921670 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:57:21Z","lastTransitionTime":"2025-10-14T09:57:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:57:22 crc kubenswrapper[4698]: I1014 09:57:22.024053 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:57:22 crc kubenswrapper[4698]: I1014 09:57:22.024151 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:57:22 crc kubenswrapper[4698]: I1014 09:57:22.024184 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:57:22 crc kubenswrapper[4698]: I1014 09:57:22.024217 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:57:22 crc kubenswrapper[4698]: I1014 09:57:22.024241 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:57:22Z","lastTransitionTime":"2025-10-14T09:57:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:57:22 crc kubenswrapper[4698]: I1014 09:57:22.127331 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:57:22 crc kubenswrapper[4698]: I1014 09:57:22.127373 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:57:22 crc kubenswrapper[4698]: I1014 09:57:22.127385 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:57:22 crc kubenswrapper[4698]: I1014 09:57:22.127402 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:57:22 crc kubenswrapper[4698]: I1014 09:57:22.127414 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:57:22Z","lastTransitionTime":"2025-10-14T09:57:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:57:22 crc kubenswrapper[4698]: I1014 09:57:22.193696 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"aaee81debae3e251e075e61aca075c075d32fd976f1977adecba0c834e89cabb"} Oct 14 09:57:22 crc kubenswrapper[4698]: I1014 09:57:22.196284 4698 generic.go:334] "Generic (PLEG): container finished" podID="5e9f983f-10a0-43b7-8590-346577a561ef" containerID="0c61e207f4234cf9a59bd787fc476ff4bc2d1abd6f299b2855970981e5783398" exitCode=0 Oct 14 09:57:22 crc kubenswrapper[4698]: I1014 09:57:22.196330 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-5twvn" event={"ID":"5e9f983f-10a0-43b7-8590-346577a561ef","Type":"ContainerDied","Data":"0c61e207f4234cf9a59bd787fc476ff4bc2d1abd6f299b2855970981e5783398"} Oct 14 09:57:22 crc kubenswrapper[4698]: I1014 09:57:22.202370 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hspfz" event={"ID":"d02f5359-81fc-4261-b995-e58c78bcec0e","Type":"ContainerStarted","Data":"7e5759411e3d5ea6fa7515b4cd1255735c870b8c035fe586a38d2f436b752118"} Oct 14 09:57:22 crc kubenswrapper[4698]: I1014 09:57:22.202422 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hspfz" event={"ID":"d02f5359-81fc-4261-b995-e58c78bcec0e","Type":"ContainerStarted","Data":"a6e9733c3f4d25662c9da6201d0feee797de24acebb7d6e7c67d72709aae95db"} Oct 14 09:57:22 crc kubenswrapper[4698]: I1014 09:57:22.215738 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-pfxrp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e00aa977-8736-4b4d-8d58-c3d13879c49a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4adf04967efc193a93227043cfb6bb56f0ec1a7af90e351b7a618e582ddd8c41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c9lxj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T09:57:18Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-pfxrp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T09:57:22Z is after 2025-08-24T17:21:41Z" Oct 14 09:57:22 crc kubenswrapper[4698]: I1014 09:57:22.230796 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:57:22 crc kubenswrapper[4698]: I1014 09:57:22.230857 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:57:22 crc kubenswrapper[4698]: I1014 09:57:22.230916 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:57:22 crc kubenswrapper[4698]: I1014 09:57:22.230946 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:57:22 crc kubenswrapper[4698]: I1014 09:57:22.230970 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:57:22Z","lastTransitionTime":"2025-10-14T09:57:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:57:22 crc kubenswrapper[4698]: I1014 09:57:22.243626 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5twvn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e9f983f-10a0-43b7-8590-346577a561ef\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8x7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9090d92cdf10a884e39938f3d7908d63cb91ee3ea7832cb788aecaa47c70d753\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9090d92cdf10a884e39938f3d7908d63cb91ee3ea7832cb788aecaa47c70d753\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T09:57:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T09:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8x7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://20e7809019d6fa43f65e27abe934ed5dfc51a3afb0a2d9391fa4edd67fc03ac9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://20e7809019d6fa43f65e27abe934ed5dfc51a3afb0a2d9391fa4edd67fc03ac9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T09:57:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8x7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodIn
itializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8x7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8x7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-
release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8x7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8x7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T09:57:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5twvn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T09:57:22Z is after 2025-08-24T17:21:41Z" Oct 14 09:57:22 crc kubenswrapper[4698]: I1014 09:57:22.266508 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hspfz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d02f5359-81fc-4261-b995-e58c78bcec0e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjlwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjlwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjlwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjlwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjlwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjlwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjlwb\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjlwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f7d7f89348e04b617ba4ec3cf76f4fefdd36492c656aed8141f3775a7ca8e3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0f7d7f89348e04b617ba4ec3cf76f4fefdd36492c656aed8141f3775a7ca8e3b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T09:57:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T09:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjlwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T09:57:18Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hspfz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T09:57:22Z is after 2025-08-24T17:21:41Z" Oct 14 09:57:22 crc kubenswrapper[4698]: I1014 09:57:22.284555 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://046b339570690752cfbe63e358065d3b58ef84c6708729abc1f590ff4c880521\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\"
:{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T09:57:22Z is after 2025-08-24T17:21:41Z" Oct 14 09:57:22 crc kubenswrapper[4698]: I1014 09:57:22.306403 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T09:57:22Z is after 2025-08-24T17:21:41Z" Oct 14 09:57:22 crc kubenswrapper[4698]: I1014 09:57:22.326473 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6bbd953b34291d62c5b40ab6339c4709be89d1983748419cabf1adff245af77f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b5dcb74bf1caafb12296355d176dd1cda1c06ecea8293c0cc36d517213fe6bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T09:57:22Z is after 2025-08-24T17:21:41Z" Oct 14 09:57:22 crc kubenswrapper[4698]: I1014 09:57:22.333129 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:57:22 crc kubenswrapper[4698]: I1014 09:57:22.333166 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:57:22 crc kubenswrapper[4698]: I1014 09:57:22.333177 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:57:22 crc kubenswrapper[4698]: I1014 09:57:22.333196 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:57:22 crc kubenswrapper[4698]: I1014 09:57:22.333208 4698 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:57:22Z","lastTransitionTime":"2025-10-14T09:57:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 09:57:22 crc kubenswrapper[4698]: I1014 09:57:22.338669 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T09:57:22Z is after 2025-08-24T17:21:41Z" Oct 14 09:57:22 crc kubenswrapper[4698]: I1014 09:57:22.352573 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-b7cbk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fbf10bbc-318d-4f46-83a0-fdbad9888201\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a0b9fe188f41927a342dc4928eaa5f27d151d9901af15f33ce252bdbfbb3be6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tkghj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T09:57:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-b7cbk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T09:57:22Z is after 2025-08-24T17:21:41Z" Oct 14 09:57:22 crc kubenswrapper[4698]: I1014 09:57:22.368729 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-lp4sk" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c359a8fc-1e2f-49af-8da2-719d52bd969a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://082e0edeedb2cabe07e14812f2406a98f39fdf9923f562c33ccc9a761d0e2472\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lzl92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8068aecfbfc1
56636e404dba3daa36e0bebb92ecf3c63158eaebf9a03cdc96b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lzl92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T09:57:18Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-lp4sk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T09:57:22Z is after 2025-08-24T17:21:41Z" Oct 14 09:57:22 crc kubenswrapper[4698]: I1014 09:57:22.388594 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"32240a01-1692-4f43-aebd-2b04d9b2435e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:56:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:56:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:56:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0fcdc7699fee72d9dbf3ad7df185ac8c884125781080739e4245128af2e41040\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d32efa8c63456d23b3824db4a4debb51fa4d13d1fd5eb6a488b9392d96073435\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a78e7d6532fda47f50da010b729a9108961a3d5c79191517a86f00714ccb1163\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8cea8367d4864478508221648a52582ee794ca4eb7b48f112ca0d78d97c2f7e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a8cea8367d4864478508221648a52582ee794ca4eb7b48f112ca0d78d97c2f7e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"message\\\":\\\"file observer\\\\nW1014 09:57:17.768150 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1014 09:57:17.768243 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1014 09:57:17.769073 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-133410875/tls.crt::/tmp/serving-cert-133410875/tls.key\\\\\\\"\\\\nI1014 09:57:18.101848 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1014 09:57:18.107433 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1014 09:57:18.107456 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1014 09:57:18.107479 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1014 09:57:18.107484 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1014 09:57:18.112858 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1014 09:57:18.112875 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1014 09:57:18.112883 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1014 09:57:18.112889 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1014 09:57:18.112893 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1014 
09:57:18.112896 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1014 09:57:18.112900 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1014 09:57:18.112903 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1014 09:57:18.114133 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-14T09:57:12Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2d12c9c114016d1302fa97515b0a59a82a4524b77e5d22cbec155beb6a52bce\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:00Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf7a250bbbc63f0ce78ae1c27f82643a02dc264c3b83555a4cc72a788eda52d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev
/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cf7a250bbbc63f0ce78ae1c27f82643a02dc264c3b83555a4cc72a788eda52d5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T09:56:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T09:56:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T09:56:59Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T09:57:22Z is after 2025-08-24T17:21:41Z" Oct 14 09:57:22 crc kubenswrapper[4698]: I1014 09:57:22.406959 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"32a7378b-ad21-49a8-9382-e7e5fbffb2f5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:56:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:56:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45beb2bbd344fbd265eca554410f31c493b55815bcac8dbb8c30c592b40dcfe2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9a23f4949446d47321258b172e07f6c592cd9921650920f8af93a5d749a61dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:56:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1fc70d614848f0b4bc342659099f9c617db052ebb995716f9cdb1113f4f78901\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://71c3764056ab55aa9f75e0bf62c0c7eed5440e9d580087ca9ce266611dd8c292\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-10-14T09:57:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T09:56:59Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T09:57:22Z is after 2025-08-24T17:21:41Z" Oct 14 09:57:22 crc kubenswrapper[4698]: I1014 09:57:22.421041 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T09:57:22Z is after 2025-08-24T17:21:41Z" Oct 14 09:57:22 crc kubenswrapper[4698]: I1014 09:57:22.433728 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aaee81debae3e251e075e61aca075c075d32fd976f1977adecba0c834e89cabb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-10-14T09:57:22Z is after 2025-08-24T17:21:41Z" Oct 14 09:57:22 crc kubenswrapper[4698]: I1014 09:57:22.436125 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:57:22 crc kubenswrapper[4698]: I1014 09:57:22.436155 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:57:22 crc kubenswrapper[4698]: I1014 09:57:22.436163 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:57:22 crc kubenswrapper[4698]: I1014 09:57:22.436179 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:57:22 crc kubenswrapper[4698]: I1014 09:57:22.436189 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:57:22Z","lastTransitionTime":"2025-10-14T09:57:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:57:22 crc kubenswrapper[4698]: I1014 09:57:22.443706 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8rj7q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b3d7ebe7-24ac-4bb6-be80-db147dc1c604\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e44a1413e71cab92d1b5ee4c81cc65edc9925fa1e888d5d5670e55e1b60ee81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-95sjn\\\",\\\"re
adOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T09:57:18Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8rj7q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T09:57:22Z is after 2025-08-24T17:21:41Z" Oct 14 09:57:22 crc kubenswrapper[4698]: I1014 09:57:22.456520 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aaee81debae3e251e075e61aca075c075d32fd976f1977adecba0c834e89cabb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\
\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T09:57:22Z is after 2025-08-24T17:21:41Z" Oct 14 09:57:22 crc kubenswrapper[4698]: I1014 09:57:22.468330 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8rj7q" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b3d7ebe7-24ac-4bb6-be80-db147dc1c604\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e44a1413e71cab92d1b5ee4c81cc65edc9925fa1e888d5d5670e55e1b60ee81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-95sjn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T09:57:18Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8rj7q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T09:57:22Z is after 2025-08-24T17:21:41Z" Oct 14 09:57:22 crc kubenswrapper[4698]: I1014 09:57:22.481322 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-pfxrp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e00aa977-8736-4b4d-8d58-c3d13879c49a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4adf04967efc193a93227043cfb6bb56f0ec1a7af90e351b7a618e582ddd8c41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a695
20ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c9lxj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T09:57:18Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-pfxrp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T09:57:22Z is after 2025-08-24T17:21:41Z" Oct 14 09:57:22 crc kubenswrapper[4698]: I1014 09:57:22.502866 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5twvn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e9f983f-10a0-43b7-8590-346577a561ef\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8x7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9090d92cdf10a884e39938f3d7908d63cb91ee3ea7832cb788aecaa47c70d753\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9090d92cdf10a884e39938f3d7908d63cb91ee3ea7832cb788aecaa47c70d753\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T09:57:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T09:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8x7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://20e7809019d6fa43f65e27abe934ed5dfc51a3afb0a2d9391fa4edd67fc03ac9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://20e7809019d6fa43f65e27abe934ed5dfc51a3afb0a2d9391fa4edd67fc03ac9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T09:57:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8x7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c61e207f4234cf9a59bd787fc476ff4bc2d1abd6f299b2855970981e5783398\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0c61e207f4234cf9a59bd787fc476ff4bc2d1abd6f299b2855970981e5783398\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T09:57:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T09:57:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8x7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8x7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\
\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8x7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8x7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T09:57:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5twvn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T09:57:22Z is after 2025-08-24T17:21:41Z" Oct 14 09:57:22 crc kubenswrapper[4698]: I1014 
09:57:22.520491 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hspfz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d02f5359-81fc-4261-b995-e58c78bcec0e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjlwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjlwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjlwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjlwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjlwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjlwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjlwb\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjlwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f7d7f89348e04b617ba4ec3cf76f4fefdd36492c656aed8141f3775a7ca8e3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0f7d7f89348e04b617ba4ec3cf76f4fefdd36492c656aed8141f3775a7ca8e3b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T09:57:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T09:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjlwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T09:57:18Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hspfz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T09:57:22Z is after 2025-08-24T17:21:41Z" Oct 14 09:57:22 crc kubenswrapper[4698]: I1014 09:57:22.535851 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://046b339570690752cfbe63e358065d3b58ef84c6708729abc1f590ff4c880521\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\"
:{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T09:57:22Z is after 2025-08-24T17:21:41Z" Oct 14 09:57:22 crc kubenswrapper[4698]: I1014 09:57:22.538859 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:57:22 crc kubenswrapper[4698]: I1014 09:57:22.538903 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:57:22 crc kubenswrapper[4698]: I1014 09:57:22.538913 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:57:22 crc kubenswrapper[4698]: I1014 09:57:22.538931 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:57:22 crc kubenswrapper[4698]: I1014 09:57:22.538940 4698 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:57:22Z","lastTransitionTime":"2025-10-14T09:57:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 09:57:22 crc kubenswrapper[4698]: I1014 09:57:22.554891 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T09:57:22Z is after 2025-08-24T17:21:41Z" Oct 14 09:57:22 crc kubenswrapper[4698]: I1014 09:57:22.577360 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6bbd953b34291d62c5b40ab6339c4709be89d1983748419cabf1adff245af77f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b5dcb74bf1caafb12296355d176dd1cda1c06ecea8293c0cc36d517213fe6bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T09:57:22Z is after 2025-08-24T17:21:41Z" Oct 14 09:57:22 crc kubenswrapper[4698]: I1014 09:57:22.616674 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T09:57:22Z is after 2025-08-24T17:21:41Z" Oct 14 09:57:22 crc kubenswrapper[4698]: I1014 09:57:22.641491 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:57:22 crc kubenswrapper[4698]: I1014 
09:57:22.641538 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:57:22 crc kubenswrapper[4698]: I1014 09:57:22.641551 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:57:22 crc kubenswrapper[4698]: I1014 09:57:22.641571 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:57:22 crc kubenswrapper[4698]: I1014 09:57:22.641583 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:57:22Z","lastTransitionTime":"2025-10-14T09:57:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 09:57:22 crc kubenswrapper[4698]: I1014 09:57:22.653531 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-b7cbk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fbf10bbc-318d-4f46-83a0-fdbad9888201\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a0b9fe188f41927a342dc4928eaa5f27d151d9901af15f33ce252bdbfbb3be6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tkghj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T09:57:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-b7cbk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T09:57:22Z is after 2025-08-24T17:21:41Z" Oct 14 09:57:22 crc kubenswrapper[4698]: I1014 09:57:22.662566 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" 
(UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 14 09:57:22 crc kubenswrapper[4698]: E1014 09:57:22.662715 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-14 09:57:26.662689675 +0000 UTC m=+28.359989101 (durationBeforeRetry 4s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 14 09:57:22 crc kubenswrapper[4698]: I1014 09:57:22.662842 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 14 09:57:22 crc kubenswrapper[4698]: I1014 09:57:22.662918 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 14 09:57:22 crc kubenswrapper[4698]: E1014 09:57:22.662933 
4698 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Oct 14 09:57:22 crc kubenswrapper[4698]: E1014 09:57:22.662982 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-10-14 09:57:26.662970683 +0000 UTC m=+28.360270129 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Oct 14 09:57:22 crc kubenswrapper[4698]: E1014 09:57:22.663118 4698 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Oct 14 09:57:22 crc kubenswrapper[4698]: E1014 09:57:22.663216 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-10-14 09:57:26.663193339 +0000 UTC m=+28.360492765 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Oct 14 09:57:22 crc kubenswrapper[4698]: I1014 09:57:22.695032 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-lp4sk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c359a8fc-1e2f-49af-8da2-719d52bd969a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://082e0edeedb2cabe07e14812f2406a98f39fdf9923f562c33ccc9a761d0e2472\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready
\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lzl92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8068aecfbfc156636e404dba3daa36e0bebb92ecf3c63158eaebf9a03cdc96b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lzl92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T09:57:18Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-lp4sk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2025-10-14T09:57:22Z is after 2025-08-24T17:21:41Z" Oct 14 09:57:22 crc kubenswrapper[4698]: I1014 09:57:22.738274 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"32240a01-1692-4f43-aebd-2b04d9b2435e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:56:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:56:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:56:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0fcdc7699fee72d9dbf3ad7df185ac8c884125781080739e4245128af2e41040\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d32efa8c63456d23b3824db4a4debb51fa4d13d1fd5eb6a488b9392d96073435\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://a78e7d6532fda47f50da010b729a9108961a3d5c79191517a86f00714ccb1163\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8cea8367d4864478508221648a52582ee794ca4eb7b48f112ca0d78d97c2f7e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a8cea8367d4864478508221648a52582ee794ca4eb7b48f112ca0d78d97c2f7e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"message\\\":\\\"file observer\\\\nW1014 09:57:17.768150 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1014 09:57:17.768243 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1014 09:57:17.769073 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-133410875/tls.crt::/tmp/serving-cert-133410875/tls.key\\\\\\\"\\\\nI1014 09:57:18.101848 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1014 09:57:18.107433 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1014 09:57:18.107456 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1014 09:57:18.107479 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1014 09:57:18.107484 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1014 09:57:18.112858 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1014 09:57:18.112875 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1014 09:57:18.112883 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1014 09:57:18.112889 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1014 09:57:18.112893 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1014 09:57:18.112896 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1014 09:57:18.112900 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1014 09:57:18.112903 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1014 09:57:18.114133 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-14T09:57:12Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2d12c9c114016d1302fa97515b0a59a82a4524b77e5d22cbec155beb6a52bce\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:00Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf7a250bbbc63f0ce78ae1c27f82643a02dc264c3b83555a4cc72a788eda52d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cf7a250bbbc63f0ce78ae1c27f82643a02dc264c3b83555a4cc72a788eda52d5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T09:56:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T09:56:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T09:56:59Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T09:57:22Z is after 2025-08-24T17:21:41Z" Oct 14 09:57:22 crc kubenswrapper[4698]: I1014 09:57:22.744468 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:57:22 crc kubenswrapper[4698]: I1014 09:57:22.744513 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:57:22 crc kubenswrapper[4698]: I1014 09:57:22.744531 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:57:22 crc kubenswrapper[4698]: I1014 09:57:22.744555 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:57:22 crc kubenswrapper[4698]: I1014 09:57:22.744573 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:57:22Z","lastTransitionTime":"2025-10-14T09:57:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:57:22 crc kubenswrapper[4698]: I1014 09:57:22.764339 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 14 09:57:22 crc kubenswrapper[4698]: I1014 09:57:22.764422 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 14 09:57:22 crc kubenswrapper[4698]: E1014 09:57:22.764678 4698 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Oct 14 09:57:22 crc kubenswrapper[4698]: E1014 09:57:22.764712 4698 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Oct 14 09:57:22 crc kubenswrapper[4698]: E1014 09:57:22.764729 4698 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 14 09:57:22 crc kubenswrapper[4698]: E1014 09:57:22.764829 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl 
podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-10-14 09:57:26.764807496 +0000 UTC m=+28.462106912 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 14 09:57:22 crc kubenswrapper[4698]: E1014 09:57:22.765341 4698 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Oct 14 09:57:22 crc kubenswrapper[4698]: E1014 09:57:22.765354 4698 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Oct 14 09:57:22 crc kubenswrapper[4698]: E1014 09:57:22.765364 4698 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 14 09:57:22 crc kubenswrapper[4698]: E1014 09:57:22.765399 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-10-14 09:57:26.765390133 +0000 UTC m=+28.462689549 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 14 09:57:22 crc kubenswrapper[4698]: I1014 09:57:22.776434 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"32a7378b-ad21-49a8-9382-e7e5fbffb2f5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:56:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:56:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45beb2bbd344fbd265eca554410f31c493b55815bcac8dbb8c30c592b40dcfe2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount
\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9a23f4949446d47321258b172e07f6c592cd9921650920f8af93a5d749a61dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:56:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1fc70d614848f0b4bc342659099f9c617db052ebb995716f9cdb1113f4f78901\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}
,{\\\"containerID\\\":\\\"cri-o://71c3764056ab55aa9f75e0bf62c0c7eed5440e9d580087ca9ce266611dd8c292\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T09:56:59Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T09:57:22Z is after 2025-08-24T17:21:41Z" Oct 14 09:57:22 crc kubenswrapper[4698]: I1014 09:57:22.811205 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T09:57:22Z is after 2025-08-24T17:21:41Z" Oct 14 09:57:22 crc kubenswrapper[4698]: I1014 09:57:22.847030 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:57:22 crc kubenswrapper[4698]: I1014 09:57:22.847083 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:57:22 crc kubenswrapper[4698]: I1014 09:57:22.847097 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:57:22 crc kubenswrapper[4698]: I1014 09:57:22.847114 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:57:22 crc kubenswrapper[4698]: I1014 09:57:22.847126 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:57:22Z","lastTransitionTime":"2025-10-14T09:57:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 09:57:22 crc kubenswrapper[4698]: I1014 09:57:22.950018 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:57:22 crc kubenswrapper[4698]: I1014 09:57:22.950055 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:57:22 crc kubenswrapper[4698]: I1014 09:57:22.950063 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:57:22 crc kubenswrapper[4698]: I1014 09:57:22.950077 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:57:22 crc kubenswrapper[4698]: I1014 09:57:22.950086 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:57:22Z","lastTransitionTime":"2025-10-14T09:57:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 09:57:23 crc kubenswrapper[4698]: I1014 09:57:23.016357 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 14 09:57:23 crc kubenswrapper[4698]: I1014 09:57:23.016404 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 14 09:57:23 crc kubenswrapper[4698]: E1014 09:57:23.016543 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 14 09:57:23 crc kubenswrapper[4698]: I1014 09:57:23.016704 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 14 09:57:23 crc kubenswrapper[4698]: E1014 09:57:23.016876 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 14 09:57:23 crc kubenswrapper[4698]: E1014 09:57:23.017057 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 14 09:57:23 crc kubenswrapper[4698]: I1014 09:57:23.053039 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:57:23 crc kubenswrapper[4698]: I1014 09:57:23.053099 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:57:23 crc kubenswrapper[4698]: I1014 09:57:23.053116 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:57:23 crc kubenswrapper[4698]: I1014 09:57:23.053139 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:57:23 crc kubenswrapper[4698]: I1014 09:57:23.053156 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:57:23Z","lastTransitionTime":"2025-10-14T09:57:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:57:23 crc kubenswrapper[4698]: I1014 09:57:23.156630 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:57:23 crc kubenswrapper[4698]: I1014 09:57:23.156670 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:57:23 crc kubenswrapper[4698]: I1014 09:57:23.156678 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:57:23 crc kubenswrapper[4698]: I1014 09:57:23.156692 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:57:23 crc kubenswrapper[4698]: I1014 09:57:23.156702 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:57:23Z","lastTransitionTime":"2025-10-14T09:57:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:57:23 crc kubenswrapper[4698]: I1014 09:57:23.208749 4698 generic.go:334] "Generic (PLEG): container finished" podID="5e9f983f-10a0-43b7-8590-346577a561ef" containerID="1abeca9df52e5628432a4cb0be0f212a643a644abfe303559ec7d90f3fd3955a" exitCode=0 Oct 14 09:57:23 crc kubenswrapper[4698]: I1014 09:57:23.208819 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-5twvn" event={"ID":"5e9f983f-10a0-43b7-8590-346577a561ef","Type":"ContainerDied","Data":"1abeca9df52e5628432a4cb0be0f212a643a644abfe303559ec7d90f3fd3955a"} Oct 14 09:57:23 crc kubenswrapper[4698]: I1014 09:57:23.235854 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"32240a01-1692-4f43-aebd-2b04d9b2435e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:56:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:56:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:56:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0fcdc7699fee72d9dbf3ad7df185ac8c884125781080739e4245128af2e41040\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d32efa8c63456d23b3824db4a4debb51fa4d13d1fd5eb6a488b9392d96073435\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://a78e7d6532fda47f50da010b729a9108961a3d5c79191517a86f00714ccb1163\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8cea8367d4864478508221648a52582ee794ca4eb7b48f112ca0d78d97c2f7e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a8cea8367d4864478508221648a52582ee794ca4eb7b48f112ca0d78d97c2f7e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"message\\\":\\\"file observer\\\\nW1014 09:57:17.768150 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1014 09:57:17.768243 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1014 09:57:17.769073 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-133410875/tls.crt::/tmp/serving-cert-133410875/tls.key\\\\\\\"\\\\nI1014 09:57:18.101848 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1014 09:57:18.107433 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1014 09:57:18.107456 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1014 09:57:18.107479 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1014 09:57:18.107484 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1014 09:57:18.112858 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1014 09:57:18.112875 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1014 09:57:18.112883 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1014 09:57:18.112889 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1014 09:57:18.112893 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1014 09:57:18.112896 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1014 09:57:18.112900 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1014 09:57:18.112903 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1014 09:57:18.114133 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-14T09:57:12Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2d12c9c114016d1302fa97515b0a59a82a4524b77e5d22cbec155beb6a52bce\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:00Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf7a250bbbc63f0ce78ae1c27f82643a02dc264c3b83555a4cc72a788eda52d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cf7a250bbbc63f0ce78ae1c27f82643a02dc264c3b83555a4cc72a788eda52d5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T09:56:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T09:56:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T09:56:59Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T09:57:23Z is after 2025-08-24T17:21:41Z" Oct 14 09:57:23 crc kubenswrapper[4698]: I1014 09:57:23.254979 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"32a7378b-ad21-49a8-9382-e7e5fbffb2f5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:56:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:56:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45beb2bbd344fbd265eca554410f31c493b55815bcac8dbb8c30c592b40dcfe2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha25
6:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9a23f4949446d47321258b172e07f6c592cd9921650920f8af93a5d749a61dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:56:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1fc70d614848f0b4bc342659099f9c617db052ebb995716f9cdb1113f4f78901\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\
":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://71c3764056ab55aa9f75e0bf62c0c7eed5440e9d580087ca9ce266611dd8c292\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T09:56:59Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T09:57:23Z is after 2025-08-24T17:21:41Z" Oct 14 09:57:23 crc kubenswrapper[4698]: I1014 09:57:23.259685 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:57:23 crc kubenswrapper[4698]: I1014 09:57:23.259738 4698 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:57:23 crc kubenswrapper[4698]: I1014 09:57:23.259759 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:57:23 crc kubenswrapper[4698]: I1014 09:57:23.259819 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:57:23 crc kubenswrapper[4698]: I1014 09:57:23.259841 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:57:23Z","lastTransitionTime":"2025-10-14T09:57:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 09:57:23 crc kubenswrapper[4698]: I1014 09:57:23.270477 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T09:57:23Z is after 2025-08-24T17:21:41Z" Oct 14 09:57:23 crc kubenswrapper[4698]: I1014 09:57:23.284552 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aaee81debae3e251e075e61aca075c075d32fd976f1977adecba0c834e89cabb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-10-14T09:57:23Z is after 2025-08-24T17:21:41Z" Oct 14 09:57:23 crc kubenswrapper[4698]: I1014 09:57:23.299115 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8rj7q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b3d7ebe7-24ac-4bb6-be80-db147dc1c604\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e44a1413e71cab92d1b5ee4c81cc65edc9925fa1e888d5d5670e55e1b60ee81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-95sjn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T09:57:18Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8rj7q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T09:57:23Z is after 2025-08-24T17:21:41Z" Oct 14 09:57:23 crc kubenswrapper[4698]: I1014 09:57:23.310955 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-pfxrp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e00aa977-8736-4b4d-8d58-c3d13879c49a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4adf04967efc193a93227043cfb6bb56f0ec1a7af90e351b7a618e
582ddd8c41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c9lxj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T09:57:18Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-pfxrp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T09:57:23Z is after 2025-08-24T17:21:41Z" Oct 14 09:57:23 crc kubenswrapper[4698]: I1014 09:57:23.328343 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5twvn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e9f983f-10a0-43b7-8590-346577a561ef\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8x7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9090d92cdf10a884e39938f3d7908d63cb91ee3ea7832cb788aecaa47c70d753\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9090d92cdf10a884e39938f3d7908d63cb91ee3ea7832cb788aecaa47c70d753\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T09:57:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T09:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8x7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://20e7809019d6fa43f65e27abe934ed5dfc51a3afb0a2d9391fa4edd67fc03ac9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://20e7809019d6fa43f65e27abe934ed5dfc51a3afb0a2d9391fa4edd67fc03ac9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T09:57:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8x7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c61e207f4234cf9a59bd787fc476ff4bc2d1abd6f299b2855970981e5783398\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0c61e207f4234cf9a59bd787fc476ff4bc2d1abd6f299b2855970981e5783398\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T09:57:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T09:57:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8x7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1abeca9df52e5628432a4cb0be0f212a643a644abfe303559ec7d90f3fd3955a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1abeca9df52e5628432a4cb0be0f212a643a644abfe303559ec7d90f3fd3955a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T09:57:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T09:57:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8x7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8x7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8x7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T09:57:18Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-additional-cni-plugins-5twvn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T09:57:23Z is after 2025-08-24T17:21:41Z" Oct 14 09:57:23 crc kubenswrapper[4698]: I1014 09:57:23.352815 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hspfz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d02f5359-81fc-4261-b995-e58c78bcec0e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjlwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjlwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjlwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjlwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjlwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjlwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjlwb\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjlwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f7d7f89348e04b617ba4ec3cf76f4fefdd36492c656aed8141f3775a7ca8e3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0f7d7f89348e04b617ba4ec3cf76f4fefdd36492c656aed8141f3775a7ca8e3b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T09:57:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T09:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjlwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T09:57:18Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hspfz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T09:57:23Z is after 2025-08-24T17:21:41Z" Oct 14 09:57:23 crc kubenswrapper[4698]: I1014 09:57:23.362513 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:57:23 crc kubenswrapper[4698]: I1014 09:57:23.362538 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:57:23 crc kubenswrapper[4698]: I1014 09:57:23.362545 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:57:23 crc kubenswrapper[4698]: I1014 09:57:23.362558 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:57:23 crc kubenswrapper[4698]: I1014 09:57:23.362566 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:57:23Z","lastTransitionTime":"2025-10-14T09:57:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:57:23 crc kubenswrapper[4698]: I1014 09:57:23.365970 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://046b339570690752cfbe63e358065d3b58ef84c6708729abc1f590ff4c880521\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T09:57:23Z is after 2025-08-24T17:21:41Z" Oct 14 09:57:23 crc kubenswrapper[4698]: I1014 09:57:23.377168 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T09:57:23Z is after 2025-08-24T17:21:41Z" Oct 14 09:57:23 crc kubenswrapper[4698]: I1014 09:57:23.390008 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6bbd953b34291d62c5b40ab6339c4709be89d1983748419cabf1adff245af77f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b5dcb74bf1caafb12296355d176dd1cda1c06ecea8293c0cc36d517213fe6bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T09:57:23Z is after 2025-08-24T17:21:41Z" Oct 14 09:57:23 crc kubenswrapper[4698]: I1014 09:57:23.405087 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T09:57:23Z is after 2025-08-24T17:21:41Z" Oct 14 09:57:23 crc kubenswrapper[4698]: I1014 09:57:23.419488 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-b7cbk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fbf10bbc-318d-4f46-83a0-fdbad9888201\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a0b9fe188f41927a342dc4928eaa5f27d151d9901af15f33ce252bdbfbb3be6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tkghj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T09:57:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-b7cbk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T09:57:23Z is after 2025-08-24T17:21:41Z" Oct 14 09:57:23 crc kubenswrapper[4698]: I1014 09:57:23.432697 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-lp4sk" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c359a8fc-1e2f-49af-8da2-719d52bd969a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://082e0edeedb2cabe07e14812f2406a98f39fdf9923f562c33ccc9a761d0e2472\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lzl92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8068aecfbfc1
56636e404dba3daa36e0bebb92ecf3c63158eaebf9a03cdc96b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lzl92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T09:57:18Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-lp4sk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T09:57:23Z is after 2025-08-24T17:21:41Z" Oct 14 09:57:23 crc kubenswrapper[4698]: I1014 09:57:23.464407 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:57:23 crc kubenswrapper[4698]: I1014 09:57:23.464454 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:57:23 crc kubenswrapper[4698]: I1014 09:57:23.464477 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 
14 09:57:23 crc kubenswrapper[4698]: I1014 09:57:23.464496 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:57:23 crc kubenswrapper[4698]: I1014 09:57:23.464508 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:57:23Z","lastTransitionTime":"2025-10-14T09:57:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 09:57:23 crc kubenswrapper[4698]: I1014 09:57:23.567637 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:57:23 crc kubenswrapper[4698]: I1014 09:57:23.567696 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:57:23 crc kubenswrapper[4698]: I1014 09:57:23.567713 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:57:23 crc kubenswrapper[4698]: I1014 09:57:23.567740 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:57:23 crc kubenswrapper[4698]: I1014 09:57:23.567759 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:57:23Z","lastTransitionTime":"2025-10-14T09:57:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:57:23 crc kubenswrapper[4698]: I1014 09:57:23.671211 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:57:23 crc kubenswrapper[4698]: I1014 09:57:23.671261 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:57:23 crc kubenswrapper[4698]: I1014 09:57:23.671274 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:57:23 crc kubenswrapper[4698]: I1014 09:57:23.671291 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:57:23 crc kubenswrapper[4698]: I1014 09:57:23.671303 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:57:23Z","lastTransitionTime":"2025-10-14T09:57:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:57:23 crc kubenswrapper[4698]: I1014 09:57:23.774889 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:57:23 crc kubenswrapper[4698]: I1014 09:57:23.774952 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:57:23 crc kubenswrapper[4698]: I1014 09:57:23.774973 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:57:23 crc kubenswrapper[4698]: I1014 09:57:23.774998 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:57:23 crc kubenswrapper[4698]: I1014 09:57:23.775016 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:57:23Z","lastTransitionTime":"2025-10-14T09:57:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:57:23 crc kubenswrapper[4698]: I1014 09:57:23.877668 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:57:23 crc kubenswrapper[4698]: I1014 09:57:23.877714 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:57:23 crc kubenswrapper[4698]: I1014 09:57:23.877726 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:57:23 crc kubenswrapper[4698]: I1014 09:57:23.877743 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:57:23 crc kubenswrapper[4698]: I1014 09:57:23.877754 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:57:23Z","lastTransitionTime":"2025-10-14T09:57:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:57:23 crc kubenswrapper[4698]: I1014 09:57:23.980416 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:57:23 crc kubenswrapper[4698]: I1014 09:57:23.980463 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:57:23 crc kubenswrapper[4698]: I1014 09:57:23.980479 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:57:23 crc kubenswrapper[4698]: I1014 09:57:23.980500 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:57:23 crc kubenswrapper[4698]: I1014 09:57:23.980517 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:57:23Z","lastTransitionTime":"2025-10-14T09:57:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:57:24 crc kubenswrapper[4698]: I1014 09:57:24.083695 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:57:24 crc kubenswrapper[4698]: I1014 09:57:24.084059 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:57:24 crc kubenswrapper[4698]: I1014 09:57:24.084163 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:57:24 crc kubenswrapper[4698]: I1014 09:57:24.084255 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:57:24 crc kubenswrapper[4698]: I1014 09:57:24.084362 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:57:24Z","lastTransitionTime":"2025-10-14T09:57:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:57:24 crc kubenswrapper[4698]: I1014 09:57:24.187750 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:57:24 crc kubenswrapper[4698]: I1014 09:57:24.188386 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:57:24 crc kubenswrapper[4698]: I1014 09:57:24.188643 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:57:24 crc kubenswrapper[4698]: I1014 09:57:24.188906 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:57:24 crc kubenswrapper[4698]: I1014 09:57:24.189106 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:57:24Z","lastTransitionTime":"2025-10-14T09:57:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:57:24 crc kubenswrapper[4698]: I1014 09:57:24.217261 4698 generic.go:334] "Generic (PLEG): container finished" podID="5e9f983f-10a0-43b7-8590-346577a561ef" containerID="c0654ea71914d478220611495dbefd1e47b1309aa18c6a05b21a4a45f0d8f1ca" exitCode=0 Oct 14 09:57:24 crc kubenswrapper[4698]: I1014 09:57:24.217318 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-5twvn" event={"ID":"5e9f983f-10a0-43b7-8590-346577a561ef","Type":"ContainerDied","Data":"c0654ea71914d478220611495dbefd1e47b1309aa18c6a05b21a4a45f0d8f1ca"} Oct 14 09:57:24 crc kubenswrapper[4698]: I1014 09:57:24.225909 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hspfz" event={"ID":"d02f5359-81fc-4261-b995-e58c78bcec0e","Type":"ContainerStarted","Data":"626a1ca2a8aa5d6c1b4078a0bc7b0aa540656abcedd8be206fbb93cdc96c1d95"} Oct 14 09:57:24 crc kubenswrapper[4698]: I1014 09:57:24.247575 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://046b339570690752cfbe63e358065d3b58ef84c6708729abc1f590ff4c880521\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-10-14T09:57:24Z is after 2025-08-24T17:21:41Z" Oct 14 09:57:24 crc kubenswrapper[4698]: I1014 09:57:24.268241 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T09:57:24Z is after 2025-08-24T17:21:41Z" Oct 14 09:57:24 crc kubenswrapper[4698]: I1014 09:57:24.286621 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6bbd953b34291d62c5b40ab6339c4709be89d1983748419cabf1adff245af77f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b5dcb74bf1caafb12296355d176dd1cda1c06ecea8293c0cc36d517213fe6bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T09:57:24Z is after 2025-08-24T17:21:41Z" Oct 14 09:57:24 crc kubenswrapper[4698]: I1014 09:57:24.291657 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:57:24 crc kubenswrapper[4698]: I1014 09:57:24.291698 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:57:24 crc kubenswrapper[4698]: I1014 09:57:24.291709 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:57:24 crc kubenswrapper[4698]: I1014 09:57:24.291728 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:57:24 crc kubenswrapper[4698]: I1014 09:57:24.291740 4698 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:57:24Z","lastTransitionTime":"2025-10-14T09:57:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 09:57:24 crc kubenswrapper[4698]: I1014 09:57:24.307267 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T09:57:24Z is after 2025-08-24T17:21:41Z" Oct 14 09:57:24 crc kubenswrapper[4698]: I1014 09:57:24.323404 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-b7cbk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fbf10bbc-318d-4f46-83a0-fdbad9888201\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a0b9fe188f41927a342dc4928eaa5f27d151d9901af15f33ce252bdbfbb3be6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tkghj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T09:57:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-b7cbk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T09:57:24Z is after 2025-08-24T17:21:41Z" Oct 14 09:57:24 crc kubenswrapper[4698]: I1014 09:57:24.334893 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-lp4sk" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c359a8fc-1e2f-49af-8da2-719d52bd969a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://082e0edeedb2cabe07e14812f2406a98f39fdf9923f562c33ccc9a761d0e2472\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lzl92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8068aecfbfc1
56636e404dba3daa36e0bebb92ecf3c63158eaebf9a03cdc96b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lzl92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T09:57:18Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-lp4sk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T09:57:24Z is after 2025-08-24T17:21:41Z" Oct 14 09:57:24 crc kubenswrapper[4698]: I1014 09:57:24.350397 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"32240a01-1692-4f43-aebd-2b04d9b2435e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:56:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:56:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:56:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0fcdc7699fee72d9dbf3ad7df185ac8c884125781080739e4245128af2e41040\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d32efa8c63456d23b3824db4a4debb51fa4d13d1fd5eb6a488b9392d96073435\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a78e7d6532fda47f50da010b729a9108961a3d5c79191517a86f00714ccb1163\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8cea8367d4864478508221648a52582ee794ca4eb7b48f112ca0d78d97c2f7e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a8cea8367d4864478508221648a52582ee794ca4eb7b48f112ca0d78d97c2f7e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"message\\\":\\\"file observer\\\\nW1014 09:57:17.768150 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1014 09:57:17.768243 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1014 09:57:17.769073 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-133410875/tls.crt::/tmp/serving-cert-133410875/tls.key\\\\\\\"\\\\nI1014 09:57:18.101848 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1014 09:57:18.107433 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1014 09:57:18.107456 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1014 09:57:18.107479 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1014 09:57:18.107484 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1014 09:57:18.112858 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1014 09:57:18.112875 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1014 09:57:18.112883 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1014 09:57:18.112889 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1014 09:57:18.112893 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1014 
09:57:18.112896 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1014 09:57:18.112900 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1014 09:57:18.112903 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1014 09:57:18.114133 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-14T09:57:12Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2d12c9c114016d1302fa97515b0a59a82a4524b77e5d22cbec155beb6a52bce\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:00Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf7a250bbbc63f0ce78ae1c27f82643a02dc264c3b83555a4cc72a788eda52d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev
/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cf7a250bbbc63f0ce78ae1c27f82643a02dc264c3b83555a4cc72a788eda52d5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T09:56:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T09:56:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T09:56:59Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T09:57:24Z is after 2025-08-24T17:21:41Z" Oct 14 09:57:24 crc kubenswrapper[4698]: I1014 09:57:24.365027 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"32a7378b-ad21-49a8-9382-e7e5fbffb2f5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:56:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:56:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45beb2bbd344fbd265eca554410f31c493b55815bcac8dbb8c30c592b40dcfe2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9a23f4949446d47321258b172e07f6c592cd9921650920f8af93a5d749a61dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:56:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1fc70d614848f0b4bc342659099f9c617db052ebb995716f9cdb1113f4f78901\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://71c3764056ab55aa9f75e0bf62c0c7eed5440e9d580087ca9ce266611dd8c292\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-10-14T09:57:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T09:56:59Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T09:57:24Z is after 2025-08-24T17:21:41Z" Oct 14 09:57:24 crc kubenswrapper[4698]: I1014 09:57:24.382488 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T09:57:24Z is after 2025-08-24T17:21:41Z" Oct 14 09:57:24 crc kubenswrapper[4698]: I1014 09:57:24.396293 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:57:24 crc kubenswrapper[4698]: I1014 09:57:24.396327 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:57:24 crc kubenswrapper[4698]: I1014 09:57:24.396338 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:57:24 crc 
kubenswrapper[4698]: I1014 09:57:24.396357 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:57:24 crc kubenswrapper[4698]: I1014 09:57:24.396369 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:57:24Z","lastTransitionTime":"2025-10-14T09:57:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 09:57:24 crc kubenswrapper[4698]: I1014 09:57:24.399358 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aaee81debae3e251e075e61aca075c075d32fd976f1977adecba0c834e89cabb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":tru
e,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T09:57:24Z is after 2025-08-24T17:21:41Z" Oct 14 09:57:24 crc kubenswrapper[4698]: I1014 09:57:24.410668 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8rj7q" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b3d7ebe7-24ac-4bb6-be80-db147dc1c604\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e44a1413e71cab92d1b5ee4c81cc65edc9925fa1e888d5d5670e55e1b60ee81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-95sjn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T09:57:18Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8rj7q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T09:57:24Z is after 2025-08-24T17:21:41Z" Oct 14 09:57:24 crc kubenswrapper[4698]: I1014 09:57:24.422087 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-pfxrp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e00aa977-8736-4b4d-8d58-c3d13879c49a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4adf04967efc193a93227043cfb6bb56f0ec1a7af90e351b7a618e582ddd8c41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a695
20ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c9lxj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T09:57:18Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-pfxrp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T09:57:24Z is after 2025-08-24T17:21:41Z" Oct 14 09:57:24 crc kubenswrapper[4698]: I1014 09:57:24.435877 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5twvn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e9f983f-10a0-43b7-8590-346577a561ef\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8x7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9090d92cdf10a884e39938f3d7908d63cb91ee3ea7832cb788aecaa47c70d753\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9090d92cdf10a884e39938f3d7908d63cb91ee3ea7832cb788aecaa47c70d753\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T09:57:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T09:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8x7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://20e7809019d6fa43f65e27abe934ed5dfc51a3afb0a2d9391fa4edd67fc03ac9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://20e7809019d6fa43f65e27abe934ed5dfc51a3afb0a2d9391fa4edd67fc03ac9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T09:57:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8x7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c61e207f4234cf9a59bd787fc476ff4bc2d1abd6f299b2855970981e5783398\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0c61e207f4234cf9a59bd787fc476ff4bc2d1abd6f299b2855970981e5783398\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T09:57:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T09:57:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8x7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1abeca9df52e5628432a4cb0be0f212a643a644abfe303559ec7d90f3fd3955a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1abeca9df52e5628432a4cb0be0f212a643a644abfe303559ec7d90f3fd3955a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T09:57:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T09:57:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8x7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0654ea71914d478220611495dbefd1e47b1309aa18c6a05b21a4a45f0d8f1ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c0654ea71914d478220611495dbefd1e47b1309aa18c6a05b21a4a45f0d8f1ca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T09:57:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T09:57:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8x7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"c
nibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8x7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T09:57:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5twvn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T09:57:24Z is after 2025-08-24T17:21:41Z" Oct 14 09:57:24 crc kubenswrapper[4698]: I1014 09:57:24.453781 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hspfz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d02f5359-81fc-4261-b995-e58c78bcec0e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjlwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/v
ar/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjlwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjlwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjlwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},
{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjlwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjlwb\\\",\\\"readOnly\\\":true,\\\
"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mount
Path\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjlwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjlwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f7d7f89348e04b617ba4ec3cf76f4fefdd36492c656aed8141f3775a7ca8e3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\"
:{\\\"containerID\\\":\\\"cri-o://0f7d7f89348e04b617ba4ec3cf76f4fefdd36492c656aed8141f3775a7ca8e3b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T09:57:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T09:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjlwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T09:57:18Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hspfz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T09:57:24Z is after 2025-08-24T17:21:41Z" Oct 14 09:57:24 crc kubenswrapper[4698]: I1014 09:57:24.498626 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:57:24 crc kubenswrapper[4698]: I1014 09:57:24.498655 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:57:24 crc kubenswrapper[4698]: I1014 09:57:24.498664 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:57:24 crc kubenswrapper[4698]: I1014 09:57:24.498676 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:57:24 crc kubenswrapper[4698]: I1014 09:57:24.498686 4698 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:57:24Z","lastTransitionTime":"2025-10-14T09:57:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 09:57:24 crc kubenswrapper[4698]: I1014 09:57:24.602395 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:57:24 crc kubenswrapper[4698]: I1014 09:57:24.602453 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:57:24 crc kubenswrapper[4698]: I1014 09:57:24.602469 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:57:24 crc kubenswrapper[4698]: I1014 09:57:24.602492 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:57:24 crc kubenswrapper[4698]: I1014 09:57:24.602511 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:57:24Z","lastTransitionTime":"2025-10-14T09:57:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:57:24 crc kubenswrapper[4698]: I1014 09:57:24.705554 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:57:24 crc kubenswrapper[4698]: I1014 09:57:24.705622 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:57:24 crc kubenswrapper[4698]: I1014 09:57:24.705638 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:57:24 crc kubenswrapper[4698]: I1014 09:57:24.705662 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:57:24 crc kubenswrapper[4698]: I1014 09:57:24.705679 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:57:24Z","lastTransitionTime":"2025-10-14T09:57:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:57:24 crc kubenswrapper[4698]: I1014 09:57:24.809069 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:57:24 crc kubenswrapper[4698]: I1014 09:57:24.809131 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:57:24 crc kubenswrapper[4698]: I1014 09:57:24.809148 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:57:24 crc kubenswrapper[4698]: I1014 09:57:24.809175 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:57:24 crc kubenswrapper[4698]: I1014 09:57:24.809192 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:57:24Z","lastTransitionTime":"2025-10-14T09:57:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:57:24 crc kubenswrapper[4698]: I1014 09:57:24.912457 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:57:24 crc kubenswrapper[4698]: I1014 09:57:24.912511 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:57:24 crc kubenswrapper[4698]: I1014 09:57:24.912525 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:57:24 crc kubenswrapper[4698]: I1014 09:57:24.912544 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:57:24 crc kubenswrapper[4698]: I1014 09:57:24.912559 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:57:24Z","lastTransitionTime":"2025-10-14T09:57:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 09:57:25 crc kubenswrapper[4698]: I1014 09:57:25.015964 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 14 09:57:25 crc kubenswrapper[4698]: I1014 09:57:25.016004 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 14 09:57:25 crc kubenswrapper[4698]: E1014 09:57:25.016155 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 14 09:57:25 crc kubenswrapper[4698]: I1014 09:57:25.016329 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:57:25 crc kubenswrapper[4698]: I1014 09:57:25.016375 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:57:25 crc kubenswrapper[4698]: I1014 09:57:25.016395 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:57:25 crc kubenswrapper[4698]: I1014 09:57:25.016421 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:57:25 crc kubenswrapper[4698]: I1014 09:57:25.016443 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:57:25Z","lastTransitionTime":"2025-10-14T09:57:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 09:57:25 crc kubenswrapper[4698]: E1014 09:57:25.016434 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 14 09:57:25 crc kubenswrapper[4698]: I1014 09:57:25.016891 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 14 09:57:25 crc kubenswrapper[4698]: E1014 09:57:25.017277 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 14 09:57:25 crc kubenswrapper[4698]: I1014 09:57:25.119039 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:57:25 crc kubenswrapper[4698]: I1014 09:57:25.119081 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:57:25 crc kubenswrapper[4698]: I1014 09:57:25.119092 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:57:25 crc kubenswrapper[4698]: I1014 09:57:25.119108 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:57:25 crc kubenswrapper[4698]: I1014 09:57:25.119119 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:57:25Z","lastTransitionTime":"2025-10-14T09:57:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:57:25 crc kubenswrapper[4698]: I1014 09:57:25.221885 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:57:25 crc kubenswrapper[4698]: I1014 09:57:25.221950 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:57:25 crc kubenswrapper[4698]: I1014 09:57:25.221967 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:57:25 crc kubenswrapper[4698]: I1014 09:57:25.221992 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:57:25 crc kubenswrapper[4698]: I1014 09:57:25.222009 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:57:25Z","lastTransitionTime":"2025-10-14T09:57:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:57:25 crc kubenswrapper[4698]: I1014 09:57:25.234437 4698 generic.go:334] "Generic (PLEG): container finished" podID="5e9f983f-10a0-43b7-8590-346577a561ef" containerID="5307b806fcee628d7552a399a950a096745a8eb67c3f72c5a10854f042b992a6" exitCode=0 Oct 14 09:57:25 crc kubenswrapper[4698]: I1014 09:57:25.234494 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-5twvn" event={"ID":"5e9f983f-10a0-43b7-8590-346577a561ef","Type":"ContainerDied","Data":"5307b806fcee628d7552a399a950a096745a8eb67c3f72c5a10854f042b992a6"} Oct 14 09:57:25 crc kubenswrapper[4698]: I1014 09:57:25.256802 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"32240a01-1692-4f43-aebd-2b04d9b2435e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:56:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:56:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:56:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0fcdc7699fee72d9dbf3ad7df185ac8c884125781080739e4245128af2e41040\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d32efa8c63456d23b3824db4a4debb51fa4d13d1fd5eb6a488b9392d96073435\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://a78e7d6532fda47f50da010b729a9108961a3d5c79191517a86f00714ccb1163\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8cea8367d4864478508221648a52582ee794ca4eb7b48f112ca0d78d97c2f7e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a8cea8367d4864478508221648a52582ee794ca4eb7b48f112ca0d78d97c2f7e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"message\\\":\\\"file observer\\\\nW1014 09:57:17.768150 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1014 09:57:17.768243 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1014 09:57:17.769073 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-133410875/tls.crt::/tmp/serving-cert-133410875/tls.key\\\\\\\"\\\\nI1014 09:57:18.101848 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1014 09:57:18.107433 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1014 09:57:18.107456 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1014 09:57:18.107479 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1014 09:57:18.107484 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1014 09:57:18.112858 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1014 09:57:18.112875 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1014 09:57:18.112883 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1014 09:57:18.112889 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1014 09:57:18.112893 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1014 09:57:18.112896 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1014 09:57:18.112900 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1014 09:57:18.112903 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1014 09:57:18.114133 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-14T09:57:12Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2d12c9c114016d1302fa97515b0a59a82a4524b77e5d22cbec155beb6a52bce\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:00Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf7a250bbbc63f0ce78ae1c27f82643a02dc264c3b83555a4cc72a788eda52d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cf7a250bbbc63f0ce78ae1c27f82643a02dc264c3b83555a4cc72a788eda52d5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T09:56:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T09:56:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T09:56:59Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T09:57:25Z is after 2025-08-24T17:21:41Z" Oct 14 09:57:25 crc kubenswrapper[4698]: I1014 09:57:25.273753 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"32a7378b-ad21-49a8-9382-e7e5fbffb2f5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:56:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:56:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45beb2bbd344fbd265eca554410f31c493b55815bcac8dbb8c30c592b40dcfe2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha25
6:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9a23f4949446d47321258b172e07f6c592cd9921650920f8af93a5d749a61dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:56:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1fc70d614848f0b4bc342659099f9c617db052ebb995716f9cdb1113f4f78901\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\
":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://71c3764056ab55aa9f75e0bf62c0c7eed5440e9d580087ca9ce266611dd8c292\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T09:56:59Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T09:57:25Z is after 2025-08-24T17:21:41Z" Oct 14 09:57:25 crc kubenswrapper[4698]: I1014 09:57:25.288002 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T09:57:25Z is after 2025-08-24T17:21:41Z" Oct 14 09:57:25 crc kubenswrapper[4698]: I1014 09:57:25.304654 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aaee81debae3e251e075e61aca075c075d32fd976f1977adecba0c834e89cabb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-10-14T09:57:25Z is after 2025-08-24T17:21:41Z" Oct 14 09:57:25 crc kubenswrapper[4698]: I1014 09:57:25.313803 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8rj7q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b3d7ebe7-24ac-4bb6-be80-db147dc1c604\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e44a1413e71cab92d1b5ee4c81cc65edc9925fa1e888d5d5670e55e1b60ee81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-95sjn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T09:57:18Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8rj7q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T09:57:25Z is after 2025-08-24T17:21:41Z" Oct 14 09:57:25 crc kubenswrapper[4698]: I1014 09:57:25.323740 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-pfxrp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e00aa977-8736-4b4d-8d58-c3d13879c49a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4adf04967efc193a93227043cfb6bb56f0ec1a7af90e351b7a618e
582ddd8c41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c9lxj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T09:57:18Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-pfxrp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T09:57:25Z is after 2025-08-24T17:21:41Z" Oct 14 09:57:25 crc kubenswrapper[4698]: I1014 09:57:25.324091 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:57:25 crc kubenswrapper[4698]: I1014 09:57:25.324174 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:57:25 crc kubenswrapper[4698]: I1014 09:57:25.324237 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" 
Oct 14 09:57:25 crc kubenswrapper[4698]: I1014 09:57:25.324303 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:57:25 crc kubenswrapper[4698]: I1014 09:57:25.324363 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:57:25Z","lastTransitionTime":"2025-10-14T09:57:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 09:57:25 crc kubenswrapper[4698]: I1014 09:57:25.339599 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5twvn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e9f983f-10a0-43b7-8590-346577a561ef\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8x7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9090d92cdf10a884e39938f3d7908d63cb91ee3ea7832cb788aecaa47c70d753\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9090d92cdf10a884e39938f3d7908d63cb91ee3ea7832cb788aecaa47c70d753\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T09:57:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T09:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8x7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://20e7809019d6fa43f65e27abe934ed5dfc51a3afb0a2d9391fa4edd67fc03ac9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://20e7809019d6fa43f65e27abe934ed5dfc51a3afb0a2d9391fa4edd67fc03ac9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T09:57:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8x7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c61e207f4234cf9a59bd787fc476ff4bc2d1abd6f299b2855970981e5783398\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0c61e207f4234cf9a59bd787fc476ff4bc2d1abd6f299b2855970981e5783398\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T09:57:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T09:57:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8x7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1abeca9df52e5628432a4cb0be0f212a643a644abfe303559ec7d90f3fd3955a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1abeca9df52e5628432a4cb0be0f212a643a644abfe303559ec7d90f3fd3955a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T09:57:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T09:57:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8x7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0654ea71914d478220611495dbefd1e47b1309aa18c6a05b21a4a45f0d8f1ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c0654ea71914d478220611495dbefd1e47b1309aa18c6a05b21a4a45f0d8f1ca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T09:57:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T09:57:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8x7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5307b806fcee628d7552a399a950a096745a8eb67c3f72c5a10854f042b992a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"
ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5307b806fcee628d7552a399a950a096745a8eb67c3f72c5a10854f042b992a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T09:57:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T09:57:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8x7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T09:57:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5twvn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T09:57:25Z is after 2025-08-24T17:21:41Z" Oct 14 09:57:25 crc kubenswrapper[4698]: I1014 09:57:25.355493 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hspfz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d02f5359-81fc-4261-b995-e58c78bcec0e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjlwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjlwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjlwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjlwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjlwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjlwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjlwb\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjlwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f7d7f89348e04b617ba4ec3cf76f4fefdd36492c656aed8141f3775a7ca8e3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0f7d7f89348e04b617ba4ec3cf76f4fefdd36492c656aed8141f3775a7ca8e3b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T09:57:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T09:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjlwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T09:57:18Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hspfz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T09:57:25Z is after 2025-08-24T17:21:41Z" Oct 14 09:57:25 crc kubenswrapper[4698]: I1014 09:57:25.369970 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://046b339570690752cfbe63e358065d3b58ef84c6708729abc1f590ff4c880521\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\"
:{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T09:57:25Z is after 2025-08-24T17:21:41Z" Oct 14 09:57:25 crc kubenswrapper[4698]: I1014 09:57:25.382228 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T09:57:25Z is after 2025-08-24T17:21:41Z" Oct 14 09:57:25 crc kubenswrapper[4698]: I1014 09:57:25.398537 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6bbd953b34291d62c5b40ab6339c4709be89d1983748419cabf1adff245af77f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b5dcb74bf1caafb12296355d176dd1cda1c06ecea8293c0cc36d517213fe6bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T09:57:25Z is after 2025-08-24T17:21:41Z" Oct 14 09:57:25 crc kubenswrapper[4698]: I1014 09:57:25.415300 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T09:57:25Z is after 2025-08-24T17:21:41Z" Oct 14 09:57:25 crc kubenswrapper[4698]: I1014 09:57:25.428489 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:57:25 crc kubenswrapper[4698]: I1014 
09:57:25.428537 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:57:25 crc kubenswrapper[4698]: I1014 09:57:25.428554 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:57:25 crc kubenswrapper[4698]: I1014 09:57:25.428579 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:57:25 crc kubenswrapper[4698]: I1014 09:57:25.428596 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:57:25Z","lastTransitionTime":"2025-10-14T09:57:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 09:57:25 crc kubenswrapper[4698]: I1014 09:57:25.429966 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-b7cbk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fbf10bbc-318d-4f46-83a0-fdbad9888201\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a0b9fe188f41927a342dc4928eaa5f27d151d9901af15f33ce252bdbfbb3be6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tkghj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T09:57:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-b7cbk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T09:57:25Z is after 2025-08-24T17:21:41Z" Oct 14 09:57:25 crc kubenswrapper[4698]: I1014 09:57:25.440817 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-lp4sk" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c359a8fc-1e2f-49af-8da2-719d52bd969a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://082e0edeedb2cabe07e14812f2406a98f39fdf9923f562c33ccc9a761d0e2472\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lzl92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8068aecfbfc1
56636e404dba3daa36e0bebb92ecf3c63158eaebf9a03cdc96b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lzl92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T09:57:18Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-lp4sk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T09:57:25Z is after 2025-08-24T17:21:41Z" Oct 14 09:57:25 crc kubenswrapper[4698]: I1014 09:57:25.530564 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:57:25 crc kubenswrapper[4698]: I1014 09:57:25.530602 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:57:25 crc kubenswrapper[4698]: I1014 09:57:25.530613 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 
14 09:57:25 crc kubenswrapper[4698]: I1014 09:57:25.530628 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:57:25 crc kubenswrapper[4698]: I1014 09:57:25.530649 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:57:25Z","lastTransitionTime":"2025-10-14T09:57:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 09:57:25 crc kubenswrapper[4698]: I1014 09:57:25.633481 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:57:25 crc kubenswrapper[4698]: I1014 09:57:25.633532 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:57:25 crc kubenswrapper[4698]: I1014 09:57:25.633548 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:57:25 crc kubenswrapper[4698]: I1014 09:57:25.633569 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:57:25 crc kubenswrapper[4698]: I1014 09:57:25.633582 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:57:25Z","lastTransitionTime":"2025-10-14T09:57:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:57:25 crc kubenswrapper[4698]: I1014 09:57:25.735930 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:57:25 crc kubenswrapper[4698]: I1014 09:57:25.735976 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:57:25 crc kubenswrapper[4698]: I1014 09:57:25.735991 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:57:25 crc kubenswrapper[4698]: I1014 09:57:25.736010 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:57:25 crc kubenswrapper[4698]: I1014 09:57:25.736026 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:57:25Z","lastTransitionTime":"2025-10-14T09:57:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:57:25 crc kubenswrapper[4698]: I1014 09:57:25.838893 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:57:25 crc kubenswrapper[4698]: I1014 09:57:25.838944 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:57:25 crc kubenswrapper[4698]: I1014 09:57:25.838962 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:57:25 crc kubenswrapper[4698]: I1014 09:57:25.838989 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:57:25 crc kubenswrapper[4698]: I1014 09:57:25.839006 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:57:25Z","lastTransitionTime":"2025-10-14T09:57:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:57:25 crc kubenswrapper[4698]: I1014 09:57:25.941100 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:57:25 crc kubenswrapper[4698]: I1014 09:57:25.941176 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:57:25 crc kubenswrapper[4698]: I1014 09:57:25.941193 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:57:25 crc kubenswrapper[4698]: I1014 09:57:25.941213 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:57:25 crc kubenswrapper[4698]: I1014 09:57:25.941227 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:57:25Z","lastTransitionTime":"2025-10-14T09:57:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:57:26 crc kubenswrapper[4698]: I1014 09:57:26.044790 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:57:26 crc kubenswrapper[4698]: I1014 09:57:26.044854 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:57:26 crc kubenswrapper[4698]: I1014 09:57:26.044873 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:57:26 crc kubenswrapper[4698]: I1014 09:57:26.044899 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:57:26 crc kubenswrapper[4698]: I1014 09:57:26.044916 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:57:26Z","lastTransitionTime":"2025-10-14T09:57:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:57:26 crc kubenswrapper[4698]: I1014 09:57:26.147402 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:57:26 crc kubenswrapper[4698]: I1014 09:57:26.147441 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:57:26 crc kubenswrapper[4698]: I1014 09:57:26.147453 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:57:26 crc kubenswrapper[4698]: I1014 09:57:26.147470 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:57:26 crc kubenswrapper[4698]: I1014 09:57:26.147483 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:57:26Z","lastTransitionTime":"2025-10-14T09:57:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:57:26 crc kubenswrapper[4698]: I1014 09:57:26.242020 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hspfz" event={"ID":"d02f5359-81fc-4261-b995-e58c78bcec0e","Type":"ContainerStarted","Data":"ee38f8e0f28cbfd68b8333c30570c516ec21dce2c861b0b210576e305473435c"} Oct 14 09:57:26 crc kubenswrapper[4698]: I1014 09:57:26.242373 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-hspfz" Oct 14 09:57:26 crc kubenswrapper[4698]: I1014 09:57:26.246196 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-5twvn" event={"ID":"5e9f983f-10a0-43b7-8590-346577a561ef","Type":"ContainerStarted","Data":"03bc09f64706818fd5d6b0d3eeea206f665d7cdb12ce6ae31c4706a3936dd0f1"} Oct 14 09:57:26 crc kubenswrapper[4698]: I1014 09:57:26.249091 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:57:26 crc kubenswrapper[4698]: I1014 09:57:26.249128 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:57:26 crc kubenswrapper[4698]: I1014 09:57:26.249143 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:57:26 crc kubenswrapper[4698]: I1014 09:57:26.249162 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:57:26 crc kubenswrapper[4698]: I1014 09:57:26.249179 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:57:26Z","lastTransitionTime":"2025-10-14T09:57:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 09:57:26 crc kubenswrapper[4698]: I1014 09:57:26.256600 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8rj7q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b3d7ebe7-24ac-4bb6-be80-db147dc1c604\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e44a1413e71cab92d1b5ee4c81cc65edc9925fa1e888d5d5670e55e1b60ee81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube
-api-access-95sjn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T09:57:18Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8rj7q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T09:57:26Z is after 2025-08-24T17:21:41Z" Oct 14 09:57:26 crc kubenswrapper[4698]: I1014 09:57:26.271207 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aaee81debae3e251e075e61aca075c075d32fd976f1977adecba0c834e89cabb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\
\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T09:57:26Z is after 2025-08-24T17:21:41Z" Oct 14 09:57:26 crc kubenswrapper[4698]: I1014 09:57:26.274517 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-hspfz" Oct 14 09:57:26 crc kubenswrapper[4698]: I1014 09:57:26.294973 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5twvn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e9f983f-10a0-43b7-8590-346577a561ef\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8x7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9090d92cdf10a884e39938f3d7908d63cb91ee3ea7832cb788aecaa47c70d753\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9090d92cdf10a884e39938f3d7908d63cb91ee3ea7832cb788aecaa47c70d753\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T09:57:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T09:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8x7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://20e7809019d6fa43f65e27abe934ed5dfc51a3afb0a2d9391fa4edd67fc03ac9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://20e7809019d6fa43f65e27abe934ed5dfc51a3afb0a2d9391fa4edd67fc03ac9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T09:57:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8x7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c61e207f4234cf9a59bd787fc476ff4bc2d1abd6f299b2855970981e5783398\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0c61e207f4234cf9a59bd787fc476ff4bc2d1abd6f299b2855970981e5783398\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T09:57:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T09:57:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8x7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1abeca9df52e5628432a4cb0be0f212a643a644abfe303559ec7d90f3fd3955a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1abeca9df52e5628432a4cb0be0f212a643a644abfe303559ec7d90f3fd3955a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T09:57:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T09:57:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8x7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0654ea71914d478220611495dbefd1e47b1309aa18c6a05b21a4a45f0d8f1ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c0654ea71914d478220611495dbefd1e47b1309aa18c6a05b21a4a45f0d8f1ca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T09:57:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T09:57:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8x7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5307b806fcee628d7552a399a950a096745a8eb67c3f72c5a10854f042b992a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"
ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5307b806fcee628d7552a399a950a096745a8eb67c3f72c5a10854f042b992a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T09:57:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T09:57:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8x7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T09:57:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5twvn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T09:57:26Z is after 2025-08-24T17:21:41Z" Oct 14 09:57:26 crc kubenswrapper[4698]: I1014 09:57:26.312931 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hspfz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d02f5359-81fc-4261-b995-e58c78bcec0e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://baefa979aee6f8fbe41df9542964f3c7b2a4331ec5111d2f622ec7ca41a0d96e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjlwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://876998f4dcb79a6bc0faa35871730f5d2f9a07f94baf6da29ed9a9e794705ee5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\
",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjlwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e5759411e3d5ea6fa7515b4cd1255735c870b8c035fe586a38d2f436b752118\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjlwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6e9733c3f4d25662c9da6201d0feee797de24acebb7d6e7c67d72709aae95db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:21Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjlwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://90b4b75613b321be60e549ab6ab2eaaab57fd6b49aa01f5254a0e83ac2e0167e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjlwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://11274a26e2292cdf2b793e6290657ed3cc737d1454b39f33b8f0272f67f70f3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjlwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee38f8e0f28cbfd68b8333c30570c516ec21dce2c861b0b210576e305473435c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjlwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://626a1ca2a8aa5d6c1b4078a0bc7b0aa540656abcedd8be206fbb93cdc96c1d95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjlwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f7d7f89348e04b617ba4ec3cf76f4fefdd36492c656aed8141f3775a7ca8e3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0f7d7f89348e04b617ba4ec3cf76f4fefdd36492c656aed8141f3775a7ca8e3b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T09:57:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T09:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjlwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Ru
nning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T09:57:18Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hspfz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T09:57:26Z is after 2025-08-24T17:21:41Z" Oct 14 09:57:26 crc kubenswrapper[4698]: I1014 09:57:26.325070 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-pfxrp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e00aa977-8736-4b4d-8d58-c3d13879c49a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4adf04967efc193a93227043cfb6bb56f0ec1a7af90e351b7a618e582ddd8c41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev
@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c9lxj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T09:57:18Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-pfxrp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T09:57:26Z is after 2025-08-24T17:21:41Z" Oct 14 09:57:26 crc kubenswrapper[4698]: I1014 09:57:26.342891 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6bbd953b34291d62c5b40ab6339c4709be89d1983748419cabf1adff245af77f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b5dcb74bf1caafb12296355d176dd1cda1c06ecea8293c0cc36d517213fe6bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T09:57:26Z is after 2025-08-24T17:21:41Z" Oct 14 09:57:26 crc kubenswrapper[4698]: I1014 09:57:26.351865 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:57:26 crc kubenswrapper[4698]: I1014 09:57:26.351925 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:57:26 crc kubenswrapper[4698]: I1014 09:57:26.351938 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:57:26 crc kubenswrapper[4698]: I1014 09:57:26.351957 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:57:26 crc kubenswrapper[4698]: I1014 09:57:26.351970 4698 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:57:26Z","lastTransitionTime":"2025-10-14T09:57:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 09:57:26 crc kubenswrapper[4698]: I1014 09:57:26.360162 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T09:57:26Z is after 2025-08-24T17:21:41Z" Oct 14 09:57:26 crc kubenswrapper[4698]: I1014 09:57:26.373670 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-b7cbk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fbf10bbc-318d-4f46-83a0-fdbad9888201\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a0b9fe188f41927a342dc4928eaa5f27d151d9901af15f33ce252bdbfbb3be6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tkghj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T09:57:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-b7cbk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T09:57:26Z is after 2025-08-24T17:21:41Z" Oct 14 09:57:26 crc kubenswrapper[4698]: I1014 09:57:26.385968 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-lp4sk" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c359a8fc-1e2f-49af-8da2-719d52bd969a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://082e0edeedb2cabe07e14812f2406a98f39fdf9923f562c33ccc9a761d0e2472\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lzl92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8068aecfbfc1
56636e404dba3daa36e0bebb92ecf3c63158eaebf9a03cdc96b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lzl92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T09:57:18Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-lp4sk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T09:57:26Z is after 2025-08-24T17:21:41Z" Oct 14 09:57:26 crc kubenswrapper[4698]: I1014 09:57:26.401295 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://046b339570690752cfbe63e358065d3b58ef84c6708729abc1f590ff4c880521\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-10-14T09:57:26Z is after 2025-08-24T17:21:41Z" Oct 14 09:57:26 crc kubenswrapper[4698]: I1014 09:57:26.413387 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T09:57:26Z is after 2025-08-24T17:21:41Z" Oct 14 09:57:26 crc kubenswrapper[4698]: I1014 09:57:26.426266 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"32a7378b-ad21-49a8-9382-e7e5fbffb2f5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:56:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:56:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45beb2bbd344fbd265eca554410f31c493b55815bcac8dbb8c30c592b40dcfe2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9a23f4949446d47321258b172e07f6c592cd9921650920f8af93a5d749a61dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:56:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1fc70d614848f0b4bc342659099f9c617db052ebb995716f9cdb1113f4f78901\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://71c3764056ab55aa9f75e0bf62c0c7eed5440e9d580087ca9ce266611dd8c292\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-10-14T09:57:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T09:56:59Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T09:57:26Z is after 2025-08-24T17:21:41Z" Oct 14 09:57:26 crc kubenswrapper[4698]: I1014 09:57:26.442683 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T09:57:26Z is after 2025-08-24T17:21:41Z" Oct 14 09:57:26 crc kubenswrapper[4698]: I1014 09:57:26.453911 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:57:26 crc kubenswrapper[4698]: I1014 09:57:26.453969 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:57:26 crc kubenswrapper[4698]: I1014 09:57:26.453987 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:57:26 crc 
kubenswrapper[4698]: I1014 09:57:26.454011 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:57:26 crc kubenswrapper[4698]: I1014 09:57:26.454031 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:57:26Z","lastTransitionTime":"2025-10-14T09:57:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 09:57:26 crc kubenswrapper[4698]: I1014 09:57:26.464495 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"32240a01-1692-4f43-aebd-2b04d9b2435e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:56:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:56:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:56:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0fcdc7699fee72d9dbf3ad7df185ac8c884125781080739e4245128af2e41040\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d32efa8c63456d23b3824db4a4debb51fa4d13d1fd5eb6a488b9392d96073435\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://a78e7d6532fda47f50da010b729a9108961a3d5c79191517a86f00714ccb1163\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8cea8367d4864478508221648a52582ee794ca4eb7b48f112ca0d78d97c2f7e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a8cea8367d4864478508221648a52582ee794ca4eb7b48f112ca0d78d97c2f7e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"message\\\":\\\"file observer\\\\nW1014 09:57:17.768150 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1014 09:57:17.768243 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1014 09:57:17.769073 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-133410875/tls.crt::/tmp/serving-cert-133410875/tls.key\\\\\\\"\\\\nI1014 09:57:18.101848 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1014 09:57:18.107433 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1014 09:57:18.107456 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1014 09:57:18.107479 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1014 09:57:18.107484 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1014 09:57:18.112858 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1014 09:57:18.112875 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1014 09:57:18.112883 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1014 09:57:18.112889 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1014 09:57:18.112893 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1014 09:57:18.112896 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1014 09:57:18.112900 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1014 09:57:18.112903 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1014 09:57:18.114133 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-14T09:57:12Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2d12c9c114016d1302fa97515b0a59a82a4524b77e5d22cbec155beb6a52bce\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:00Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf7a250bbbc63f0ce78ae1c27f82643a02dc264c3b83555a4cc72a788eda52d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cf7a250bbbc63f0ce78ae1c27f82643a02dc264c3b83555a4cc72a788eda52d5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T09:56:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T09:56:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T09:56:59Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T09:57:26Z is after 2025-08-24T17:21:41Z" Oct 14 09:57:26 crc kubenswrapper[4698]: I1014 09:57:26.476792 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-pfxrp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e00aa977-8736-4b4d-8d58-c3d13879c49a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4adf04967efc193a93227043cfb6bb56f0ec1a7af90e351b7a618e582ddd8c41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c9lxj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T09:57:18Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-pfxrp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T09:57:26Z is after 2025-08-24T17:21:41Z" Oct 14 09:57:26 crc kubenswrapper[4698]: I1014 09:57:26.493009 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5twvn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e9f983f-10a0-43b7-8590-346577a561ef\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://03bc09f64706818fd5d6b0d3eeea206f665d7cdb12ce6ae31c4706a3936dd0f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8x7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9090d92cdf10a884e39938f3d7908d63cb91ee3ea7832cb788aecaa47c70d753\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9090d92cdf10a884e39938f3d7908d63cb91ee3ea7832cb788aecaa47c70d753\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T09:57:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T09:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8x7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://20e7809019d6fa43f65e27abe934ed5dfc51a3afb0a2d9391fa4edd67fc03ac9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://20e7809019d6fa43f65e27abe934ed5dfc51a3afb0a2d9391fa4edd67fc03ac9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T09:57:20Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8x7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c61e207f4234cf9a59bd787fc476ff4bc2d1abd6f299b2855970981e5783398\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0c61e207f4234cf9a59bd787fc476ff4bc2d1abd6f299b2855970981e5783398\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T09:57:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T09:57:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8x7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1abec
a9df52e5628432a4cb0be0f212a643a644abfe303559ec7d90f3fd3955a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1abeca9df52e5628432a4cb0be0f212a643a644abfe303559ec7d90f3fd3955a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T09:57:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T09:57:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8x7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0654ea71914d478220611495dbefd1e47b1309aa18c6a05b21a4a45f0d8f1ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c0654ea71914d478220611495dbefd1e47b1309aa18c6a05b21a4a45f0d8f1ca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T09:57:23Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2025-10-14T09:57:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8x7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5307b806fcee628d7552a399a950a096745a8eb67c3f72c5a10854f042b992a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5307b806fcee628d7552a399a950a096745a8eb67c3f72c5a10854f042b992a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T09:57:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T09:57:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8x7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T09:57:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5twvn\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T09:57:26Z is after 2025-08-24T17:21:41Z" Oct 14 09:57:26 crc kubenswrapper[4698]: I1014 09:57:26.512744 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hspfz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d02f5359-81fc-4261-b995-e58c78bcec0e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://baefa979aee6f8fbe41df9542964f3c7b2a4331ec5111d2f622ec7ca41a0d96e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjlwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://876998f4dcb79a6bc0faa35871730f5d2f9a07f94baf6da29ed9a9e794705ee5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\
",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjlwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e5759411e3d5ea6fa7515b4cd1255735c870b8c035fe586a38d2f436b752118\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjlwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6e9733c3f4d25662c9da6201d0feee797de24acebb7d6e7c67d72709aae95db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:21Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjlwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://90b4b75613b321be60e549ab6ab2eaaab57fd6b49aa01f5254a0e83ac2e0167e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjlwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://11274a26e2292cdf2b793e6290657ed3cc737d1454b39f33b8f0272f67f70f3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjlwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee38f8e0f28cbfd68b8333c30570c516ec21dce2c861b0b210576e305473435c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjlwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://626a1ca2a8aa5d6c1b4078a0bc7b0aa540656abcedd8be206fbb93cdc96c1d95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjlwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f7d7f89348e04b617ba4ec3cf76f4fefdd36492c656aed8141f3775a7ca8e3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0f7d7f89348e04b617ba4ec3cf76f4fefdd36492c656aed8141f3775a7ca8e3b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T09:57:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T09:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjlwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T09:57:18Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hspfz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T09:57:26Z is after 2025-08-24T17:21:41Z" Oct 14 09:57:26 crc kubenswrapper[4698]: I1014 09:57:26.535277 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://046b339570690752cfbe63e358065d3b58ef84c6708729abc1f590ff4c880521\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-1
4T09:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T09:57:26Z is after 2025-08-24T17:21:41Z" Oct 14 09:57:26 crc kubenswrapper[4698]: I1014 09:57:26.551935 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T09:57:26Z is after 2025-08-24T17:21:41Z" Oct 14 09:57:26 crc kubenswrapper[4698]: I1014 09:57:26.560240 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:57:26 crc kubenswrapper[4698]: I1014 09:57:26.560529 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:57:26 crc kubenswrapper[4698]: I1014 09:57:26.560755 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:57:26 crc kubenswrapper[4698]: I1014 
09:57:26.561000 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:57:26 crc kubenswrapper[4698]: I1014 09:57:26.561205 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:57:26Z","lastTransitionTime":"2025-10-14T09:57:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 09:57:26 crc kubenswrapper[4698]: I1014 09:57:26.571192 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6bbd953b34291d62c5b40ab6339c4709be89d1983748419cabf1adff245af77f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,
\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b5dcb74bf1caafb12296355d176dd1cda1c06ecea8293c0cc36d517213fe6bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T09:57:26Z is after 2025-08-24T17:21:41Z" Oct 14 09:57:26 crc kubenswrapper[4698]: I1014 09:57:26.586670 4698 
status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T09:57:26Z is after 2025-08-24T17:21:41Z" Oct 14 09:57:26 crc kubenswrapper[4698]: I1014 09:57:26.602374 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-b7cbk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fbf10bbc-318d-4f46-83a0-fdbad9888201\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a0b9fe188f41927a342dc4928eaa5f27d151d9901af15f33ce252bdbfbb3be6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tkghj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T09:57:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-b7cbk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T09:57:26Z is after 2025-08-24T17:21:41Z" Oct 14 09:57:26 crc kubenswrapper[4698]: I1014 09:57:26.614379 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-lp4sk" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c359a8fc-1e2f-49af-8da2-719d52bd969a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://082e0edeedb2cabe07e14812f2406a98f39fdf9923f562c33ccc9a761d0e2472\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lzl92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8068aecfbfc1
56636e404dba3daa36e0bebb92ecf3c63158eaebf9a03cdc96b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lzl92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T09:57:18Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-lp4sk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T09:57:26Z is after 2025-08-24T17:21:41Z" Oct 14 09:57:26 crc kubenswrapper[4698]: I1014 09:57:26.628516 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"32240a01-1692-4f43-aebd-2b04d9b2435e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:56:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:56:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:56:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0fcdc7699fee72d9dbf3ad7df185ac8c884125781080739e4245128af2e41040\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d32efa8c63456d23b3824db4a4debb51fa4d13d1fd5eb6a488b9392d96073435\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a78e7d6532fda47f50da010b729a9108961a3d5c79191517a86f00714ccb1163\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8cea8367d4864478508221648a52582ee794ca4eb7b48f112ca0d78d97c2f7e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a8cea8367d4864478508221648a52582ee794ca4eb7b48f112ca0d78d97c2f7e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"message\\\":\\\"file observer\\\\nW1014 09:57:17.768150 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1014 09:57:17.768243 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1014 09:57:17.769073 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-133410875/tls.crt::/tmp/serving-cert-133410875/tls.key\\\\\\\"\\\\nI1014 09:57:18.101848 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1014 09:57:18.107433 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1014 09:57:18.107456 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1014 09:57:18.107479 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1014 09:57:18.107484 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1014 09:57:18.112858 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1014 09:57:18.112875 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1014 09:57:18.112883 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1014 09:57:18.112889 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1014 09:57:18.112893 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1014 
09:57:18.112896 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1014 09:57:18.112900 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1014 09:57:18.112903 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1014 09:57:18.114133 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-14T09:57:12Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2d12c9c114016d1302fa97515b0a59a82a4524b77e5d22cbec155beb6a52bce\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:00Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf7a250bbbc63f0ce78ae1c27f82643a02dc264c3b83555a4cc72a788eda52d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev
/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cf7a250bbbc63f0ce78ae1c27f82643a02dc264c3b83555a4cc72a788eda52d5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T09:56:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T09:56:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T09:56:59Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T09:57:26Z is after 2025-08-24T17:21:41Z" Oct 14 09:57:26 crc kubenswrapper[4698]: I1014 09:57:26.641486 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"32a7378b-ad21-49a8-9382-e7e5fbffb2f5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:56:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:56:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45beb2bbd344fbd265eca554410f31c493b55815bcac8dbb8c30c592b40dcfe2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9a23f4949446d47321258b172e07f6c592cd9921650920f8af93a5d749a61dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:56:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1fc70d614848f0b4bc342659099f9c617db052ebb995716f9cdb1113f4f78901\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://71c3764056ab55aa9f75e0bf62c0c7eed5440e9d580087ca9ce266611dd8c292\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-10-14T09:57:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T09:56:59Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T09:57:26Z is after 2025-08-24T17:21:41Z" Oct 14 09:57:26 crc kubenswrapper[4698]: I1014 09:57:26.655176 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T09:57:26Z is after 2025-08-24T17:21:41Z" Oct 14 09:57:26 crc kubenswrapper[4698]: I1014 09:57:26.663495 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:57:26 crc kubenswrapper[4698]: I1014 09:57:26.663531 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:57:26 crc kubenswrapper[4698]: I1014 09:57:26.663541 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:57:26 crc 
kubenswrapper[4698]: I1014 09:57:26.663558 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:57:26 crc kubenswrapper[4698]: I1014 09:57:26.663568 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:57:26Z","lastTransitionTime":"2025-10-14T09:57:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 09:57:26 crc kubenswrapper[4698]: I1014 09:57:26.666343 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aaee81debae3e251e075e61aca075c075d32fd976f1977adecba0c834e89cabb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":tru
e,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T09:57:26Z is after 2025-08-24T17:21:41Z" Oct 14 09:57:26 crc kubenswrapper[4698]: I1014 09:57:26.675715 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8rj7q" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b3d7ebe7-24ac-4bb6-be80-db147dc1c604\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e44a1413e71cab92d1b5ee4c81cc65edc9925fa1e888d5d5670e55e1b60ee81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-95sjn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T09:57:18Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8rj7q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T09:57:26Z is after 2025-08-24T17:21:41Z" Oct 14 09:57:26 crc kubenswrapper[4698]: I1014 09:57:26.711217 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 14 09:57:26 crc kubenswrapper[4698]: I1014 09:57:26.711320 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 14 09:57:26 crc kubenswrapper[4698]: I1014 09:57:26.711346 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 14 09:57:26 crc kubenswrapper[4698]: E1014 09:57:26.711466 4698 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Oct 14 
09:57:26 crc kubenswrapper[4698]: E1014 09:57:26.711518 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-10-14 09:57:34.711502493 +0000 UTC m=+36.408801909 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Oct 14 09:57:26 crc kubenswrapper[4698]: E1014 09:57:26.711553 4698 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Oct 14 09:57:26 crc kubenswrapper[4698]: E1014 09:57:26.711609 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-10-14 09:57:34.711597805 +0000 UTC m=+36.408897221 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Oct 14 09:57:26 crc kubenswrapper[4698]: E1014 09:57:26.711754 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2025-10-14 09:57:34.711726189 +0000 UTC m=+36.409025605 (durationBeforeRetry 8s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 14 09:57:26 crc kubenswrapper[4698]: I1014 09:57:26.765801 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:57:26 crc kubenswrapper[4698]: I1014 09:57:26.765853 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:57:26 crc kubenswrapper[4698]: I1014 09:57:26.765866 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:57:26 crc kubenswrapper[4698]: I1014 09:57:26.765884 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:57:26 crc kubenswrapper[4698]: I1014 09:57:26.765896 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:57:26Z","lastTransitionTime":"2025-10-14T09:57:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:57:26 crc kubenswrapper[4698]: I1014 09:57:26.811879 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 14 09:57:26 crc kubenswrapper[4698]: E1014 09:57:26.812166 4698 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Oct 14 09:57:26 crc kubenswrapper[4698]: E1014 09:57:26.812362 4698 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Oct 14 09:57:26 crc kubenswrapper[4698]: E1014 09:57:26.812417 4698 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 14 09:57:26 crc kubenswrapper[4698]: I1014 09:57:26.812304 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 14 09:57:26 crc kubenswrapper[4698]: E1014 09:57:26.812492 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl 
podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-10-14 09:57:34.812466771 +0000 UTC m=+36.509766227 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 14 09:57:26 crc kubenswrapper[4698]: E1014 09:57:26.813068 4698 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Oct 14 09:57:26 crc kubenswrapper[4698]: E1014 09:57:26.813117 4698 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Oct 14 09:57:26 crc kubenswrapper[4698]: E1014 09:57:26.813132 4698 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 14 09:57:26 crc kubenswrapper[4698]: E1014 09:57:26.813215 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-10-14 09:57:34.813190312 +0000 UTC m=+36.510489828 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 14 09:57:26 crc kubenswrapper[4698]: I1014 09:57:26.867955 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:57:26 crc kubenswrapper[4698]: I1014 09:57:26.868138 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:57:26 crc kubenswrapper[4698]: I1014 09:57:26.868164 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:57:26 crc kubenswrapper[4698]: I1014 09:57:26.868187 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:57:26 crc kubenswrapper[4698]: I1014 09:57:26.868202 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:57:26Z","lastTransitionTime":"2025-10-14T09:57:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:57:26 crc kubenswrapper[4698]: I1014 09:57:26.971234 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:57:26 crc kubenswrapper[4698]: I1014 09:57:26.971292 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:57:26 crc kubenswrapper[4698]: I1014 09:57:26.971308 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:57:26 crc kubenswrapper[4698]: I1014 09:57:26.971330 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:57:26 crc kubenswrapper[4698]: I1014 09:57:26.971345 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:57:26Z","lastTransitionTime":"2025-10-14T09:57:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 09:57:27 crc kubenswrapper[4698]: I1014 09:57:27.016160 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 14 09:57:27 crc kubenswrapper[4698]: I1014 09:57:27.016204 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 14 09:57:27 crc kubenswrapper[4698]: I1014 09:57:27.016370 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 14 09:57:27 crc kubenswrapper[4698]: E1014 09:57:27.016357 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 14 09:57:27 crc kubenswrapper[4698]: E1014 09:57:27.016530 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 14 09:57:27 crc kubenswrapper[4698]: E1014 09:57:27.016692 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 14 09:57:27 crc kubenswrapper[4698]: I1014 09:57:27.074052 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:57:27 crc kubenswrapper[4698]: I1014 09:57:27.074108 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:57:27 crc kubenswrapper[4698]: I1014 09:57:27.074122 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:57:27 crc kubenswrapper[4698]: I1014 09:57:27.074144 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:57:27 crc kubenswrapper[4698]: I1014 09:57:27.074162 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:57:27Z","lastTransitionTime":"2025-10-14T09:57:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:57:27 crc kubenswrapper[4698]: I1014 09:57:27.176869 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:57:27 crc kubenswrapper[4698]: I1014 09:57:27.176928 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:57:27 crc kubenswrapper[4698]: I1014 09:57:27.176945 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:57:27 crc kubenswrapper[4698]: I1014 09:57:27.176971 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:57:27 crc kubenswrapper[4698]: I1014 09:57:27.176989 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:57:27Z","lastTransitionTime":"2025-10-14T09:57:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:57:27 crc kubenswrapper[4698]: I1014 09:57:27.249337 4698 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Oct 14 09:57:27 crc kubenswrapper[4698]: I1014 09:57:27.249795 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-hspfz" Oct 14 09:57:27 crc kubenswrapper[4698]: I1014 09:57:27.279314 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:57:27 crc kubenswrapper[4698]: I1014 09:57:27.279363 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:57:27 crc kubenswrapper[4698]: I1014 09:57:27.279374 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:57:27 crc kubenswrapper[4698]: I1014 09:57:27.279391 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:57:27 crc kubenswrapper[4698]: I1014 09:57:27.279328 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-hspfz" Oct 14 09:57:27 crc kubenswrapper[4698]: I1014 09:57:27.279403 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:57:27Z","lastTransitionTime":"2025-10-14T09:57:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:57:27 crc kubenswrapper[4698]: I1014 09:57:27.292883 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-pfxrp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e00aa977-8736-4b4d-8d58-c3d13879c49a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4adf04967efc193a93227043cfb6bb56f0ec1a7af90e351b7a618e582ddd8c41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes
.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c9lxj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T09:57:18Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-pfxrp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T09:57:27Z is after 2025-08-24T17:21:41Z" Oct 14 09:57:27 crc kubenswrapper[4698]: I1014 09:57:27.306932 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5twvn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e9f983f-10a0-43b7-8590-346577a561ef\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://03bc09f64706818fd5d6
b0d3eeea206f665d7cdb12ce6ae31c4706a3936dd0f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8x7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9090d92cdf10a884e39938f3d7908d63cb91ee3ea7832cb788aecaa47c70d753\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9090d92cdf10a884e39938f3d7908d63cb91ee3ea7832cb788aecaa47c70d753\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T09:57:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T09:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled
\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8x7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://20e7809019d6fa43f65e27abe934ed5dfc51a3afb0a2d9391fa4edd67fc03ac9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://20e7809019d6fa43f65e27abe934ed5dfc51a3afb0a2d9391fa4edd67fc03ac9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T09:57:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8x7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c61e207f4234cf9a59bd787fc476ff4bc2d1abd6f299b2855970981e5783398\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64
d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0c61e207f4234cf9a59bd787fc476ff4bc2d1abd6f299b2855970981e5783398\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T09:57:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T09:57:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8x7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1abeca9df52e5628432a4cb0be0f212a643a644abfe303559ec7d90f3fd3955a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1abeca9df52e5628432a4cb0be0f212a643a644abfe303559ec7d90f3fd3955a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T09:57:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T09:57:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8x7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0654ea71914d478220611495dbefd1e47b1309aa18c6a05b21a4a45f0d8f1ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c0654ea71914d478220611495dbefd1e47b1309aa18c6a05b21a4a45f0d8f1ca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T09:57:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T09:57:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8x7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5307b806fcee628d7552a399a950a096745a8eb67c3f72c5a10854f042b992a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\
\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5307b806fcee628d7552a399a950a096745a8eb67c3f72c5a10854f042b992a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T09:57:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T09:57:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8x7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T09:57:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5twvn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T09:57:27Z is after 2025-08-24T17:21:41Z" Oct 14 09:57:27 crc kubenswrapper[4698]: I1014 09:57:27.338739 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hspfz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d02f5359-81fc-4261-b995-e58c78bcec0e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://baefa979aee6f8fbe41df9542964f3c7b2a4331ec5111d2f622ec7ca41a0d96e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjlwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://876998f4dcb79a6bc0faa35871730f5d2f9a07f94baf6da29ed9a9e794705ee5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjlwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e5759411e3d5ea6fa7515b4cd1255735c870b8c035fe586a38d2f436b752118\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjlwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6e9733c3f4d25662c9da6201d0feee797de24acebb7d6e7c67d72709aae95db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:21Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjlwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://90b4b75613b321be60e549ab6ab2eaaab57fd6b49aa01f5254a0e83ac2e0167e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjlwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://11274a26e2292cdf2b793e6290657ed3cc737d1454b39f33b8f0272f67f70f3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjlwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee38f8e0f28cbfd68b8333c30570c516ec21dce2c861b0b210576e305473435c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjlwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://626a1ca2a8aa5d6c1b4078a0bc7b0aa540656abcedd8be206fbb93cdc96c1d95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjlwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f7d7f89348e04b617ba4ec3cf76f4fefdd36492c656aed8141f3775a7ca8e3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0f7d7f89348e04b617ba4ec3cf76f4fefdd36492c656aed8141f3775a7ca8e3b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T09:57:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T09:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjlwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T09:57:18Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hspfz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T09:57:27Z is after 2025-08-24T17:21:41Z" Oct 14 09:57:27 crc kubenswrapper[4698]: I1014 09:57:27.358137 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-b7cbk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fbf10bbc-318d-4f46-83a0-fdbad9888201\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a0b9fe188f41927a342dc4928eaa5f27d151d9901af15f33ce252bdbfbb3be6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e
eaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tkghj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":
\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T09:57:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-b7cbk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T09:57:27Z is after 2025-08-24T17:21:41Z" Oct 14 09:57:27 crc kubenswrapper[4698]: I1014 09:57:27.372941 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-lp4sk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c359a8fc-1e2f-49af-8da2-719d52bd969a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://082e0edeedb2cabe07e14812f2406a98f39fdf9923f562c33ccc9a761d0e2472\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/o
cp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lzl92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8068aecfbfc156636e404dba3daa36e0bebb92ecf3c63158eaebf9a03cdc96b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lzl92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T09:57:18Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-lp4sk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T09:57:27Z is after 2025-08-24T17:21:41Z" Oct 14 09:57:27 crc kubenswrapper[4698]: I1014 09:57:27.381971 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:57:27 crc kubenswrapper[4698]: I1014 09:57:27.382003 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:57:27 crc kubenswrapper[4698]: I1014 09:57:27.382012 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:57:27 crc kubenswrapper[4698]: I1014 09:57:27.382026 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:57:27 crc kubenswrapper[4698]: I1014 09:57:27.382035 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:57:27Z","lastTransitionTime":"2025-10-14T09:57:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:57:27 crc kubenswrapper[4698]: I1014 09:57:27.392778 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://046b339570690752cfbe63e358065d3b58ef84c6708729abc1f590ff4c880521\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T09:57:27Z is after 2025-08-24T17:21:41Z" Oct 14 09:57:27 crc kubenswrapper[4698]: I1014 09:57:27.404596 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T09:57:27Z is after 2025-08-24T17:21:41Z" Oct 14 09:57:27 crc kubenswrapper[4698]: I1014 09:57:27.418662 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6bbd953b34291d62c5b40ab6339c4709be89d1983748419cabf1adff245af77f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b5dcb74bf1caafb12296355d176dd1cda1c06ecea8293c0cc36d517213fe6bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T09:57:27Z is after 2025-08-24T17:21:41Z" Oct 14 09:57:27 crc kubenswrapper[4698]: I1014 09:57:27.429962 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T09:57:27Z is after 2025-08-24T17:21:41Z" Oct 14 09:57:27 crc kubenswrapper[4698]: I1014 09:57:27.443465 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"32240a01-1692-4f43-aebd-2b04d9b2435e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:56:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:56:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:56:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0fcdc7699fee72d9dbf3ad7df185ac8c884125781080739e4245128af2e41040\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d32efa8c63456d23b3824db4a4debb51fa4d13d1fd5eb6a488b9392d96073435\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a78e7d6532fda47f50da010b729a9108961a3d5c79191517a86f00714ccb1163\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8cea8367d4864478508221648a52582ee794ca4eb7b48f112ca0d78d97c2f7e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a8cea8367d4864478508221648a52582ee794ca4eb7b48f112ca0d78d97c2f7e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"message\\\":\\\"file observer\\\\nW1014 09:57:17.768150 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1014 09:57:17.768243 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1014 09:57:17.769073 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-133410875/tls.crt::/tmp/serving-cert-133410875/tls.key\\\\\\\"\\\\nI1014 09:57:18.101848 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1014 09:57:18.107433 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1014 09:57:18.107456 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1014 09:57:18.107479 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1014 09:57:18.107484 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1014 09:57:18.112858 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1014 09:57:18.112875 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1014 09:57:18.112883 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1014 09:57:18.112889 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1014 09:57:18.112893 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1014 
09:57:18.112896 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1014 09:57:18.112900 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1014 09:57:18.112903 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1014 09:57:18.114133 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-14T09:57:12Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2d12c9c114016d1302fa97515b0a59a82a4524b77e5d22cbec155beb6a52bce\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:00Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf7a250bbbc63f0ce78ae1c27f82643a02dc264c3b83555a4cc72a788eda52d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev
/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cf7a250bbbc63f0ce78ae1c27f82643a02dc264c3b83555a4cc72a788eda52d5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T09:56:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T09:56:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T09:56:59Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T09:57:27Z is after 2025-08-24T17:21:41Z" Oct 14 09:57:27 crc kubenswrapper[4698]: I1014 09:57:27.456734 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"32a7378b-ad21-49a8-9382-e7e5fbffb2f5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:56:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:56:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45beb2bbd344fbd265eca554410f31c493b55815bcac8dbb8c30c592b40dcfe2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9a23f4949446d47321258b172e07f6c592cd9921650920f8af93a5d749a61dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:56:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1fc70d614848f0b4bc342659099f9c617db052ebb995716f9cdb1113f4f78901\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://71c3764056ab55aa9f75e0bf62c0c7eed5440e9d580087ca9ce266611dd8c292\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-10-14T09:57:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T09:56:59Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T09:57:27Z is after 2025-08-24T17:21:41Z" Oct 14 09:57:27 crc kubenswrapper[4698]: I1014 09:57:27.470792 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T09:57:27Z is after 2025-08-24T17:21:41Z" Oct 14 09:57:27 crc kubenswrapper[4698]: I1014 09:57:27.484085 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:57:27 crc kubenswrapper[4698]: I1014 09:57:27.484128 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:57:27 crc kubenswrapper[4698]: I1014 09:57:27.484137 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:57:27 crc 
kubenswrapper[4698]: I1014 09:57:27.484153 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:57:27 crc kubenswrapper[4698]: I1014 09:57:27.484167 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:57:27Z","lastTransitionTime":"2025-10-14T09:57:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 09:57:27 crc kubenswrapper[4698]: I1014 09:57:27.484977 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aaee81debae3e251e075e61aca075c075d32fd976f1977adecba0c834e89cabb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":tru
e,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T09:57:27Z is after 2025-08-24T17:21:41Z" Oct 14 09:57:27 crc kubenswrapper[4698]: I1014 09:57:27.496841 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8rj7q" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b3d7ebe7-24ac-4bb6-be80-db147dc1c604\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e44a1413e71cab92d1b5ee4c81cc65edc9925fa1e888d5d5670e55e1b60ee81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-95sjn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T09:57:18Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8rj7q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T09:57:27Z is after 2025-08-24T17:21:41Z" Oct 14 09:57:27 crc kubenswrapper[4698]: I1014 09:57:27.587222 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:57:27 crc kubenswrapper[4698]: I1014 09:57:27.587290 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:57:27 crc kubenswrapper[4698]: I1014 09:57:27.587338 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:57:27 crc kubenswrapper[4698]: I1014 09:57:27.587358 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:57:27 crc kubenswrapper[4698]: I1014 09:57:27.587370 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:57:27Z","lastTransitionTime":"2025-10-14T09:57:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:57:27 crc kubenswrapper[4698]: I1014 09:57:27.690566 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:57:27 crc kubenswrapper[4698]: I1014 09:57:27.690601 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:57:27 crc kubenswrapper[4698]: I1014 09:57:27.690611 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:57:27 crc kubenswrapper[4698]: I1014 09:57:27.690630 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:57:27 crc kubenswrapper[4698]: I1014 09:57:27.690640 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:57:27Z","lastTransitionTime":"2025-10-14T09:57:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:57:27 crc kubenswrapper[4698]: I1014 09:57:27.792928 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:57:27 crc kubenswrapper[4698]: I1014 09:57:27.792988 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:57:27 crc kubenswrapper[4698]: I1014 09:57:27.792997 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:57:27 crc kubenswrapper[4698]: I1014 09:57:27.793033 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:57:27 crc kubenswrapper[4698]: I1014 09:57:27.793044 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:57:27Z","lastTransitionTime":"2025-10-14T09:57:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:57:27 crc kubenswrapper[4698]: I1014 09:57:27.895117 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:57:27 crc kubenswrapper[4698]: I1014 09:57:27.895149 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:57:27 crc kubenswrapper[4698]: I1014 09:57:27.895159 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:57:27 crc kubenswrapper[4698]: I1014 09:57:27.895175 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:57:27 crc kubenswrapper[4698]: I1014 09:57:27.895185 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:57:27Z","lastTransitionTime":"2025-10-14T09:57:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:57:27 crc kubenswrapper[4698]: I1014 09:57:27.997463 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:57:27 crc kubenswrapper[4698]: I1014 09:57:27.997490 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:57:27 crc kubenswrapper[4698]: I1014 09:57:27.997497 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:57:27 crc kubenswrapper[4698]: I1014 09:57:27.997519 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:57:27 crc kubenswrapper[4698]: I1014 09:57:27.997527 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:57:27Z","lastTransitionTime":"2025-10-14T09:57:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:57:28 crc kubenswrapper[4698]: I1014 09:57:28.100957 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:57:28 crc kubenswrapper[4698]: I1014 09:57:28.101004 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:57:28 crc kubenswrapper[4698]: I1014 09:57:28.101012 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:57:28 crc kubenswrapper[4698]: I1014 09:57:28.101028 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:57:28 crc kubenswrapper[4698]: I1014 09:57:28.101036 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:57:28Z","lastTransitionTime":"2025-10-14T09:57:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:57:28 crc kubenswrapper[4698]: I1014 09:57:28.203195 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:57:28 crc kubenswrapper[4698]: I1014 09:57:28.203234 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:57:28 crc kubenswrapper[4698]: I1014 09:57:28.203243 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:57:28 crc kubenswrapper[4698]: I1014 09:57:28.203259 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:57:28 crc kubenswrapper[4698]: I1014 09:57:28.203270 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:57:28Z","lastTransitionTime":"2025-10-14T09:57:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:57:28 crc kubenswrapper[4698]: I1014 09:57:28.252025 4698 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Oct 14 09:57:28 crc kubenswrapper[4698]: I1014 09:57:28.305663 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:57:28 crc kubenswrapper[4698]: I1014 09:57:28.305698 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:57:28 crc kubenswrapper[4698]: I1014 09:57:28.305706 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:57:28 crc kubenswrapper[4698]: I1014 09:57:28.305720 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:57:28 crc kubenswrapper[4698]: I1014 09:57:28.305730 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:57:28Z","lastTransitionTime":"2025-10-14T09:57:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:57:28 crc kubenswrapper[4698]: I1014 09:57:28.407447 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:57:28 crc kubenswrapper[4698]: I1014 09:57:28.407497 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:57:28 crc kubenswrapper[4698]: I1014 09:57:28.407515 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:57:28 crc kubenswrapper[4698]: I1014 09:57:28.407658 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:57:28 crc kubenswrapper[4698]: I1014 09:57:28.407677 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:57:28Z","lastTransitionTime":"2025-10-14T09:57:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:57:28 crc kubenswrapper[4698]: I1014 09:57:28.509439 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:57:28 crc kubenswrapper[4698]: I1014 09:57:28.509480 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:57:28 crc kubenswrapper[4698]: I1014 09:57:28.509490 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:57:28 crc kubenswrapper[4698]: I1014 09:57:28.509506 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:57:28 crc kubenswrapper[4698]: I1014 09:57:28.509517 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:57:28Z","lastTransitionTime":"2025-10-14T09:57:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:57:28 crc kubenswrapper[4698]: I1014 09:57:28.611516 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:57:28 crc kubenswrapper[4698]: I1014 09:57:28.611568 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:57:28 crc kubenswrapper[4698]: I1014 09:57:28.611579 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:57:28 crc kubenswrapper[4698]: I1014 09:57:28.611596 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:57:28 crc kubenswrapper[4698]: I1014 09:57:28.611607 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:57:28Z","lastTransitionTime":"2025-10-14T09:57:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:57:28 crc kubenswrapper[4698]: I1014 09:57:28.714191 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:57:28 crc kubenswrapper[4698]: I1014 09:57:28.714283 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:57:28 crc kubenswrapper[4698]: I1014 09:57:28.714320 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:57:28 crc kubenswrapper[4698]: I1014 09:57:28.714353 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:57:28 crc kubenswrapper[4698]: I1014 09:57:28.714375 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:57:28Z","lastTransitionTime":"2025-10-14T09:57:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:57:28 crc kubenswrapper[4698]: I1014 09:57:28.817524 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:57:28 crc kubenswrapper[4698]: I1014 09:57:28.817591 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:57:28 crc kubenswrapper[4698]: I1014 09:57:28.817604 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:57:28 crc kubenswrapper[4698]: I1014 09:57:28.817622 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:57:28 crc kubenswrapper[4698]: I1014 09:57:28.817635 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:57:28Z","lastTransitionTime":"2025-10-14T09:57:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:57:28 crc kubenswrapper[4698]: I1014 09:57:28.920737 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:57:28 crc kubenswrapper[4698]: I1014 09:57:28.920824 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:57:28 crc kubenswrapper[4698]: I1014 09:57:28.920836 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:57:28 crc kubenswrapper[4698]: I1014 09:57:28.920854 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:57:28 crc kubenswrapper[4698]: I1014 09:57:28.920865 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:57:28Z","lastTransitionTime":"2025-10-14T09:57:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 09:57:29 crc kubenswrapper[4698]: I1014 09:57:29.016804 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 14 09:57:29 crc kubenswrapper[4698]: I1014 09:57:29.016879 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 14 09:57:29 crc kubenswrapper[4698]: I1014 09:57:29.016835 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 14 09:57:29 crc kubenswrapper[4698]: E1014 09:57:29.017044 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 14 09:57:29 crc kubenswrapper[4698]: E1014 09:57:29.017165 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 14 09:57:29 crc kubenswrapper[4698]: E1014 09:57:29.017230 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 14 09:57:29 crc kubenswrapper[4698]: I1014 09:57:29.023150 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:57:29 crc kubenswrapper[4698]: I1014 09:57:29.023184 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:57:29 crc kubenswrapper[4698]: I1014 09:57:29.023192 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:57:29 crc kubenswrapper[4698]: I1014 09:57:29.023217 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:57:29 crc kubenswrapper[4698]: I1014 09:57:29.023226 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:57:29Z","lastTransitionTime":"2025-10-14T09:57:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:57:29 crc kubenswrapper[4698]: I1014 09:57:29.034947 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5twvn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e9f983f-10a0-43b7-8590-346577a561ef\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://03bc09f64706818fd5d6b0d3eeea206f665d7cdb12ce6ae31c4706a3936dd0f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8x7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9090d92cdf10a884e39938f3d7908d63cb91ee3ea7832cb788aecaa47c70d753\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9090d92cdf10a884e39938f3d7908d63cb91ee3ea7832cb788aecaa47c70d753\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T09:57:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T09:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8x7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://20e7809019d6fa43f65e27abe934ed5dfc51a3afb0a2d9391fa4edd67fc03ac9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"
containerID\\\":\\\"cri-o://20e7809019d6fa43f65e27abe934ed5dfc51a3afb0a2d9391fa4edd67fc03ac9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T09:57:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8x7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c61e207f4234cf9a59bd787fc476ff4bc2d1abd6f299b2855970981e5783398\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0c61e207f4234cf9a59bd787fc476ff4bc2d1abd6f299b2855970981e5783398\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T09:57:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T09:57:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveRead
Only\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8x7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1abeca9df52e5628432a4cb0be0f212a643a644abfe303559ec7d90f3fd3955a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1abeca9df52e5628432a4cb0be0f212a643a644abfe303559ec7d90f3fd3955a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T09:57:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T09:57:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8x7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0654ea71914d478220611495dbefd1e47b1309aa18c6a05b21a4a45f0d8f1ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\"
:0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c0654ea71914d478220611495dbefd1e47b1309aa18c6a05b21a4a45f0d8f1ca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T09:57:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T09:57:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8x7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5307b806fcee628d7552a399a950a096745a8eb67c3f72c5a10854f042b992a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5307b806fcee628d7552a399a950a096745a8eb67c3f72c5a10854f042b992a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T09:57:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T09:57:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8x7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\"
,\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T09:57:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5twvn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T09:57:29Z is after 2025-08-24T17:21:41Z" Oct 14 09:57:29 crc kubenswrapper[4698]: I1014 09:57:29.062121 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hspfz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d02f5359-81fc-4261-b995-e58c78bcec0e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://baefa979aee6f8fbe41df9542964f3c7b2a4331ec5111d2f622ec7ca41a0d96e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjlwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://876998f4dcb79a6bc0faa35871730f5d2f9a07f94baf6da29ed9a9e794705ee5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjlwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e5759411e3d5ea6fa7515b4cd1255735c870b8c035fe586a38d2f436b752118\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjlwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6e9733c3f4d25662c9da6201d0feee797de24acebb7d6e7c67d72709aae95db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:21Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjlwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://90b4b75613b321be60e549ab6ab2eaaab57fd6b49aa01f5254a0e83ac2e0167e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjlwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://11274a26e2292cdf2b793e6290657ed3cc737d1454b39f33b8f0272f67f70f3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjlwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee38f8e0f28cbfd68b8333c30570c516ec21dce2c861b0b210576e305473435c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjlwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://626a1ca2a8aa5d6c1b4078a0bc7b0aa540656abcedd8be206fbb93cdc96c1d95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjlwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f7d7f89348e04b617ba4ec3cf76f4fefdd36492c656aed8141f3775a7ca8e3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0f7d7f89348e04b617ba4ec3cf76f4fefdd36492c656aed8141f3775a7ca8e3b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T09:57:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T09:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjlwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T09:57:18Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hspfz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T09:57:29Z is after 2025-08-24T17:21:41Z" Oct 14 09:57:29 crc kubenswrapper[4698]: I1014 09:57:29.070459 4698 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Oct 14 09:57:29 crc kubenswrapper[4698]: I1014 09:57:29.071176 4698 scope.go:117] "RemoveContainer" containerID="a8cea8367d4864478508221648a52582ee794ca4eb7b48f112ca0d78d97c2f7e" Oct 14 09:57:29 crc kubenswrapper[4698]: I1014 09:57:29.076505 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-pfxrp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e00aa977-8736-4b4d-8d58-c3d13879c49a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4adf04967efc193a93227043cfb6bb56f0ec1a7af90e351b7a618e582ddd8c41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c9lxj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T09:57:18Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-pfxrp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T09:57:29Z is after 2025-08-24T17:21:41Z" Oct 14 09:57:29 crc kubenswrapper[4698]: I1014 09:57:29.100579 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6bbd953b34291d62c5b40ab6339c4709be89d1983748419cabf1adff245af77f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14
T09:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b5dcb74bf1caafb12296355d176dd1cda1c06ecea8293c0cc36d517213fe6bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T09:57:29Z is after 2025-08-24T17:21:41Z" Oct 14 09:57:29 crc kubenswrapper[4698]: I1014 09:57:29.117233 4698 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T09:57:29Z is after 2025-08-24T17:21:41Z" Oct 14 09:57:29 crc kubenswrapper[4698]: I1014 09:57:29.126361 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:57:29 crc kubenswrapper[4698]: I1014 09:57:29.126410 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:57:29 crc kubenswrapper[4698]: I1014 09:57:29.126425 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:57:29 crc kubenswrapper[4698]: I1014 09:57:29.126446 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:57:29 crc kubenswrapper[4698]: I1014 09:57:29.126461 4698 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:57:29Z","lastTransitionTime":"2025-10-14T09:57:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 09:57:29 crc kubenswrapper[4698]: I1014 09:57:29.131276 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-b7cbk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fbf10bbc-318d-4f46-83a0-fdbad9888201\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a0b9fe188f41927a342dc4928eaa5f27d151d9901af15f33ce252bdbfbb3be6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\
\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tkghj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T09:
57:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-b7cbk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T09:57:29Z is after 2025-08-24T17:21:41Z" Oct 14 09:57:29 crc kubenswrapper[4698]: I1014 09:57:29.145012 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-lp4sk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c359a8fc-1e2f-49af-8da2-719d52bd969a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://082e0edeedb2cabe07e14812f2406a98f39fdf9923f562c33ccc9a761d0e2472\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-pr
oxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lzl92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8068aecfbfc156636e404dba3daa36e0bebb92ecf3c63158eaebf9a03cdc96b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lzl92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T09:57:18Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-lp4sk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2025-10-14T09:57:29Z is after 2025-08-24T17:21:41Z" Oct 14 09:57:29 crc kubenswrapper[4698]: I1014 09:57:29.161956 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://046b339570690752cfbe63e358065d3b58ef84c6708729abc1f590ff4c880521\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T09:57:29Z is after 2025-08-24T17:21:41Z" Oct 14 09:57:29 crc kubenswrapper[4698]: I1014 09:57:29.181195 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T09:57:29Z is after 2025-08-24T17:21:41Z" Oct 14 09:57:29 crc kubenswrapper[4698]: I1014 09:57:29.194833 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"32a7378b-ad21-49a8-9382-e7e5fbffb2f5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:56:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:56:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45beb2bbd344fbd265eca554410f31c493b55815bcac8dbb8c30c592b40dcfe2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9a23f4949446d47321258b172e07f6c592cd9921650920f8af93a5d749a61dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:56:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1fc70d614848f0b4bc342659099f9c617db052ebb995716f9cdb1113f4f78901\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://71c3764056ab55aa9f75e0bf62c0c7eed5440e9d580087ca9ce266611dd8c292\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-10-14T09:57:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T09:56:59Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T09:57:29Z is after 2025-08-24T17:21:41Z" Oct 14 09:57:29 crc kubenswrapper[4698]: I1014 09:57:29.209123 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T09:57:29Z is after 2025-08-24T17:21:41Z" Oct 14 09:57:29 crc kubenswrapper[4698]: I1014 09:57:29.223058 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"32240a01-1692-4f43-aebd-2b04d9b2435e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:56:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:56:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:56:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0fcdc7699fee72d9dbf3ad7df185ac8c884125781080739e4245128af2e41040\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d32efa8c63456d23b3824db4a4debb51fa4d13d1fd5eb6a488b9392d96073435\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a78e7d6532fda47f50da010b729a9108961a3d5c79191517a86f00714ccb1163\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8cea8367d4864478508221648a52582ee794ca4eb7b48f112ca0d78d97c2f7e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a8cea8367d4864478508221648a52582ee794ca4eb7b48f112ca0d78d97c2f7e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"message\\\":\\\"file observer\\\\nW1014 09:57:17.768150 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1014 09:57:17.768243 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1014 09:57:17.769073 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-133410875/tls.crt::/tmp/serving-cert-133410875/tls.key\\\\\\\"\\\\nI1014 09:57:18.101848 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1014 09:57:18.107433 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1014 09:57:18.107456 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1014 09:57:18.107479 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1014 09:57:18.107484 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1014 09:57:18.112858 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1014 09:57:18.112875 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1014 09:57:18.112883 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1014 09:57:18.112889 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1014 09:57:18.112893 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1014 
09:57:18.112896 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1014 09:57:18.112900 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1014 09:57:18.112903 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1014 09:57:18.114133 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-14T09:57:12Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2d12c9c114016d1302fa97515b0a59a82a4524b77e5d22cbec155beb6a52bce\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:00Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf7a250bbbc63f0ce78ae1c27f82643a02dc264c3b83555a4cc72a788eda52d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev
/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cf7a250bbbc63f0ce78ae1c27f82643a02dc264c3b83555a4cc72a788eda52d5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T09:56:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T09:56:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T09:56:59Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T09:57:29Z is after 2025-08-24T17:21:41Z" Oct 14 09:57:29 crc kubenswrapper[4698]: I1014 09:57:29.228502 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:57:29 crc kubenswrapper[4698]: I1014 09:57:29.228541 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:57:29 crc kubenswrapper[4698]: I1014 09:57:29.228552 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:57:29 crc kubenswrapper[4698]: I1014 09:57:29.228572 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:57:29 crc 
kubenswrapper[4698]: I1014 09:57:29.228583 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:57:29Z","lastTransitionTime":"2025-10-14T09:57:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 09:57:29 crc kubenswrapper[4698]: I1014 09:57:29.233577 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8rj7q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b3d7ebe7-24ac-4bb6-be80-db147dc1c604\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e44a1413e71cab92d1b5ee4c81cc65edc9925fa1e888d5d5670e55e1b60ee81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1
b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-95sjn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T09:57:18Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8rj7q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T09:57:29Z is after 2025-08-24T17:21:41Z" Oct 14 09:57:29 crc kubenswrapper[4698]: I1014 09:57:29.245166 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aaee81debae3e251e075e61aca075c075d32fd976f1977adecba0c834e89cabb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-10-14T09:57:29Z is after 2025-08-24T17:21:41Z" Oct 14 09:57:29 crc kubenswrapper[4698]: I1014 09:57:29.260653 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-hspfz_d02f5359-81fc-4261-b995-e58c78bcec0e/ovnkube-controller/0.log" Oct 14 09:57:29 crc kubenswrapper[4698]: I1014 09:57:29.263712 4698 generic.go:334] "Generic (PLEG): container finished" podID="d02f5359-81fc-4261-b995-e58c78bcec0e" containerID="ee38f8e0f28cbfd68b8333c30570c516ec21dce2c861b0b210576e305473435c" exitCode=1 Oct 14 09:57:29 crc kubenswrapper[4698]: I1014 09:57:29.263785 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hspfz" event={"ID":"d02f5359-81fc-4261-b995-e58c78bcec0e","Type":"ContainerDied","Data":"ee38f8e0f28cbfd68b8333c30570c516ec21dce2c861b0b210576e305473435c"} Oct 14 09:57:29 crc kubenswrapper[4698]: I1014 09:57:29.264498 4698 scope.go:117] "RemoveContainer" containerID="ee38f8e0f28cbfd68b8333c30570c516ec21dce2c861b0b210576e305473435c" Oct 14 09:57:29 crc kubenswrapper[4698]: I1014 09:57:29.278011 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"32240a01-1692-4f43-aebd-2b04d9b2435e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:56:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:56:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:56:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0fcdc7699fee72d9dbf3ad7df185ac8c884125781080739e4245128af2e41040\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d32efa8c63456d23b3824db4a4debb51fa4d13d1fd5eb6a488b9392d96073435\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a78e7d6532fda47f50da010b729a9108961a3d5c79191517a86f00714ccb1163\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8cea8367d4864478508221648a52582ee794ca4eb7b48f112ca0d78d97c2f7e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a8cea8367d4864478508221648a52582ee794ca4eb7b48f112ca0d78d97c2f7e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"message\\\":\\\"file observer\\\\nW1014 09:57:17.768150 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1014 09:57:17.768243 1 builder.go:304] check-endpoints version 
4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1014 09:57:17.769073 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-133410875/tls.crt::/tmp/serving-cert-133410875/tls.key\\\\\\\"\\\\nI1014 09:57:18.101848 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1014 09:57:18.107433 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1014 09:57:18.107456 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1014 09:57:18.107479 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1014 09:57:18.107484 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1014 09:57:18.112858 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1014 09:57:18.112875 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1014 09:57:18.112883 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1014 09:57:18.112889 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1014 09:57:18.112893 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1014 09:57:18.112896 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1014 09:57:18.112900 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1014 09:57:18.112903 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1014 09:57:18.114133 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-14T09:57:12Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2d12c9c114016d1302fa97515b0a59a82a4524b77e5d22cbec155beb6a52bce\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:00Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf7a250bbbc63f0ce78ae1c27f82643a02dc264c3b83555a4cc72a788eda52d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cf7
a250bbbc63f0ce78ae1c27f82643a02dc264c3b83555a4cc72a788eda52d5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T09:56:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T09:56:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T09:56:59Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T09:57:29Z is after 2025-08-24T17:21:41Z" Oct 14 09:57:29 crc kubenswrapper[4698]: I1014 09:57:29.291744 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"32a7378b-ad21-49a8-9382-e7e5fbffb2f5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:56:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:56:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45beb2bbd344fbd265eca554410f31c493b55815bcac8dbb8c30c592b40dcfe2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9a23f4949446d47321258b172e07f6c592cd9921650920f8af93a5d749a61dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:56:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1fc70d614848f0b4bc342659099f9c617db052ebb995716f9cdb1113f4f78901\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://71c3764056ab55aa9f75e0bf62c0c7eed5440e9d580087ca9ce266611dd8c292\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-10-14T09:57:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T09:56:59Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T09:57:29Z is after 2025-08-24T17:21:41Z" Oct 14 09:57:29 crc kubenswrapper[4698]: I1014 09:57:29.304577 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T09:57:29Z is after 2025-08-24T17:21:41Z" Oct 14 09:57:29 crc kubenswrapper[4698]: I1014 09:57:29.315953 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aaee81debae3e251e075e61aca075c075d32fd976f1977adecba0c834e89cabb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-10-14T09:57:29Z is after 2025-08-24T17:21:41Z" Oct 14 09:57:29 crc kubenswrapper[4698]: I1014 09:57:29.328753 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8rj7q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b3d7ebe7-24ac-4bb6-be80-db147dc1c604\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e44a1413e71cab92d1b5ee4c81cc65edc9925fa1e888d5d5670e55e1b60ee81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-95sjn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T09:57:18Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8rj7q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T09:57:29Z is after 2025-08-24T17:21:41Z" Oct 14 09:57:29 crc kubenswrapper[4698]: I1014 09:57:29.330281 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:57:29 crc kubenswrapper[4698]: I1014 09:57:29.330304 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:57:29 crc kubenswrapper[4698]: I1014 09:57:29.330312 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:57:29 crc kubenswrapper[4698]: I1014 09:57:29.330324 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:57:29 crc kubenswrapper[4698]: I1014 09:57:29.330333 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:57:29Z","lastTransitionTime":"2025-10-14T09:57:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:57:29 crc kubenswrapper[4698]: I1014 09:57:29.340218 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-pfxrp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e00aa977-8736-4b4d-8d58-c3d13879c49a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4adf04967efc193a93227043cfb6bb56f0ec1a7af90e351b7a618e582ddd8c41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes
.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c9lxj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T09:57:18Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-pfxrp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T09:57:29Z is after 2025-08-24T17:21:41Z" Oct 14 09:57:29 crc kubenswrapper[4698]: I1014 09:57:29.356272 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5twvn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e9f983f-10a0-43b7-8590-346577a561ef\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://03bc09f64706818fd5d6
b0d3eeea206f665d7cdb12ce6ae31c4706a3936dd0f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8x7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9090d92cdf10a884e39938f3d7908d63cb91ee3ea7832cb788aecaa47c70d753\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9090d92cdf10a884e39938f3d7908d63cb91ee3ea7832cb788aecaa47c70d753\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T09:57:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T09:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled
\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8x7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://20e7809019d6fa43f65e27abe934ed5dfc51a3afb0a2d9391fa4edd67fc03ac9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://20e7809019d6fa43f65e27abe934ed5dfc51a3afb0a2d9391fa4edd67fc03ac9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T09:57:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8x7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c61e207f4234cf9a59bd787fc476ff4bc2d1abd6f299b2855970981e5783398\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64
d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0c61e207f4234cf9a59bd787fc476ff4bc2d1abd6f299b2855970981e5783398\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T09:57:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T09:57:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8x7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1abeca9df52e5628432a4cb0be0f212a643a644abfe303559ec7d90f3fd3955a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1abeca9df52e5628432a4cb0be0f212a643a644abfe303559ec7d90f3fd3955a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T09:57:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T09:57:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8x7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0654ea71914d478220611495dbefd1e47b1309aa18c6a05b21a4a45f0d8f1ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c0654ea71914d478220611495dbefd1e47b1309aa18c6a05b21a4a45f0d8f1ca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T09:57:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T09:57:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8x7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5307b806fcee628d7552a399a950a096745a8eb67c3f72c5a10854f042b992a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\
\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5307b806fcee628d7552a399a950a096745a8eb67c3f72c5a10854f042b992a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T09:57:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T09:57:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8x7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T09:57:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5twvn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T09:57:29Z is after 2025-08-24T17:21:41Z" Oct 14 09:57:29 crc kubenswrapper[4698]: I1014 09:57:29.377137 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hspfz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d02f5359-81fc-4261-b995-e58c78bcec0e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://baefa979aee6f8fbe41df9542964f3c7b2a4331ec5111d2f622ec7ca41a0d96e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjlwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://876998f4dcb79a6bc0faa35871730f5d2f9a07f94baf6da29ed9a9e794705ee5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjlwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e5759411e3d5ea6fa7515b4cd1255735c870b8c035fe586a38d2f436b752118\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjlwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6e9733c3f4d25662c9da6201d0feee797de24acebb7d6e7c67d72709aae95db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:21Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjlwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://90b4b75613b321be60e549ab6ab2eaaab57fd6b49aa01f5254a0e83ac2e0167e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjlwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://11274a26e2292cdf2b793e6290657ed3cc737d1454b39f33b8f0272f67f70f3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjlwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee38f8e0f28cbfd68b8333c30570c516ec21dce2c861b0b210576e305473435c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ee38f8e0f28cbfd68b8333c30570c516ec21dce2c861b0b210576e305473435c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-14T09:57:28Z\\\",\\\"message\\\":\\\"nt handler 1 for removal\\\\nI1014 09:57:28.514882 5983 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1014 09:57:28.514945 5983 handler.go:208] Removed 
*v1.NetworkPolicy event handler 4\\\\nI1014 09:57:28.514955 5983 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI1014 09:57:28.514898 5983 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI1014 09:57:28.514936 5983 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1014 09:57:28.515192 5983 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1014 09:57:28.515226 5983 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1014 09:57:28.515276 5983 factory.go:656] Stopping watch factory\\\\nI1014 09:57:28.515302 5983 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1014 09:57:28.515312 5983 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1014 09:57:28.515324 5983 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1014 09:57:28.515332 5983 handler.go:208] Removed *v1.EgressIP 
ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-14T09:57:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"k
ube-api-access-pjlwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://626a1ca2a8aa5d6c1b4078a0bc7b0aa540656abcedd8be206fbb93cdc96c1d95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjlwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f7d7f89348e04b617ba4ec3cf76f4fefdd36492c656aed8141f3775a7ca8e3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0f7d7f89348e04b617ba4ec3cf76f4fefdd36492c656aed8141f3775a7ca8e3b\
\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T09:57:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T09:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjlwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T09:57:18Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hspfz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T09:57:29Z is after 2025-08-24T17:21:41Z" Oct 14 09:57:29 crc kubenswrapper[4698]: I1014 09:57:29.396057 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://046b339570690752cfbe63e358065d3b58ef84c6708729abc1f590ff4c880521\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-10-14T09:57:29Z is after 2025-08-24T17:21:41Z" Oct 14 09:57:29 crc kubenswrapper[4698]: I1014 09:57:29.409303 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T09:57:29Z is after 2025-08-24T17:21:41Z" Oct 14 09:57:29 crc kubenswrapper[4698]: I1014 09:57:29.420845 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6bbd953b34291d62c5b40ab6339c4709be89d1983748419cabf1adff245af77f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b5dcb74bf1caafb12296355d176dd1cda1c06ecea8293c0cc36d517213fe6bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T09:57:29Z is after 2025-08-24T17:21:41Z" Oct 14 09:57:29 crc kubenswrapper[4698]: I1014 09:57:29.432543 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:57:29 crc kubenswrapper[4698]: I1014 09:57:29.432569 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:57:29 crc kubenswrapper[4698]: I1014 09:57:29.432577 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:57:29 crc kubenswrapper[4698]: I1014 09:57:29.432593 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:57:29 crc kubenswrapper[4698]: I1014 09:57:29.432603 4698 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:57:29Z","lastTransitionTime":"2025-10-14T09:57:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 09:57:29 crc kubenswrapper[4698]: I1014 09:57:29.432596 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T09:57:29Z is after 2025-08-24T17:21:41Z" Oct 14 09:57:29 crc kubenswrapper[4698]: I1014 09:57:29.445609 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-b7cbk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fbf10bbc-318d-4f46-83a0-fdbad9888201\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a0b9fe188f41927a342dc4928eaa5f27d151d9901af15f33ce252bdbfbb3be6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tkghj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T09:57:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-b7cbk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T09:57:29Z is after 2025-08-24T17:21:41Z" Oct 14 09:57:29 crc kubenswrapper[4698]: I1014 09:57:29.458196 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-lp4sk" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c359a8fc-1e2f-49af-8da2-719d52bd969a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://082e0edeedb2cabe07e14812f2406a98f39fdf9923f562c33ccc9a761d0e2472\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lzl92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8068aecfbfc1
56636e404dba3daa36e0bebb92ecf3c63158eaebf9a03cdc96b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lzl92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T09:57:18Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-lp4sk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T09:57:29Z is after 2025-08-24T17:21:41Z" Oct 14 09:57:29 crc kubenswrapper[4698]: I1014 09:57:29.543319 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:57:29 crc kubenswrapper[4698]: I1014 09:57:29.543356 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:57:29 crc kubenswrapper[4698]: I1014 09:57:29.543364 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 
14 09:57:29 crc kubenswrapper[4698]: I1014 09:57:29.543377 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:57:29 crc kubenswrapper[4698]: I1014 09:57:29.543385 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:57:29Z","lastTransitionTime":"2025-10-14T09:57:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 09:57:29 crc kubenswrapper[4698]: I1014 09:57:29.645263 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:57:29 crc kubenswrapper[4698]: I1014 09:57:29.645301 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:57:29 crc kubenswrapper[4698]: I1014 09:57:29.645309 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:57:29 crc kubenswrapper[4698]: I1014 09:57:29.645325 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:57:29 crc kubenswrapper[4698]: I1014 09:57:29.645335 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:57:29Z","lastTransitionTime":"2025-10-14T09:57:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:57:29 crc kubenswrapper[4698]: I1014 09:57:29.748085 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:57:29 crc kubenswrapper[4698]: I1014 09:57:29.748120 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:57:29 crc kubenswrapper[4698]: I1014 09:57:29.748129 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:57:29 crc kubenswrapper[4698]: I1014 09:57:29.748141 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:57:29 crc kubenswrapper[4698]: I1014 09:57:29.748151 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:57:29Z","lastTransitionTime":"2025-10-14T09:57:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:57:29 crc kubenswrapper[4698]: I1014 09:57:29.850577 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:57:29 crc kubenswrapper[4698]: I1014 09:57:29.850809 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:57:29 crc kubenswrapper[4698]: I1014 09:57:29.850905 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:57:29 crc kubenswrapper[4698]: I1014 09:57:29.850972 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:57:29 crc kubenswrapper[4698]: I1014 09:57:29.851026 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:57:29Z","lastTransitionTime":"2025-10-14T09:57:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:57:29 crc kubenswrapper[4698]: I1014 09:57:29.953328 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:57:29 crc kubenswrapper[4698]: I1014 09:57:29.953575 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:57:29 crc kubenswrapper[4698]: I1014 09:57:29.953636 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:57:29 crc kubenswrapper[4698]: I1014 09:57:29.953694 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:57:29 crc kubenswrapper[4698]: I1014 09:57:29.953751 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:57:29Z","lastTransitionTime":"2025-10-14T09:57:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:57:30 crc kubenswrapper[4698]: I1014 09:57:30.055843 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:57:30 crc kubenswrapper[4698]: I1014 09:57:30.056059 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:57:30 crc kubenswrapper[4698]: I1014 09:57:30.056129 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:57:30 crc kubenswrapper[4698]: I1014 09:57:30.056224 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:57:30 crc kubenswrapper[4698]: I1014 09:57:30.056302 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:57:30Z","lastTransitionTime":"2025-10-14T09:57:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:57:30 crc kubenswrapper[4698]: I1014 09:57:30.159936 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:57:30 crc kubenswrapper[4698]: I1014 09:57:30.159995 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:57:30 crc kubenswrapper[4698]: I1014 09:57:30.160011 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:57:30 crc kubenswrapper[4698]: I1014 09:57:30.160032 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:57:30 crc kubenswrapper[4698]: I1014 09:57:30.160047 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:57:30Z","lastTransitionTime":"2025-10-14T09:57:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:57:30 crc kubenswrapper[4698]: I1014 09:57:30.262705 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:57:30 crc kubenswrapper[4698]: I1014 09:57:30.262799 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:57:30 crc kubenswrapper[4698]: I1014 09:57:30.262819 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:57:30 crc kubenswrapper[4698]: I1014 09:57:30.262843 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:57:30 crc kubenswrapper[4698]: I1014 09:57:30.262861 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:57:30Z","lastTransitionTime":"2025-10-14T09:57:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:57:30 crc kubenswrapper[4698]: I1014 09:57:30.269557 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Oct 14 09:57:30 crc kubenswrapper[4698]: I1014 09:57:30.272102 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"a013bd7c3d0aeb5289ee602dd8fcbff2b98aae30b99ccc8d87605732b7f469eb"} Oct 14 09:57:30 crc kubenswrapper[4698]: I1014 09:57:30.272499 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Oct 14 09:57:30 crc kubenswrapper[4698]: I1014 09:57:30.274091 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-hspfz_d02f5359-81fc-4261-b995-e58c78bcec0e/ovnkube-controller/1.log" Oct 14 09:57:30 crc kubenswrapper[4698]: I1014 09:57:30.274800 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-hspfz_d02f5359-81fc-4261-b995-e58c78bcec0e/ovnkube-controller/0.log" Oct 14 09:57:30 crc kubenswrapper[4698]: I1014 09:57:30.277673 4698 generic.go:334] "Generic (PLEG): container finished" podID="d02f5359-81fc-4261-b995-e58c78bcec0e" containerID="afec3a31416d60bb03430456c28d5bcd69c8fbdd343374e4505c602dac71b3ac" exitCode=1 Oct 14 09:57:30 crc kubenswrapper[4698]: I1014 09:57:30.277732 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hspfz" event={"ID":"d02f5359-81fc-4261-b995-e58c78bcec0e","Type":"ContainerDied","Data":"afec3a31416d60bb03430456c28d5bcd69c8fbdd343374e4505c602dac71b3ac"} Oct 14 09:57:30 crc kubenswrapper[4698]: I1014 09:57:30.277824 4698 scope.go:117] "RemoveContainer" containerID="ee38f8e0f28cbfd68b8333c30570c516ec21dce2c861b0b210576e305473435c" 
Oct 14 09:57:30 crc kubenswrapper[4698]: I1014 09:57:30.278279 4698 scope.go:117] "RemoveContainer" containerID="afec3a31416d60bb03430456c28d5bcd69c8fbdd343374e4505c602dac71b3ac" Oct 14 09:57:30 crc kubenswrapper[4698]: E1014 09:57:30.278433 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-hspfz_openshift-ovn-kubernetes(d02f5359-81fc-4261-b995-e58c78bcec0e)\"" pod="openshift-ovn-kubernetes/ovnkube-node-hspfz" podUID="d02f5359-81fc-4261-b995-e58c78bcec0e" Oct 14 09:57:30 crc kubenswrapper[4698]: I1014 09:57:30.288681 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-pfxrp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e00aa977-8736-4b4d-8d58-c3d13879c49a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4adf04967efc193a93227043cfb6bb56f0ec1a7af90e351b7a618e582ddd8c41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b2
35da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c9lxj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T09:57:18Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-pfxrp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T09:57:30Z is after 2025-08-24T17:21:41Z" Oct 14 09:57:30 crc kubenswrapper[4698]: I1014 09:57:30.314212 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5twvn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e9f983f-10a0-43b7-8590-346577a561ef\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://03bc09f64706818fd5d6b0d3eeea206f665d7cdb12ce6ae31c4706a3936dd0f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8x7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9090d92cdf10a884e39938f3d7908d63cb91ee3ea7832cb788aecaa47c70d753\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9090d92cdf10a884e39938f3d7908d63cb91ee3ea7832cb788aecaa47c70d753\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T09:57:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T09:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8x7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://20e7809019d6fa43f65e27abe934ed5dfc51a3afb0a2d9391fa4edd67fc03ac9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://20e7809019d6fa43f65e27abe934ed5dfc51a3afb0a2d9391fa4edd67fc03ac9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T09:57:20Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8x7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c61e207f4234cf9a59bd787fc476ff4bc2d1abd6f299b2855970981e5783398\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0c61e207f4234cf9a59bd787fc476ff4bc2d1abd6f299b2855970981e5783398\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T09:57:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T09:57:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8x7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1abec
a9df52e5628432a4cb0be0f212a643a644abfe303559ec7d90f3fd3955a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1abeca9df52e5628432a4cb0be0f212a643a644abfe303559ec7d90f3fd3955a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T09:57:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T09:57:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8x7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0654ea71914d478220611495dbefd1e47b1309aa18c6a05b21a4a45f0d8f1ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c0654ea71914d478220611495dbefd1e47b1309aa18c6a05b21a4a45f0d8f1ca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T09:57:23Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2025-10-14T09:57:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8x7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5307b806fcee628d7552a399a950a096745a8eb67c3f72c5a10854f042b992a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5307b806fcee628d7552a399a950a096745a8eb67c3f72c5a10854f042b992a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T09:57:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T09:57:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8x7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T09:57:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5twvn\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T09:57:30Z is after 2025-08-24T17:21:41Z" Oct 14 09:57:30 crc kubenswrapper[4698]: I1014 09:57:30.342139 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hspfz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d02f5359-81fc-4261-b995-e58c78bcec0e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://baefa979aee6f8fbe41df9542964f3c7b2a4331ec5111d2f622ec7ca41a0d96e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjlwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://876998f4dcb79a6bc0faa35871730f5d2f9a07f94baf6da29ed9a9e794705ee5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjlwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e5759411e3d5ea6fa7515b4cd1255735c870b8c035fe586a38d2f436b752118\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjlwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6e9733c3f4d25662c9da6201d0feee797de24acebb7d6e7c67d72709aae95db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:21Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjlwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://90b4b75613b321be60e549ab6ab2eaaab57fd6b49aa01f5254a0e83ac2e0167e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjlwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://11274a26e2292cdf2b793e6290657ed3cc737d1454b39f33b8f0272f67f70f3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjlwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee38f8e0f28cbfd68b8333c30570c516ec21dce2c861b0b210576e305473435c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ee38f8e0f28cbfd68b8333c30570c516ec21dce2c861b0b210576e305473435c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-14T09:57:28Z\\\",\\\"message\\\":\\\"nt handler 1 for removal\\\\nI1014 09:57:28.514882 5983 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1014 09:57:28.514945 5983 handler.go:208] Removed 
*v1.NetworkPolicy event handler 4\\\\nI1014 09:57:28.514955 5983 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI1014 09:57:28.514898 5983 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI1014 09:57:28.514936 5983 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1014 09:57:28.515192 5983 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1014 09:57:28.515226 5983 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1014 09:57:28.515276 5983 factory.go:656] Stopping watch factory\\\\nI1014 09:57:28.515302 5983 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1014 09:57:28.515312 5983 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1014 09:57:28.515324 5983 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1014 09:57:28.515332 5983 handler.go:208] Removed *v1.EgressIP 
ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-14T09:57:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"k
ube-api-access-pjlwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://626a1ca2a8aa5d6c1b4078a0bc7b0aa540656abcedd8be206fbb93cdc96c1d95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjlwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f7d7f89348e04b617ba4ec3cf76f4fefdd36492c656aed8141f3775a7ca8e3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0f7d7f89348e04b617ba4ec3cf76f4fefdd36492c656aed8141f3775a7ca8e3b\
\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T09:57:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T09:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjlwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T09:57:18Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hspfz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T09:57:30Z is after 2025-08-24T17:21:41Z" Oct 14 09:57:30 crc kubenswrapper[4698]: I1014 09:57:30.357442 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://046b339570690752cfbe63e358065d3b58ef84c6708729abc1f590ff4c880521\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-10-14T09:57:30Z is after 2025-08-24T17:21:41Z" Oct 14 09:57:30 crc kubenswrapper[4698]: I1014 09:57:30.365124 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:57:30 crc kubenswrapper[4698]: I1014 09:57:30.365165 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:57:30 crc kubenswrapper[4698]: I1014 09:57:30.365176 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:57:30 crc kubenswrapper[4698]: I1014 09:57:30.365192 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:57:30 crc kubenswrapper[4698]: I1014 09:57:30.365205 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:57:30Z","lastTransitionTime":"2025-10-14T09:57:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:57:30 crc kubenswrapper[4698]: I1014 09:57:30.372133 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T09:57:30Z is after 2025-08-24T17:21:41Z" Oct 14 09:57:30 crc kubenswrapper[4698]: I1014 09:57:30.391626 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6bbd953b34291d62c5b40ab6339c4709be89d1983748419cabf1adff245af77f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b5dcb74bf1caafb12296355d176dd1cda1c06ecea8293c0cc36d517213fe6bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T09:57:30Z is after 2025-08-24T17:21:41Z" Oct 14 09:57:30 crc kubenswrapper[4698]: I1014 09:57:30.403527 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T09:57:30Z is after 2025-08-24T17:21:41Z" Oct 14 09:57:30 crc kubenswrapper[4698]: I1014 09:57:30.421696 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-b7cbk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fbf10bbc-318d-4f46-83a0-fdbad9888201\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a0b9fe188f41927a342dc4928eaa5f27d151d9901af15f33ce252bdbfbb3be6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tkghj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T09:57:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-b7cbk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T09:57:30Z is after 2025-08-24T17:21:41Z" Oct 14 09:57:30 crc kubenswrapper[4698]: I1014 09:57:30.436943 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-lp4sk" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c359a8fc-1e2f-49af-8da2-719d52bd969a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://082e0edeedb2cabe07e14812f2406a98f39fdf9923f562c33ccc9a761d0e2472\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lzl92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8068aecfbfc1
56636e404dba3daa36e0bebb92ecf3c63158eaebf9a03cdc96b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lzl92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T09:57:18Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-lp4sk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T09:57:30Z is after 2025-08-24T17:21:41Z" Oct 14 09:57:30 crc kubenswrapper[4698]: I1014 09:57:30.452671 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"32240a01-1692-4f43-aebd-2b04d9b2435e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:56:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:56:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:56:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0fcdc7699fee72d9dbf3ad7df185ac8c884125781080739e4245128af2e41040\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d32efa8c63456d23b3824db4a4debb51fa4d13d1fd5eb6a488b9392d96073435\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a78e7d6532fda47f50da010b729a9108961a3d5c79191517a86f00714ccb1163\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a013bd7c3d0aeb5289ee602dd8fcbff2b98aae30b99ccc8d87605732b7f469eb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a8cea8367d4864478508221648a52582ee794ca4eb7b48f112ca0d78d97c2f7e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"message\\\":\\\"file observer\\\\nW1014 09:57:17.768150 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1014 09:57:17.768243 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1014 09:57:17.769073 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-133410875/tls.crt::/tmp/serving-cert-133410875/tls.key\\\\\\\"\\\\nI1014 09:57:18.101848 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1014 09:57:18.107433 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1014 09:57:18.107456 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1014 09:57:18.107479 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1014 09:57:18.107484 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1014 09:57:18.112858 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1014 09:57:18.112875 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1014 09:57:18.112883 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1014 09:57:18.112889 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1014 09:57:18.112893 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1014 
09:57:18.112896 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1014 09:57:18.112900 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1014 09:57:18.112903 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1014 09:57:18.114133 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-14T09:57:12Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2d12c9c114016d1302fa97515b0a59a82a4524b77e5d22cbec155beb6a52bce\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:00Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf7a250bbbc63f0ce78ae1c27f82643a02dc264c3b83555a4cc72a788eda52d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc
35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cf7a250bbbc63f0ce78ae1c27f82643a02dc264c3b83555a4cc72a788eda52d5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T09:56:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T09:56:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T09:56:59Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T09:57:30Z is after 2025-08-24T17:21:41Z" Oct 14 09:57:30 crc kubenswrapper[4698]: I1014 09:57:30.467791 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:57:30 crc kubenswrapper[4698]: I1014 09:57:30.467934 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:57:30 crc kubenswrapper[4698]: I1014 09:57:30.468033 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:57:30 crc kubenswrapper[4698]: I1014 09:57:30.468131 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:57:30 crc kubenswrapper[4698]: I1014 09:57:30.468220 4698 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:57:30Z","lastTransitionTime":"2025-10-14T09:57:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 09:57:30 crc kubenswrapper[4698]: I1014 09:57:30.470759 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"32a7378b-ad21-49a8-9382-e7e5fbffb2f5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:56:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:56:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45beb2bbd344fbd265eca554410f31c493b55815bcac8dbb8c30c592b40dcfe2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\
\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9a23f4949446d47321258b172e07f6c592cd9921650920f8af93a5d749a61dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:56:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1fc70d614848f0b4bc342659099f9c617db052ebb995716f9cdb1113f4f78901\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\
\":\\\"cri-o://71c3764056ab55aa9f75e0bf62c0c7eed5440e9d580087ca9ce266611dd8c292\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T09:56:59Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T09:57:30Z is after 2025-08-24T17:21:41Z" Oct 14 09:57:30 crc kubenswrapper[4698]: I1014 09:57:30.489593 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T09:57:30Z is after 2025-08-24T17:21:41Z" Oct 14 09:57:30 crc kubenswrapper[4698]: I1014 09:57:30.503153 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aaee81debae3e251e075e61aca075c075d32fd976f1977adecba0c834e89cabb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-10-14T09:57:30Z is after 2025-08-24T17:21:41Z" Oct 14 09:57:30 crc kubenswrapper[4698]: I1014 09:57:30.514136 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8rj7q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b3d7ebe7-24ac-4bb6-be80-db147dc1c604\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e44a1413e71cab92d1b5ee4c81cc65edc9925fa1e888d5d5670e55e1b60ee81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-95sjn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T09:57:18Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8rj7q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T09:57:30Z is after 2025-08-24T17:21:41Z" Oct 14 09:57:30 crc kubenswrapper[4698]: I1014 09:57:30.540744 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hspfz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d02f5359-81fc-4261-b995-e58c78bcec0e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://baefa979aee6f8fbe41df9542964f3c7b2a4331ec5111d2f622ec7ca41a0d96e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjlwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://876998f4dcb79a6bc0faa35871730f5d2f9a07f94baf6da29ed9a9e794705ee5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjlwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e5759411e3d5ea6fa7515b4cd1255735c870b8c035fe586a38d2f436b752118\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjlwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6e9733c3f4d25662c9da6201d0feee797de24acebb7d6e7c67d72709aae95db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:21Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjlwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://90b4b75613b321be60e549ab6ab2eaaab57fd6b49aa01f5254a0e83ac2e0167e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjlwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://11274a26e2292cdf2b793e6290657ed3cc737d1454b39f33b8f0272f67f70f3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjlwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://afec3a31416d60bb03430456c28d5bcd69c8fbdd343374e4505c602dac71b3ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ee38f8e0f28cbfd68b8333c30570c516ec21dce2c861b0b210576e305473435c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-14T09:57:28Z\\\",\\\"message\\\":\\\"nt handler 1 for removal\\\\nI1014 09:57:28.514882 5983 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1014 09:57:28.514945 5983 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1014 09:57:28.514955 5983 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from 
github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI1014 09:57:28.514898 5983 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI1014 09:57:28.514936 5983 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1014 09:57:28.515192 5983 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1014 09:57:28.515226 5983 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1014 09:57:28.515276 5983 factory.go:656] Stopping watch factory\\\\nI1014 09:57:28.515302 5983 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1014 09:57:28.515312 5983 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1014 09:57:28.515324 5983 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1014 09:57:28.515332 5983 handler.go:208] Removed *v1.EgressIP ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-14T09:57:25Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://afec3a31416d60bb03430456c28d5bcd69c8fbdd343374e4505c602dac71b3ac\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-14T09:57:30Z\\\",\\\"message\\\":\\\"qos/v1/apis/informers/externalversions/factory.go:140\\\\nI1014 09:57:30.072732 6117 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1014 09:57:30.072819 6117 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1014 09:57:30.072828 6117 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1014 09:57:30.072868 6117 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1014 09:57:30.072865 6117 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1014 
09:57:30.072876 6117 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1014 09:57:30.072883 6117 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1014 09:57:30.072901 6117 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1014 09:57:30.072908 6117 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1014 09:57:30.072893 6117 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1014 09:57:30.072951 6117 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1014 09:57:30.072957 6117 handler.go:208] Removed *v1.Node event handler 2\\\\nI1014 09:57:30.072975 6117 handler.go:208] Removed *v1.Node event handler 7\\\\nI1014 09:57:30.072993 6117 factory.go:656] Stopping watch factory\\\\nI1014 09:57:30.073012 6117 handler.go:208] Removed *v1.EgressIP ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-14T09:57:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/
lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjlwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://626a1ca2a8aa5d6c1b4078a0bc7b0aa540656abcedd8be206fbb93cdc96c1d95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjlwb\\\",\\\
"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f7d7f89348e04b617ba4ec3cf76f4fefdd36492c656aed8141f3775a7ca8e3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0f7d7f89348e04b617ba4ec3cf76f4fefdd36492c656aed8141f3775a7ca8e3b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T09:57:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T09:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjlwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T09:57:18Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hspfz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T09:57:30Z is after 2025-08-24T17:21:41Z" Oct 14 09:57:30 crc kubenswrapper[4698]: I1014 09:57:30.555250 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-pfxrp" err="failed to 
patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e00aa977-8736-4b4d-8d58-c3d13879c49a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4adf04967efc193a93227043cfb6bb56f0ec1a7af90e351b7a618e582ddd8c41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c9lxj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\
\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T09:57:18Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-pfxrp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T09:57:30Z is after 2025-08-24T17:21:41Z" Oct 14 09:57:30 crc kubenswrapper[4698]: I1014 09:57:30.571077 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:57:30 crc kubenswrapper[4698]: I1014 09:57:30.571154 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:57:30 crc kubenswrapper[4698]: I1014 09:57:30.571167 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:57:30 crc kubenswrapper[4698]: I1014 09:57:30.571186 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:57:30 crc kubenswrapper[4698]: I1014 09:57:30.571201 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:57:30Z","lastTransitionTime":"2025-10-14T09:57:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:57:30 crc kubenswrapper[4698]: I1014 09:57:30.577425 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5twvn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e9f983f-10a0-43b7-8590-346577a561ef\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://03bc09f64706818fd5d6b0d3eeea206f665d7cdb12ce6ae31c4706a3936dd0f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8x7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9090d92cdf10a884e39938f3d7908d63cb91ee3ea7832cb788aecaa47c70d753\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9090d92cdf10a884e39938f3d7908d63cb91ee3ea7832cb788aecaa47c70d753\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T09:57:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T09:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8x7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://20e7809019d6fa43f65e27abe934ed5dfc51a3afb0a2d9391fa4edd67fc03ac9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"
containerID\\\":\\\"cri-o://20e7809019d6fa43f65e27abe934ed5dfc51a3afb0a2d9391fa4edd67fc03ac9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T09:57:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8x7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c61e207f4234cf9a59bd787fc476ff4bc2d1abd6f299b2855970981e5783398\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0c61e207f4234cf9a59bd787fc476ff4bc2d1abd6f299b2855970981e5783398\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T09:57:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T09:57:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveRead
Only\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8x7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1abeca9df52e5628432a4cb0be0f212a643a644abfe303559ec7d90f3fd3955a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1abeca9df52e5628432a4cb0be0f212a643a644abfe303559ec7d90f3fd3955a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T09:57:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T09:57:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8x7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0654ea71914d478220611495dbefd1e47b1309aa18c6a05b21a4a45f0d8f1ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\"
:0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c0654ea71914d478220611495dbefd1e47b1309aa18c6a05b21a4a45f0d8f1ca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T09:57:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T09:57:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8x7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5307b806fcee628d7552a399a950a096745a8eb67c3f72c5a10854f042b992a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5307b806fcee628d7552a399a950a096745a8eb67c3f72c5a10854f042b992a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T09:57:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T09:57:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8x7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\"
,\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T09:57:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5twvn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T09:57:30Z is after 2025-08-24T17:21:41Z" Oct 14 09:57:30 crc kubenswrapper[4698]: I1014 09:57:30.598133 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T09:57:30Z is after 2025-08-24T17:21:41Z" Oct 14 09:57:30 crc kubenswrapper[4698]: I1014 09:57:30.616517 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-b7cbk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fbf10bbc-318d-4f46-83a0-fdbad9888201\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a0b9fe188f41927a342dc4928eaa5f27d151d9901af15f33ce252bdbfbb3be6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tkghj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T09:57:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-b7cbk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T09:57:30Z is after 2025-08-24T17:21:41Z" Oct 14 09:57:30 crc kubenswrapper[4698]: I1014 09:57:30.630748 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-lp4sk" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c359a8fc-1e2f-49af-8da2-719d52bd969a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://082e0edeedb2cabe07e14812f2406a98f39fdf9923f562c33ccc9a761d0e2472\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lzl92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8068aecfbfc1
56636e404dba3daa36e0bebb92ecf3c63158eaebf9a03cdc96b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lzl92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T09:57:18Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-lp4sk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T09:57:30Z is after 2025-08-24T17:21:41Z" Oct 14 09:57:30 crc kubenswrapper[4698]: I1014 09:57:30.645831 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://046b339570690752cfbe63e358065d3b58ef84c6708729abc1f590ff4c880521\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-10-14T09:57:30Z is after 2025-08-24T17:21:41Z" Oct 14 09:57:30 crc kubenswrapper[4698]: I1014 09:57:30.660810 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T09:57:30Z is after 2025-08-24T17:21:41Z" Oct 14 09:57:30 crc kubenswrapper[4698]: I1014 09:57:30.673187 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:57:30 crc kubenswrapper[4698]: I1014 09:57:30.673394 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:57:30 crc kubenswrapper[4698]: I1014 09:57:30.673508 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:57:30 crc kubenswrapper[4698]: I1014 09:57:30.673611 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:57:30 crc kubenswrapper[4698]: I1014 09:57:30.673726 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:57:30Z","lastTransitionTime":"2025-10-14T09:57:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 09:57:30 crc kubenswrapper[4698]: I1014 09:57:30.673821 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6bbd953b34291d62c5b40ab6339c4709be89d1983748419cabf1adff245af77f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"conta
inerID\\\":\\\"cri-o://6b5dcb74bf1caafb12296355d176dd1cda1c06ecea8293c0cc36d517213fe6bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T09:57:30Z is after 2025-08-24T17:21:41Z" Oct 14 09:57:30 crc kubenswrapper[4698]: I1014 09:57:30.685575 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T09:57:30Z is after 2025-08-24T17:21:41Z" Oct 14 09:57:30 crc kubenswrapper[4698]: I1014 09:57:30.703500 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"32240a01-1692-4f43-aebd-2b04d9b2435e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:56:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:56:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:56:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0fcdc7699fee72d9dbf3ad7df185ac8c884125781080739e4245128af2e41040\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d32efa8c63456d23b3824db4a4debb51fa4d13d1fd5eb6a488b9392d96073435\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://a78e7d6532fda47f50da010b729a9108961a3d5c79191517a86f00714ccb1163\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a013bd7c3d0aeb5289ee602dd8fcbff2b98aae30b99ccc8d87605732b7f469eb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a8cea8367d4864478508221648a52582ee794ca4eb7b48f112ca0d78d97c2f7e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"message\\\":\\\"file observer\\\\nW1014 09:57:17.768150 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1014 09:57:17.768243 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1014 09:57:17.769073 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-133410875/tls.crt::/tmp/serving-cert-133410875/tls.key\\\\\\\"\\\\nI1014 09:57:18.101848 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1014 09:57:18.107433 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1014 09:57:18.107456 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1014 09:57:18.107479 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1014 09:57:18.107484 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1014 09:57:18.112858 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1014 09:57:18.112875 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1014 09:57:18.112883 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1014 09:57:18.112889 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1014 09:57:18.112893 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1014 09:57:18.112896 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1014 09:57:18.112900 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1014 09:57:18.112903 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1014 09:57:18.114133 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-14T09:57:12Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2d12c9c114016d1302fa97515b0a59a82a4524b77e5d22cbec155beb6a52bce\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:00Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf7a250bbbc63f0ce78ae1c27f82643a02dc264c3b83555a4cc72a788eda52d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cf7a250bbbc63f0ce78ae1c27f82643a02dc264c3b83555a4cc72a788eda52d5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T09:56:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2025-10-14T09:56:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T09:56:59Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T09:57:30Z is after 2025-08-24T17:21:41Z" Oct 14 09:57:30 crc kubenswrapper[4698]: I1014 09:57:30.716052 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"32a7378b-ad21-49a8-9382-e7e5fbffb2f5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:56:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:56:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45beb2bbd344fbd265eca554410f31c493b55815bcac8dbb8c30c592b40dcfe2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9a
d6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9a23f4949446d47321258b172e07f6c592cd9921650920f8af93a5d749a61dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:56:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1fc70d614848f0b4bc342659099f9c617db052ebb995716f9cdb1113f4f78901\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true
,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://71c3764056ab55aa9f75e0bf62c0c7eed5440e9d580087ca9ce266611dd8c292\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T09:56:59Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T09:57:30Z is after 2025-08-24T17:21:41Z" Oct 14 09:57:30 crc kubenswrapper[4698]: I1014 09:57:30.725587 4698 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aaee81debae3e251e075e61aca075c075d32fd976f1977adecba0c834e89cabb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T09:57:30Z is after 2025-08-24T17:21:41Z" Oct 14 09:57:30 crc kubenswrapper[4698]: I1014 09:57:30.732780 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8rj7q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b3d7ebe7-24ac-4bb6-be80-db147dc1c604\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e44a1413e71cab92d1b5ee4c81cc65edc9925fa1e888d5d5670e55e1b60ee81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\
\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-95sjn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T09:57:18Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8rj7q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T09:57:30Z is after 2025-08-24T17:21:41Z" Oct 14 09:57:30 crc kubenswrapper[4698]: I1014 09:57:30.776054 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:57:30 crc kubenswrapper[4698]: I1014 09:57:30.776081 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:57:30 crc kubenswrapper[4698]: I1014 09:57:30.776091 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:57:30 crc kubenswrapper[4698]: I1014 09:57:30.776107 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:57:30 crc kubenswrapper[4698]: I1014 09:57:30.776118 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:57:30Z","lastTransitionTime":"2025-10-14T09:57:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:57:30 crc kubenswrapper[4698]: I1014 09:57:30.879136 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:57:30 crc kubenswrapper[4698]: I1014 09:57:30.879218 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:57:30 crc kubenswrapper[4698]: I1014 09:57:30.879242 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:57:30 crc kubenswrapper[4698]: I1014 09:57:30.879269 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:57:30 crc kubenswrapper[4698]: I1014 09:57:30.879292 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:57:30Z","lastTransitionTime":"2025-10-14T09:57:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:57:30 crc kubenswrapper[4698]: I1014 09:57:30.981988 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:57:30 crc kubenswrapper[4698]: I1014 09:57:30.982051 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:57:30 crc kubenswrapper[4698]: I1014 09:57:30.982074 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:57:30 crc kubenswrapper[4698]: I1014 09:57:30.982104 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:57:30 crc kubenswrapper[4698]: I1014 09:57:30.982126 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:57:30Z","lastTransitionTime":"2025-10-14T09:57:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 09:57:31 crc kubenswrapper[4698]: I1014 09:57:31.015945 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 14 09:57:31 crc kubenswrapper[4698]: I1014 09:57:31.016018 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 14 09:57:31 crc kubenswrapper[4698]: I1014 09:57:31.016051 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 14 09:57:31 crc kubenswrapper[4698]: E1014 09:57:31.016163 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 14 09:57:31 crc kubenswrapper[4698]: E1014 09:57:31.016276 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 14 09:57:31 crc kubenswrapper[4698]: E1014 09:57:31.016447 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 14 09:57:31 crc kubenswrapper[4698]: I1014 09:57:31.085391 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:57:31 crc kubenswrapper[4698]: I1014 09:57:31.085976 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:57:31 crc kubenswrapper[4698]: I1014 09:57:31.086021 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:57:31 crc kubenswrapper[4698]: I1014 09:57:31.086044 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:57:31 crc kubenswrapper[4698]: I1014 09:57:31.086057 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:57:31Z","lastTransitionTime":"2025-10-14T09:57:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:57:31 crc kubenswrapper[4698]: I1014 09:57:31.188636 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:57:31 crc kubenswrapper[4698]: I1014 09:57:31.188691 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:57:31 crc kubenswrapper[4698]: I1014 09:57:31.188709 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:57:31 crc kubenswrapper[4698]: I1014 09:57:31.188734 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:57:31 crc kubenswrapper[4698]: I1014 09:57:31.188751 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:57:31Z","lastTransitionTime":"2025-10-14T09:57:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:57:31 crc kubenswrapper[4698]: I1014 09:57:31.284442 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-hspfz_d02f5359-81fc-4261-b995-e58c78bcec0e/ovnkube-controller/1.log" Oct 14 09:57:31 crc kubenswrapper[4698]: I1014 09:57:31.289936 4698 scope.go:117] "RemoveContainer" containerID="afec3a31416d60bb03430456c28d5bcd69c8fbdd343374e4505c602dac71b3ac" Oct 14 09:57:31 crc kubenswrapper[4698]: E1014 09:57:31.290289 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-hspfz_openshift-ovn-kubernetes(d02f5359-81fc-4261-b995-e58c78bcec0e)\"" pod="openshift-ovn-kubernetes/ovnkube-node-hspfz" podUID="d02f5359-81fc-4261-b995-e58c78bcec0e" Oct 14 09:57:31 crc kubenswrapper[4698]: I1014 09:57:31.290572 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:57:31 crc kubenswrapper[4698]: I1014 09:57:31.290638 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:57:31 crc kubenswrapper[4698]: I1014 09:57:31.290650 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:57:31 crc kubenswrapper[4698]: I1014 09:57:31.290665 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:57:31 crc kubenswrapper[4698]: I1014 09:57:31.290679 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:57:31Z","lastTransitionTime":"2025-10-14T09:57:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI 
configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 09:57:31 crc kubenswrapper[4698]: I1014 09:57:31.310889 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T09:57:31Z is after 2025-08-24T17:21:41Z" Oct 14 09:57:31 crc kubenswrapper[4698]: I1014 09:57:31.331366 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-b7cbk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fbf10bbc-318d-4f46-83a0-fdbad9888201\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a0b9fe188f41927a342dc4928eaa5f27d151d9901af15f33ce252bdbfbb3be6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tkghj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T09:57:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-b7cbk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T09:57:31Z is after 2025-08-24T17:21:41Z" Oct 14 09:57:31 crc kubenswrapper[4698]: I1014 09:57:31.347874 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-lp4sk" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c359a8fc-1e2f-49af-8da2-719d52bd969a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://082e0edeedb2cabe07e14812f2406a98f39fdf9923f562c33ccc9a761d0e2472\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lzl92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8068aecfbfc1
56636e404dba3daa36e0bebb92ecf3c63158eaebf9a03cdc96b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lzl92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T09:57:18Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-lp4sk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T09:57:31Z is after 2025-08-24T17:21:41Z" Oct 14 09:57:31 crc kubenswrapper[4698]: I1014 09:57:31.359926 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ndfs7"] Oct 14 09:57:31 crc kubenswrapper[4698]: I1014 09:57:31.360345 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ndfs7" Oct 14 09:57:31 crc kubenswrapper[4698]: I1014 09:57:31.364513 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Oct 14 09:57:31 crc kubenswrapper[4698]: I1014 09:57:31.366254 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Oct 14 09:57:31 crc kubenswrapper[4698]: I1014 09:57:31.370954 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://046b339570690752cfbe63e358065d3b58ef84c6708729abc1f590ff4c880521\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:19Z\\\"}},\\\"volumeMounts\\\"
:[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T09:57:31Z is after 2025-08-24T17:21:41Z" Oct 14 09:57:31 crc kubenswrapper[4698]: I1014 09:57:31.384107 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T09:57:31Z is after 2025-08-24T17:21:41Z" Oct 14 09:57:31 crc kubenswrapper[4698]: I1014 09:57:31.393472 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:57:31 crc kubenswrapper[4698]: I1014 09:57:31.393618 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:57:31 crc kubenswrapper[4698]: I1014 09:57:31.393700 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:57:31 crc kubenswrapper[4698]: I1014 
09:57:31.393810 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:57:31 crc kubenswrapper[4698]: I1014 09:57:31.393907 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:57:31Z","lastTransitionTime":"2025-10-14T09:57:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 09:57:31 crc kubenswrapper[4698]: I1014 09:57:31.401499 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6bbd953b34291d62c5b40ab6339c4709be89d1983748419cabf1adff245af77f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,
\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b5dcb74bf1caafb12296355d176dd1cda1c06ecea8293c0cc36d517213fe6bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T09:57:31Z is after 2025-08-24T17:21:41Z" Oct 14 09:57:31 crc kubenswrapper[4698]: I1014 09:57:31.415205 4698 
status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T09:57:31Z is after 2025-08-24T17:21:41Z" Oct 14 09:57:31 crc kubenswrapper[4698]: I1014 09:57:31.431949 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"32240a01-1692-4f43-aebd-2b04d9b2435e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:56:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:56:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:56:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0fcdc7699fee72d9dbf3ad7df185ac8c884125781080739e4245128af2e41040\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d32efa8c63456d23b3824db4a4debb51fa4d13d1fd5eb6a488b9392d96073435\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://a78e7d6532fda47f50da010b729a9108961a3d5c79191517a86f00714ccb1163\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a013bd7c3d0aeb5289ee602dd8fcbff2b98aae30b99ccc8d87605732b7f469eb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a8cea8367d4864478508221648a52582ee794ca4eb7b48f112ca0d78d97c2f7e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"message\\\":\\\"file observer\\\\nW1014 09:57:17.768150 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1014 09:57:17.768243 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1014 09:57:17.769073 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-133410875/tls.crt::/tmp/serving-cert-133410875/tls.key\\\\\\\"\\\\nI1014 09:57:18.101848 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1014 09:57:18.107433 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1014 09:57:18.107456 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1014 09:57:18.107479 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1014 09:57:18.107484 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1014 09:57:18.112858 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1014 09:57:18.112875 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1014 09:57:18.112883 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1014 09:57:18.112889 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1014 09:57:18.112893 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1014 09:57:18.112896 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1014 09:57:18.112900 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1014 09:57:18.112903 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1014 09:57:18.114133 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-14T09:57:12Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2d12c9c114016d1302fa97515b0a59a82a4524b77e5d22cbec155beb6a52bce\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:00Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf7a250bbbc63f0ce78ae1c27f82643a02dc264c3b83555a4cc72a788eda52d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cf7a250bbbc63f0ce78ae1c27f82643a02dc264c3b83555a4cc72a788eda52d5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T09:56:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2025-10-14T09:56:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T09:56:59Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T09:57:31Z is after 2025-08-24T17:21:41Z" Oct 14 09:57:31 crc kubenswrapper[4698]: I1014 09:57:31.446942 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"32a7378b-ad21-49a8-9382-e7e5fbffb2f5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:56:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:56:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45beb2bbd344fbd265eca554410f31c493b55815bcac8dbb8c30c592b40dcfe2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9a
d6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9a23f4949446d47321258b172e07f6c592cd9921650920f8af93a5d749a61dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:56:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1fc70d614848f0b4bc342659099f9c617db052ebb995716f9cdb1113f4f78901\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true
,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://71c3764056ab55aa9f75e0bf62c0c7eed5440e9d580087ca9ce266611dd8c292\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T09:56:59Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T09:57:31Z is after 2025-08-24T17:21:41Z" Oct 14 09:57:31 crc kubenswrapper[4698]: I1014 09:57:31.460630 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"env-overrides\" (UniqueName: \"kubernetes.io/configmap/dba065f3-2084-442a-9f77-a1dfb007aa0a-env-overrides\") pod \"ovnkube-control-plane-749d76644c-ndfs7\" (UID: \"dba065f3-2084-442a-9f77-a1dfb007aa0a\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ndfs7" Oct 14 09:57:31 crc kubenswrapper[4698]: I1014 09:57:31.460680 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/dba065f3-2084-442a-9f77-a1dfb007aa0a-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-ndfs7\" (UID: \"dba065f3-2084-442a-9f77-a1dfb007aa0a\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ndfs7" Oct 14 09:57:31 crc kubenswrapper[4698]: I1014 09:57:31.460730 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wtc5t\" (UniqueName: \"kubernetes.io/projected/dba065f3-2084-442a-9f77-a1dfb007aa0a-kube-api-access-wtc5t\") pod \"ovnkube-control-plane-749d76644c-ndfs7\" (UID: \"dba065f3-2084-442a-9f77-a1dfb007aa0a\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ndfs7" Oct 14 09:57:31 crc kubenswrapper[4698]: I1014 09:57:31.460973 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/dba065f3-2084-442a-9f77-a1dfb007aa0a-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-ndfs7\" (UID: \"dba065f3-2084-442a-9f77-a1dfb007aa0a\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ndfs7" Oct 14 09:57:31 crc kubenswrapper[4698]: I1014 09:57:31.465714 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aaee81debae3e251e075e61aca075c075d32fd976f1977adecba0c834e89cabb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-10-14T09:57:31Z is after 2025-08-24T17:21:41Z" Oct 14 09:57:31 crc kubenswrapper[4698]: I1014 09:57:31.478408 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8rj7q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b3d7ebe7-24ac-4bb6-be80-db147dc1c604\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e44a1413e71cab92d1b5ee4c81cc65edc9925fa1e888d5d5670e55e1b60ee81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-95sjn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T09:57:18Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8rj7q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T09:57:31Z is after 2025-08-24T17:21:41Z" Oct 14 09:57:31 crc kubenswrapper[4698]: I1014 09:57:31.495932 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:57:31 crc kubenswrapper[4698]: I1014 09:57:31.495980 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:57:31 crc kubenswrapper[4698]: I1014 09:57:31.495991 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:57:31 crc kubenswrapper[4698]: I1014 09:57:31.496012 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:57:31 crc kubenswrapper[4698]: I1014 09:57:31.496023 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:57:31Z","lastTransitionTime":"2025-10-14T09:57:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:57:31 crc kubenswrapper[4698]: I1014 09:57:31.499595 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hspfz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d02f5359-81fc-4261-b995-e58c78bcec0e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://baefa979aee6f8fbe41df9542964f3c7b2a4331ec5111d2f622ec7ca41a0d96e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjlwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://876998f4dcb79a6bc0faa35871730f5d2f9a07f94baf6da29ed9a9e794705ee5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjlwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e5759411e3d5ea6fa7515b4cd1255735c870b8c035fe586a38d2f436b752118\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjlwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6e9733c3f4d25662c9da6201d0feee797de24acebb7d6e7c67d72709aae95db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:21Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjlwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://90b4b75613b321be60e549ab6ab2eaaab57fd6b49aa01f5254a0e83ac2e0167e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjlwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://11274a26e2292cdf2b793e6290657ed3cc737d1454b39f33b8f0272f67f70f3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjlwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://afec3a31416d60bb03430456c28d5bcd69c8fbdd343374e4505c602dac71b3ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://afec3a31416d60bb03430456c28d5bcd69c8fbdd343374e4505c602dac71b3ac\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-14T09:57:30Z\\\",\\\"message\\\":\\\"qos/v1/apis/informers/externalversions/factory.go:140\\\\nI1014 09:57:30.072732 6117 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1014 09:57:30.072819 6117 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1014 09:57:30.072828 6117 handler.go:190] Sending *v1.Namespace event handler 5 for 
removal\\\\nI1014 09:57:30.072868 6117 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1014 09:57:30.072865 6117 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1014 09:57:30.072876 6117 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1014 09:57:30.072883 6117 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1014 09:57:30.072901 6117 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1014 09:57:30.072908 6117 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1014 09:57:30.072893 6117 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1014 09:57:30.072951 6117 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1014 09:57:30.072957 6117 handler.go:208] Removed *v1.Node event handler 2\\\\nI1014 09:57:30.072975 6117 handler.go:208] Removed *v1.Node event handler 7\\\\nI1014 09:57:30.072993 6117 factory.go:656] Stopping watch factory\\\\nI1014 09:57:30.073012 6117 handler.go:208] Removed *v1.EgressIP ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-14T09:57:29Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-hspfz_openshift-ovn-kubernetes(d02f5359-81fc-4261-b995-e58c78bcec0e)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjlwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://626a1ca2a8aa5d6c1b4078a0bc7b0aa540656abcedd8be206fbb93cdc96c1d95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjlwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f7d7f89348e04b617ba4ec3cf76f4fefdd36492c656aed8141f3775a7ca8e3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0f7d7f89348e04b617
ba4ec3cf76f4fefdd36492c656aed8141f3775a7ca8e3b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T09:57:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T09:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjlwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T09:57:18Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hspfz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T09:57:31Z is after 2025-08-24T17:21:41Z" Oct 14 09:57:31 crc kubenswrapper[4698]: I1014 09:57:31.510686 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-pfxrp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e00aa977-8736-4b4d-8d58-c3d13879c49a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4adf04967efc193a93227043cfb6bb56f0ec1a7af90e351b7a618e582ddd8c41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c9lxj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T09:57:18Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-pfxrp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T09:57:31Z is after 2025-08-24T17:21:41Z" Oct 14 09:57:31 crc kubenswrapper[4698]: I1014 09:57:31.528035 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5twvn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e9f983f-10a0-43b7-8590-346577a561ef\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://03bc09f64706818fd5d6b0d3eeea206f665d7cdb12ce6ae31c4706a3936dd0f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release
-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8x7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9090d92cdf10a884e39938f3d7908d63cb91ee3ea7832cb788aecaa47c70d753\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9090d92cdf10a884e39938f3d7908d63cb91ee3ea7832cb788aecaa47c70d753\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T09:57:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T09:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8x7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://20e7809019d6
fa43f65e27abe934ed5dfc51a3afb0a2d9391fa4edd67fc03ac9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://20e7809019d6fa43f65e27abe934ed5dfc51a3afb0a2d9391fa4edd67fc03ac9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T09:57:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8x7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c61e207f4234cf9a59bd787fc476ff4bc2d1abd6f299b2855970981e5783398\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0c61e207f423
4cf9a59bd787fc476ff4bc2d1abd6f299b2855970981e5783398\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T09:57:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T09:57:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8x7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1abeca9df52e5628432a4cb0be0f212a643a644abfe303559ec7d90f3fd3955a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1abeca9df52e5628432a4cb0be0f212a643a644abfe303559ec7d90f3fd3955a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T09:57:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T09:57:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8x7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\
\\"}]},{\\\"containerID\\\":\\\"cri-o://c0654ea71914d478220611495dbefd1e47b1309aa18c6a05b21a4a45f0d8f1ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c0654ea71914d478220611495dbefd1e47b1309aa18c6a05b21a4a45f0d8f1ca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T09:57:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T09:57:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8x7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5307b806fcee628d7552a399a950a096745a8eb67c3f72c5a10854f042b992a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5307b806fcee628d7552a399a950a096745a8eb67c3f72c5a10854f042b992a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"202
5-10-14T09:57:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T09:57:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8x7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T09:57:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5twvn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T09:57:31Z is after 2025-08-24T17:21:41Z" Oct 14 09:57:31 crc kubenswrapper[4698]: I1014 09:57:31.540012 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-pfxrp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e00aa977-8736-4b4d-8d58-c3d13879c49a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4adf04967efc193a93227043cfb6bb56f0ec1a7af90e351b7a618e582ddd8c41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c9lxj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T09:57:18Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-pfxrp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T09:57:31Z is after 2025-08-24T17:21:41Z" Oct 14 09:57:31 crc kubenswrapper[4698]: I1014 09:57:31.561926 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/dba065f3-2084-442a-9f77-a1dfb007aa0a-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-ndfs7\" (UID: \"dba065f3-2084-442a-9f77-a1dfb007aa0a\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ndfs7" Oct 14 09:57:31 crc kubenswrapper[4698]: I1014 09:57:31.562016 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/dba065f3-2084-442a-9f77-a1dfb007aa0a-env-overrides\") pod \"ovnkube-control-plane-749d76644c-ndfs7\" (UID: \"dba065f3-2084-442a-9f77-a1dfb007aa0a\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ndfs7" Oct 14 09:57:31 crc kubenswrapper[4698]: I1014 09:57:31.562071 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/dba065f3-2084-442a-9f77-a1dfb007aa0a-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-ndfs7\" (UID: \"dba065f3-2084-442a-9f77-a1dfb007aa0a\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ndfs7" Oct 14 09:57:31 crc kubenswrapper[4698]: I1014 09:57:31.562173 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-wtc5t\" (UniqueName: \"kubernetes.io/projected/dba065f3-2084-442a-9f77-a1dfb007aa0a-kube-api-access-wtc5t\") pod \"ovnkube-control-plane-749d76644c-ndfs7\" (UID: \"dba065f3-2084-442a-9f77-a1dfb007aa0a\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ndfs7" Oct 14 09:57:31 crc kubenswrapper[4698]: I1014 09:57:31.563102 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/dba065f3-2084-442a-9f77-a1dfb007aa0a-env-overrides\") pod \"ovnkube-control-plane-749d76644c-ndfs7\" (UID: \"dba065f3-2084-442a-9f77-a1dfb007aa0a\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ndfs7" Oct 14 09:57:31 crc kubenswrapper[4698]: I1014 09:57:31.563398 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/dba065f3-2084-442a-9f77-a1dfb007aa0a-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-ndfs7\" (UID: \"dba065f3-2084-442a-9f77-a1dfb007aa0a\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ndfs7" Oct 14 09:57:31 crc kubenswrapper[4698]: I1014 09:57:31.564699 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5twvn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e9f983f-10a0-43b7-8590-346577a561ef\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://03bc09f64706818fd5d6b0d3eeea206f665d7cdb12ce6ae31c4706a3936dd0f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8x7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9090d92cdf10a884e39938f3d7908d63cb91ee3ea7832cb788aecaa47c70d753\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9090d92cdf10a884e39938f3d7908d63cb91ee3ea7832cb788aecaa47c70d753\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T09:57:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T09:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8x7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://20e7809019d6fa43f65e27abe934ed5dfc51a3afb0a2d9391fa4edd67fc03ac9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://20e7809019d6fa43f65e27abe934ed5dfc51a3afb0a2d9391fa4edd67fc03ac9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T09:57:20Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8x7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c61e207f4234cf9a59bd787fc476ff4bc2d1abd6f299b2855970981e5783398\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0c61e207f4234cf9a59bd787fc476ff4bc2d1abd6f299b2855970981e5783398\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T09:57:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T09:57:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8x7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1abec
a9df52e5628432a4cb0be0f212a643a644abfe303559ec7d90f3fd3955a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1abeca9df52e5628432a4cb0be0f212a643a644abfe303559ec7d90f3fd3955a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T09:57:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T09:57:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8x7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0654ea71914d478220611495dbefd1e47b1309aa18c6a05b21a4a45f0d8f1ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c0654ea71914d478220611495dbefd1e47b1309aa18c6a05b21a4a45f0d8f1ca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T09:57:23Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2025-10-14T09:57:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8x7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5307b806fcee628d7552a399a950a096745a8eb67c3f72c5a10854f042b992a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5307b806fcee628d7552a399a950a096745a8eb67c3f72c5a10854f042b992a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T09:57:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T09:57:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8x7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T09:57:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5twvn\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T09:57:31Z is after 2025-08-24T17:21:41Z" Oct 14 09:57:31 crc kubenswrapper[4698]: I1014 09:57:31.573884 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/dba065f3-2084-442a-9f77-a1dfb007aa0a-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-ndfs7\" (UID: \"dba065f3-2084-442a-9f77-a1dfb007aa0a\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ndfs7" Oct 14 09:57:31 crc kubenswrapper[4698]: I1014 09:57:31.584112 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wtc5t\" (UniqueName: \"kubernetes.io/projected/dba065f3-2084-442a-9f77-a1dfb007aa0a-kube-api-access-wtc5t\") pod \"ovnkube-control-plane-749d76644c-ndfs7\" (UID: \"dba065f3-2084-442a-9f77-a1dfb007aa0a\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ndfs7" Oct 14 09:57:31 crc kubenswrapper[4698]: I1014 09:57:31.589801 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hspfz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d02f5359-81fc-4261-b995-e58c78bcec0e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://baefa979aee6f8fbe41df9542964f3c7b2a4331ec5111d2f622ec7ca41a0d96e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjlwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://876998f4dcb79a6bc0faa35871730f5d2f9a07f94baf6da29ed9a9e794705ee5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjlwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e5759411e3d5ea6fa7515b4cd1255735c870b8c035fe586a38d2f436b752118\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjlwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6e9733c3f4d25662c9da6201d0feee797de24acebb7d6e7c67d72709aae95db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:21Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjlwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://90b4b75613b321be60e549ab6ab2eaaab57fd6b49aa01f5254a0e83ac2e0167e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjlwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://11274a26e2292cdf2b793e6290657ed3cc737d1454b39f33b8f0272f67f70f3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjlwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://afec3a31416d60bb03430456c28d5bcd69c8fbdd343374e4505c602dac71b3ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://afec3a31416d60bb03430456c28d5bcd69c8fbdd343374e4505c602dac71b3ac\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-14T09:57:30Z\\\",\\\"message\\\":\\\"qos/v1/apis/informers/externalversions/factory.go:140\\\\nI1014 09:57:30.072732 6117 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1014 09:57:30.072819 6117 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1014 09:57:30.072828 6117 handler.go:190] Sending *v1.Namespace event handler 5 for 
removal\\\\nI1014 09:57:30.072868 6117 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1014 09:57:30.072865 6117 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1014 09:57:30.072876 6117 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1014 09:57:30.072883 6117 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1014 09:57:30.072901 6117 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1014 09:57:30.072908 6117 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1014 09:57:30.072893 6117 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1014 09:57:30.072951 6117 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1014 09:57:30.072957 6117 handler.go:208] Removed *v1.Node event handler 2\\\\nI1014 09:57:30.072975 6117 handler.go:208] Removed *v1.Node event handler 7\\\\nI1014 09:57:30.072993 6117 factory.go:656] Stopping watch factory\\\\nI1014 09:57:30.073012 6117 handler.go:208] Removed *v1.EgressIP ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-14T09:57:29Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-hspfz_openshift-ovn-kubernetes(d02f5359-81fc-4261-b995-e58c78bcec0e)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjlwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://626a1ca2a8aa5d6c1b4078a0bc7b0aa540656abcedd8be206fbb93cdc96c1d95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjlwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f7d7f89348e04b617ba4ec3cf76f4fefdd36492c656aed8141f3775a7ca8e3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0f7d7f89348e04b617
ba4ec3cf76f4fefdd36492c656aed8141f3775a7ca8e3b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T09:57:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T09:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjlwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T09:57:18Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hspfz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T09:57:31Z is after 2025-08-24T17:21:41Z" Oct 14 09:57:31 crc kubenswrapper[4698]: I1014 09:57:31.598940 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:57:31 crc kubenswrapper[4698]: I1014 09:57:31.598989 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:57:31 crc kubenswrapper[4698]: I1014 09:57:31.599000 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:57:31 crc kubenswrapper[4698]: I1014 09:57:31.599024 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:57:31 crc kubenswrapper[4698]: I1014 09:57:31.599046 4698 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:57:31Z","lastTransitionTime":"2025-10-14T09:57:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 09:57:31 crc kubenswrapper[4698]: I1014 09:57:31.606897 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ndfs7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dba065f3-2084-442a-9f77-a1dfb007aa0a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:31Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:31Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wtc5t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wtc5t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T09:57:31Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-ndfs7\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T09:57:31Z is after 2025-08-24T17:21:41Z" Oct 14 09:57:31 crc kubenswrapper[4698]: I1014 09:57:31.626749 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://046b339570690752cfbe63e358065d3b58ef84c6708729abc1f590ff4c880521\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\
\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T09:57:31Z is after 2025-08-24T17:21:41Z" Oct 14 09:57:31 crc kubenswrapper[4698]: I1014 09:57:31.643181 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The 
container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T09:57:31Z is after 2025-08-24T17:21:41Z" Oct 14 09:57:31 crc kubenswrapper[4698]: I1014 09:57:31.657204 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6bbd953b34291d62c5b40ab6339c4709be89d1983748419cabf1adff245af77f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b5dcb74bf1caafb12296355d176dd1cda1c06ecea8293c0cc36d517213fe6bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T09:57:31Z is after 2025-08-24T17:21:41Z" Oct 14 09:57:31 crc kubenswrapper[4698]: I1014 09:57:31.673172 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T09:57:31Z is after 2025-08-24T17:21:41Z" Oct 14 09:57:31 crc kubenswrapper[4698]: I1014 09:57:31.679407 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ndfs7" Oct 14 09:57:31 crc kubenswrapper[4698]: I1014 09:57:31.693505 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-b7cbk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fbf10bbc-318d-4f46-83a0-fdbad9888201\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a0b9fe188f41927a342dc4928eaa5f27d151d9901af15f33ce252bdbfbb3be6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-
release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tkghj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T09:57:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-b7cbk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-10-14T09:57:31Z is after 2025-08-24T17:21:41Z" Oct 14 09:57:31 crc kubenswrapper[4698]: W1014 09:57:31.701589 4698 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddba065f3_2084_442a_9f77_a1dfb007aa0a.slice/crio-42d3174fd0efeac8c6c310ec81d2c3b7ae3c18bbdf150ba663288ff049b6393e WatchSource:0}: Error finding container 42d3174fd0efeac8c6c310ec81d2c3b7ae3c18bbdf150ba663288ff049b6393e: Status 404 returned error can't find the container with id 42d3174fd0efeac8c6c310ec81d2c3b7ae3c18bbdf150ba663288ff049b6393e Oct 14 09:57:31 crc kubenswrapper[4698]: I1014 09:57:31.702062 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:57:31 crc kubenswrapper[4698]: I1014 09:57:31.702118 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:57:31 crc kubenswrapper[4698]: I1014 09:57:31.702133 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:57:31 crc kubenswrapper[4698]: I1014 09:57:31.702159 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:57:31 crc kubenswrapper[4698]: I1014 09:57:31.702173 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:57:31Z","lastTransitionTime":"2025-10-14T09:57:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:57:31 crc kubenswrapper[4698]: I1014 09:57:31.716539 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-lp4sk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c359a8fc-1e2f-49af-8da2-719d52bd969a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://082e0edeedb2cabe07e14812f2406a98f39fdf9923f562c33ccc9a761d0e2472\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"}
,{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lzl92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8068aecfbfc156636e404dba3daa36e0bebb92ecf3c63158eaebf9a03cdc96b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lzl92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T09:57:18Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-lp4sk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T09:57:31Z is after 2025-08-24T17:21:41Z" Oct 14 09:57:31 crc kubenswrapper[4698]: I1014 09:57:31.738214 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"32240a01-1692-4f43-aebd-2b04d9b2435e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:56:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:56:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:56:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0fcdc7699fee72d9dbf3ad7df185ac8c884125781080739e4245128af2e41040\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d32efa8c63456d23b3824db4a4debb51fa4d13d1fd5eb6a488b9392d96073435\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a78e7d6532fda47f50da010b729a9108961a3d5c79191517a86f00714ccb1163\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a013bd7c3d0aeb5289ee602dd8fcbff2b98aae30b99ccc8d87605732b7f469eb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a8cea8367d4864478508221648a52582ee794ca4eb7b48f112ca0d78d97c2f7e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"message\\\":\\\"file observer\\\\nW1014 09:57:17.768150 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1014 09:57:17.768243 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1014 09:57:17.769073 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-133410875/tls.crt::/tmp/serving-cert-133410875/tls.key\\\\\\\"\\\\nI1014 09:57:18.101848 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1014 09:57:18.107433 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1014 09:57:18.107456 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1014 09:57:18.107479 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1014 09:57:18.107484 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1014 09:57:18.112858 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1014 09:57:18.112875 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1014 09:57:18.112883 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1014 09:57:18.112889 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1014 09:57:18.112893 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1014 
09:57:18.112896 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1014 09:57:18.112900 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1014 09:57:18.112903 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1014 09:57:18.114133 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-14T09:57:12Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2d12c9c114016d1302fa97515b0a59a82a4524b77e5d22cbec155beb6a52bce\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:00Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf7a250bbbc63f0ce78ae1c27f82643a02dc264c3b83555a4cc72a788eda52d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc
35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cf7a250bbbc63f0ce78ae1c27f82643a02dc264c3b83555a4cc72a788eda52d5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T09:56:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T09:56:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T09:56:59Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T09:57:31Z is after 2025-08-24T17:21:41Z" Oct 14 09:57:31 crc kubenswrapper[4698]: I1014 09:57:31.773315 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"32a7378b-ad21-49a8-9382-e7e5fbffb2f5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:56:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:56:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45beb2bbd344fbd265eca554410f31c493b55815bcac8dbb8c30c592b40dcfe2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9a23f4949446d47321258b172e07f6c592cd9921650920f8af93a5d749a61dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:56:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1fc70d614848f0b4bc342659099f9c617db052ebb995716f9cdb1113f4f78901\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://71c3764056ab55aa9f75e0bf62c0c7eed5440e9d580087ca9ce266611dd8c292\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-10-14T09:57:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T09:56:59Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T09:57:31Z is after 2025-08-24T17:21:41Z" Oct 14 09:57:31 crc kubenswrapper[4698]: I1014 09:57:31.807373 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:57:31 crc kubenswrapper[4698]: I1014 09:57:31.807606 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:57:31 crc kubenswrapper[4698]: I1014 09:57:31.807826 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:57:31 crc kubenswrapper[4698]: I1014 09:57:31.807935 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:57:31 crc kubenswrapper[4698]: I1014 09:57:31.808043 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:57:31Z","lastTransitionTime":"2025-10-14T09:57:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 09:57:31 crc kubenswrapper[4698]: I1014 09:57:31.818015 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T09:57:31Z is after 2025-08-24T17:21:41Z" Oct 14 09:57:31 crc kubenswrapper[4698]: I1014 09:57:31.838011 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aaee81debae3e251e075e61aca075c075d32fd976f1977adecba0c834e89cabb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-10-14T09:57:31Z is after 2025-08-24T17:21:41Z" Oct 14 09:57:31 crc kubenswrapper[4698]: I1014 09:57:31.847601 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8rj7q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b3d7ebe7-24ac-4bb6-be80-db147dc1c604\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e44a1413e71cab92d1b5ee4c81cc65edc9925fa1e888d5d5670e55e1b60ee81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-95sjn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T09:57:18Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8rj7q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T09:57:31Z is after 2025-08-24T17:21:41Z" Oct 14 09:57:31 crc kubenswrapper[4698]: I1014 09:57:31.910576 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:57:31 crc kubenswrapper[4698]: I1014 09:57:31.910618 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:57:31 crc kubenswrapper[4698]: I1014 09:57:31.910627 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:57:31 crc kubenswrapper[4698]: I1014 09:57:31.910642 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:57:31 crc kubenswrapper[4698]: I1014 09:57:31.910652 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:57:31Z","lastTransitionTime":"2025-10-14T09:57:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:57:32 crc kubenswrapper[4698]: I1014 09:57:32.013535 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:57:32 crc kubenswrapper[4698]: I1014 09:57:32.013576 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:57:32 crc kubenswrapper[4698]: I1014 09:57:32.013587 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:57:32 crc kubenswrapper[4698]: I1014 09:57:32.013604 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:57:32 crc kubenswrapper[4698]: I1014 09:57:32.013618 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:57:32Z","lastTransitionTime":"2025-10-14T09:57:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:57:32 crc kubenswrapper[4698]: I1014 09:57:32.116745 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:57:32 crc kubenswrapper[4698]: I1014 09:57:32.116808 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:57:32 crc kubenswrapper[4698]: I1014 09:57:32.116821 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:57:32 crc kubenswrapper[4698]: I1014 09:57:32.116842 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:57:32 crc kubenswrapper[4698]: I1014 09:57:32.116854 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:57:32Z","lastTransitionTime":"2025-10-14T09:57:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 09:57:32 crc kubenswrapper[4698]: I1014 09:57:32.142799 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/network-metrics-daemon-jbpnj"] Oct 14 09:57:32 crc kubenswrapper[4698]: I1014 09:57:32.143307 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jbpnj" Oct 14 09:57:32 crc kubenswrapper[4698]: E1014 09:57:32.143376 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-jbpnj" podUID="41f5ac86-35f8-416c-bbfe-1e182975ec5c" Oct 14 09:57:32 crc kubenswrapper[4698]: I1014 09:57:32.161423 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T09:57:32Z is after 2025-08-24T17:21:41Z" Oct 14 09:57:32 crc kubenswrapper[4698]: I1014 09:57:32.167217 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7f52v\" (UniqueName: \"kubernetes.io/projected/41f5ac86-35f8-416c-bbfe-1e182975ec5c-kube-api-access-7f52v\") pod \"network-metrics-daemon-jbpnj\" (UID: \"41f5ac86-35f8-416c-bbfe-1e182975ec5c\") " pod="openshift-multus/network-metrics-daemon-jbpnj" Oct 14 09:57:32 crc kubenswrapper[4698]: I1014 09:57:32.167278 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/41f5ac86-35f8-416c-bbfe-1e182975ec5c-metrics-certs\") pod \"network-metrics-daemon-jbpnj\" (UID: \"41f5ac86-35f8-416c-bbfe-1e182975ec5c\") " pod="openshift-multus/network-metrics-daemon-jbpnj" Oct 14 09:57:32 crc kubenswrapper[4698]: I1014 09:57:32.178710 4698 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:57:32 crc kubenswrapper[4698]: I1014 09:57:32.178785 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:57:32 crc kubenswrapper[4698]: I1014 09:57:32.178801 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:57:32 crc kubenswrapper[4698]: I1014 09:57:32.178826 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:57:32 crc kubenswrapper[4698]: I1014 09:57:32.178848 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:57:32Z","lastTransitionTime":"2025-10-14T09:57:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:57:32 crc kubenswrapper[4698]: I1014 09:57:32.191010 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-b7cbk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fbf10bbc-318d-4f46-83a0-fdbad9888201\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a0b9fe188f41927a342dc4928eaa5f27d151d9901af15f33ce252bdbfbb3be6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\
",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tkghj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T09:57:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-b7cbk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T09:57:32Z 
is after 2025-08-24T17:21:41Z" Oct 14 09:57:32 crc kubenswrapper[4698]: E1014 09:57:32.209963 4698 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-14T09:57:32Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-14T09:57:32Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:32Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-14T09:57:32Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-14T09:57:32Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:32Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ec98e803-8937-4fdb-8662-3488c6a305f2\\\",\\\"systemUUID\\\":\\\"8e872109-adee-4b6d-91bf-d9ced28af93f\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T09:57:32Z is after 2025-08-24T17:21:41Z" Oct 14 09:57:32 crc kubenswrapper[4698]: I1014 09:57:32.210806 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-lp4sk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c359a8fc-1e2f-49af-8da2-719d52bd969a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://082e0edeedb2cabe07e14812f2406a98f39fdf9923f562c33ccc9a761d0e2472\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\
\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lzl92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8068aecfbfc156636e404dba3daa36e0bebb92ecf3c63158eaebf9a03cdc96b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lzl92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T09:57:18Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-lp4sk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T09:57:32Z is after 2025-08-24T17:21:41Z" 
Oct 14 09:57:32 crc kubenswrapper[4698]: I1014 09:57:32.213737 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:57:32 crc kubenswrapper[4698]: I1014 09:57:32.213791 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:57:32 crc kubenswrapper[4698]: I1014 09:57:32.213807 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:57:32 crc kubenswrapper[4698]: I1014 09:57:32.213829 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:57:32 crc kubenswrapper[4698]: I1014 09:57:32.213844 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:57:32Z","lastTransitionTime":"2025-10-14T09:57:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:57:32 crc kubenswrapper[4698]: I1014 09:57:32.222647 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-jbpnj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"41f5ac86-35f8-416c-bbfe-1e182975ec5c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:32Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:32Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7f52v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7f52v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T09:57:32Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-jbpnj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T09:57:32Z is after 2025-08-24T17:21:41Z" Oct 14 09:57:32 crc 
kubenswrapper[4698]: E1014 09:57:32.227296 4698 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-14T09:57:32Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-14T09:57:32Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:32Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-14T09:57:32Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-14T09:57:32Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:32Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ec98e803-8937-4fdb-8662-3488c6a305f2\\\",\\\"systemUUID\\\":\\\"8e872109-adee-4b6d-91bf-d9ced28af93f\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T09:57:32Z is after 2025-08-24T17:21:41Z" Oct 14 09:57:32 crc kubenswrapper[4698]: I1014 09:57:32.231395 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:57:32 crc kubenswrapper[4698]: I1014 09:57:32.231430 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:57:32 crc kubenswrapper[4698]: I1014 09:57:32.231441 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:57:32 crc kubenswrapper[4698]: I1014 09:57:32.231457 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:57:32 crc kubenswrapper[4698]: I1014 09:57:32.231468 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:57:32Z","lastTransitionTime":"2025-10-14T09:57:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:57:32 crc kubenswrapper[4698]: I1014 09:57:32.236946 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://046b339570690752cfbe63e358065d3b58ef84c6708729abc1f590ff4c880521\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T09:57:32Z is after 2025-08-24T17:21:41Z" Oct 14 09:57:32 crc kubenswrapper[4698]: E1014 09:57:32.242428 4698 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-14T09:57:32Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-14T09:57:32Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:32Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-14T09:57:32Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-14T09:57:32Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:32Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"
registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb617
3ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"reg
istry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ec98e803-8937-4fdb-8662-3488c6a305f2\\\",\\\"systemUUID\\\":\\\"8e872109-adee-4b6d-91bf-d9ced28af93f\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T09:57:32Z is after 2025-08-24T17:21:41Z" Oct 14 09:57:32 crc kubenswrapper[4698]: I1014 09:57:32.245843 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:57:32 crc kubenswrapper[4698]: I1014 09:57:32.245882 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:57:32 crc kubenswrapper[4698]: I1014 09:57:32.245894 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:57:32 crc kubenswrapper[4698]: I1014 09:57:32.246278 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:57:32 crc kubenswrapper[4698]: I1014 09:57:32.246320 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:57:32Z","lastTransitionTime":"2025-10-14T09:57:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:57:32 crc kubenswrapper[4698]: I1014 09:57:32.255225 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T09:57:32Z is after 2025-08-24T17:21:41Z" Oct 14 09:57:32 crc kubenswrapper[4698]: E1014 09:57:32.263100 4698 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-14T09:57:32Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-14T09:57:32Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:32Z\\\",\\\"message\\\":\\\"kubelet 
has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-14T09:57:32Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-14T09:57:32Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:32Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800
f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sh
a256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\
":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256
:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc300
5909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ec98e803-8937-4fdb-8662-3488c6a305f2\\\",\\\"systemUUID\\\":\\\"8e872109-adee-4b6d-91bf-d9ced28af93f\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T09:57:32Z is after 2025-08-24T17:21:41Z" Oct 14 09:57:32 crc kubenswrapper[4698]: I1014 09:57:32.267927 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/41f5ac86-35f8-416c-bbfe-1e182975ec5c-metrics-certs\") pod \"network-metrics-daemon-jbpnj\" (UID: \"41f5ac86-35f8-416c-bbfe-1e182975ec5c\") " pod="openshift-multus/network-metrics-daemon-jbpnj" Oct 14 09:57:32 crc kubenswrapper[4698]: I1014 09:57:32.268005 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7f52v\" (UniqueName: \"kubernetes.io/projected/41f5ac86-35f8-416c-bbfe-1e182975ec5c-kube-api-access-7f52v\") pod \"network-metrics-daemon-jbpnj\" (UID: \"41f5ac86-35f8-416c-bbfe-1e182975ec5c\") " pod="openshift-multus/network-metrics-daemon-jbpnj" Oct 14 09:57:32 crc kubenswrapper[4698]: E1014 09:57:32.268198 4698 secret.go:188] Couldn't get secret 
openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Oct 14 09:57:32 crc kubenswrapper[4698]: E1014 09:57:32.268255 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/41f5ac86-35f8-416c-bbfe-1e182975ec5c-metrics-certs podName:41f5ac86-35f8-416c-bbfe-1e182975ec5c nodeName:}" failed. No retries permitted until 2025-10-14 09:57:32.768236117 +0000 UTC m=+34.465535523 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/41f5ac86-35f8-416c-bbfe-1e182975ec5c-metrics-certs") pod "network-metrics-daemon-jbpnj" (UID: "41f5ac86-35f8-416c-bbfe-1e182975ec5c") : object "openshift-multus"/"metrics-daemon-secret" not registered Oct 14 09:57:32 crc kubenswrapper[4698]: I1014 09:57:32.268454 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:57:32 crc kubenswrapper[4698]: I1014 09:57:32.268496 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:57:32 crc kubenswrapper[4698]: I1014 09:57:32.268509 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:57:32 crc kubenswrapper[4698]: I1014 09:57:32.268526 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:57:32 crc kubenswrapper[4698]: I1014 09:57:32.268535 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:57:32Z","lastTransitionTime":"2025-10-14T09:57:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:57:32 crc kubenswrapper[4698]: I1014 09:57:32.270388 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6bbd953b34291d62c5b40ab6339c4709be89d1983748419cabf1adff245af77f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b5dcb74bf1caafb12296355d176dd1cda1c06ecea8293c0cc36d517213fe6bd\\\",\\\
"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T09:57:32Z is after 2025-08-24T17:21:41Z" Oct 14 09:57:32 crc kubenswrapper[4698]: E1014 09:57:32.280885 4698 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-14T09:57:32Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-14T09:57:32Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:32Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-14T09:57:32Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-14T09:57:32Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:32Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ec98e803-8937-4fdb-8662-3488c6a305f2\\\",\\\"systemUUID\\\":\\\"8e872109-adee-4b6d-91bf-d9ced28af93f\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T09:57:32Z is after 2025-08-24T17:21:41Z" Oct 14 09:57:32 crc kubenswrapper[4698]: E1014 09:57:32.281010 4698 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Oct 14 09:57:32 crc kubenswrapper[4698]: I1014 09:57:32.282814 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T09:57:32Z is after 2025-08-24T17:21:41Z" Oct 14 09:57:32 crc kubenswrapper[4698]: I1014 09:57:32.283062 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:57:32 crc kubenswrapper[4698]: I1014 09:57:32.283115 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:57:32 crc kubenswrapper[4698]: I1014 09:57:32.283127 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:57:32 crc kubenswrapper[4698]: I1014 09:57:32.283150 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:57:32 crc kubenswrapper[4698]: I1014 09:57:32.283165 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:57:32Z","lastTransitionTime":"2025-10-14T09:57:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 09:57:32 crc kubenswrapper[4698]: I1014 09:57:32.291289 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7f52v\" (UniqueName: \"kubernetes.io/projected/41f5ac86-35f8-416c-bbfe-1e182975ec5c-kube-api-access-7f52v\") pod \"network-metrics-daemon-jbpnj\" (UID: \"41f5ac86-35f8-416c-bbfe-1e182975ec5c\") " pod="openshift-multus/network-metrics-daemon-jbpnj" Oct 14 09:57:32 crc kubenswrapper[4698]: I1014 09:57:32.293550 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ndfs7" event={"ID":"dba065f3-2084-442a-9f77-a1dfb007aa0a","Type":"ContainerStarted","Data":"7bfa794859b5471a4dfe11ac227eb36674c3136b170fafdecaf916eb54d63a4d"} Oct 14 09:57:32 crc kubenswrapper[4698]: I1014 09:57:32.293620 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ndfs7" event={"ID":"dba065f3-2084-442a-9f77-a1dfb007aa0a","Type":"ContainerStarted","Data":"41d90487b005df3b6a9b8a3cbb8860b13fcc2ff7ca56979998ab7258f10b681c"} Oct 14 09:57:32 crc kubenswrapper[4698]: I1014 09:57:32.293646 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ndfs7" event={"ID":"dba065f3-2084-442a-9f77-a1dfb007aa0a","Type":"ContainerStarted","Data":"42d3174fd0efeac8c6c310ec81d2c3b7ae3c18bbdf150ba663288ff049b6393e"} Oct 14 09:57:32 crc kubenswrapper[4698]: I1014 09:57:32.299613 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"32240a01-1692-4f43-aebd-2b04d9b2435e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:56:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:56:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:56:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0fcdc7699fee72d9dbf3ad7df185ac8c884125781080739e4245128af2e41040\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d32efa8c63456d23b3824db4a4debb51fa4d13d1fd5eb6a488b9392d96073435\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a78e7d6532fda47f50da010b729a9108961a3d5c79191517a86f00714ccb1163\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a013bd7c3d0aeb5289ee602dd8fcbff2b98aae30b99ccc8d87605732b7f469eb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a8cea8367d4864478508221648a52582ee794ca4eb7b48f112ca0d78d97c2f7e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"message\\\":\\\"file observer\\\\nW1014 09:57:17.768150 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1014 09:57:17.768243 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1014 09:57:17.769073 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-133410875/tls.crt::/tmp/serving-cert-133410875/tls.key\\\\\\\"\\\\nI1014 09:57:18.101848 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1014 09:57:18.107433 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1014 09:57:18.107456 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1014 09:57:18.107479 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1014 09:57:18.107484 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1014 09:57:18.112858 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1014 09:57:18.112875 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1014 09:57:18.112883 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1014 09:57:18.112889 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1014 09:57:18.112893 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1014 
09:57:18.112896 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1014 09:57:18.112900 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1014 09:57:18.112903 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1014 09:57:18.114133 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-14T09:57:12Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2d12c9c114016d1302fa97515b0a59a82a4524b77e5d22cbec155beb6a52bce\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:00Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf7a250bbbc63f0ce78ae1c27f82643a02dc264c3b83555a4cc72a788eda52d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc
35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cf7a250bbbc63f0ce78ae1c27f82643a02dc264c3b83555a4cc72a788eda52d5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T09:56:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T09:56:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T09:56:59Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T09:57:32Z is after 2025-08-24T17:21:41Z" Oct 14 09:57:32 crc kubenswrapper[4698]: I1014 09:57:32.312946 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"32a7378b-ad21-49a8-9382-e7e5fbffb2f5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:56:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:56:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45beb2bbd344fbd265eca554410f31c493b55815bcac8dbb8c30c592b40dcfe2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9a23f4949446d47321258b172e07f6c592cd9921650920f8af93a5d749a61dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:56:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1fc70d614848f0b4bc342659099f9c617db052ebb995716f9cdb1113f4f78901\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://71c3764056ab55aa9f75e0bf62c0c7eed5440e9d580087ca9ce266611dd8c292\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-10-14T09:57:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T09:56:59Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T09:57:32Z is after 2025-08-24T17:21:41Z" Oct 14 09:57:32 crc kubenswrapper[4698]: I1014 09:57:32.325584 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aaee81debae3e251e075e61aca075c075d32fd976f1977adecba0c834e89cabb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-10-14T09:57:32Z is after 2025-08-24T17:21:41Z" Oct 14 09:57:32 crc kubenswrapper[4698]: I1014 09:57:32.334996 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8rj7q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b3d7ebe7-24ac-4bb6-be80-db147dc1c604\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e44a1413e71cab92d1b5ee4c81cc65edc9925fa1e888d5d5670e55e1b60ee81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-95sjn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T09:57:18Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8rj7q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T09:57:32Z is after 2025-08-24T17:21:41Z" Oct 14 09:57:32 crc kubenswrapper[4698]: I1014 09:57:32.357375 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hspfz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d02f5359-81fc-4261-b995-e58c78bcec0e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://baefa979aee6f8fbe41df9542964f3c7b2a4331ec5111d2f622ec7ca41a0d96e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjlwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://876998f4dcb79a6bc0faa35871730f5d2f9a07f94baf6da29ed9a9e794705ee5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjlwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e5759411e3d5ea6fa7515b4cd1255735c870b8c035fe586a38d2f436b752118\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjlwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6e9733c3f4d25662c9da6201d0feee797de24acebb7d6e7c67d72709aae95db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:21Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjlwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://90b4b75613b321be60e549ab6ab2eaaab57fd6b49aa01f5254a0e83ac2e0167e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjlwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://11274a26e2292cdf2b793e6290657ed3cc737d1454b39f33b8f0272f67f70f3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjlwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://afec3a31416d60bb03430456c28d5bcd69c8fbdd343374e4505c602dac71b3ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://afec3a31416d60bb03430456c28d5bcd69c8fbdd343374e4505c602dac71b3ac\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-14T09:57:30Z\\\",\\\"message\\\":\\\"qos/v1/apis/informers/externalversions/factory.go:140\\\\nI1014 09:57:30.072732 6117 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1014 09:57:30.072819 6117 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1014 09:57:30.072828 6117 handler.go:190] Sending *v1.Namespace event handler 5 for 
removal\\\\nI1014 09:57:30.072868 6117 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1014 09:57:30.072865 6117 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1014 09:57:30.072876 6117 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1014 09:57:30.072883 6117 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1014 09:57:30.072901 6117 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1014 09:57:30.072908 6117 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1014 09:57:30.072893 6117 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1014 09:57:30.072951 6117 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1014 09:57:30.072957 6117 handler.go:208] Removed *v1.Node event handler 2\\\\nI1014 09:57:30.072975 6117 handler.go:208] Removed *v1.Node event handler 7\\\\nI1014 09:57:30.072993 6117 factory.go:656] Stopping watch factory\\\\nI1014 09:57:30.073012 6117 handler.go:208] Removed *v1.EgressIP ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-14T09:57:29Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-hspfz_openshift-ovn-kubernetes(d02f5359-81fc-4261-b995-e58c78bcec0e)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjlwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://626a1ca2a8aa5d6c1b4078a0bc7b0aa540656abcedd8be206fbb93cdc96c1d95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjlwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f7d7f89348e04b617ba4ec3cf76f4fefdd36492c656aed8141f3775a7ca8e3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0f7d7f89348e04b617
ba4ec3cf76f4fefdd36492c656aed8141f3775a7ca8e3b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T09:57:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T09:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjlwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T09:57:18Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hspfz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T09:57:32Z is after 2025-08-24T17:21:41Z" Oct 14 09:57:32 crc kubenswrapper[4698]: I1014 09:57:32.373280 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ndfs7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dba065f3-2084-442a-9f77-a1dfb007aa0a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:31Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:31Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wtc5t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wtc5t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T09:57:31Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-ndfs7\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T09:57:32Z is after 2025-08-24T17:21:41Z" Oct 14 09:57:32 crc kubenswrapper[4698]: I1014 09:57:32.385330 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-pfxrp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e00aa977-8736-4b4d-8d58-c3d13879c49a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4adf04967efc193a93227043cfb6bb56f0ec1a7af90e351b7a618e582ddd8c41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-
14T09:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c9lxj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T09:57:18Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-pfxrp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T09:57:32Z is after 2025-08-24T17:21:41Z" Oct 14 09:57:32 crc kubenswrapper[4698]: I1014 09:57:32.386332 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:57:32 crc kubenswrapper[4698]: I1014 09:57:32.386373 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:57:32 crc kubenswrapper[4698]: I1014 09:57:32.386384 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:57:32 crc kubenswrapper[4698]: I1014 09:57:32.386401 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:57:32 crc kubenswrapper[4698]: I1014 09:57:32.386414 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:57:32Z","lastTransitionTime":"2025-10-14T09:57:32Z","reason":"KubeletNotReady","message":"container runtime 
network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 09:57:32 crc kubenswrapper[4698]: I1014 09:57:32.401990 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5twvn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e9f983f-10a0-43b7-8590-346577a561ef\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://03bc09f64706818fd5d6b0d3eeea206f665d7cdb12ce6ae31c4706a3936dd0f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:25Z\\\"}},\\\"volumeMounts\
\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8x7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9090d92cdf10a884e39938f3d7908d63cb91ee3ea7832cb788aecaa47c70d753\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9090d92cdf10a884e39938f3d7908d63cb91ee3ea7832cb788aecaa47c70d753\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T09:57:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T09:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8x7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://20e7809019d6fa43f65e27abe934ed5dfc51a3afb0a2d9391fa4edd67fc03ac9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc
\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://20e7809019d6fa43f65e27abe934ed5dfc51a3afb0a2d9391fa4edd67fc03ac9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T09:57:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8x7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c61e207f4234cf9a59bd787fc476ff4bc2d1abd6f299b2855970981e5783398\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0c61e207f4234cf9a59bd787fc476ff4bc2d1abd6f299b2855970981e5783398\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T09:57:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T09:57:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/op
t/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8x7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1abeca9df52e5628432a4cb0be0f212a643a644abfe303559ec7d90f3fd3955a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1abeca9df52e5628432a4cb0be0f212a643a644abfe303559ec7d90f3fd3955a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T09:57:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T09:57:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8x7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0654ea71914d478220611495dbefd1e47b1309aa18c6a05b21a4a45f0d8f1ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e
54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c0654ea71914d478220611495dbefd1e47b1309aa18c6a05b21a4a45f0d8f1ca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T09:57:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T09:57:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8x7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5307b806fcee628d7552a399a950a096745a8eb67c3f72c5a10854f042b992a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5307b806fcee628d7552a399a950a096745a8eb67c3f72c5a10854f042b992a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T09:57:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T09:57:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.i
o/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8x7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T09:57:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5twvn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T09:57:32Z is after 2025-08-24T17:21:41Z" Oct 14 09:57:32 crc kubenswrapper[4698]: I1014 09:57:32.416153 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"32240a01-1692-4f43-aebd-2b04d9b2435e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:56:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:56:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:56:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0fcdc7699fee72d9dbf3ad7df185ac8c884125781080739e4245128af2e41040\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d32efa8c63456d23b3824db4a4debb51fa4d13d1fd5eb6a488b9392d96073435\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://a78e7d6532fda47f50da010b729a9108961a3d5c79191517a86f00714ccb1163\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a013bd7c3d0aeb5289ee602dd8fcbff2b98aae30b99ccc8d87605732b7f469eb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a8cea8367d4864478508221648a52582ee794ca4eb7b48f112ca0d78d97c2f7e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"message\\\":\\\"file observer\\\\nW1014 09:57:17.768150 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1014 09:57:17.768243 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1014 09:57:17.769073 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-133410875/tls.crt::/tmp/serving-cert-133410875/tls.key\\\\\\\"\\\\nI1014 09:57:18.101848 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1014 09:57:18.107433 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1014 09:57:18.107456 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1014 09:57:18.107479 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1014 09:57:18.107484 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1014 09:57:18.112858 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1014 09:57:18.112875 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1014 09:57:18.112883 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1014 09:57:18.112889 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1014 09:57:18.112893 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1014 09:57:18.112896 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1014 09:57:18.112900 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1014 09:57:18.112903 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1014 09:57:18.114133 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-14T09:57:12Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2d12c9c114016d1302fa97515b0a59a82a4524b77e5d22cbec155beb6a52bce\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:00Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf7a250bbbc63f0ce78ae1c27f82643a02dc264c3b83555a4cc72a788eda52d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cf7a250bbbc63f0ce78ae1c27f82643a02dc264c3b83555a4cc72a788eda52d5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T09:56:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2025-10-14T09:56:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T09:56:59Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T09:57:32Z is after 2025-08-24T17:21:41Z" Oct 14 09:57:32 crc kubenswrapper[4698]: I1014 09:57:32.429787 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"32a7378b-ad21-49a8-9382-e7e5fbffb2f5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:56:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:56:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45beb2bbd344fbd265eca554410f31c493b55815bcac8dbb8c30c592b40dcfe2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9a
d6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9a23f4949446d47321258b172e07f6c592cd9921650920f8af93a5d749a61dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:56:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1fc70d614848f0b4bc342659099f9c617db052ebb995716f9cdb1113f4f78901\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true
,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://71c3764056ab55aa9f75e0bf62c0c7eed5440e9d580087ca9ce266611dd8c292\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T09:56:59Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T09:57:32Z is after 2025-08-24T17:21:41Z" Oct 14 09:57:32 crc kubenswrapper[4698]: I1014 09:57:32.442828 4698 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T09:57:32Z is after 2025-08-24T17:21:41Z" Oct 14 09:57:32 crc kubenswrapper[4698]: I1014 09:57:32.455166 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aaee81debae3e251e075e61aca075c075d32fd976f1977adecba0c834e89cabb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-10-14T09:57:32Z is after 2025-08-24T17:21:41Z" Oct 14 09:57:32 crc kubenswrapper[4698]: I1014 09:57:32.466487 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8rj7q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b3d7ebe7-24ac-4bb6-be80-db147dc1c604\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e44a1413e71cab92d1b5ee4c81cc65edc9925fa1e888d5d5670e55e1b60ee81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-95sjn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T09:57:18Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8rj7q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T09:57:32Z is after 2025-08-24T17:21:41Z" Oct 14 09:57:32 crc kubenswrapper[4698]: I1014 09:57:32.480949 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-pfxrp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e00aa977-8736-4b4d-8d58-c3d13879c49a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4adf04967efc193a93227043cfb6bb56f0ec1a7af90e351b7a618e
582ddd8c41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c9lxj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T09:57:18Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-pfxrp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T09:57:32Z is after 2025-08-24T17:21:41Z" Oct 14 09:57:32 crc kubenswrapper[4698]: I1014 09:57:32.488668 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:57:32 crc kubenswrapper[4698]: I1014 09:57:32.488717 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:57:32 crc kubenswrapper[4698]: I1014 09:57:32.488728 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" 
Oct 14 09:57:32 crc kubenswrapper[4698]: I1014 09:57:32.488745 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:57:32 crc kubenswrapper[4698]: I1014 09:57:32.488778 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:57:32Z","lastTransitionTime":"2025-10-14T09:57:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 09:57:32 crc kubenswrapper[4698]: I1014 09:57:32.497437 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5twvn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e9f983f-10a0-43b7-8590-346577a561ef\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://03bc09f64706818fd5d6b0d3eeea206f665d7cdb12ce6ae31c4706a3936dd0f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4
.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8x7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9090d92cdf10a884e39938f3d7908d63cb91ee3ea7832cb788aecaa47c70d753\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9090d92cdf10a884e39938f3d7908d63cb91ee3ea7832cb788aecaa47c70d753\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T09:57:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T09:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-ap
i-access-h8x7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://20e7809019d6fa43f65e27abe934ed5dfc51a3afb0a2d9391fa4edd67fc03ac9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://20e7809019d6fa43f65e27abe934ed5dfc51a3afb0a2d9391fa4edd67fc03ac9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T09:57:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8x7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c61e207f4234cf9a59bd787fc476ff4bc2d1abd6f299b2855970981e5783398\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\
\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0c61e207f4234cf9a59bd787fc476ff4bc2d1abd6f299b2855970981e5783398\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T09:57:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T09:57:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8x7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1abeca9df52e5628432a4cb0be0f212a643a644abfe303559ec7d90f3fd3955a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1abeca9df52e5628432a4cb0be0f212a643a644abfe303559ec7d90f3fd3955a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T09:57:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T09:57:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kuber
netes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8x7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0654ea71914d478220611495dbefd1e47b1309aa18c6a05b21a4a45f0d8f1ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c0654ea71914d478220611495dbefd1e47b1309aa18c6a05b21a4a45f0d8f1ca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T09:57:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T09:57:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8x7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5307b806fcee628d7552a399a950a096745a8eb67c3f72c5a10854f042b992a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerI
D\\\":\\\"cri-o://5307b806fcee628d7552a399a950a096745a8eb67c3f72c5a10854f042b992a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T09:57:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T09:57:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8x7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T09:57:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5twvn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T09:57:32Z is after 2025-08-24T17:21:41Z" Oct 14 09:57:32 crc kubenswrapper[4698]: I1014 09:57:32.516933 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hspfz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d02f5359-81fc-4261-b995-e58c78bcec0e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://baefa979aee6f8fbe41df9542964f3c7b2a4331ec5111d2f622ec7ca41a0d96e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjlwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://876998f4dcb79a6bc0faa35871730f5d2f9a07f94baf6da29ed9a9e794705ee5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjlwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e5759411e3d5ea6fa7515b4cd1255735c870b8c035fe586a38d2f436b752118\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjlwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6e9733c3f4d25662c9da6201d0feee797de24acebb7d6e7c67d72709aae95db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:21Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjlwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://90b4b75613b321be60e549ab6ab2eaaab57fd6b49aa01f5254a0e83ac2e0167e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjlwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://11274a26e2292cdf2b793e6290657ed3cc737d1454b39f33b8f0272f67f70f3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjlwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://afec3a31416d60bb03430456c28d5bcd69c8fbdd343374e4505c602dac71b3ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://afec3a31416d60bb03430456c28d5bcd69c8fbdd343374e4505c602dac71b3ac\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-14T09:57:30Z\\\",\\\"message\\\":\\\"qos/v1/apis/informers/externalversions/factory.go:140\\\\nI1014 09:57:30.072732 6117 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1014 09:57:30.072819 6117 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1014 09:57:30.072828 6117 handler.go:190] Sending *v1.Namespace event handler 5 for 
removal\\\\nI1014 09:57:30.072868 6117 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1014 09:57:30.072865 6117 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1014 09:57:30.072876 6117 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1014 09:57:30.072883 6117 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1014 09:57:30.072901 6117 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1014 09:57:30.072908 6117 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1014 09:57:30.072893 6117 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1014 09:57:30.072951 6117 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1014 09:57:30.072957 6117 handler.go:208] Removed *v1.Node event handler 2\\\\nI1014 09:57:30.072975 6117 handler.go:208] Removed *v1.Node event handler 7\\\\nI1014 09:57:30.072993 6117 factory.go:656] Stopping watch factory\\\\nI1014 09:57:30.073012 6117 handler.go:208] Removed *v1.EgressIP ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-14T09:57:29Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-hspfz_openshift-ovn-kubernetes(d02f5359-81fc-4261-b995-e58c78bcec0e)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjlwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://626a1ca2a8aa5d6c1b4078a0bc7b0aa540656abcedd8be206fbb93cdc96c1d95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjlwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f7d7f89348e04b617ba4ec3cf76f4fefdd36492c656aed8141f3775a7ca8e3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0f7d7f89348e04b617
ba4ec3cf76f4fefdd36492c656aed8141f3775a7ca8e3b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T09:57:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T09:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjlwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T09:57:18Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hspfz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T09:57:32Z is after 2025-08-24T17:21:41Z" Oct 14 09:57:32 crc kubenswrapper[4698]: I1014 09:57:32.531218 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ndfs7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dba065f3-2084-442a-9f77-a1dfb007aa0a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://41d90487b005df3b6a9b8a3cbb8860b13fcc2ff7ca56979998ab7258f10b681c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wtc5t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7bfa794859b5471a4dfe11ac227eb36674c31
36b170fafdecaf916eb54d63a4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wtc5t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T09:57:31Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-ndfs7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T09:57:32Z is after 2025-08-24T17:21:41Z" Oct 14 09:57:32 crc kubenswrapper[4698]: I1014 09:57:32.548207 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://046b339570690752cfbe63e358065d3b58ef84c6708729abc1f590ff4c880521\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-10-14T09:57:32Z is after 2025-08-24T17:21:41Z" Oct 14 09:57:32 crc kubenswrapper[4698]: I1014 09:57:32.562625 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T09:57:32Z is after 2025-08-24T17:21:41Z" Oct 14 09:57:32 crc kubenswrapper[4698]: I1014 09:57:32.577915 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6bbd953b34291d62c5b40ab6339c4709be89d1983748419cabf1adff245af77f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b5dcb74bf1caafb12296355d176dd1cda1c06ecea8293c0cc36d517213fe6bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T09:57:32Z is after 2025-08-24T17:21:41Z" Oct 14 09:57:32 crc kubenswrapper[4698]: I1014 09:57:32.591308 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:57:32 crc kubenswrapper[4698]: I1014 09:57:32.591353 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:57:32 crc kubenswrapper[4698]: I1014 09:57:32.591365 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:57:32 crc kubenswrapper[4698]: I1014 09:57:32.591383 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:57:32 crc kubenswrapper[4698]: I1014 09:57:32.591396 4698 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:57:32Z","lastTransitionTime":"2025-10-14T09:57:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 09:57:32 crc kubenswrapper[4698]: I1014 09:57:32.597726 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T09:57:32Z is after 2025-08-24T17:21:41Z" Oct 14 09:57:32 crc kubenswrapper[4698]: I1014 09:57:32.616466 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-b7cbk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fbf10bbc-318d-4f46-83a0-fdbad9888201\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a0b9fe188f41927a342dc4928eaa5f27d151d9901af15f33ce252bdbfbb3be6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tkghj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T09:57:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-b7cbk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T09:57:32Z is after 2025-08-24T17:21:41Z" Oct 14 09:57:32 crc kubenswrapper[4698]: I1014 09:57:32.630701 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-lp4sk" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c359a8fc-1e2f-49af-8da2-719d52bd969a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://082e0edeedb2cabe07e14812f2406a98f39fdf9923f562c33ccc9a761d0e2472\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lzl92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8068aecfbfc1
56636e404dba3daa36e0bebb92ecf3c63158eaebf9a03cdc96b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lzl92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T09:57:18Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-lp4sk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T09:57:32Z is after 2025-08-24T17:21:41Z" Oct 14 09:57:32 crc kubenswrapper[4698]: I1014 09:57:32.641718 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-jbpnj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"41f5ac86-35f8-416c-bbfe-1e182975ec5c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:32Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:32Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7f52v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7f52v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T09:57:32Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-jbpnj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T09:57:32Z is after 2025-08-24T17:21:41Z" Oct 14 09:57:32 crc 
kubenswrapper[4698]: I1014 09:57:32.694146 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:57:32 crc kubenswrapper[4698]: I1014 09:57:32.694208 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:57:32 crc kubenswrapper[4698]: I1014 09:57:32.694226 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:57:32 crc kubenswrapper[4698]: I1014 09:57:32.694250 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:57:32 crc kubenswrapper[4698]: I1014 09:57:32.694267 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:57:32Z","lastTransitionTime":"2025-10-14T09:57:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:57:32 crc kubenswrapper[4698]: I1014 09:57:32.773219 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/41f5ac86-35f8-416c-bbfe-1e182975ec5c-metrics-certs\") pod \"network-metrics-daemon-jbpnj\" (UID: \"41f5ac86-35f8-416c-bbfe-1e182975ec5c\") " pod="openshift-multus/network-metrics-daemon-jbpnj" Oct 14 09:57:32 crc kubenswrapper[4698]: E1014 09:57:32.773418 4698 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Oct 14 09:57:32 crc kubenswrapper[4698]: E1014 09:57:32.773490 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/41f5ac86-35f8-416c-bbfe-1e182975ec5c-metrics-certs podName:41f5ac86-35f8-416c-bbfe-1e182975ec5c nodeName:}" failed. No retries permitted until 2025-10-14 09:57:33.773463781 +0000 UTC m=+35.470763227 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/41f5ac86-35f8-416c-bbfe-1e182975ec5c-metrics-certs") pod "network-metrics-daemon-jbpnj" (UID: "41f5ac86-35f8-416c-bbfe-1e182975ec5c") : object "openshift-multus"/"metrics-daemon-secret" not registered Oct 14 09:57:32 crc kubenswrapper[4698]: I1014 09:57:32.796584 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:57:32 crc kubenswrapper[4698]: I1014 09:57:32.796667 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:57:32 crc kubenswrapper[4698]: I1014 09:57:32.796689 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:57:32 crc kubenswrapper[4698]: I1014 09:57:32.796714 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:57:32 crc kubenswrapper[4698]: I1014 09:57:32.796730 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:57:32Z","lastTransitionTime":"2025-10-14T09:57:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:57:32 crc kubenswrapper[4698]: I1014 09:57:32.900152 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:57:32 crc kubenswrapper[4698]: I1014 09:57:32.900212 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:57:32 crc kubenswrapper[4698]: I1014 09:57:32.900228 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:57:32 crc kubenswrapper[4698]: I1014 09:57:32.900252 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:57:32 crc kubenswrapper[4698]: I1014 09:57:32.900268 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:57:32Z","lastTransitionTime":"2025-10-14T09:57:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:57:33 crc kubenswrapper[4698]: I1014 09:57:33.003195 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:57:33 crc kubenswrapper[4698]: I1014 09:57:33.003260 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:57:33 crc kubenswrapper[4698]: I1014 09:57:33.003282 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:57:33 crc kubenswrapper[4698]: I1014 09:57:33.003311 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:57:33 crc kubenswrapper[4698]: I1014 09:57:33.003332 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:57:33Z","lastTransitionTime":"2025-10-14T09:57:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 09:57:33 crc kubenswrapper[4698]: I1014 09:57:33.016735 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 14 09:57:33 crc kubenswrapper[4698]: I1014 09:57:33.016819 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 14 09:57:33 crc kubenswrapper[4698]: I1014 09:57:33.016735 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 14 09:57:33 crc kubenswrapper[4698]: E1014 09:57:33.016969 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 14 09:57:33 crc kubenswrapper[4698]: E1014 09:57:33.017116 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 14 09:57:33 crc kubenswrapper[4698]: E1014 09:57:33.017290 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 14 09:57:33 crc kubenswrapper[4698]: I1014 09:57:33.106429 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:57:33 crc kubenswrapper[4698]: I1014 09:57:33.106478 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:57:33 crc kubenswrapper[4698]: I1014 09:57:33.106526 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:57:33 crc kubenswrapper[4698]: I1014 09:57:33.106545 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:57:33 crc kubenswrapper[4698]: I1014 09:57:33.106557 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:57:33Z","lastTransitionTime":"2025-10-14T09:57:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:57:33 crc kubenswrapper[4698]: I1014 09:57:33.209571 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:57:33 crc kubenswrapper[4698]: I1014 09:57:33.209630 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:57:33 crc kubenswrapper[4698]: I1014 09:57:33.209644 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:57:33 crc kubenswrapper[4698]: I1014 09:57:33.209661 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:57:33 crc kubenswrapper[4698]: I1014 09:57:33.209675 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:57:33Z","lastTransitionTime":"2025-10-14T09:57:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:57:33 crc kubenswrapper[4698]: I1014 09:57:33.312012 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:57:33 crc kubenswrapper[4698]: I1014 09:57:33.312048 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:57:33 crc kubenswrapper[4698]: I1014 09:57:33.312057 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:57:33 crc kubenswrapper[4698]: I1014 09:57:33.312072 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:57:33 crc kubenswrapper[4698]: I1014 09:57:33.312082 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:57:33Z","lastTransitionTime":"2025-10-14T09:57:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:57:33 crc kubenswrapper[4698]: I1014 09:57:33.415052 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:57:33 crc kubenswrapper[4698]: I1014 09:57:33.415103 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:57:33 crc kubenswrapper[4698]: I1014 09:57:33.415114 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:57:33 crc kubenswrapper[4698]: I1014 09:57:33.415130 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:57:33 crc kubenswrapper[4698]: I1014 09:57:33.415138 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:57:33Z","lastTransitionTime":"2025-10-14T09:57:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:57:33 crc kubenswrapper[4698]: I1014 09:57:33.517467 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:57:33 crc kubenswrapper[4698]: I1014 09:57:33.517530 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:57:33 crc kubenswrapper[4698]: I1014 09:57:33.517547 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:57:33 crc kubenswrapper[4698]: I1014 09:57:33.517614 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:57:33 crc kubenswrapper[4698]: I1014 09:57:33.517649 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:57:33Z","lastTransitionTime":"2025-10-14T09:57:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:57:33 crc kubenswrapper[4698]: I1014 09:57:33.620817 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:57:33 crc kubenswrapper[4698]: I1014 09:57:33.620870 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:57:33 crc kubenswrapper[4698]: I1014 09:57:33.620887 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:57:33 crc kubenswrapper[4698]: I1014 09:57:33.620909 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:57:33 crc kubenswrapper[4698]: I1014 09:57:33.620926 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:57:33Z","lastTransitionTime":"2025-10-14T09:57:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:57:33 crc kubenswrapper[4698]: I1014 09:57:33.724840 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:57:33 crc kubenswrapper[4698]: I1014 09:57:33.724918 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:57:33 crc kubenswrapper[4698]: I1014 09:57:33.724941 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:57:33 crc kubenswrapper[4698]: I1014 09:57:33.724968 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:57:33 crc kubenswrapper[4698]: I1014 09:57:33.724984 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:57:33Z","lastTransitionTime":"2025-10-14T09:57:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:57:33 crc kubenswrapper[4698]: I1014 09:57:33.783383 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/41f5ac86-35f8-416c-bbfe-1e182975ec5c-metrics-certs\") pod \"network-metrics-daemon-jbpnj\" (UID: \"41f5ac86-35f8-416c-bbfe-1e182975ec5c\") " pod="openshift-multus/network-metrics-daemon-jbpnj" Oct 14 09:57:33 crc kubenswrapper[4698]: E1014 09:57:33.783621 4698 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Oct 14 09:57:33 crc kubenswrapper[4698]: E1014 09:57:33.783739 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/41f5ac86-35f8-416c-bbfe-1e182975ec5c-metrics-certs podName:41f5ac86-35f8-416c-bbfe-1e182975ec5c nodeName:}" failed. No retries permitted until 2025-10-14 09:57:35.783707761 +0000 UTC m=+37.481007217 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/41f5ac86-35f8-416c-bbfe-1e182975ec5c-metrics-certs") pod "network-metrics-daemon-jbpnj" (UID: "41f5ac86-35f8-416c-bbfe-1e182975ec5c") : object "openshift-multus"/"metrics-daemon-secret" not registered Oct 14 09:57:33 crc kubenswrapper[4698]: I1014 09:57:33.828237 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:57:33 crc kubenswrapper[4698]: I1014 09:57:33.828307 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:57:33 crc kubenswrapper[4698]: I1014 09:57:33.828324 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:57:33 crc kubenswrapper[4698]: I1014 09:57:33.828350 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:57:33 crc kubenswrapper[4698]: I1014 09:57:33.828369 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:57:33Z","lastTransitionTime":"2025-10-14T09:57:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:57:33 crc kubenswrapper[4698]: I1014 09:57:33.931122 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:57:33 crc kubenswrapper[4698]: I1014 09:57:33.931186 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:57:33 crc kubenswrapper[4698]: I1014 09:57:33.931202 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:57:33 crc kubenswrapper[4698]: I1014 09:57:33.931227 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:57:33 crc kubenswrapper[4698]: I1014 09:57:33.931245 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:57:33Z","lastTransitionTime":"2025-10-14T09:57:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 09:57:34 crc kubenswrapper[4698]: I1014 09:57:34.016139 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jbpnj" Oct 14 09:57:34 crc kubenswrapper[4698]: E1014 09:57:34.016328 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-jbpnj" podUID="41f5ac86-35f8-416c-bbfe-1e182975ec5c" Oct 14 09:57:34 crc kubenswrapper[4698]: I1014 09:57:34.034429 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:57:34 crc kubenswrapper[4698]: I1014 09:57:34.034501 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:57:34 crc kubenswrapper[4698]: I1014 09:57:34.034525 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:57:34 crc kubenswrapper[4698]: I1014 09:57:34.034557 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:57:34 crc kubenswrapper[4698]: I1014 09:57:34.034580 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:57:34Z","lastTransitionTime":"2025-10-14T09:57:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:57:34 crc kubenswrapper[4698]: I1014 09:57:34.138254 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:57:34 crc kubenswrapper[4698]: I1014 09:57:34.138340 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:57:34 crc kubenswrapper[4698]: I1014 09:57:34.138378 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:57:34 crc kubenswrapper[4698]: I1014 09:57:34.138410 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:57:34 crc kubenswrapper[4698]: I1014 09:57:34.138432 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:57:34Z","lastTransitionTime":"2025-10-14T09:57:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:57:34 crc kubenswrapper[4698]: I1014 09:57:34.241124 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:57:34 crc kubenswrapper[4698]: I1014 09:57:34.241168 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:57:34 crc kubenswrapper[4698]: I1014 09:57:34.241178 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:57:34 crc kubenswrapper[4698]: I1014 09:57:34.241193 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:57:34 crc kubenswrapper[4698]: I1014 09:57:34.241203 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:57:34Z","lastTransitionTime":"2025-10-14T09:57:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:57:34 crc kubenswrapper[4698]: I1014 09:57:34.344621 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:57:34 crc kubenswrapper[4698]: I1014 09:57:34.344685 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:57:34 crc kubenswrapper[4698]: I1014 09:57:34.344697 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:57:34 crc kubenswrapper[4698]: I1014 09:57:34.344715 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:57:34 crc kubenswrapper[4698]: I1014 09:57:34.344725 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:57:34Z","lastTransitionTime":"2025-10-14T09:57:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:57:34 crc kubenswrapper[4698]: I1014 09:57:34.447826 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:57:34 crc kubenswrapper[4698]: I1014 09:57:34.447882 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:57:34 crc kubenswrapper[4698]: I1014 09:57:34.447898 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:57:34 crc kubenswrapper[4698]: I1014 09:57:34.447923 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:57:34 crc kubenswrapper[4698]: I1014 09:57:34.447940 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:57:34Z","lastTransitionTime":"2025-10-14T09:57:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:57:34 crc kubenswrapper[4698]: I1014 09:57:34.551462 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:57:34 crc kubenswrapper[4698]: I1014 09:57:34.551529 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:57:34 crc kubenswrapper[4698]: I1014 09:57:34.551551 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:57:34 crc kubenswrapper[4698]: I1014 09:57:34.551579 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:57:34 crc kubenswrapper[4698]: I1014 09:57:34.551603 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:57:34Z","lastTransitionTime":"2025-10-14T09:57:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:57:34 crc kubenswrapper[4698]: I1014 09:57:34.654182 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:57:34 crc kubenswrapper[4698]: I1014 09:57:34.654282 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:57:34 crc kubenswrapper[4698]: I1014 09:57:34.654300 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:57:34 crc kubenswrapper[4698]: I1014 09:57:34.654324 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:57:34 crc kubenswrapper[4698]: I1014 09:57:34.654342 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:57:34Z","lastTransitionTime":"2025-10-14T09:57:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:57:34 crc kubenswrapper[4698]: I1014 09:57:34.757163 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:57:34 crc kubenswrapper[4698]: I1014 09:57:34.757209 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:57:34 crc kubenswrapper[4698]: I1014 09:57:34.757223 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:57:34 crc kubenswrapper[4698]: I1014 09:57:34.757242 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:57:34 crc kubenswrapper[4698]: I1014 09:57:34.757255 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:57:34Z","lastTransitionTime":"2025-10-14T09:57:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 09:57:34 crc kubenswrapper[4698]: I1014 09:57:34.795506 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 14 09:57:34 crc kubenswrapper[4698]: E1014 09:57:34.795685 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2025-10-14 09:57:50.79565552 +0000 UTC m=+52.492954936 (durationBeforeRetry 16s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 14 09:57:34 crc kubenswrapper[4698]: I1014 09:57:34.795985 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 14 09:57:34 crc kubenswrapper[4698]: I1014 09:57:34.796030 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 14 09:57:34 crc kubenswrapper[4698]: E1014 09:57:34.796094 4698 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Oct 14 09:57:34 crc kubenswrapper[4698]: E1014 09:57:34.796176 4698 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Oct 14 09:57:34 crc kubenswrapper[4698]: E1014 09:57:34.796182 4698 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-10-14 09:57:50.796156655 +0000 UTC m=+52.493456121 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Oct 14 09:57:34 crc kubenswrapper[4698]: E1014 09:57:34.796212 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-10-14 09:57:50.796205406 +0000 UTC m=+52.493504822 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Oct 14 09:57:34 crc kubenswrapper[4698]: I1014 09:57:34.859938 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:57:34 crc kubenswrapper[4698]: I1014 09:57:34.859984 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:57:34 crc kubenswrapper[4698]: I1014 09:57:34.859995 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:57:34 crc kubenswrapper[4698]: I1014 09:57:34.860012 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:57:34 crc kubenswrapper[4698]: I1014 09:57:34.860023 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:57:34Z","lastTransitionTime":"2025-10-14T09:57:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:57:34 crc kubenswrapper[4698]: I1014 09:57:34.897034 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 14 09:57:34 crc kubenswrapper[4698]: I1014 09:57:34.897113 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 14 09:57:34 crc kubenswrapper[4698]: E1014 09:57:34.897224 4698 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Oct 14 09:57:34 crc kubenswrapper[4698]: E1014 09:57:34.897260 4698 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Oct 14 09:57:34 crc kubenswrapper[4698]: E1014 09:57:34.897279 4698 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 14 09:57:34 crc kubenswrapper[4698]: E1014 09:57:34.897366 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr 
podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-10-14 09:57:50.897337829 +0000 UTC m=+52.594637275 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 14 09:57:34 crc kubenswrapper[4698]: E1014 09:57:34.897394 4698 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Oct 14 09:57:34 crc kubenswrapper[4698]: E1014 09:57:34.897448 4698 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Oct 14 09:57:34 crc kubenswrapper[4698]: E1014 09:57:34.897467 4698 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 14 09:57:34 crc kubenswrapper[4698]: E1014 09:57:34.897555 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-10-14 09:57:50.897525735 +0000 UTC m=+52.594825231 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 14 09:57:34 crc kubenswrapper[4698]: I1014 09:57:34.962819 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:57:34 crc kubenswrapper[4698]: I1014 09:57:34.962874 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:57:34 crc kubenswrapper[4698]: I1014 09:57:34.962893 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:57:34 crc kubenswrapper[4698]: I1014 09:57:34.962916 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:57:34 crc kubenswrapper[4698]: I1014 09:57:34.962933 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:57:34Z","lastTransitionTime":"2025-10-14T09:57:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 09:57:35 crc kubenswrapper[4698]: I1014 09:57:35.016930 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 14 09:57:35 crc kubenswrapper[4698]: I1014 09:57:35.016995 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 14 09:57:35 crc kubenswrapper[4698]: I1014 09:57:35.017075 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 14 09:57:35 crc kubenswrapper[4698]: E1014 09:57:35.017183 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 14 09:57:35 crc kubenswrapper[4698]: E1014 09:57:35.017290 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 14 09:57:35 crc kubenswrapper[4698]: E1014 09:57:35.017374 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 14 09:57:35 crc kubenswrapper[4698]: I1014 09:57:35.065927 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:57:35 crc kubenswrapper[4698]: I1014 09:57:35.066023 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:57:35 crc kubenswrapper[4698]: I1014 09:57:35.066048 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:57:35 crc kubenswrapper[4698]: I1014 09:57:35.066082 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:57:35 crc kubenswrapper[4698]: I1014 09:57:35.066116 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:57:35Z","lastTransitionTime":"2025-10-14T09:57:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:57:35 crc kubenswrapper[4698]: I1014 09:57:35.169866 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:57:35 crc kubenswrapper[4698]: I1014 09:57:35.169941 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:57:35 crc kubenswrapper[4698]: I1014 09:57:35.169959 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:57:35 crc kubenswrapper[4698]: I1014 09:57:35.169983 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:57:35 crc kubenswrapper[4698]: I1014 09:57:35.170002 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:57:35Z","lastTransitionTime":"2025-10-14T09:57:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:57:35 crc kubenswrapper[4698]: I1014 09:57:35.273348 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:57:35 crc kubenswrapper[4698]: I1014 09:57:35.273392 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:57:35 crc kubenswrapper[4698]: I1014 09:57:35.273403 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:57:35 crc kubenswrapper[4698]: I1014 09:57:35.273424 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:57:35 crc kubenswrapper[4698]: I1014 09:57:35.273449 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:57:35Z","lastTransitionTime":"2025-10-14T09:57:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:57:35 crc kubenswrapper[4698]: I1014 09:57:35.375725 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:57:35 crc kubenswrapper[4698]: I1014 09:57:35.375824 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:57:35 crc kubenswrapper[4698]: I1014 09:57:35.375848 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:57:35 crc kubenswrapper[4698]: I1014 09:57:35.375878 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:57:35 crc kubenswrapper[4698]: I1014 09:57:35.375899 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:57:35Z","lastTransitionTime":"2025-10-14T09:57:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:57:35 crc kubenswrapper[4698]: I1014 09:57:35.479219 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:57:35 crc kubenswrapper[4698]: I1014 09:57:35.479291 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:57:35 crc kubenswrapper[4698]: I1014 09:57:35.479314 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:57:35 crc kubenswrapper[4698]: I1014 09:57:35.479344 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:57:35 crc kubenswrapper[4698]: I1014 09:57:35.479366 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:57:35Z","lastTransitionTime":"2025-10-14T09:57:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:57:35 crc kubenswrapper[4698]: I1014 09:57:35.582076 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:57:35 crc kubenswrapper[4698]: I1014 09:57:35.582130 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:57:35 crc kubenswrapper[4698]: I1014 09:57:35.582156 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:57:35 crc kubenswrapper[4698]: I1014 09:57:35.582179 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:57:35 crc kubenswrapper[4698]: I1014 09:57:35.582195 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:57:35Z","lastTransitionTime":"2025-10-14T09:57:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:57:35 crc kubenswrapper[4698]: I1014 09:57:35.684281 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:57:35 crc kubenswrapper[4698]: I1014 09:57:35.684363 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:57:35 crc kubenswrapper[4698]: I1014 09:57:35.684381 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:57:35 crc kubenswrapper[4698]: I1014 09:57:35.684881 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:57:35 crc kubenswrapper[4698]: I1014 09:57:35.684939 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:57:35Z","lastTransitionTime":"2025-10-14T09:57:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:57:35 crc kubenswrapper[4698]: I1014 09:57:35.788271 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:57:35 crc kubenswrapper[4698]: I1014 09:57:35.788332 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:57:35 crc kubenswrapper[4698]: I1014 09:57:35.788356 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:57:35 crc kubenswrapper[4698]: I1014 09:57:35.788383 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:57:35 crc kubenswrapper[4698]: I1014 09:57:35.788404 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:57:35Z","lastTransitionTime":"2025-10-14T09:57:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:57:35 crc kubenswrapper[4698]: I1014 09:57:35.805034 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/41f5ac86-35f8-416c-bbfe-1e182975ec5c-metrics-certs\") pod \"network-metrics-daemon-jbpnj\" (UID: \"41f5ac86-35f8-416c-bbfe-1e182975ec5c\") " pod="openshift-multus/network-metrics-daemon-jbpnj" Oct 14 09:57:35 crc kubenswrapper[4698]: E1014 09:57:35.805243 4698 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Oct 14 09:57:35 crc kubenswrapper[4698]: E1014 09:57:35.805375 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/41f5ac86-35f8-416c-bbfe-1e182975ec5c-metrics-certs podName:41f5ac86-35f8-416c-bbfe-1e182975ec5c nodeName:}" failed. No retries permitted until 2025-10-14 09:57:39.805335775 +0000 UTC m=+41.502635261 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/41f5ac86-35f8-416c-bbfe-1e182975ec5c-metrics-certs") pod "network-metrics-daemon-jbpnj" (UID: "41f5ac86-35f8-416c-bbfe-1e182975ec5c") : object "openshift-multus"/"metrics-daemon-secret" not registered Oct 14 09:57:35 crc kubenswrapper[4698]: I1014 09:57:35.891593 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:57:35 crc kubenswrapper[4698]: I1014 09:57:35.891657 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:57:35 crc kubenswrapper[4698]: I1014 09:57:35.891681 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:57:35 crc kubenswrapper[4698]: I1014 09:57:35.891713 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:57:35 crc kubenswrapper[4698]: I1014 09:57:35.891737 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:57:35Z","lastTransitionTime":"2025-10-14T09:57:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:57:35 crc kubenswrapper[4698]: I1014 09:57:35.993886 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:57:35 crc kubenswrapper[4698]: I1014 09:57:35.993950 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:57:35 crc kubenswrapper[4698]: I1014 09:57:35.993971 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:57:35 crc kubenswrapper[4698]: I1014 09:57:35.994000 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:57:35 crc kubenswrapper[4698]: I1014 09:57:35.994021 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:57:35Z","lastTransitionTime":"2025-10-14T09:57:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 09:57:36 crc kubenswrapper[4698]: I1014 09:57:36.016732 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jbpnj" Oct 14 09:57:36 crc kubenswrapper[4698]: E1014 09:57:36.017075 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-jbpnj" podUID="41f5ac86-35f8-416c-bbfe-1e182975ec5c" Oct 14 09:57:36 crc kubenswrapper[4698]: I1014 09:57:36.097284 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:57:36 crc kubenswrapper[4698]: I1014 09:57:36.097575 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:57:36 crc kubenswrapper[4698]: I1014 09:57:36.097639 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:57:36 crc kubenswrapper[4698]: I1014 09:57:36.097717 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:57:36 crc kubenswrapper[4698]: I1014 09:57:36.097802 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:57:36Z","lastTransitionTime":"2025-10-14T09:57:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:57:36 crc kubenswrapper[4698]: I1014 09:57:36.200368 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:57:36 crc kubenswrapper[4698]: I1014 09:57:36.200687 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:57:36 crc kubenswrapper[4698]: I1014 09:57:36.200902 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:57:36 crc kubenswrapper[4698]: I1014 09:57:36.201089 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:57:36 crc kubenswrapper[4698]: I1014 09:57:36.201235 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:57:36Z","lastTransitionTime":"2025-10-14T09:57:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:57:36 crc kubenswrapper[4698]: I1014 09:57:36.304549 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:57:36 crc kubenswrapper[4698]: I1014 09:57:36.304587 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:57:36 crc kubenswrapper[4698]: I1014 09:57:36.304597 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:57:36 crc kubenswrapper[4698]: I1014 09:57:36.304616 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:57:36 crc kubenswrapper[4698]: I1014 09:57:36.304629 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:57:36Z","lastTransitionTime":"2025-10-14T09:57:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:57:36 crc kubenswrapper[4698]: I1014 09:57:36.407301 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:57:36 crc kubenswrapper[4698]: I1014 09:57:36.407351 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:57:36 crc kubenswrapper[4698]: I1014 09:57:36.407371 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:57:36 crc kubenswrapper[4698]: I1014 09:57:36.407395 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:57:36 crc kubenswrapper[4698]: I1014 09:57:36.407413 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:57:36Z","lastTransitionTime":"2025-10-14T09:57:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:57:36 crc kubenswrapper[4698]: I1014 09:57:36.510481 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:57:36 crc kubenswrapper[4698]: I1014 09:57:36.510537 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:57:36 crc kubenswrapper[4698]: I1014 09:57:36.510554 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:57:36 crc kubenswrapper[4698]: I1014 09:57:36.510575 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:57:36 crc kubenswrapper[4698]: I1014 09:57:36.510595 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:57:36Z","lastTransitionTime":"2025-10-14T09:57:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:57:36 crc kubenswrapper[4698]: I1014 09:57:36.614014 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:57:36 crc kubenswrapper[4698]: I1014 09:57:36.614074 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:57:36 crc kubenswrapper[4698]: I1014 09:57:36.614091 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:57:36 crc kubenswrapper[4698]: I1014 09:57:36.614119 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:57:36 crc kubenswrapper[4698]: I1014 09:57:36.614136 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:57:36Z","lastTransitionTime":"2025-10-14T09:57:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:57:36 crc kubenswrapper[4698]: I1014 09:57:36.716959 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:57:36 crc kubenswrapper[4698]: I1014 09:57:36.717004 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:57:36 crc kubenswrapper[4698]: I1014 09:57:36.717020 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:57:36 crc kubenswrapper[4698]: I1014 09:57:36.717041 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:57:36 crc kubenswrapper[4698]: I1014 09:57:36.717056 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:57:36Z","lastTransitionTime":"2025-10-14T09:57:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:57:36 crc kubenswrapper[4698]: I1014 09:57:36.819470 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:57:36 crc kubenswrapper[4698]: I1014 09:57:36.819867 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:57:36 crc kubenswrapper[4698]: I1014 09:57:36.820183 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:57:36 crc kubenswrapper[4698]: I1014 09:57:36.820418 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:57:36 crc kubenswrapper[4698]: I1014 09:57:36.820609 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:57:36Z","lastTransitionTime":"2025-10-14T09:57:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:57:36 crc kubenswrapper[4698]: I1014 09:57:36.924281 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:57:36 crc kubenswrapper[4698]: I1014 09:57:36.924619 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:57:36 crc kubenswrapper[4698]: I1014 09:57:36.924755 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:57:36 crc kubenswrapper[4698]: I1014 09:57:36.925104 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:57:36 crc kubenswrapper[4698]: I1014 09:57:36.925240 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:57:36Z","lastTransitionTime":"2025-10-14T09:57:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 09:57:37 crc kubenswrapper[4698]: I1014 09:57:37.016647 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 14 09:57:37 crc kubenswrapper[4698]: I1014 09:57:37.016721 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 14 09:57:37 crc kubenswrapper[4698]: I1014 09:57:37.016754 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 14 09:57:37 crc kubenswrapper[4698]: E1014 09:57:37.016927 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 14 09:57:37 crc kubenswrapper[4698]: E1014 09:57:37.017049 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 14 09:57:37 crc kubenswrapper[4698]: E1014 09:57:37.017121 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 14 09:57:37 crc kubenswrapper[4698]: I1014 09:57:37.028108 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:57:37 crc kubenswrapper[4698]: I1014 09:57:37.028168 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:57:37 crc kubenswrapper[4698]: I1014 09:57:37.028187 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:57:37 crc kubenswrapper[4698]: I1014 09:57:37.028211 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:57:37 crc kubenswrapper[4698]: I1014 09:57:37.028231 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:57:37Z","lastTransitionTime":"2025-10-14T09:57:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:57:37 crc kubenswrapper[4698]: I1014 09:57:37.131996 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:57:37 crc kubenswrapper[4698]: I1014 09:57:37.132139 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:57:37 crc kubenswrapper[4698]: I1014 09:57:37.132159 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:57:37 crc kubenswrapper[4698]: I1014 09:57:37.132182 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:57:37 crc kubenswrapper[4698]: I1014 09:57:37.132198 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:57:37Z","lastTransitionTime":"2025-10-14T09:57:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:57:37 crc kubenswrapper[4698]: I1014 09:57:37.236079 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:57:37 crc kubenswrapper[4698]: I1014 09:57:37.236148 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:57:37 crc kubenswrapper[4698]: I1014 09:57:37.236171 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:57:37 crc kubenswrapper[4698]: I1014 09:57:37.236204 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:57:37 crc kubenswrapper[4698]: I1014 09:57:37.236226 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:57:37Z","lastTransitionTime":"2025-10-14T09:57:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:57:37 crc kubenswrapper[4698]: I1014 09:57:37.339496 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:57:37 crc kubenswrapper[4698]: I1014 09:57:37.339562 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:57:37 crc kubenswrapper[4698]: I1014 09:57:37.339584 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:57:37 crc kubenswrapper[4698]: I1014 09:57:37.339616 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:57:37 crc kubenswrapper[4698]: I1014 09:57:37.339637 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:57:37Z","lastTransitionTime":"2025-10-14T09:57:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:57:37 crc kubenswrapper[4698]: I1014 09:57:37.443415 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:57:37 crc kubenswrapper[4698]: I1014 09:57:37.443461 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:57:37 crc kubenswrapper[4698]: I1014 09:57:37.443477 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:57:37 crc kubenswrapper[4698]: I1014 09:57:37.443502 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:57:37 crc kubenswrapper[4698]: I1014 09:57:37.443518 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:57:37Z","lastTransitionTime":"2025-10-14T09:57:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:57:37 crc kubenswrapper[4698]: I1014 09:57:37.546101 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:57:37 crc kubenswrapper[4698]: I1014 09:57:37.546165 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:57:37 crc kubenswrapper[4698]: I1014 09:57:37.546188 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:57:37 crc kubenswrapper[4698]: I1014 09:57:37.546216 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:57:37 crc kubenswrapper[4698]: I1014 09:57:37.546238 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:57:37Z","lastTransitionTime":"2025-10-14T09:57:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:57:37 crc kubenswrapper[4698]: I1014 09:57:37.648623 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:57:37 crc kubenswrapper[4698]: I1014 09:57:37.648690 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:57:37 crc kubenswrapper[4698]: I1014 09:57:37.648700 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:57:37 crc kubenswrapper[4698]: I1014 09:57:37.648715 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:57:37 crc kubenswrapper[4698]: I1014 09:57:37.648726 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:57:37Z","lastTransitionTime":"2025-10-14T09:57:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:57:37 crc kubenswrapper[4698]: I1014 09:57:37.752970 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:57:37 crc kubenswrapper[4698]: I1014 09:57:37.753045 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:57:37 crc kubenswrapper[4698]: I1014 09:57:37.753067 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:57:37 crc kubenswrapper[4698]: I1014 09:57:37.753097 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:57:37 crc kubenswrapper[4698]: I1014 09:57:37.753119 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:57:37Z","lastTransitionTime":"2025-10-14T09:57:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:57:37 crc kubenswrapper[4698]: I1014 09:57:37.857341 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:57:37 crc kubenswrapper[4698]: I1014 09:57:37.857418 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:57:37 crc kubenswrapper[4698]: I1014 09:57:37.857439 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:57:37 crc kubenswrapper[4698]: I1014 09:57:37.857472 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:57:37 crc kubenswrapper[4698]: I1014 09:57:37.857493 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:57:37Z","lastTransitionTime":"2025-10-14T09:57:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:57:37 crc kubenswrapper[4698]: I1014 09:57:37.959496 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:57:37 crc kubenswrapper[4698]: I1014 09:57:37.959553 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:57:37 crc kubenswrapper[4698]: I1014 09:57:37.959569 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:57:37 crc kubenswrapper[4698]: I1014 09:57:37.959590 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:57:37 crc kubenswrapper[4698]: I1014 09:57:37.959606 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:57:37Z","lastTransitionTime":"2025-10-14T09:57:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 09:57:38 crc kubenswrapper[4698]: I1014 09:57:38.016747 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jbpnj" Oct 14 09:57:38 crc kubenswrapper[4698]: E1014 09:57:38.017019 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-jbpnj" podUID="41f5ac86-35f8-416c-bbfe-1e182975ec5c" Oct 14 09:57:38 crc kubenswrapper[4698]: I1014 09:57:38.063015 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:57:38 crc kubenswrapper[4698]: I1014 09:57:38.063073 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:57:38 crc kubenswrapper[4698]: I1014 09:57:38.063087 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:57:38 crc kubenswrapper[4698]: I1014 09:57:38.063108 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:57:38 crc kubenswrapper[4698]: I1014 09:57:38.063122 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:57:38Z","lastTransitionTime":"2025-10-14T09:57:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:57:38 crc kubenswrapper[4698]: I1014 09:57:38.166497 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:57:38 crc kubenswrapper[4698]: I1014 09:57:38.166561 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:57:38 crc kubenswrapper[4698]: I1014 09:57:38.166585 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:57:38 crc kubenswrapper[4698]: I1014 09:57:38.166617 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:57:38 crc kubenswrapper[4698]: I1014 09:57:38.166639 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:57:38Z","lastTransitionTime":"2025-10-14T09:57:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:57:38 crc kubenswrapper[4698]: I1014 09:57:38.271165 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:57:38 crc kubenswrapper[4698]: I1014 09:57:38.271234 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:57:38 crc kubenswrapper[4698]: I1014 09:57:38.271256 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:57:38 crc kubenswrapper[4698]: I1014 09:57:38.271284 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:57:38 crc kubenswrapper[4698]: I1014 09:57:38.271306 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:57:38Z","lastTransitionTime":"2025-10-14T09:57:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:57:38 crc kubenswrapper[4698]: I1014 09:57:38.374403 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:57:38 crc kubenswrapper[4698]: I1014 09:57:38.374462 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:57:38 crc kubenswrapper[4698]: I1014 09:57:38.374484 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:57:38 crc kubenswrapper[4698]: I1014 09:57:38.374509 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:57:38 crc kubenswrapper[4698]: I1014 09:57:38.374527 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:57:38Z","lastTransitionTime":"2025-10-14T09:57:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:57:38 crc kubenswrapper[4698]: I1014 09:57:38.478309 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:57:38 crc kubenswrapper[4698]: I1014 09:57:38.478393 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:57:38 crc kubenswrapper[4698]: I1014 09:57:38.478417 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:57:38 crc kubenswrapper[4698]: I1014 09:57:38.478448 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:57:38 crc kubenswrapper[4698]: I1014 09:57:38.478469 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:57:38Z","lastTransitionTime":"2025-10-14T09:57:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:57:38 crc kubenswrapper[4698]: I1014 09:57:38.581979 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:57:38 crc kubenswrapper[4698]: I1014 09:57:38.582053 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:57:38 crc kubenswrapper[4698]: I1014 09:57:38.582071 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:57:38 crc kubenswrapper[4698]: I1014 09:57:38.582097 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:57:38 crc kubenswrapper[4698]: I1014 09:57:38.582117 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:57:38Z","lastTransitionTime":"2025-10-14T09:57:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:57:38 crc kubenswrapper[4698]: I1014 09:57:38.685349 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:57:38 crc kubenswrapper[4698]: I1014 09:57:38.685411 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:57:38 crc kubenswrapper[4698]: I1014 09:57:38.685428 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:57:38 crc kubenswrapper[4698]: I1014 09:57:38.685452 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:57:38 crc kubenswrapper[4698]: I1014 09:57:38.685470 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:57:38Z","lastTransitionTime":"2025-10-14T09:57:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:57:38 crc kubenswrapper[4698]: I1014 09:57:38.788385 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:57:38 crc kubenswrapper[4698]: I1014 09:57:38.788453 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:57:38 crc kubenswrapper[4698]: I1014 09:57:38.788477 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:57:38 crc kubenswrapper[4698]: I1014 09:57:38.788504 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:57:38 crc kubenswrapper[4698]: I1014 09:57:38.788521 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:57:38Z","lastTransitionTime":"2025-10-14T09:57:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:57:38 crc kubenswrapper[4698]: I1014 09:57:38.891409 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:57:38 crc kubenswrapper[4698]: I1014 09:57:38.891490 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:57:38 crc kubenswrapper[4698]: I1014 09:57:38.891503 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:57:38 crc kubenswrapper[4698]: I1014 09:57:38.891530 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:57:38 crc kubenswrapper[4698]: I1014 09:57:38.891546 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:57:38Z","lastTransitionTime":"2025-10-14T09:57:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:57:38 crc kubenswrapper[4698]: I1014 09:57:38.994114 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:57:38 crc kubenswrapper[4698]: I1014 09:57:38.994207 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:57:38 crc kubenswrapper[4698]: I1014 09:57:38.994224 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:57:38 crc kubenswrapper[4698]: I1014 09:57:38.994248 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:57:38 crc kubenswrapper[4698]: I1014 09:57:38.994267 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:57:38Z","lastTransitionTime":"2025-10-14T09:57:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 09:57:39 crc kubenswrapper[4698]: I1014 09:57:39.016970 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 14 09:57:39 crc kubenswrapper[4698]: E1014 09:57:39.017206 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 14 09:57:39 crc kubenswrapper[4698]: I1014 09:57:39.016983 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 14 09:57:39 crc kubenswrapper[4698]: I1014 09:57:39.017329 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 14 09:57:39 crc kubenswrapper[4698]: E1014 09:57:39.017576 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 14 09:57:39 crc kubenswrapper[4698]: E1014 09:57:39.017751 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 14 09:57:39 crc kubenswrapper[4698]: I1014 09:57:39.040179 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6bbd953b34291d62c5b40ab6339c4709be89d1983748419cabf1adff245af77f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":
\\\"cri-o://6b5dcb74bf1caafb12296355d176dd1cda1c06ecea8293c0cc36d517213fe6bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T09:57:39Z is after 2025-08-24T17:21:41Z" Oct 14 09:57:39 crc kubenswrapper[4698]: I1014 09:57:39.059906 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T09:57:39Z is after 2025-08-24T17:21:41Z" Oct 14 09:57:39 crc kubenswrapper[4698]: I1014 09:57:39.083254 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-b7cbk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fbf10bbc-318d-4f46-83a0-fdbad9888201\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a0b9fe188f41927a342dc4928eaa5f27d151d9901af15f33ce252bdbfbb3be6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tkghj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T09:57:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-b7cbk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T09:57:39Z is after 2025-08-24T17:21:41Z" Oct 14 09:57:39 crc kubenswrapper[4698]: I1014 09:57:39.096404 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:57:39 crc 
kubenswrapper[4698]: I1014 09:57:39.096569 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:57:39 crc kubenswrapper[4698]: I1014 09:57:39.096598 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:57:39 crc kubenswrapper[4698]: I1014 09:57:39.096629 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:57:39 crc kubenswrapper[4698]: I1014 09:57:39.096651 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:57:39Z","lastTransitionTime":"2025-10-14T09:57:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 09:57:39 crc kubenswrapper[4698]: I1014 09:57:39.103024 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-lp4sk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c359a8fc-1e2f-49af-8da2-719d52bd969a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://082e0edeedb2cabe07e14812f2406a98f39fdf9923f562c33ccc9a761d0e2472\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lzl92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8068aecfbfc156636e404dba3daa36e0bebb92ec
f3c63158eaebf9a03cdc96b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lzl92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T09:57:18Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-lp4sk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T09:57:39Z is after 2025-08-24T17:21:41Z" Oct 14 09:57:39 crc kubenswrapper[4698]: I1014 09:57:39.120897 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-jbpnj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"41f5ac86-35f8-416c-bbfe-1e182975ec5c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:32Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:32Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7f52v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7f52v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T09:57:32Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-jbpnj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T09:57:39Z is after 2025-08-24T17:21:41Z" Oct 14 09:57:39 crc 
kubenswrapper[4698]: I1014 09:57:39.144552 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://046b339570690752cfbe63e358065d3b58ef84c6708729abc1f590ff4c880521\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T09:57:39Z is after 2025-08-24T17:21:41Z" Oct 14 09:57:39 crc kubenswrapper[4698]: I1014 09:57:39.164845 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T09:57:39Z is after 2025-08-24T17:21:41Z" Oct 14 09:57:39 crc kubenswrapper[4698]: I1014 09:57:39.184312 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"32a7378b-ad21-49a8-9382-e7e5fbffb2f5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:56:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:56:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45beb2bbd344fbd265eca554410f31c493b55815bcac8dbb8c30c592b40dcfe2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9a23f4949446d47321258b172e07f6c592cd9921650920f8af93a5d749a61dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:56:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1fc70d614848f0b4bc342659099f9c617db052ebb995716f9cdb1113f4f78901\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://71c3764056ab55aa9f75e0bf62c0c7eed5440e9d580087ca9ce266611dd8c292\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-10-14T09:57:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T09:56:59Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T09:57:39Z is after 2025-08-24T17:21:41Z" Oct 14 09:57:39 crc kubenswrapper[4698]: I1014 09:57:39.199244 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:57:39 crc kubenswrapper[4698]: I1014 09:57:39.199337 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:57:39 crc kubenswrapper[4698]: I1014 09:57:39.199365 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:57:39 crc kubenswrapper[4698]: I1014 09:57:39.199401 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:57:39 crc kubenswrapper[4698]: I1014 09:57:39.199432 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:57:39Z","lastTransitionTime":"2025-10-14T09:57:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 09:57:39 crc kubenswrapper[4698]: I1014 09:57:39.207332 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T09:57:39Z is after 2025-08-24T17:21:41Z" Oct 14 09:57:39 crc kubenswrapper[4698]: I1014 09:57:39.233740 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"32240a01-1692-4f43-aebd-2b04d9b2435e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:56:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:56:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:56:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0fcdc7699fee72d9dbf3ad7df185ac8c884125781080739e4245128af2e41040\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d32efa8c63456d23b3824db4a4debb51fa4d13d1fd5eb6a488b9392d96073435\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://a78e7d6532fda47f50da010b729a9108961a3d5c79191517a86f00714ccb1163\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a013bd7c3d0aeb5289ee602dd8fcbff2b98aae30b99ccc8d87605732b7f469eb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a8cea8367d4864478508221648a52582ee794ca4eb7b48f112ca0d78d97c2f7e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"message\\\":\\\"file observer\\\\nW1014 09:57:17.768150 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1014 09:57:17.768243 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1014 09:57:17.769073 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-133410875/tls.crt::/tmp/serving-cert-133410875/tls.key\\\\\\\"\\\\nI1014 09:57:18.101848 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1014 09:57:18.107433 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1014 09:57:18.107456 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1014 09:57:18.107479 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1014 09:57:18.107484 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1014 09:57:18.112858 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1014 09:57:18.112875 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1014 09:57:18.112883 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1014 09:57:18.112889 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1014 09:57:18.112893 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1014 09:57:18.112896 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1014 09:57:18.112900 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1014 09:57:18.112903 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1014 09:57:18.114133 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-14T09:57:12Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2d12c9c114016d1302fa97515b0a59a82a4524b77e5d22cbec155beb6a52bce\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:00Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf7a250bbbc63f0ce78ae1c27f82643a02dc264c3b83555a4cc72a788eda52d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cf7a250bbbc63f0ce78ae1c27f82643a02dc264c3b83555a4cc72a788eda52d5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T09:56:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2025-10-14T09:56:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T09:56:59Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T09:57:39Z is after 2025-08-24T17:21:41Z" Oct 14 09:57:39 crc kubenswrapper[4698]: I1014 09:57:39.248437 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8rj7q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b3d7ebe7-24ac-4bb6-be80-db147dc1c604\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e44a1413e71cab92d1b5ee4c81cc65edc9925fa1e888d5d5670e55e1b60ee81\\\",\\\"image\\\":\\\"quay.io/openshift-relea
se-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-95sjn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T09:57:18Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8rj7q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T09:57:39Z is after 2025-08-24T17:21:41Z" Oct 14 09:57:39 crc kubenswrapper[4698]: I1014 09:57:39.265204 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aaee81debae3e251e075e61aca075c075d32fd976f1977adecba0c834e89cabb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-10-14T09:57:39Z is after 2025-08-24T17:21:41Z" Oct 14 09:57:39 crc kubenswrapper[4698]: I1014 09:57:39.283040 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5twvn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e9f983f-10a0-43b7-8590-346577a561ef\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://03bc09f64706818fd5d6b0d3eeea206f665d7cdb12ce6ae31c4706a3936dd0f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-a
ccess-h8x7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9090d92cdf10a884e39938f3d7908d63cb91ee3ea7832cb788aecaa47c70d753\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9090d92cdf10a884e39938f3d7908d63cb91ee3ea7832cb788aecaa47c70d753\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T09:57:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T09:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8x7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://20e7809019d6fa43f65e27abe934ed5dfc51a3afb0a2d9391fa4edd67fc03ac9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"
started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://20e7809019d6fa43f65e27abe934ed5dfc51a3afb0a2d9391fa4edd67fc03ac9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T09:57:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8x7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c61e207f4234cf9a59bd787fc476ff4bc2d1abd6f299b2855970981e5783398\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0c61e207f4234cf9a59bd787fc476ff4bc2d1abd6f299b2855970981e5783398\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T09:57:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T09:57:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\
\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8x7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1abeca9df52e5628432a4cb0be0f212a643a644abfe303559ec7d90f3fd3955a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1abeca9df52e5628432a4cb0be0f212a643a644abfe303559ec7d90f3fd3955a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T09:57:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T09:57:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8x7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0654ea71914d478220611495dbefd1e47b1309aa18c6a05b21a4a45f0d8f1ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabout
s-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c0654ea71914d478220611495dbefd1e47b1309aa18c6a05b21a4a45f0d8f1ca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T09:57:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T09:57:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8x7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5307b806fcee628d7552a399a950a096745a8eb67c3f72c5a10854f042b992a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5307b806fcee628d7552a399a950a096745a8eb67c3f72c5a10854f042b992a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T09:57:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T09:57:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8x7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T09:57:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5twvn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T09:57:39Z is after 2025-08-24T17:21:41Z" Oct 14 09:57:39 crc kubenswrapper[4698]: I1014 09:57:39.303619 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:57:39 crc kubenswrapper[4698]: I1014 09:57:39.303676 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:57:39 crc kubenswrapper[4698]: I1014 09:57:39.303693 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:57:39 crc kubenswrapper[4698]: I1014 09:57:39.303717 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:57:39 crc kubenswrapper[4698]: I1014 09:57:39.303736 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:57:39Z","lastTransitionTime":"2025-10-14T09:57:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:57:39 crc kubenswrapper[4698]: I1014 09:57:39.304537 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hspfz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d02f5359-81fc-4261-b995-e58c78bcec0e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://baefa979aee6f8fbe41df9542964f3c7b2a4331ec5111d2f622ec7ca41a0d96e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjlwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://876998f4dcb79a6bc0faa35871730f5d2f9a07f94baf6da29ed9a9e794705ee5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjlwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e5759411e3d5ea6fa7515b4cd1255735c870b8c035fe586a38d2f436b752118\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjlwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6e9733c3f4d25662c9da6201d0feee797de24acebb7d6e7c67d72709aae95db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:21Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjlwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://90b4b75613b321be60e549ab6ab2eaaab57fd6b49aa01f5254a0e83ac2e0167e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjlwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://11274a26e2292cdf2b793e6290657ed3cc737d1454b39f33b8f0272f67f70f3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjlwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://afec3a31416d60bb03430456c28d5bcd69c8fbdd343374e4505c602dac71b3ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://afec3a31416d60bb03430456c28d5bcd69c8fbdd343374e4505c602dac71b3ac\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-14T09:57:30Z\\\",\\\"message\\\":\\\"qos/v1/apis/informers/externalversions/factory.go:140\\\\nI1014 09:57:30.072732 6117 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1014 09:57:30.072819 6117 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1014 09:57:30.072828 6117 handler.go:190] Sending *v1.Namespace event handler 5 for 
removal\\\\nI1014 09:57:30.072868 6117 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1014 09:57:30.072865 6117 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1014 09:57:30.072876 6117 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1014 09:57:30.072883 6117 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1014 09:57:30.072901 6117 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1014 09:57:30.072908 6117 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1014 09:57:30.072893 6117 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1014 09:57:30.072951 6117 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1014 09:57:30.072957 6117 handler.go:208] Removed *v1.Node event handler 2\\\\nI1014 09:57:30.072975 6117 handler.go:208] Removed *v1.Node event handler 7\\\\nI1014 09:57:30.072993 6117 factory.go:656] Stopping watch factory\\\\nI1014 09:57:30.073012 6117 handler.go:208] Removed *v1.EgressIP ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-14T09:57:29Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-hspfz_openshift-ovn-kubernetes(d02f5359-81fc-4261-b995-e58c78bcec0e)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjlwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://626a1ca2a8aa5d6c1b4078a0bc7b0aa540656abcedd8be206fbb93cdc96c1d95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjlwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f7d7f89348e04b617ba4ec3cf76f4fefdd36492c656aed8141f3775a7ca8e3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0f7d7f89348e04b617
ba4ec3cf76f4fefdd36492c656aed8141f3775a7ca8e3b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T09:57:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T09:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjlwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T09:57:18Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hspfz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T09:57:39Z is after 2025-08-24T17:21:41Z" Oct 14 09:57:39 crc kubenswrapper[4698]: I1014 09:57:39.319220 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ndfs7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dba065f3-2084-442a-9f77-a1dfb007aa0a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://41d90487b005df3b6a9b8a3cbb8860b13fcc2ff7ca56979998ab7258f10b681c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wtc5t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7bfa794859b5471a4dfe11ac227eb36674c31
36b170fafdecaf916eb54d63a4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wtc5t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T09:57:31Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-ndfs7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T09:57:39Z is after 2025-08-24T17:21:41Z" Oct 14 09:57:39 crc kubenswrapper[4698]: I1014 09:57:39.329458 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-pfxrp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e00aa977-8736-4b4d-8d58-c3d13879c49a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4adf04967efc193a93227043cfb6bb56f0ec1a7af90e351b7a618e582ddd8c41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c9lxj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T09:57:18Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-pfxrp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T09:57:39Z is after 2025-08-24T17:21:41Z" Oct 14 09:57:39 crc kubenswrapper[4698]: I1014 09:57:39.406701 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:57:39 crc kubenswrapper[4698]: I1014 09:57:39.407002 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:57:39 crc kubenswrapper[4698]: I1014 09:57:39.407010 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:57:39 crc kubenswrapper[4698]: I1014 09:57:39.407022 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:57:39 crc kubenswrapper[4698]: I1014 09:57:39.407032 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:57:39Z","lastTransitionTime":"2025-10-14T09:57:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:57:39 crc kubenswrapper[4698]: I1014 09:57:39.509469 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:57:39 crc kubenswrapper[4698]: I1014 09:57:39.509523 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:57:39 crc kubenswrapper[4698]: I1014 09:57:39.509539 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:57:39 crc kubenswrapper[4698]: I1014 09:57:39.509561 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:57:39 crc kubenswrapper[4698]: I1014 09:57:39.509574 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:57:39Z","lastTransitionTime":"2025-10-14T09:57:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:57:39 crc kubenswrapper[4698]: I1014 09:57:39.611965 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:57:39 crc kubenswrapper[4698]: I1014 09:57:39.612039 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:57:39 crc kubenswrapper[4698]: I1014 09:57:39.612057 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:57:39 crc kubenswrapper[4698]: I1014 09:57:39.612084 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:57:39 crc kubenswrapper[4698]: I1014 09:57:39.612103 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:57:39Z","lastTransitionTime":"2025-10-14T09:57:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:57:39 crc kubenswrapper[4698]: I1014 09:57:39.714235 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:57:39 crc kubenswrapper[4698]: I1014 09:57:39.714276 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:57:39 crc kubenswrapper[4698]: I1014 09:57:39.714289 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:57:39 crc kubenswrapper[4698]: I1014 09:57:39.714305 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:57:39 crc kubenswrapper[4698]: I1014 09:57:39.714316 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:57:39Z","lastTransitionTime":"2025-10-14T09:57:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:57:39 crc kubenswrapper[4698]: I1014 09:57:39.817318 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:57:39 crc kubenswrapper[4698]: I1014 09:57:39.817386 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:57:39 crc kubenswrapper[4698]: I1014 09:57:39.817409 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:57:39 crc kubenswrapper[4698]: I1014 09:57:39.817435 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:57:39 crc kubenswrapper[4698]: I1014 09:57:39.817452 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:57:39Z","lastTransitionTime":"2025-10-14T09:57:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:57:39 crc kubenswrapper[4698]: I1014 09:57:39.851757 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/41f5ac86-35f8-416c-bbfe-1e182975ec5c-metrics-certs\") pod \"network-metrics-daemon-jbpnj\" (UID: \"41f5ac86-35f8-416c-bbfe-1e182975ec5c\") " pod="openshift-multus/network-metrics-daemon-jbpnj" Oct 14 09:57:39 crc kubenswrapper[4698]: E1014 09:57:39.852142 4698 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Oct 14 09:57:39 crc kubenswrapper[4698]: E1014 09:57:39.852326 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/41f5ac86-35f8-416c-bbfe-1e182975ec5c-metrics-certs podName:41f5ac86-35f8-416c-bbfe-1e182975ec5c nodeName:}" failed. No retries permitted until 2025-10-14 09:57:47.85228394 +0000 UTC m=+49.549583416 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/41f5ac86-35f8-416c-bbfe-1e182975ec5c-metrics-certs") pod "network-metrics-daemon-jbpnj" (UID: "41f5ac86-35f8-416c-bbfe-1e182975ec5c") : object "openshift-multus"/"metrics-daemon-secret" not registered Oct 14 09:57:39 crc kubenswrapper[4698]: I1014 09:57:39.920631 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:57:39 crc kubenswrapper[4698]: I1014 09:57:39.920709 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:57:39 crc kubenswrapper[4698]: I1014 09:57:39.920731 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:57:39 crc kubenswrapper[4698]: I1014 09:57:39.920809 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:57:39 crc kubenswrapper[4698]: I1014 09:57:39.920852 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:57:39Z","lastTransitionTime":"2025-10-14T09:57:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 09:57:40 crc kubenswrapper[4698]: I1014 09:57:40.016252 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jbpnj" Oct 14 09:57:40 crc kubenswrapper[4698]: E1014 09:57:40.016456 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jbpnj" podUID="41f5ac86-35f8-416c-bbfe-1e182975ec5c" Oct 14 09:57:40 crc kubenswrapper[4698]: I1014 09:57:40.023958 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:57:40 crc kubenswrapper[4698]: I1014 09:57:40.024039 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:57:40 crc kubenswrapper[4698]: I1014 09:57:40.024066 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:57:40 crc kubenswrapper[4698]: I1014 09:57:40.024100 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:57:40 crc kubenswrapper[4698]: I1014 09:57:40.024124 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:57:40Z","lastTransitionTime":"2025-10-14T09:57:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:57:40 crc kubenswrapper[4698]: I1014 09:57:40.127352 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:57:40 crc kubenswrapper[4698]: I1014 09:57:40.127418 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:57:40 crc kubenswrapper[4698]: I1014 09:57:40.127435 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:57:40 crc kubenswrapper[4698]: I1014 09:57:40.127460 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:57:40 crc kubenswrapper[4698]: I1014 09:57:40.127482 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:57:40Z","lastTransitionTime":"2025-10-14T09:57:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:57:40 crc kubenswrapper[4698]: I1014 09:57:40.230850 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:57:40 crc kubenswrapper[4698]: I1014 09:57:40.230928 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:57:40 crc kubenswrapper[4698]: I1014 09:57:40.230950 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:57:40 crc kubenswrapper[4698]: I1014 09:57:40.230980 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:57:40 crc kubenswrapper[4698]: I1014 09:57:40.231002 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:57:40Z","lastTransitionTime":"2025-10-14T09:57:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:57:40 crc kubenswrapper[4698]: I1014 09:57:40.333818 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:57:40 crc kubenswrapper[4698]: I1014 09:57:40.333910 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:57:40 crc kubenswrapper[4698]: I1014 09:57:40.333930 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:57:40 crc kubenswrapper[4698]: I1014 09:57:40.333961 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:57:40 crc kubenswrapper[4698]: I1014 09:57:40.333979 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:57:40Z","lastTransitionTime":"2025-10-14T09:57:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:57:40 crc kubenswrapper[4698]: I1014 09:57:40.437443 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:57:40 crc kubenswrapper[4698]: I1014 09:57:40.437517 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:57:40 crc kubenswrapper[4698]: I1014 09:57:40.437535 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:57:40 crc kubenswrapper[4698]: I1014 09:57:40.437567 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:57:40 crc kubenswrapper[4698]: I1014 09:57:40.437591 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:57:40Z","lastTransitionTime":"2025-10-14T09:57:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:57:40 crc kubenswrapper[4698]: I1014 09:57:40.540829 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:57:40 crc kubenswrapper[4698]: I1014 09:57:40.540909 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:57:40 crc kubenswrapper[4698]: I1014 09:57:40.540932 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:57:40 crc kubenswrapper[4698]: I1014 09:57:40.540959 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:57:40 crc kubenswrapper[4698]: I1014 09:57:40.540979 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:57:40Z","lastTransitionTime":"2025-10-14T09:57:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:57:40 crc kubenswrapper[4698]: I1014 09:57:40.644226 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:57:40 crc kubenswrapper[4698]: I1014 09:57:40.644271 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:57:40 crc kubenswrapper[4698]: I1014 09:57:40.644285 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:57:40 crc kubenswrapper[4698]: I1014 09:57:40.644302 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:57:40 crc kubenswrapper[4698]: I1014 09:57:40.644314 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:57:40Z","lastTransitionTime":"2025-10-14T09:57:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:57:40 crc kubenswrapper[4698]: I1014 09:57:40.750175 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:57:40 crc kubenswrapper[4698]: I1014 09:57:40.750261 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:57:40 crc kubenswrapper[4698]: I1014 09:57:40.750281 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:57:40 crc kubenswrapper[4698]: I1014 09:57:40.750307 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:57:40 crc kubenswrapper[4698]: I1014 09:57:40.750333 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:57:40Z","lastTransitionTime":"2025-10-14T09:57:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:57:40 crc kubenswrapper[4698]: I1014 09:57:40.853272 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:57:40 crc kubenswrapper[4698]: I1014 09:57:40.853312 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:57:40 crc kubenswrapper[4698]: I1014 09:57:40.853321 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:57:40 crc kubenswrapper[4698]: I1014 09:57:40.853335 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:57:40 crc kubenswrapper[4698]: I1014 09:57:40.853345 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:57:40Z","lastTransitionTime":"2025-10-14T09:57:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:57:40 crc kubenswrapper[4698]: I1014 09:57:40.956059 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:57:40 crc kubenswrapper[4698]: I1014 09:57:40.956107 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:57:40 crc kubenswrapper[4698]: I1014 09:57:40.956119 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:57:40 crc kubenswrapper[4698]: I1014 09:57:40.956135 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:57:40 crc kubenswrapper[4698]: I1014 09:57:40.956146 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:57:40Z","lastTransitionTime":"2025-10-14T09:57:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 09:57:41 crc kubenswrapper[4698]: I1014 09:57:41.016161 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 14 09:57:41 crc kubenswrapper[4698]: I1014 09:57:41.016157 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 14 09:57:41 crc kubenswrapper[4698]: E1014 09:57:41.016810 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 14 09:57:41 crc kubenswrapper[4698]: I1014 09:57:41.016200 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 14 09:57:41 crc kubenswrapper[4698]: E1014 09:57:41.016969 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 14 09:57:41 crc kubenswrapper[4698]: E1014 09:57:41.016619 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 14 09:57:41 crc kubenswrapper[4698]: I1014 09:57:41.058525 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:57:41 crc kubenswrapper[4698]: I1014 09:57:41.058592 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:57:41 crc kubenswrapper[4698]: I1014 09:57:41.058632 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:57:41 crc kubenswrapper[4698]: I1014 09:57:41.058663 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:57:41 crc kubenswrapper[4698]: I1014 09:57:41.058686 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:57:41Z","lastTransitionTime":"2025-10-14T09:57:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:57:41 crc kubenswrapper[4698]: I1014 09:57:41.161574 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:57:41 crc kubenswrapper[4698]: I1014 09:57:41.161644 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:57:41 crc kubenswrapper[4698]: I1014 09:57:41.161669 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:57:41 crc kubenswrapper[4698]: I1014 09:57:41.161694 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:57:41 crc kubenswrapper[4698]: I1014 09:57:41.161712 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:57:41Z","lastTransitionTime":"2025-10-14T09:57:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:57:41 crc kubenswrapper[4698]: I1014 09:57:41.265382 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:57:41 crc kubenswrapper[4698]: I1014 09:57:41.265452 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:57:41 crc kubenswrapper[4698]: I1014 09:57:41.265470 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:57:41 crc kubenswrapper[4698]: I1014 09:57:41.265494 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:57:41 crc kubenswrapper[4698]: I1014 09:57:41.265513 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:57:41Z","lastTransitionTime":"2025-10-14T09:57:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:57:41 crc kubenswrapper[4698]: I1014 09:57:41.369210 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:57:41 crc kubenswrapper[4698]: I1014 09:57:41.369276 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:57:41 crc kubenswrapper[4698]: I1014 09:57:41.369299 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:57:41 crc kubenswrapper[4698]: I1014 09:57:41.369330 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:57:41 crc kubenswrapper[4698]: I1014 09:57:41.369358 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:57:41Z","lastTransitionTime":"2025-10-14T09:57:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:57:41 crc kubenswrapper[4698]: I1014 09:57:41.472522 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:57:41 crc kubenswrapper[4698]: I1014 09:57:41.472604 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:57:41 crc kubenswrapper[4698]: I1014 09:57:41.472627 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:57:41 crc kubenswrapper[4698]: I1014 09:57:41.472660 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:57:41 crc kubenswrapper[4698]: I1014 09:57:41.472685 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:57:41Z","lastTransitionTime":"2025-10-14T09:57:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:57:41 crc kubenswrapper[4698]: I1014 09:57:41.575316 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:57:41 crc kubenswrapper[4698]: I1014 09:57:41.575384 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:57:41 crc kubenswrapper[4698]: I1014 09:57:41.575410 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:57:41 crc kubenswrapper[4698]: I1014 09:57:41.575442 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:57:41 crc kubenswrapper[4698]: I1014 09:57:41.575467 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:57:41Z","lastTransitionTime":"2025-10-14T09:57:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:57:41 crc kubenswrapper[4698]: I1014 09:57:41.677989 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:57:41 crc kubenswrapper[4698]: I1014 09:57:41.678069 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:57:41 crc kubenswrapper[4698]: I1014 09:57:41.678085 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:57:41 crc kubenswrapper[4698]: I1014 09:57:41.678105 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:57:41 crc kubenswrapper[4698]: I1014 09:57:41.678121 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:57:41Z","lastTransitionTime":"2025-10-14T09:57:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:57:41 crc kubenswrapper[4698]: I1014 09:57:41.780992 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:57:41 crc kubenswrapper[4698]: I1014 09:57:41.781033 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:57:41 crc kubenswrapper[4698]: I1014 09:57:41.781045 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:57:41 crc kubenswrapper[4698]: I1014 09:57:41.781062 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:57:41 crc kubenswrapper[4698]: I1014 09:57:41.781074 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:57:41Z","lastTransitionTime":"2025-10-14T09:57:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:57:41 crc kubenswrapper[4698]: I1014 09:57:41.884228 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:57:41 crc kubenswrapper[4698]: I1014 09:57:41.884302 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:57:41 crc kubenswrapper[4698]: I1014 09:57:41.884322 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:57:41 crc kubenswrapper[4698]: I1014 09:57:41.884346 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:57:41 crc kubenswrapper[4698]: I1014 09:57:41.884363 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:57:41Z","lastTransitionTime":"2025-10-14T09:57:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:57:41 crc kubenswrapper[4698]: I1014 09:57:41.987226 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:57:41 crc kubenswrapper[4698]: I1014 09:57:41.987296 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:57:41 crc kubenswrapper[4698]: I1014 09:57:41.987315 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:57:41 crc kubenswrapper[4698]: I1014 09:57:41.987342 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:57:41 crc kubenswrapper[4698]: I1014 09:57:41.987360 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:57:41Z","lastTransitionTime":"2025-10-14T09:57:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 09:57:42 crc kubenswrapper[4698]: I1014 09:57:42.016591 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jbpnj" Oct 14 09:57:42 crc kubenswrapper[4698]: E1014 09:57:42.016900 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-jbpnj" podUID="41f5ac86-35f8-416c-bbfe-1e182975ec5c" Oct 14 09:57:42 crc kubenswrapper[4698]: I1014 09:57:42.090711 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:57:42 crc kubenswrapper[4698]: I1014 09:57:42.090836 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:57:42 crc kubenswrapper[4698]: I1014 09:57:42.090857 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:57:42 crc kubenswrapper[4698]: I1014 09:57:42.090885 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:57:42 crc kubenswrapper[4698]: I1014 09:57:42.090904 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:57:42Z","lastTransitionTime":"2025-10-14T09:57:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:57:42 crc kubenswrapper[4698]: I1014 09:57:42.194386 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:57:42 crc kubenswrapper[4698]: I1014 09:57:42.194443 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:57:42 crc kubenswrapper[4698]: I1014 09:57:42.194461 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:57:42 crc kubenswrapper[4698]: I1014 09:57:42.194491 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:57:42 crc kubenswrapper[4698]: I1014 09:57:42.194509 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:57:42Z","lastTransitionTime":"2025-10-14T09:57:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:57:42 crc kubenswrapper[4698]: I1014 09:57:42.297837 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:57:42 crc kubenswrapper[4698]: I1014 09:57:42.297904 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:57:42 crc kubenswrapper[4698]: I1014 09:57:42.297923 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:57:42 crc kubenswrapper[4698]: I1014 09:57:42.297945 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:57:42 crc kubenswrapper[4698]: I1014 09:57:42.297963 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:57:42Z","lastTransitionTime":"2025-10-14T09:57:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:57:42 crc kubenswrapper[4698]: I1014 09:57:42.392000 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:57:42 crc kubenswrapper[4698]: I1014 09:57:42.392060 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:57:42 crc kubenswrapper[4698]: I1014 09:57:42.392086 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:57:42 crc kubenswrapper[4698]: I1014 09:57:42.392117 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:57:42 crc kubenswrapper[4698]: I1014 09:57:42.392140 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:57:42Z","lastTransitionTime":"2025-10-14T09:57:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:57:42 crc kubenswrapper[4698]: E1014 09:57:42.412518 4698 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-14T09:57:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-14T09:57:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:42Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-14T09:57:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-14T09:57:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:42Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ec98e803-8937-4fdb-8662-3488c6a305f2\\\",\\\"systemUUID\\\":\\\"8e872109-adee-4b6d-91bf-d9ced28af93f\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T09:57:42Z is after 2025-08-24T17:21:41Z" Oct 14 09:57:42 crc kubenswrapper[4698]: I1014 09:57:42.418239 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:57:42 crc kubenswrapper[4698]: I1014 09:57:42.418291 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:57:42 crc kubenswrapper[4698]: I1014 09:57:42.418312 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:57:42 crc kubenswrapper[4698]: I1014 09:57:42.418336 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:57:42 crc kubenswrapper[4698]: I1014 09:57:42.418353 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:57:42Z","lastTransitionTime":"2025-10-14T09:57:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:57:42 crc kubenswrapper[4698]: E1014 09:57:42.438866 4698 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-14T09:57:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-14T09:57:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:42Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-14T09:57:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-14T09:57:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:42Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ec98e803-8937-4fdb-8662-3488c6a305f2\\\",\\\"systemUUID\\\":\\\"8e872109-adee-4b6d-91bf-d9ced28af93f\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T09:57:42Z is after 2025-08-24T17:21:41Z" Oct 14 09:57:42 crc kubenswrapper[4698]: I1014 09:57:42.443685 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:57:42 crc kubenswrapper[4698]: I1014 09:57:42.443804 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:57:42 crc kubenswrapper[4698]: I1014 09:57:42.443833 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:57:42 crc kubenswrapper[4698]: I1014 09:57:42.443865 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:57:42 crc kubenswrapper[4698]: I1014 09:57:42.443885 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:57:42Z","lastTransitionTime":"2025-10-14T09:57:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:57:42 crc kubenswrapper[4698]: E1014 09:57:42.464052 4698 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-14T09:57:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-14T09:57:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:42Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-14T09:57:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-14T09:57:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:42Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ec98e803-8937-4fdb-8662-3488c6a305f2\\\",\\\"systemUUID\\\":\\\"8e872109-adee-4b6d-91bf-d9ced28af93f\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T09:57:42Z is after 2025-08-24T17:21:41Z" Oct 14 09:57:42 crc kubenswrapper[4698]: I1014 09:57:42.472969 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:57:42 crc kubenswrapper[4698]: I1014 09:57:42.474065 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:57:42 crc kubenswrapper[4698]: I1014 09:57:42.474111 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:57:42 crc kubenswrapper[4698]: I1014 09:57:42.474151 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:57:42 crc kubenswrapper[4698]: I1014 09:57:42.474169 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:57:42Z","lastTransitionTime":"2025-10-14T09:57:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:57:42 crc kubenswrapper[4698]: E1014 09:57:42.494683 4698 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-14T09:57:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-14T09:57:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:42Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-14T09:57:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-14T09:57:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:42Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ec98e803-8937-4fdb-8662-3488c6a305f2\\\",\\\"systemUUID\\\":\\\"8e872109-adee-4b6d-91bf-d9ced28af93f\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T09:57:42Z is after 2025-08-24T17:21:41Z" Oct 14 09:57:42 crc kubenswrapper[4698]: I1014 09:57:42.499828 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:57:42 crc kubenswrapper[4698]: I1014 09:57:42.499890 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:57:42 crc kubenswrapper[4698]: I1014 09:57:42.499913 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:57:42 crc kubenswrapper[4698]: I1014 09:57:42.499942 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:57:42 crc kubenswrapper[4698]: I1014 09:57:42.499964 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:57:42Z","lastTransitionTime":"2025-10-14T09:57:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:57:42 crc kubenswrapper[4698]: E1014 09:57:42.519958 4698 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-14T09:57:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-14T09:57:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:42Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-14T09:57:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-14T09:57:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:42Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ec98e803-8937-4fdb-8662-3488c6a305f2\\\",\\\"systemUUID\\\":\\\"8e872109-adee-4b6d-91bf-d9ced28af93f\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T09:57:42Z is after 2025-08-24T17:21:41Z" Oct 14 09:57:42 crc kubenswrapper[4698]: E1014 09:57:42.520197 4698 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Oct 14 09:57:42 crc kubenswrapper[4698]: I1014 09:57:42.522655 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:57:42 crc kubenswrapper[4698]: I1014 09:57:42.522708 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:57:42 crc kubenswrapper[4698]: I1014 09:57:42.522725 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:57:42 crc kubenswrapper[4698]: I1014 09:57:42.522752 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:57:42 crc kubenswrapper[4698]: I1014 09:57:42.522802 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:57:42Z","lastTransitionTime":"2025-10-14T09:57:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 09:57:42 crc kubenswrapper[4698]: I1014 09:57:42.626230 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:57:42 crc kubenswrapper[4698]: I1014 09:57:42.626287 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:57:42 crc kubenswrapper[4698]: I1014 09:57:42.626309 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:57:42 crc kubenswrapper[4698]: I1014 09:57:42.626336 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:57:42 crc kubenswrapper[4698]: I1014 09:57:42.626356 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:57:42Z","lastTransitionTime":"2025-10-14T09:57:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:57:42 crc kubenswrapper[4698]: I1014 09:57:42.728867 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:57:42 crc kubenswrapper[4698]: I1014 09:57:42.728937 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:57:42 crc kubenswrapper[4698]: I1014 09:57:42.728956 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:57:42 crc kubenswrapper[4698]: I1014 09:57:42.728983 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:57:42 crc kubenswrapper[4698]: I1014 09:57:42.729001 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:57:42Z","lastTransitionTime":"2025-10-14T09:57:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:57:42 crc kubenswrapper[4698]: I1014 09:57:42.832236 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:57:42 crc kubenswrapper[4698]: I1014 09:57:42.832297 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:57:42 crc kubenswrapper[4698]: I1014 09:57:42.832314 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:57:42 crc kubenswrapper[4698]: I1014 09:57:42.832341 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:57:42 crc kubenswrapper[4698]: I1014 09:57:42.832360 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:57:42Z","lastTransitionTime":"2025-10-14T09:57:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:57:42 crc kubenswrapper[4698]: I1014 09:57:42.934974 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:57:42 crc kubenswrapper[4698]: I1014 09:57:42.935038 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:57:42 crc kubenswrapper[4698]: I1014 09:57:42.935055 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:57:42 crc kubenswrapper[4698]: I1014 09:57:42.935083 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:57:42 crc kubenswrapper[4698]: I1014 09:57:42.935101 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:57:42Z","lastTransitionTime":"2025-10-14T09:57:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 09:57:43 crc kubenswrapper[4698]: I1014 09:57:43.016033 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 14 09:57:43 crc kubenswrapper[4698]: I1014 09:57:43.016107 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 14 09:57:43 crc kubenswrapper[4698]: I1014 09:57:43.016038 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 14 09:57:43 crc kubenswrapper[4698]: E1014 09:57:43.016217 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 14 09:57:43 crc kubenswrapper[4698]: E1014 09:57:43.016391 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 14 09:57:43 crc kubenswrapper[4698]: E1014 09:57:43.016583 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 14 09:57:43 crc kubenswrapper[4698]: I1014 09:57:43.037838 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:57:43 crc kubenswrapper[4698]: I1014 09:57:43.037912 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:57:43 crc kubenswrapper[4698]: I1014 09:57:43.037932 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:57:43 crc kubenswrapper[4698]: I1014 09:57:43.037958 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:57:43 crc kubenswrapper[4698]: I1014 09:57:43.037976 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:57:43Z","lastTransitionTime":"2025-10-14T09:57:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:57:43 crc kubenswrapper[4698]: I1014 09:57:43.141149 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:57:43 crc kubenswrapper[4698]: I1014 09:57:43.141205 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:57:43 crc kubenswrapper[4698]: I1014 09:57:43.141223 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:57:43 crc kubenswrapper[4698]: I1014 09:57:43.141247 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:57:43 crc kubenswrapper[4698]: I1014 09:57:43.141263 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:57:43Z","lastTransitionTime":"2025-10-14T09:57:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:57:43 crc kubenswrapper[4698]: I1014 09:57:43.244384 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:57:43 crc kubenswrapper[4698]: I1014 09:57:43.244469 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:57:43 crc kubenswrapper[4698]: I1014 09:57:43.244491 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:57:43 crc kubenswrapper[4698]: I1014 09:57:43.244518 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:57:43 crc kubenswrapper[4698]: I1014 09:57:43.244535 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:57:43Z","lastTransitionTime":"2025-10-14T09:57:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:57:43 crc kubenswrapper[4698]: I1014 09:57:43.347245 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:57:43 crc kubenswrapper[4698]: I1014 09:57:43.347303 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:57:43 crc kubenswrapper[4698]: I1014 09:57:43.347320 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:57:43 crc kubenswrapper[4698]: I1014 09:57:43.347345 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:57:43 crc kubenswrapper[4698]: I1014 09:57:43.347362 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:57:43Z","lastTransitionTime":"2025-10-14T09:57:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:57:43 crc kubenswrapper[4698]: I1014 09:57:43.449998 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:57:43 crc kubenswrapper[4698]: I1014 09:57:43.450052 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:57:43 crc kubenswrapper[4698]: I1014 09:57:43.450070 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:57:43 crc kubenswrapper[4698]: I1014 09:57:43.450096 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:57:43 crc kubenswrapper[4698]: I1014 09:57:43.450122 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:57:43Z","lastTransitionTime":"2025-10-14T09:57:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:57:43 crc kubenswrapper[4698]: I1014 09:57:43.552683 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:57:43 crc kubenswrapper[4698]: I1014 09:57:43.552739 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:57:43 crc kubenswrapper[4698]: I1014 09:57:43.552750 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:57:43 crc kubenswrapper[4698]: I1014 09:57:43.552783 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:57:43 crc kubenswrapper[4698]: I1014 09:57:43.552795 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:57:43Z","lastTransitionTime":"2025-10-14T09:57:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:57:43 crc kubenswrapper[4698]: I1014 09:57:43.655754 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:57:43 crc kubenswrapper[4698]: I1014 09:57:43.655872 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:57:43 crc kubenswrapper[4698]: I1014 09:57:43.655900 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:57:43 crc kubenswrapper[4698]: I1014 09:57:43.655923 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:57:43 crc kubenswrapper[4698]: I1014 09:57:43.655939 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:57:43Z","lastTransitionTime":"2025-10-14T09:57:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:57:43 crc kubenswrapper[4698]: I1014 09:57:43.759161 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:57:43 crc kubenswrapper[4698]: I1014 09:57:43.759226 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:57:43 crc kubenswrapper[4698]: I1014 09:57:43.759244 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:57:43 crc kubenswrapper[4698]: I1014 09:57:43.759269 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:57:43 crc kubenswrapper[4698]: I1014 09:57:43.759288 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:57:43Z","lastTransitionTime":"2025-10-14T09:57:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:57:43 crc kubenswrapper[4698]: I1014 09:57:43.862993 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:57:43 crc kubenswrapper[4698]: I1014 09:57:43.863071 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:57:43 crc kubenswrapper[4698]: I1014 09:57:43.863090 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:57:43 crc kubenswrapper[4698]: I1014 09:57:43.863115 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:57:43 crc kubenswrapper[4698]: I1014 09:57:43.863137 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:57:43Z","lastTransitionTime":"2025-10-14T09:57:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:57:43 crc kubenswrapper[4698]: I1014 09:57:43.966305 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:57:43 crc kubenswrapper[4698]: I1014 09:57:43.966445 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:57:43 crc kubenswrapper[4698]: I1014 09:57:43.966466 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:57:43 crc kubenswrapper[4698]: I1014 09:57:43.966491 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:57:43 crc kubenswrapper[4698]: I1014 09:57:43.966508 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:57:43Z","lastTransitionTime":"2025-10-14T09:57:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 09:57:44 crc kubenswrapper[4698]: I1014 09:57:44.016239 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jbpnj" Oct 14 09:57:44 crc kubenswrapper[4698]: E1014 09:57:44.016444 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-jbpnj" podUID="41f5ac86-35f8-416c-bbfe-1e182975ec5c" Oct 14 09:57:44 crc kubenswrapper[4698]: I1014 09:57:44.069735 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:57:44 crc kubenswrapper[4698]: I1014 09:57:44.069824 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:57:44 crc kubenswrapper[4698]: I1014 09:57:44.069844 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:57:44 crc kubenswrapper[4698]: I1014 09:57:44.069869 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:57:44 crc kubenswrapper[4698]: I1014 09:57:44.069886 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:57:44Z","lastTransitionTime":"2025-10-14T09:57:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:57:44 crc kubenswrapper[4698]: I1014 09:57:44.172888 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:57:44 crc kubenswrapper[4698]: I1014 09:57:44.172939 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:57:44 crc kubenswrapper[4698]: I1014 09:57:44.172950 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:57:44 crc kubenswrapper[4698]: I1014 09:57:44.172968 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:57:44 crc kubenswrapper[4698]: I1014 09:57:44.172981 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:57:44Z","lastTransitionTime":"2025-10-14T09:57:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:57:44 crc kubenswrapper[4698]: I1014 09:57:44.276258 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:57:44 crc kubenswrapper[4698]: I1014 09:57:44.276310 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:57:44 crc kubenswrapper[4698]: I1014 09:57:44.276324 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:57:44 crc kubenswrapper[4698]: I1014 09:57:44.276353 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:57:44 crc kubenswrapper[4698]: I1014 09:57:44.276366 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:57:44Z","lastTransitionTime":"2025-10-14T09:57:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:57:44 crc kubenswrapper[4698]: I1014 09:57:44.379408 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:57:44 crc kubenswrapper[4698]: I1014 09:57:44.379477 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:57:44 crc kubenswrapper[4698]: I1014 09:57:44.379495 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:57:44 crc kubenswrapper[4698]: I1014 09:57:44.379520 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:57:44 crc kubenswrapper[4698]: I1014 09:57:44.379538 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:57:44Z","lastTransitionTime":"2025-10-14T09:57:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:57:44 crc kubenswrapper[4698]: I1014 09:57:44.483291 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:57:44 crc kubenswrapper[4698]: I1014 09:57:44.483377 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:57:44 crc kubenswrapper[4698]: I1014 09:57:44.483401 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:57:44 crc kubenswrapper[4698]: I1014 09:57:44.483434 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:57:44 crc kubenswrapper[4698]: I1014 09:57:44.483459 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:57:44Z","lastTransitionTime":"2025-10-14T09:57:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:57:44 crc kubenswrapper[4698]: I1014 09:57:44.586253 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:57:44 crc kubenswrapper[4698]: I1014 09:57:44.586302 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:57:44 crc kubenswrapper[4698]: I1014 09:57:44.586314 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:57:44 crc kubenswrapper[4698]: I1014 09:57:44.586331 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:57:44 crc kubenswrapper[4698]: I1014 09:57:44.586342 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:57:44Z","lastTransitionTime":"2025-10-14T09:57:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:57:44 crc kubenswrapper[4698]: I1014 09:57:44.689837 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:57:44 crc kubenswrapper[4698]: I1014 09:57:44.689903 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:57:44 crc kubenswrapper[4698]: I1014 09:57:44.689919 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:57:44 crc kubenswrapper[4698]: I1014 09:57:44.689947 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:57:44 crc kubenswrapper[4698]: I1014 09:57:44.689971 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:57:44Z","lastTransitionTime":"2025-10-14T09:57:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:57:44 crc kubenswrapper[4698]: I1014 09:57:44.793278 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:57:44 crc kubenswrapper[4698]: I1014 09:57:44.793358 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:57:44 crc kubenswrapper[4698]: I1014 09:57:44.793386 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:57:44 crc kubenswrapper[4698]: I1014 09:57:44.793417 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:57:44 crc kubenswrapper[4698]: I1014 09:57:44.793439 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:57:44Z","lastTransitionTime":"2025-10-14T09:57:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:57:44 crc kubenswrapper[4698]: I1014 09:57:44.896856 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:57:44 crc kubenswrapper[4698]: I1014 09:57:44.896909 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:57:44 crc kubenswrapper[4698]: I1014 09:57:44.896925 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:57:44 crc kubenswrapper[4698]: I1014 09:57:44.896951 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:57:44 crc kubenswrapper[4698]: I1014 09:57:44.896968 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:57:44Z","lastTransitionTime":"2025-10-14T09:57:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:57:44 crc kubenswrapper[4698]: I1014 09:57:44.999923 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:57:45 crc kubenswrapper[4698]: I1014 09:57:45.000390 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:57:45 crc kubenswrapper[4698]: I1014 09:57:45.000654 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:57:45 crc kubenswrapper[4698]: I1014 09:57:45.000899 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:57:45 crc kubenswrapper[4698]: I1014 09:57:45.001088 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:57:45Z","lastTransitionTime":"2025-10-14T09:57:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 09:57:45 crc kubenswrapper[4698]: I1014 09:57:45.016840 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 14 09:57:45 crc kubenswrapper[4698]: I1014 09:57:45.016861 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 14 09:57:45 crc kubenswrapper[4698]: I1014 09:57:45.016959 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 14 09:57:45 crc kubenswrapper[4698]: E1014 09:57:45.017499 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 14 09:57:45 crc kubenswrapper[4698]: E1014 09:57:45.017663 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 14 09:57:45 crc kubenswrapper[4698]: E1014 09:57:45.017817 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 14 09:57:45 crc kubenswrapper[4698]: I1014 09:57:45.018144 4698 scope.go:117] "RemoveContainer" containerID="afec3a31416d60bb03430456c28d5bcd69c8fbdd343374e4505c602dac71b3ac" Oct 14 09:57:45 crc kubenswrapper[4698]: I1014 09:57:45.056643 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-hspfz" Oct 14 09:57:45 crc kubenswrapper[4698]: I1014 09:57:45.104915 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:57:45 crc kubenswrapper[4698]: I1014 09:57:45.104982 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:57:45 crc kubenswrapper[4698]: I1014 09:57:45.105001 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:57:45 crc kubenswrapper[4698]: I1014 09:57:45.105028 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:57:45 crc kubenswrapper[4698]: I1014 09:57:45.105048 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:57:45Z","lastTransitionTime":"2025-10-14T09:57:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:57:45 crc kubenswrapper[4698]: I1014 09:57:45.209281 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:57:45 crc kubenswrapper[4698]: I1014 09:57:45.209326 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:57:45 crc kubenswrapper[4698]: I1014 09:57:45.209343 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:57:45 crc kubenswrapper[4698]: I1014 09:57:45.209369 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:57:45 crc kubenswrapper[4698]: I1014 09:57:45.209381 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:57:45Z","lastTransitionTime":"2025-10-14T09:57:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:57:45 crc kubenswrapper[4698]: I1014 09:57:45.312615 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:57:45 crc kubenswrapper[4698]: I1014 09:57:45.312681 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:57:45 crc kubenswrapper[4698]: I1014 09:57:45.312700 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:57:45 crc kubenswrapper[4698]: I1014 09:57:45.312723 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:57:45 crc kubenswrapper[4698]: I1014 09:57:45.312737 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:57:45Z","lastTransitionTime":"2025-10-14T09:57:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:57:45 crc kubenswrapper[4698]: I1014 09:57:45.347459 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-hspfz_d02f5359-81fc-4261-b995-e58c78bcec0e/ovnkube-controller/1.log" Oct 14 09:57:45 crc kubenswrapper[4698]: I1014 09:57:45.352243 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hspfz" event={"ID":"d02f5359-81fc-4261-b995-e58c78bcec0e","Type":"ContainerStarted","Data":"2fb5dca43ec621632afbbbe21069c7f48b2b98b12882cb4ff5b9bf60ddf6bce6"} Oct 14 09:57:45 crc kubenswrapper[4698]: I1014 09:57:45.353256 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-hspfz" Oct 14 09:57:45 crc kubenswrapper[4698]: I1014 09:57:45.375280 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://046b339570690752cfbe63e358065d3b58ef84c6708729abc1f590ff4c880521\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa388
11c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T09:57:45Z is after 2025-08-24T17:21:41Z" Oct 14 09:57:45 crc kubenswrapper[4698]: I1014 09:57:45.392254 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T09:57:45Z is after 2025-08-24T17:21:41Z" Oct 14 09:57:45 crc kubenswrapper[4698]: I1014 09:57:45.414416 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6bbd953b34291d62c5b40ab6339c4709be89d1983748419cabf1adff245af77f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b5dcb74bf1caafb12296355d176dd1cda1c06ecea8293c0cc36d517213fe6bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T09:57:45Z is after 2025-08-24T17:21:41Z" Oct 14 09:57:45 crc kubenswrapper[4698]: I1014 09:57:45.416127 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:57:45 crc kubenswrapper[4698]: I1014 09:57:45.416186 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:57:45 crc kubenswrapper[4698]: I1014 09:57:45.416206 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:57:45 crc kubenswrapper[4698]: I1014 09:57:45.416231 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:57:45 crc kubenswrapper[4698]: I1014 09:57:45.416250 4698 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:57:45Z","lastTransitionTime":"2025-10-14T09:57:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 09:57:45 crc kubenswrapper[4698]: I1014 09:57:45.436905 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T09:57:45Z is after 2025-08-24T17:21:41Z" Oct 14 09:57:45 crc kubenswrapper[4698]: I1014 09:57:45.457506 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-b7cbk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fbf10bbc-318d-4f46-83a0-fdbad9888201\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a0b9fe188f41927a342dc4928eaa5f27d151d9901af15f33ce252bdbfbb3be6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tkghj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T09:57:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-b7cbk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T09:57:45Z is after 2025-08-24T17:21:41Z" Oct 14 09:57:45 crc kubenswrapper[4698]: I1014 09:57:45.478853 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-lp4sk" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c359a8fc-1e2f-49af-8da2-719d52bd969a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://082e0edeedb2cabe07e14812f2406a98f39fdf9923f562c33ccc9a761d0e2472\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lzl92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8068aecfbfc1
56636e404dba3daa36e0bebb92ecf3c63158eaebf9a03cdc96b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lzl92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T09:57:18Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-lp4sk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T09:57:45Z is after 2025-08-24T17:21:41Z" Oct 14 09:57:45 crc kubenswrapper[4698]: I1014 09:57:45.498346 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-jbpnj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"41f5ac86-35f8-416c-bbfe-1e182975ec5c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:32Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:32Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7f52v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7f52v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T09:57:32Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-jbpnj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T09:57:45Z is after 2025-08-24T17:21:41Z" Oct 14 09:57:45 crc 
kubenswrapper[4698]: I1014 09:57:45.518848 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:57:45 crc kubenswrapper[4698]: I1014 09:57:45.518896 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:57:45 crc kubenswrapper[4698]: I1014 09:57:45.518913 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:57:45 crc kubenswrapper[4698]: I1014 09:57:45.518936 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:57:45 crc kubenswrapper[4698]: I1014 09:57:45.518953 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:57:45Z","lastTransitionTime":"2025-10-14T09:57:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:57:45 crc kubenswrapper[4698]: I1014 09:57:45.526848 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"32240a01-1692-4f43-aebd-2b04d9b2435e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:56:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:56:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:56:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0fcdc7699fee72d9dbf3ad7df185ac8c884125781080739e4245128af2e41040\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d32efa8c63456d23b3824db4a4debb51fa4d13d1fd5eb6a488b9392d96073435\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://a78e7d6532fda47f50da010b729a9108961a3d5c79191517a86f00714ccb1163\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a013bd7c3d0aeb5289ee602dd8fcbff2b98aae30b99ccc8d87605732b7f469eb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a8cea8367d4864478508221648a52582ee794ca4eb7b48f112ca0d78d97c2f7e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"message\\\":\\\"file observer\\\\nW1014 09:57:17.768150 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1014 09:57:17.768243 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1014 09:57:17.769073 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-133410875/tls.crt::/tmp/serving-cert-133410875/tls.key\\\\\\\"\\\\nI1014 09:57:18.101848 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1014 09:57:18.107433 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1014 09:57:18.107456 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1014 09:57:18.107479 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1014 09:57:18.107484 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1014 09:57:18.112858 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1014 09:57:18.112875 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1014 09:57:18.112883 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1014 09:57:18.112889 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1014 09:57:18.112893 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1014 09:57:18.112896 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1014 09:57:18.112900 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1014 09:57:18.112903 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1014 09:57:18.114133 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-14T09:57:12Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2d12c9c114016d1302fa97515b0a59a82a4524b77e5d22cbec155beb6a52bce\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:00Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf7a250bbbc63f0ce78ae1c27f82643a02dc264c3b83555a4cc72a788eda52d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cf7a250bbbc63f0ce78ae1c27f82643a02dc264c3b83555a4cc72a788eda52d5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T09:56:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2025-10-14T09:56:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T09:56:59Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T09:57:45Z is after 2025-08-24T17:21:41Z" Oct 14 09:57:45 crc kubenswrapper[4698]: I1014 09:57:45.552188 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"32a7378b-ad21-49a8-9382-e7e5fbffb2f5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:56:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:56:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45beb2bbd344fbd265eca554410f31c493b55815bcac8dbb8c30c592b40dcfe2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9a
d6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9a23f4949446d47321258b172e07f6c592cd9921650920f8af93a5d749a61dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:56:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1fc70d614848f0b4bc342659099f9c617db052ebb995716f9cdb1113f4f78901\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true
,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://71c3764056ab55aa9f75e0bf62c0c7eed5440e9d580087ca9ce266611dd8c292\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T09:56:59Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T09:57:45Z is after 2025-08-24T17:21:41Z" Oct 14 09:57:45 crc kubenswrapper[4698]: I1014 09:57:45.576760 4698 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T09:57:45Z is after 2025-08-24T17:21:41Z" Oct 14 09:57:45 crc kubenswrapper[4698]: I1014 09:57:45.596923 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aaee81debae3e251e075e61aca075c075d32fd976f1977adecba0c834e89cabb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-10-14T09:57:45Z is after 2025-08-24T17:21:41Z" Oct 14 09:57:45 crc kubenswrapper[4698]: I1014 09:57:45.610124 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8rj7q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b3d7ebe7-24ac-4bb6-be80-db147dc1c604\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e44a1413e71cab92d1b5ee4c81cc65edc9925fa1e888d5d5670e55e1b60ee81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-95sjn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T09:57:18Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8rj7q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T09:57:45Z is after 2025-08-24T17:21:41Z" Oct 14 09:57:45 crc kubenswrapper[4698]: I1014 09:57:45.621842 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:57:45 crc kubenswrapper[4698]: I1014 09:57:45.622112 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:57:45 crc kubenswrapper[4698]: I1014 09:57:45.622309 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:57:45 crc kubenswrapper[4698]: I1014 09:57:45.622461 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:57:45 crc kubenswrapper[4698]: I1014 09:57:45.622609 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:57:45Z","lastTransitionTime":"2025-10-14T09:57:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:57:45 crc kubenswrapper[4698]: I1014 09:57:45.630336 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-pfxrp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e00aa977-8736-4b4d-8d58-c3d13879c49a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4adf04967efc193a93227043cfb6bb56f0ec1a7af90e351b7a618e582ddd8c41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes
.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c9lxj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T09:57:18Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-pfxrp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T09:57:45Z is after 2025-08-24T17:21:41Z" Oct 14 09:57:45 crc kubenswrapper[4698]: I1014 09:57:45.643576 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5twvn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e9f983f-10a0-43b7-8590-346577a561ef\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://03bc09f64706818fd5d6
b0d3eeea206f665d7cdb12ce6ae31c4706a3936dd0f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8x7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9090d92cdf10a884e39938f3d7908d63cb91ee3ea7832cb788aecaa47c70d753\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9090d92cdf10a884e39938f3d7908d63cb91ee3ea7832cb788aecaa47c70d753\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T09:57:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T09:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled
\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8x7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://20e7809019d6fa43f65e27abe934ed5dfc51a3afb0a2d9391fa4edd67fc03ac9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://20e7809019d6fa43f65e27abe934ed5dfc51a3afb0a2d9391fa4edd67fc03ac9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T09:57:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8x7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c61e207f4234cf9a59bd787fc476ff4bc2d1abd6f299b2855970981e5783398\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64
d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0c61e207f4234cf9a59bd787fc476ff4bc2d1abd6f299b2855970981e5783398\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T09:57:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T09:57:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8x7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1abeca9df52e5628432a4cb0be0f212a643a644abfe303559ec7d90f3fd3955a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1abeca9df52e5628432a4cb0be0f212a643a644abfe303559ec7d90f3fd3955a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T09:57:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T09:57:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8x7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0654ea71914d478220611495dbefd1e47b1309aa18c6a05b21a4a45f0d8f1ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c0654ea71914d478220611495dbefd1e47b1309aa18c6a05b21a4a45f0d8f1ca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T09:57:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T09:57:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8x7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5307b806fcee628d7552a399a950a096745a8eb67c3f72c5a10854f042b992a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\
\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5307b806fcee628d7552a399a950a096745a8eb67c3f72c5a10854f042b992a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T09:57:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T09:57:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8x7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T09:57:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5twvn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T09:57:45Z is after 2025-08-24T17:21:41Z" Oct 14 09:57:45 crc kubenswrapper[4698]: I1014 09:57:45.665724 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hspfz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d02f5359-81fc-4261-b995-e58c78bcec0e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://baefa979aee6f8fbe41df9542964f3c7b2a4331ec5111d2f622ec7ca41a0d96e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjlwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://876998f4dcb79a6bc0faa35871730f5d2f9a07f94baf6da29ed9a9e794705ee5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjlwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e5759411e3d5ea6fa7515b4cd1255735c870b8c035fe586a38d2f436b752118\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjlwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6e9733c3f4d25662c9da6201d0feee797de24acebb7d6e7c67d72709aae95db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:21Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjlwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://90b4b75613b321be60e549ab6ab2eaaab57fd6b49aa01f5254a0e83ac2e0167e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjlwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://11274a26e2292cdf2b793e6290657ed3cc737d1454b39f33b8f0272f67f70f3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjlwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2fb5dca43ec621632afbbbe21069c7f48b2b98b12882cb4ff5b9bf60ddf6bce6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://afec3a31416d60bb03430456c28d5bcd69c8fbdd343374e4505c602dac71b3ac\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-14T09:57:30Z\\\",\\\"message\\\":\\\"qos/v1/apis/informers/externalversions/factory.go:140\\\\nI1014 09:57:30.072732 6117 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1014 09:57:30.072819 6117 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1014 09:57:30.072828 6117 handler.go:190] Sending *v1.Namespace event handler 5 for 
removal\\\\nI1014 09:57:30.072868 6117 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1014 09:57:30.072865 6117 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1014 09:57:30.072876 6117 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1014 09:57:30.072883 6117 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1014 09:57:30.072901 6117 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1014 09:57:30.072908 6117 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1014 09:57:30.072893 6117 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1014 09:57:30.072951 6117 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1014 09:57:30.072957 6117 handler.go:208] Removed *v1.Node event handler 2\\\\nI1014 09:57:30.072975 6117 handler.go:208] Removed *v1.Node event handler 7\\\\nI1014 09:57:30.072993 6117 factory.go:656] Stopping watch factory\\\\nI1014 09:57:30.073012 6117 handler.go:208] Removed *v1.EgressIP 
ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-14T09:57:29Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\
":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjlwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://626a1ca2a8aa5d6c1b4078a0bc7b0aa540656abcedd8be206fbb93cdc96c1d95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjlwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f7d7f89348e04b617ba4ec3cf76f4fefdd36492c656aed8141f3775a7ca8e3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\
\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0f7d7f89348e04b617ba4ec3cf76f4fefdd36492c656aed8141f3775a7ca8e3b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T09:57:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T09:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjlwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T09:57:18Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hspfz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T09:57:45Z is after 2025-08-24T17:21:41Z" Oct 14 09:57:45 crc kubenswrapper[4698]: I1014 09:57:45.679074 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ndfs7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dba065f3-2084-442a-9f77-a1dfb007aa0a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://41d90487b005df3b6a9b8a3cbb8860b13fcc2ff7ca56979998ab7258f10b681c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wtc5t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7bfa794859b5471a4dfe11ac227eb36674c31
36b170fafdecaf916eb54d63a4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wtc5t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T09:57:31Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-ndfs7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T09:57:45Z is after 2025-08-24T17:21:41Z" Oct 14 09:57:45 crc kubenswrapper[4698]: I1014 09:57:45.725448 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:57:45 crc kubenswrapper[4698]: I1014 09:57:45.725486 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:57:45 crc kubenswrapper[4698]: I1014 09:57:45.725495 4698 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:57:45 crc kubenswrapper[4698]: I1014 09:57:45.725509 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:57:45 crc kubenswrapper[4698]: I1014 09:57:45.725519 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:57:45Z","lastTransitionTime":"2025-10-14T09:57:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 09:57:45 crc kubenswrapper[4698]: I1014 09:57:45.828243 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:57:45 crc kubenswrapper[4698]: I1014 09:57:45.828297 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:57:45 crc kubenswrapper[4698]: I1014 09:57:45.828318 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:57:45 crc kubenswrapper[4698]: I1014 09:57:45.828346 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:57:45 crc kubenswrapper[4698]: I1014 09:57:45.828366 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:57:45Z","lastTransitionTime":"2025-10-14T09:57:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:57:45 crc kubenswrapper[4698]: I1014 09:57:45.930330 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:57:45 crc kubenswrapper[4698]: I1014 09:57:45.930379 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:57:45 crc kubenswrapper[4698]: I1014 09:57:45.930390 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:57:45 crc kubenswrapper[4698]: I1014 09:57:45.930407 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:57:45 crc kubenswrapper[4698]: I1014 09:57:45.930419 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:57:45Z","lastTransitionTime":"2025-10-14T09:57:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 09:57:46 crc kubenswrapper[4698]: I1014 09:57:46.016622 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jbpnj" Oct 14 09:57:46 crc kubenswrapper[4698]: E1014 09:57:46.017014 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-jbpnj" podUID="41f5ac86-35f8-416c-bbfe-1e182975ec5c" Oct 14 09:57:46 crc kubenswrapper[4698]: I1014 09:57:46.033543 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:57:46 crc kubenswrapper[4698]: I1014 09:57:46.033591 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:57:46 crc kubenswrapper[4698]: I1014 09:57:46.033689 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:57:46 crc kubenswrapper[4698]: I1014 09:57:46.033709 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:57:46 crc kubenswrapper[4698]: I1014 09:57:46.033718 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:57:46Z","lastTransitionTime":"2025-10-14T09:57:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:57:46 crc kubenswrapper[4698]: I1014 09:57:46.136832 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:57:46 crc kubenswrapper[4698]: I1014 09:57:46.136909 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:57:46 crc kubenswrapper[4698]: I1014 09:57:46.136934 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:57:46 crc kubenswrapper[4698]: I1014 09:57:46.136972 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:57:46 crc kubenswrapper[4698]: I1014 09:57:46.136996 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:57:46Z","lastTransitionTime":"2025-10-14T09:57:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:57:46 crc kubenswrapper[4698]: I1014 09:57:46.240090 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:57:46 crc kubenswrapper[4698]: I1014 09:57:46.240159 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:57:46 crc kubenswrapper[4698]: I1014 09:57:46.240177 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:57:46 crc kubenswrapper[4698]: I1014 09:57:46.240206 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:57:46 crc kubenswrapper[4698]: I1014 09:57:46.240227 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:57:46Z","lastTransitionTime":"2025-10-14T09:57:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:57:46 crc kubenswrapper[4698]: I1014 09:57:46.344187 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:57:46 crc kubenswrapper[4698]: I1014 09:57:46.344289 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:57:46 crc kubenswrapper[4698]: I1014 09:57:46.344315 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:57:46 crc kubenswrapper[4698]: I1014 09:57:46.344358 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:57:46 crc kubenswrapper[4698]: I1014 09:57:46.344387 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:57:46Z","lastTransitionTime":"2025-10-14T09:57:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:57:46 crc kubenswrapper[4698]: I1014 09:57:46.360030 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-hspfz_d02f5359-81fc-4261-b995-e58c78bcec0e/ovnkube-controller/2.log" Oct 14 09:57:46 crc kubenswrapper[4698]: I1014 09:57:46.361172 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-hspfz_d02f5359-81fc-4261-b995-e58c78bcec0e/ovnkube-controller/1.log" Oct 14 09:57:46 crc kubenswrapper[4698]: I1014 09:57:46.365945 4698 generic.go:334] "Generic (PLEG): container finished" podID="d02f5359-81fc-4261-b995-e58c78bcec0e" containerID="2fb5dca43ec621632afbbbe21069c7f48b2b98b12882cb4ff5b9bf60ddf6bce6" exitCode=1 Oct 14 09:57:46 crc kubenswrapper[4698]: I1014 09:57:46.366011 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hspfz" event={"ID":"d02f5359-81fc-4261-b995-e58c78bcec0e","Type":"ContainerDied","Data":"2fb5dca43ec621632afbbbe21069c7f48b2b98b12882cb4ff5b9bf60ddf6bce6"} Oct 14 09:57:46 crc kubenswrapper[4698]: I1014 09:57:46.366072 4698 scope.go:117] "RemoveContainer" containerID="afec3a31416d60bb03430456c28d5bcd69c8fbdd343374e4505c602dac71b3ac" Oct 14 09:57:46 crc kubenswrapper[4698]: I1014 09:57:46.366995 4698 scope.go:117] "RemoveContainer" containerID="2fb5dca43ec621632afbbbe21069c7f48b2b98b12882cb4ff5b9bf60ddf6bce6" Oct 14 09:57:46 crc kubenswrapper[4698]: E1014 09:57:46.367267 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-hspfz_openshift-ovn-kubernetes(d02f5359-81fc-4261-b995-e58c78bcec0e)\"" pod="openshift-ovn-kubernetes/ovnkube-node-hspfz" podUID="d02f5359-81fc-4261-b995-e58c78bcec0e" Oct 14 09:57:46 crc kubenswrapper[4698]: I1014 09:57:46.409376 4698 status_manager.go:875] "Failed to update 
status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hspfz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d02f5359-81fc-4261-b995-e58c78bcec0e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://baefa979aee6f8fbe41df9542964f3c7b2a4331ec5111d2f622ec7ca41a0d96e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjlwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://876998f4dcb79a6bc0faa35871730f5d2f9a07f94baf6da29ed9a9e794705ee5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjlwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e5759411e3d5ea6fa7515b4cd1255735c870b8c035fe586a38d2f436b752118\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjlwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6e9733c3f4d25662c9da6201d0feee797de24acebb7d6e7c67d72709aae95db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:21Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjlwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://90b4b75613b321be60e549ab6ab2eaaab57fd6b49aa01f5254a0e83ac2e0167e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjlwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://11274a26e2292cdf2b793e6290657ed3cc737d1454b39f33b8f0272f67f70f3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjlwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2fb5dca43ec621632afbbbe21069c7f48b2b98b12882cb4ff5b9bf60ddf6bce6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://afec3a31416d60bb03430456c28d5bcd69c8fbdd343374e4505c602dac71b3ac\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-14T09:57:30Z\\\",\\\"message\\\":\\\"qos/v1/apis/informers/externalversions/factory.go:140\\\\nI1014 09:57:30.072732 6117 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1014 09:57:30.072819 6117 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1014 09:57:30.072828 6117 handler.go:190] Sending *v1.Namespace event handler 5 for 
removal\\\\nI1014 09:57:30.072868 6117 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1014 09:57:30.072865 6117 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1014 09:57:30.072876 6117 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1014 09:57:30.072883 6117 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1014 09:57:30.072901 6117 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1014 09:57:30.072908 6117 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1014 09:57:30.072893 6117 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1014 09:57:30.072951 6117 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1014 09:57:30.072957 6117 handler.go:208] Removed *v1.Node event handler 2\\\\nI1014 09:57:30.072975 6117 handler.go:208] Removed *v1.Node event handler 7\\\\nI1014 09:57:30.072993 6117 factory.go:656] Stopping watch factory\\\\nI1014 09:57:30.073012 6117 handler.go:208] Removed *v1.EgressIP ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-14T09:57:29Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2fb5dca43ec621632afbbbe21069c7f48b2b98b12882cb4ff5b9bf60ddf6bce6\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-14T09:57:46Z\\\",\\\"message\\\":\\\"ketplace-operator-metrics_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-marketplace/marketplace-operator-metrics\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.53\\\\\\\", Port:8383, 
Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}, services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.53\\\\\\\", Port:8081, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI1014 09:57:46.069019 6340 services_controller.go:452] Built service openshift-marketplace/marketplace-operator-metrics per-node LB for network=default: []services.LB{}\\\\nI1014 09:57:46.068992 6340 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-machine-api/machine-api-controllers]} name:Service_openshift-machine-api/machine-api-controllers_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-14T09:57:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bi
n-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjlwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://626a1ca2a8aa5d6c1b4078a0bc7b0aa540656abcedd8be206fbb93cdc96c1d95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"
name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjlwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f7d7f89348e04b617ba4ec3cf76f4fefdd36492c656aed8141f3775a7ca8e3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0f7d7f89348e04b617ba4ec3cf76f4fefdd36492c656aed8141f3775a7ca8e3b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T09:57:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T09:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjlwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T09:57:18Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hspfz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T09:57:46Z is after 2025-08-24T17:21:41Z" Oct 14 09:57:46 crc 
kubenswrapper[4698]: I1014 09:57:46.429556 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ndfs7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dba065f3-2084-442a-9f77-a1dfb007aa0a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://41d90487b005df3b6a9b8a3cbb8860b13fcc2ff7ca56979998ab7258f10b681c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.
io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wtc5t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7bfa794859b5471a4dfe11ac227eb36674c3136b170fafdecaf916eb54d63a4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wtc5t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T09:57:31Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-ndfs7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T09:57:46Z is after 2025-08-24T17:21:41Z" Oct 14 09:57:46 crc kubenswrapper[4698]: I1014 09:57:46.447641 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:57:46 crc kubenswrapper[4698]: I1014 09:57:46.447755 4698 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:57:46 crc kubenswrapper[4698]: I1014 09:57:46.447800 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:57:46 crc kubenswrapper[4698]: I1014 09:57:46.447830 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:57:46 crc kubenswrapper[4698]: I1014 09:57:46.447855 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:57:46Z","lastTransitionTime":"2025-10-14T09:57:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 09:57:46 crc kubenswrapper[4698]: I1014 09:57:46.448545 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-pfxrp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e00aa977-8736-4b4d-8d58-c3d13879c49a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4adf04967efc193a93227043cfb6bb56f0ec1a7af90e351b7a618e582ddd8c41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c9lxj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T09:57:18Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-pfxrp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T09:57:46Z is after 2025-08-24T17:21:41Z" Oct 14 09:57:46 crc kubenswrapper[4698]: I1014 09:57:46.472331 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5twvn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e9f983f-10a0-43b7-8590-346577a561ef\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://03bc09f64706818fd5d6b0d3eeea206f665d7cdb12ce6ae31c4706a3936dd0f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release
-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8x7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9090d92cdf10a884e39938f3d7908d63cb91ee3ea7832cb788aecaa47c70d753\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9090d92cdf10a884e39938f3d7908d63cb91ee3ea7832cb788aecaa47c70d753\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T09:57:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T09:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8x7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://20e7809019d6
fa43f65e27abe934ed5dfc51a3afb0a2d9391fa4edd67fc03ac9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://20e7809019d6fa43f65e27abe934ed5dfc51a3afb0a2d9391fa4edd67fc03ac9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T09:57:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8x7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c61e207f4234cf9a59bd787fc476ff4bc2d1abd6f299b2855970981e5783398\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0c61e207f423
4cf9a59bd787fc476ff4bc2d1abd6f299b2855970981e5783398\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T09:57:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T09:57:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8x7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1abeca9df52e5628432a4cb0be0f212a643a644abfe303559ec7d90f3fd3955a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1abeca9df52e5628432a4cb0be0f212a643a644abfe303559ec7d90f3fd3955a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T09:57:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T09:57:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8x7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\
\\"}]},{\\\"containerID\\\":\\\"cri-o://c0654ea71914d478220611495dbefd1e47b1309aa18c6a05b21a4a45f0d8f1ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c0654ea71914d478220611495dbefd1e47b1309aa18c6a05b21a4a45f0d8f1ca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T09:57:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T09:57:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8x7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5307b806fcee628d7552a399a950a096745a8eb67c3f72c5a10854f042b992a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5307b806fcee628d7552a399a950a096745a8eb67c3f72c5a10854f042b992a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"202
5-10-14T09:57:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T09:57:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8x7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T09:57:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5twvn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T09:57:46Z is after 2025-08-24T17:21:41Z" Oct 14 09:57:46 crc kubenswrapper[4698]: I1014 09:57:46.492555 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T09:57:46Z is after 2025-08-24T17:21:41Z" Oct 14 09:57:46 crc kubenswrapper[4698]: I1014 09:57:46.513361 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-b7cbk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fbf10bbc-318d-4f46-83a0-fdbad9888201\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a0b9fe188f41927a342dc4928eaa5f27d151d9901af15f33ce252bdbfbb3be6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tkghj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T09:57:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-b7cbk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T09:57:46Z is after 2025-08-24T17:21:41Z" Oct 14 09:57:46 crc kubenswrapper[4698]: I1014 09:57:46.534109 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-lp4sk" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c359a8fc-1e2f-49af-8da2-719d52bd969a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://082e0edeedb2cabe07e14812f2406a98f39fdf9923f562c33ccc9a761d0e2472\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lzl92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8068aecfbfc1
56636e404dba3daa36e0bebb92ecf3c63158eaebf9a03cdc96b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lzl92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T09:57:18Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-lp4sk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T09:57:46Z is after 2025-08-24T17:21:41Z" Oct 14 09:57:46 crc kubenswrapper[4698]: I1014 09:57:46.550593 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:57:46 crc kubenswrapper[4698]: I1014 09:57:46.550666 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:57:46 crc kubenswrapper[4698]: I1014 09:57:46.550689 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 
14 09:57:46 crc kubenswrapper[4698]: I1014 09:57:46.550722 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:57:46 crc kubenswrapper[4698]: I1014 09:57:46.550745 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:57:46Z","lastTransitionTime":"2025-10-14T09:57:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 09:57:46 crc kubenswrapper[4698]: I1014 09:57:46.553429 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-jbpnj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"41f5ac86-35f8-416c-bbfe-1e182975ec5c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:32Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:32Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7f52v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7f52v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T09:57:32Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-jbpnj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T09:57:46Z is after 2025-08-24T17:21:41Z" Oct 14 09:57:46 crc 
kubenswrapper[4698]: I1014 09:57:46.575206 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://046b339570690752cfbe63e358065d3b58ef84c6708729abc1f590ff4c880521\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T09:57:46Z is after 2025-08-24T17:21:41Z" Oct 14 09:57:46 crc kubenswrapper[4698]: I1014 09:57:46.597932 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T09:57:46Z is after 2025-08-24T17:21:41Z" Oct 14 09:57:46 crc kubenswrapper[4698]: I1014 09:57:46.619313 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6bbd953b34291d62c5b40ab6339c4709be89d1983748419cabf1adff245af77f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b5dcb74bf1caafb12296355d176dd1cda1c06ecea8293c0cc36d517213fe6bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T09:57:46Z is after 2025-08-24T17:21:41Z" Oct 14 09:57:46 crc kubenswrapper[4698]: I1014 09:57:46.640288 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T09:57:46Z is after 2025-08-24T17:21:41Z" Oct 14 09:57:46 crc kubenswrapper[4698]: I1014 09:57:46.654445 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:57:46 crc kubenswrapper[4698]: I1014 09:57:46.654504 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Oct 14 09:57:46 crc kubenswrapper[4698]: I1014 09:57:46.654520 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:57:46 crc kubenswrapper[4698]: I1014 09:57:46.654548 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:57:46 crc kubenswrapper[4698]: I1014 09:57:46.654566 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:57:46Z","lastTransitionTime":"2025-10-14T09:57:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 09:57:46 crc kubenswrapper[4698]: I1014 09:57:46.663246 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"32240a01-1692-4f43-aebd-2b04d9b2435e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:56:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:56:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:56:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0fcdc7699fee72d9dbf3ad7df185ac8c884125781080739e4245128af2e41040\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d32efa8c63456d23b3824db4a4debb51fa4d13d1fd5eb6a488b9392d96073435\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://a78e7d6532fda47f50da010b729a9108961a3d5c79191517a86f00714ccb1163\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a013bd7c3d0aeb5289ee602dd8fcbff2b98aae30b99ccc8d87605732b7f469eb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a8cea8367d4864478508221648a52582ee794ca4eb7b48f112ca0d78d97c2f7e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"message\\\":\\\"file observer\\\\nW1014 09:57:17.768150 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1014 09:57:17.768243 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1014 09:57:17.769073 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-133410875/tls.crt::/tmp/serving-cert-133410875/tls.key\\\\\\\"\\\\nI1014 09:57:18.101848 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1014 09:57:18.107433 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1014 09:57:18.107456 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1014 09:57:18.107479 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1014 09:57:18.107484 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1014 09:57:18.112858 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1014 09:57:18.112875 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1014 09:57:18.112883 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1014 09:57:18.112889 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1014 09:57:18.112893 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1014 09:57:18.112896 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1014 09:57:18.112900 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1014 09:57:18.112903 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1014 09:57:18.114133 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-14T09:57:12Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2d12c9c114016d1302fa97515b0a59a82a4524b77e5d22cbec155beb6a52bce\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:00Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf7a250bbbc63f0ce78ae1c27f82643a02dc264c3b83555a4cc72a788eda52d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cf7a250bbbc63f0ce78ae1c27f82643a02dc264c3b83555a4cc72a788eda52d5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T09:56:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2025-10-14T09:56:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T09:56:59Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T09:57:46Z is after 2025-08-24T17:21:41Z" Oct 14 09:57:46 crc kubenswrapper[4698]: I1014 09:57:46.684849 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"32a7378b-ad21-49a8-9382-e7e5fbffb2f5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:56:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:56:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45beb2bbd344fbd265eca554410f31c493b55815bcac8dbb8c30c592b40dcfe2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9a
d6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9a23f4949446d47321258b172e07f6c592cd9921650920f8af93a5d749a61dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:56:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1fc70d614848f0b4bc342659099f9c617db052ebb995716f9cdb1113f4f78901\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true
,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://71c3764056ab55aa9f75e0bf62c0c7eed5440e9d580087ca9ce266611dd8c292\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T09:56:59Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T09:57:46Z is after 2025-08-24T17:21:41Z" Oct 14 09:57:46 crc kubenswrapper[4698]: I1014 09:57:46.709399 4698 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aaee81debae3e251e075e61aca075c075d32fd976f1977adecba0c834e89cabb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T09:57:46Z is after 2025-08-24T17:21:41Z" Oct 14 09:57:46 crc kubenswrapper[4698]: I1014 09:57:46.726387 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8rj7q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b3d7ebe7-24ac-4bb6-be80-db147dc1c604\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e44a1413e71cab92d1b5ee4c81cc65edc9925fa1e888d5d5670e55e1b60ee81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\
\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-95sjn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T09:57:18Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8rj7q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T09:57:46Z is after 2025-08-24T17:21:41Z" Oct 14 09:57:46 crc kubenswrapper[4698]: I1014 09:57:46.757988 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:57:46 crc kubenswrapper[4698]: I1014 09:57:46.758244 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:57:46 crc kubenswrapper[4698]: I1014 09:57:46.758405 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:57:46 crc kubenswrapper[4698]: I1014 09:57:46.758578 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:57:46 crc kubenswrapper[4698]: I1014 09:57:46.758706 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:57:46Z","lastTransitionTime":"2025-10-14T09:57:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:57:46 crc kubenswrapper[4698]: I1014 09:57:46.862225 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:57:46 crc kubenswrapper[4698]: I1014 09:57:46.862282 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:57:46 crc kubenswrapper[4698]: I1014 09:57:46.862300 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:57:46 crc kubenswrapper[4698]: I1014 09:57:46.862324 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:57:46 crc kubenswrapper[4698]: I1014 09:57:46.862341 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:57:46Z","lastTransitionTime":"2025-10-14T09:57:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:57:46 crc kubenswrapper[4698]: I1014 09:57:46.965331 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:57:46 crc kubenswrapper[4698]: I1014 09:57:46.965383 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:57:46 crc kubenswrapper[4698]: I1014 09:57:46.965400 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:57:46 crc kubenswrapper[4698]: I1014 09:57:46.965422 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:57:46 crc kubenswrapper[4698]: I1014 09:57:46.965442 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:57:46Z","lastTransitionTime":"2025-10-14T09:57:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 09:57:47 crc kubenswrapper[4698]: I1014 09:57:47.016463 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 14 09:57:47 crc kubenswrapper[4698]: I1014 09:57:47.016463 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 14 09:57:47 crc kubenswrapper[4698]: E1014 09:57:47.016675 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 14 09:57:47 crc kubenswrapper[4698]: I1014 09:57:47.016482 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 14 09:57:47 crc kubenswrapper[4698]: E1014 09:57:47.016730 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 14 09:57:47 crc kubenswrapper[4698]: E1014 09:57:47.016897 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 14 09:57:47 crc kubenswrapper[4698]: I1014 09:57:47.068457 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:57:47 crc kubenswrapper[4698]: I1014 09:57:47.068515 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:57:47 crc kubenswrapper[4698]: I1014 09:57:47.068531 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:57:47 crc kubenswrapper[4698]: I1014 09:57:47.068553 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:57:47 crc kubenswrapper[4698]: I1014 09:57:47.068570 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:57:47Z","lastTransitionTime":"2025-10-14T09:57:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:57:47 crc kubenswrapper[4698]: I1014 09:57:47.171591 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:57:47 crc kubenswrapper[4698]: I1014 09:57:47.171655 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:57:47 crc kubenswrapper[4698]: I1014 09:57:47.171673 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:57:47 crc kubenswrapper[4698]: I1014 09:57:47.171697 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:57:47 crc kubenswrapper[4698]: I1014 09:57:47.171720 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:57:47Z","lastTransitionTime":"2025-10-14T09:57:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:57:47 crc kubenswrapper[4698]: I1014 09:57:47.275448 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:57:47 crc kubenswrapper[4698]: I1014 09:57:47.275536 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:57:47 crc kubenswrapper[4698]: I1014 09:57:47.275561 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:57:47 crc kubenswrapper[4698]: I1014 09:57:47.275595 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:57:47 crc kubenswrapper[4698]: I1014 09:57:47.275619 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:57:47Z","lastTransitionTime":"2025-10-14T09:57:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:57:47 crc kubenswrapper[4698]: I1014 09:57:47.372251 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-hspfz_d02f5359-81fc-4261-b995-e58c78bcec0e/ovnkube-controller/2.log" Oct 14 09:57:47 crc kubenswrapper[4698]: I1014 09:57:47.377177 4698 scope.go:117] "RemoveContainer" containerID="2fb5dca43ec621632afbbbe21069c7f48b2b98b12882cb4ff5b9bf60ddf6bce6" Oct 14 09:57:47 crc kubenswrapper[4698]: E1014 09:57:47.377436 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-hspfz_openshift-ovn-kubernetes(d02f5359-81fc-4261-b995-e58c78bcec0e)\"" pod="openshift-ovn-kubernetes/ovnkube-node-hspfz" podUID="d02f5359-81fc-4261-b995-e58c78bcec0e" Oct 14 09:57:47 crc kubenswrapper[4698]: I1014 09:57:47.379206 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:57:47 crc kubenswrapper[4698]: I1014 09:57:47.379278 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:57:47 crc kubenswrapper[4698]: I1014 09:57:47.379304 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:57:47 crc kubenswrapper[4698]: I1014 09:57:47.379334 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:57:47 crc kubenswrapper[4698]: I1014 09:57:47.379360 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:57:47Z","lastTransitionTime":"2025-10-14T09:57:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI 
configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 09:57:47 crc kubenswrapper[4698]: I1014 09:57:47.398272 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aaee81debae3e251e075e61aca075c075d32fd976f1977adecba0c834e89cabb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" 
for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T09:57:47Z is after 2025-08-24T17:21:41Z" Oct 14 09:57:47 crc kubenswrapper[4698]: I1014 09:57:47.449512 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8rj7q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b3d7ebe7-24ac-4bb6-be80-db147dc1c604\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e44a1413e71cab92d1b5ee4c81cc65edc9925fa1e888d5d5670e55e1b60ee81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\"
:true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-95sjn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T09:57:18Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8rj7q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T09:57:47Z is after 2025-08-24T17:21:41Z" Oct 14 09:57:47 crc kubenswrapper[4698]: I1014 09:57:47.471858 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-pfxrp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e00aa977-8736-4b4d-8d58-c3d13879c49a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4adf04967efc193a93227043cfb6bb56f0ec1a7af90e351b7a618e582ddd8c41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c9lxj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T09:57:18Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-pfxrp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T09:57:47Z is after 2025-08-24T17:21:41Z" Oct 14 09:57:47 crc kubenswrapper[4698]: I1014 09:57:47.482339 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:57:47 crc kubenswrapper[4698]: I1014 09:57:47.482377 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:57:47 crc kubenswrapper[4698]: I1014 09:57:47.482388 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:57:47 crc kubenswrapper[4698]: I1014 09:57:47.482413 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:57:47 crc kubenswrapper[4698]: I1014 09:57:47.482429 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:57:47Z","lastTransitionTime":"2025-10-14T09:57:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:57:47 crc kubenswrapper[4698]: I1014 09:57:47.486939 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5twvn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e9f983f-10a0-43b7-8590-346577a561ef\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://03bc09f64706818fd5d6b0d3eeea206f665d7cdb12ce6ae31c4706a3936dd0f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8x7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9090d92cdf10a884e39938f3d7908d63cb91ee3ea7832cb788aecaa47c70d753\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9090d92cdf10a884e39938f3d7908d63cb91ee3ea7832cb788aecaa47c70d753\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T09:57:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T09:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8x7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://20e7809019d6fa43f65e27abe934ed5dfc51a3afb0a2d9391fa4edd67fc03ac9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"
containerID\\\":\\\"cri-o://20e7809019d6fa43f65e27abe934ed5dfc51a3afb0a2d9391fa4edd67fc03ac9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T09:57:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8x7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c61e207f4234cf9a59bd787fc476ff4bc2d1abd6f299b2855970981e5783398\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0c61e207f4234cf9a59bd787fc476ff4bc2d1abd6f299b2855970981e5783398\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T09:57:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T09:57:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveRead
Only\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8x7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1abeca9df52e5628432a4cb0be0f212a643a644abfe303559ec7d90f3fd3955a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1abeca9df52e5628432a4cb0be0f212a643a644abfe303559ec7d90f3fd3955a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T09:57:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T09:57:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8x7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0654ea71914d478220611495dbefd1e47b1309aa18c6a05b21a4a45f0d8f1ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\"
:0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c0654ea71914d478220611495dbefd1e47b1309aa18c6a05b21a4a45f0d8f1ca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T09:57:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T09:57:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8x7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5307b806fcee628d7552a399a950a096745a8eb67c3f72c5a10854f042b992a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5307b806fcee628d7552a399a950a096745a8eb67c3f72c5a10854f042b992a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T09:57:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T09:57:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8x7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\"
,\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T09:57:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5twvn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T09:57:47Z is after 2025-08-24T17:21:41Z" Oct 14 09:57:47 crc kubenswrapper[4698]: I1014 09:57:47.498888 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Oct 14 09:57:47 crc kubenswrapper[4698]: I1014 09:57:47.504320 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hspfz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d02f5359-81fc-4261-b995-e58c78bcec0e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://baefa979aee6f8fbe41df9542964f3c7b2a4331ec5111d2f622ec7ca41a0d96e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjlwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://876998f4dcb79a6bc0faa35871730f5d2f9a07f94baf6da29ed9a9e794705ee5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjlwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e5759411e3d5ea6fa7515b4cd1255735c870b8c035fe586a38d2f436b752118\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjlwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6e9733c3f4d25662c9da6201d0feee797de24acebb7d6e7c67d72709aae95db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:21Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjlwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://90b4b75613b321be60e549ab6ab2eaaab57fd6b49aa01f5254a0e83ac2e0167e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjlwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://11274a26e2292cdf2b793e6290657ed3cc737d1454b39f33b8f0272f67f70f3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjlwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2fb5dca43ec621632afbbbe21069c7f48b2b98b12882cb4ff5b9bf60ddf6bce6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2fb5dca43ec621632afbbbe21069c7f48b2b98b12882cb4ff5b9bf60ddf6bce6\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-14T09:57:46Z\\\",\\\"message\\\":\\\"ketplace-operator-metrics_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-marketplace/marketplace-operator-metrics\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, 
AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.53\\\\\\\", Port:8383, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}, services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.53\\\\\\\", Port:8081, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI1014 09:57:46.069019 6340 services_controller.go:452] Built service openshift-marketplace/marketplace-operator-metrics per-node LB for network=default: []services.LB{}\\\\nI1014 09:57:46.068992 6340 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-machine-api/machine-api-controllers]} name:Service_openshift-machine-api/machine-api-controllers_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-14T09:57:45Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-hspfz_openshift-ovn-kubernetes(d02f5359-81fc-4261-b995-e58c78bcec0e)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjlwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://626a1ca2a8aa5d6c1b4078a0bc7b0aa540656abcedd8be206fbb93cdc96c1d95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjlwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f7d7f89348e04b617ba4ec3cf76f4fefdd36492c656aed8141f3775a7ca8e3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0f7d7f89348e04b617
ba4ec3cf76f4fefdd36492c656aed8141f3775a7ca8e3b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T09:57:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T09:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjlwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T09:57:18Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hspfz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T09:57:47Z is after 2025-08-24T17:21:41Z" Oct 14 09:57:47 crc kubenswrapper[4698]: I1014 09:57:47.519358 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ndfs7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dba065f3-2084-442a-9f77-a1dfb007aa0a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://41d90487b005df3b6a9b8a3cbb8860b13fcc2ff7ca56979998ab7258f10b681c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wtc5t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7bfa794859b5471a4dfe11ac227eb36674c31
36b170fafdecaf916eb54d63a4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wtc5t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T09:57:31Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-ndfs7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T09:57:47Z is after 2025-08-24T17:21:41Z" Oct 14 09:57:47 crc kubenswrapper[4698]: I1014 09:57:47.536069 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T09:57:47Z is after 2025-08-24T17:21:41Z" Oct 14 09:57:47 crc kubenswrapper[4698]: I1014 09:57:47.551737 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6bbd953b34291d62c5b40ab6339c4709be89d1983748419cabf1adff245af77f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b5dcb74bf1caafb12296355d176dd1cda1c06ecea8293c0cc36d517213fe6bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T09:57:47Z is after 2025-08-24T17:21:41Z" Oct 14 09:57:47 crc kubenswrapper[4698]: I1014 09:57:47.564954 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T09:57:47Z is after 2025-08-24T17:21:41Z" Oct 14 09:57:47 crc kubenswrapper[4698]: I1014 09:57:47.577951 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-b7cbk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fbf10bbc-318d-4f46-83a0-fdbad9888201\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a0b9fe188f41927a342dc4928eaa5f27d151d9901af15f33ce252bdbfbb3be6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tkghj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T09:57:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-b7cbk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T09:57:47Z is after 2025-08-24T17:21:41Z" Oct 14 09:57:47 crc kubenswrapper[4698]: I1014 09:57:47.584868 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:57:47 crc 
kubenswrapper[4698]: I1014 09:57:47.584935 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:57:47 crc kubenswrapper[4698]: I1014 09:57:47.584951 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:57:47 crc kubenswrapper[4698]: I1014 09:57:47.584972 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:57:47 crc kubenswrapper[4698]: I1014 09:57:47.584988 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:57:47Z","lastTransitionTime":"2025-10-14T09:57:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 09:57:47 crc kubenswrapper[4698]: I1014 09:57:47.588355 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-lp4sk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c359a8fc-1e2f-49af-8da2-719d52bd969a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://082e0edeedb2cabe07e14812f2406a98f39fdf9923f562c33ccc9a761d0e2472\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lzl92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8068aecfbfc156636e404dba3daa36e0bebb92ec
f3c63158eaebf9a03cdc96b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lzl92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T09:57:18Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-lp4sk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T09:57:47Z is after 2025-08-24T17:21:41Z" Oct 14 09:57:47 crc kubenswrapper[4698]: I1014 09:57:47.600435 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-jbpnj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"41f5ac86-35f8-416c-bbfe-1e182975ec5c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:32Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:32Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7f52v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7f52v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T09:57:32Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-jbpnj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T09:57:47Z is after 2025-08-24T17:21:41Z" Oct 14 09:57:47 crc 
kubenswrapper[4698]: I1014 09:57:47.612283 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://046b339570690752cfbe63e358065d3b58ef84c6708729abc1f590ff4c880521\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T09:57:47Z is after 2025-08-24T17:21:41Z" Oct 14 09:57:47 crc kubenswrapper[4698]: I1014 09:57:47.628169 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"32240a01-1692-4f43-aebd-2b04d9b2435e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:56:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:56:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:56:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0fcdc7699fee72d9dbf3ad7df185ac8c884125781080739e4245128af2e41040\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d32efa8c63456d23b3824db4a4debb51fa4d13d1fd5eb6a488b9392d96073435\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://a78e7d6532fda47f50da010b729a9108961a3d5c79191517a86f00714ccb1163\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a013bd7c3d0aeb5289ee602dd8fcbff2b98aae30b99ccc8d87605732b7f469eb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a8cea8367d4864478508221648a52582ee794ca4eb7b48f112ca0d78d97c2f7e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"message\\\":\\\"file observer\\\\nW1014 09:57:17.768150 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1014 09:57:17.768243 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1014 09:57:17.769073 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-133410875/tls.crt::/tmp/serving-cert-133410875/tls.key\\\\\\\"\\\\nI1014 09:57:18.101848 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1014 09:57:18.107433 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1014 09:57:18.107456 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1014 09:57:18.107479 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1014 09:57:18.107484 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1014 09:57:18.112858 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1014 09:57:18.112875 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1014 09:57:18.112883 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1014 09:57:18.112889 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1014 09:57:18.112893 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1014 09:57:18.112896 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1014 09:57:18.112900 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1014 09:57:18.112903 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1014 09:57:18.114133 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-14T09:57:12Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2d12c9c114016d1302fa97515b0a59a82a4524b77e5d22cbec155beb6a52bce\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:00Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf7a250bbbc63f0ce78ae1c27f82643a02dc264c3b83555a4cc72a788eda52d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cf7a250bbbc63f0ce78ae1c27f82643a02dc264c3b83555a4cc72a788eda52d5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T09:56:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2025-10-14T09:56:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T09:56:59Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T09:57:47Z is after 2025-08-24T17:21:41Z" Oct 14 09:57:47 crc kubenswrapper[4698]: I1014 09:57:47.638928 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"32a7378b-ad21-49a8-9382-e7e5fbffb2f5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:56:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:56:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45beb2bbd344fbd265eca554410f31c493b55815bcac8dbb8c30c592b40dcfe2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9a
d6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9a23f4949446d47321258b172e07f6c592cd9921650920f8af93a5d749a61dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:56:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1fc70d614848f0b4bc342659099f9c617db052ebb995716f9cdb1113f4f78901\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true
,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://71c3764056ab55aa9f75e0bf62c0c7eed5440e9d580087ca9ce266611dd8c292\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T09:56:59Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T09:57:47Z is after 2025-08-24T17:21:41Z" Oct 14 09:57:47 crc kubenswrapper[4698]: I1014 09:57:47.653436 4698 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T09:57:47Z is after 2025-08-24T17:21:41Z" Oct 14 09:57:47 crc kubenswrapper[4698]: I1014 09:57:47.666757 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ndfs7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dba065f3-2084-442a-9f77-a1dfb007aa0a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://41d90487b005df3b6a9b8a3cbb8860b13fcc2ff7ca56979998ab7258f10b681c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wtc5t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7bfa794859b5471a4dfe11ac227eb36674c31
36b170fafdecaf916eb54d63a4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wtc5t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T09:57:31Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-ndfs7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T09:57:47Z is after 2025-08-24T17:21:41Z" Oct 14 09:57:47 crc kubenswrapper[4698]: I1014 09:57:47.680289 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-pfxrp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e00aa977-8736-4b4d-8d58-c3d13879c49a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4adf04967efc193a93227043cfb6bb56f0ec1a7af90e351b7a618e582ddd8c41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c9lxj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T09:57:18Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-pfxrp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T09:57:47Z is after 2025-08-24T17:21:41Z" Oct 14 09:57:47 crc kubenswrapper[4698]: I1014 09:57:47.687441 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:57:47 crc kubenswrapper[4698]: I1014 09:57:47.687478 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:57:47 crc kubenswrapper[4698]: I1014 09:57:47.687492 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:57:47 crc kubenswrapper[4698]: I1014 09:57:47.687509 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:57:47 crc kubenswrapper[4698]: I1014 09:57:47.687520 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:57:47Z","lastTransitionTime":"2025-10-14T09:57:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:57:47 crc kubenswrapper[4698]: I1014 09:57:47.700049 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5twvn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e9f983f-10a0-43b7-8590-346577a561ef\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://03bc09f64706818fd5d6b0d3eeea206f665d7cdb12ce6ae31c4706a3936dd0f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8x7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9090d92cdf10a884e39938f3d7908d63cb91ee3ea7832cb788aecaa47c70d753\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9090d92cdf10a884e39938f3d7908d63cb91ee3ea7832cb788aecaa47c70d753\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T09:57:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T09:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8x7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://20e7809019d6fa43f65e27abe934ed5dfc51a3afb0a2d9391fa4edd67fc03ac9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"
containerID\\\":\\\"cri-o://20e7809019d6fa43f65e27abe934ed5dfc51a3afb0a2d9391fa4edd67fc03ac9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T09:57:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8x7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c61e207f4234cf9a59bd787fc476ff4bc2d1abd6f299b2855970981e5783398\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0c61e207f4234cf9a59bd787fc476ff4bc2d1abd6f299b2855970981e5783398\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T09:57:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T09:57:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveRead
Only\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8x7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1abeca9df52e5628432a4cb0be0f212a643a644abfe303559ec7d90f3fd3955a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1abeca9df52e5628432a4cb0be0f212a643a644abfe303559ec7d90f3fd3955a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T09:57:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T09:57:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8x7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0654ea71914d478220611495dbefd1e47b1309aa18c6a05b21a4a45f0d8f1ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\"
:0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c0654ea71914d478220611495dbefd1e47b1309aa18c6a05b21a4a45f0d8f1ca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T09:57:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T09:57:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8x7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5307b806fcee628d7552a399a950a096745a8eb67c3f72c5a10854f042b992a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5307b806fcee628d7552a399a950a096745a8eb67c3f72c5a10854f042b992a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T09:57:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T09:57:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8x7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\"
,\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T09:57:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5twvn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T09:57:47Z is after 2025-08-24T17:21:41Z" Oct 14 09:57:47 crc kubenswrapper[4698]: I1014 09:57:47.733152 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hspfz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d02f5359-81fc-4261-b995-e58c78bcec0e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://baefa979aee6f8fbe41df9542964f3c7b2a4331ec5111d2f622ec7ca41a0d96e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjlwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://876998f4dcb79a6bc0faa35871730f5d2f9a07f94baf6da29ed9a9e794705ee5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjlwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e5759411e3d5ea6fa7515b4cd1255735c870b8c035fe586a38d2f436b752118\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjlwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6e9733c3f4d25662c9da6201d0feee797de24acebb7d6e7c67d72709aae95db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:21Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjlwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://90b4b75613b321be60e549ab6ab2eaaab57fd6b49aa01f5254a0e83ac2e0167e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjlwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://11274a26e2292cdf2b793e6290657ed3cc737d1454b39f33b8f0272f67f70f3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjlwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2fb5dca43ec621632afbbbe21069c7f48b2b98b12882cb4ff5b9bf60ddf6bce6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2fb5dca43ec621632afbbbe21069c7f48b2b98b12882cb4ff5b9bf60ddf6bce6\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-14T09:57:46Z\\\",\\\"message\\\":\\\"ketplace-operator-metrics_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-marketplace/marketplace-operator-metrics\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, 
AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.53\\\\\\\", Port:8383, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}, services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.53\\\\\\\", Port:8081, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI1014 09:57:46.069019 6340 services_controller.go:452] Built service openshift-marketplace/marketplace-operator-metrics per-node LB for network=default: []services.LB{}\\\\nI1014 09:57:46.068992 6340 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-machine-api/machine-api-controllers]} name:Service_openshift-machine-api/machine-api-controllers_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-14T09:57:45Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-hspfz_openshift-ovn-kubernetes(d02f5359-81fc-4261-b995-e58c78bcec0e)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjlwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://626a1ca2a8aa5d6c1b4078a0bc7b0aa540656abcedd8be206fbb93cdc96c1d95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjlwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f7d7f89348e04b617ba4ec3cf76f4fefdd36492c656aed8141f3775a7ca8e3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0f7d7f89348e04b617
ba4ec3cf76f4fefdd36492c656aed8141f3775a7ca8e3b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T09:57:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T09:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjlwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T09:57:18Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hspfz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T09:57:47Z is after 2025-08-24T17:21:41Z" Oct 14 09:57:47 crc kubenswrapper[4698]: I1014 09:57:47.751400 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-b7cbk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fbf10bbc-318d-4f46-83a0-fdbad9888201\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a0b9fe188f41927a342dc4928eaa5f27d151d9901af15f33ce252bdbfbb3be6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tkghj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T09:57:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-b7cbk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T09:57:47Z is after 2025-08-24T17:21:41Z" Oct 14 09:57:47 crc kubenswrapper[4698]: I1014 09:57:47.767000 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-lp4sk" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c359a8fc-1e2f-49af-8da2-719d52bd969a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://082e0edeedb2cabe07e14812f2406a98f39fdf9923f562c33ccc9a761d0e2472\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lzl92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8068aecfbfc1
56636e404dba3daa36e0bebb92ecf3c63158eaebf9a03cdc96b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lzl92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T09:57:18Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-lp4sk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T09:57:47Z is after 2025-08-24T17:21:41Z" Oct 14 09:57:47 crc kubenswrapper[4698]: I1014 09:57:47.779511 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-jbpnj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"41f5ac86-35f8-416c-bbfe-1e182975ec5c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:32Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:32Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7f52v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7f52v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T09:57:32Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-jbpnj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T09:57:47Z is after 2025-08-24T17:21:41Z" Oct 14 09:57:47 crc 
kubenswrapper[4698]: I1014 09:57:47.790156 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:57:47 crc kubenswrapper[4698]: I1014 09:57:47.790505 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:57:47 crc kubenswrapper[4698]: I1014 09:57:47.790638 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:57:47 crc kubenswrapper[4698]: I1014 09:57:47.790813 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:57:47 crc kubenswrapper[4698]: I1014 09:57:47.790983 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:57:47Z","lastTransitionTime":"2025-10-14T09:57:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:57:47 crc kubenswrapper[4698]: I1014 09:57:47.795395 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://046b339570690752cfbe63e358065d3b58ef84c6708729abc1f590ff4c880521\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T09:57:47Z is after 2025-08-24T17:21:41Z" Oct 14 09:57:47 crc kubenswrapper[4698]: I1014 09:57:47.816750 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T09:57:47Z is after 2025-08-24T17:21:41Z" Oct 14 09:57:47 crc kubenswrapper[4698]: I1014 09:57:47.835245 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6bbd953b34291d62c5b40ab6339c4709be89d1983748419cabf1adff245af77f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b5dcb74bf1caafb12296355d176dd1cda1c06ecea8293c0cc36d517213fe6bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T09:57:47Z is after 2025-08-24T17:21:41Z" Oct 14 09:57:47 crc kubenswrapper[4698]: I1014 09:57:47.847812 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T09:57:47Z is after 2025-08-24T17:21:41Z" Oct 14 09:57:47 crc kubenswrapper[4698]: I1014 09:57:47.861932 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"32240a01-1692-4f43-aebd-2b04d9b2435e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:56:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0fcdc7699fee72d9dbf3ad7df185ac8c884125781080739e4245128af2e41040\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d32efa8c63456d23b3824db4a4debb51fa4d13d1fd5eb6a488b9392d96073435\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a78e7d6532fda47f50da010b729a9108961a3d5c79191517a86f00714ccb1163\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a013bd7c3d0aeb5289ee602dd8fcbff2b98aae30b99ccc8d87605732b7f469eb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a8cea8367d4864478508221648a52582ee794ca4eb7b48f112ca0d78d97c2f7e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-14T09:57:18Z\\\"
,\\\"message\\\":\\\"file observer\\\\nW1014 09:57:17.768150 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1014 09:57:17.768243 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1014 09:57:17.769073 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-133410875/tls.crt::/tmp/serving-cert-133410875/tls.key\\\\\\\"\\\\nI1014 09:57:18.101848 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1014 09:57:18.107433 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1014 09:57:18.107456 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1014 09:57:18.107479 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1014 09:57:18.107484 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1014 09:57:18.112858 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1014 09:57:18.112875 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1014 09:57:18.112883 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1014 09:57:18.112889 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1014 09:57:18.112893 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1014 09:57:18.112896 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1014 09:57:18.112900 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1014 09:57:18.112903 1 secure_serving.go:69] Use of 
insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1014 09:57:18.114133 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-14T09:57:12Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2d12c9c114016d1302fa97515b0a59a82a4524b77e5d22cbec155beb6a52bce\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:00Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf7a250bbbc63f0ce78ae1c27f82643a02dc264c3b83555a4cc72a788eda52d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cf7a250bbbc63f0ce78ae1c27f82643a02d
c264c3b83555a4cc72a788eda52d5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T09:56:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T09:56:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T09:56:59Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T09:57:47Z is after 2025-08-24T17:21:41Z" Oct 14 09:57:47 crc kubenswrapper[4698]: I1014 09:57:47.879518 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"32a7378b-ad21-49a8-9382-e7e5fbffb2f5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:56:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:56:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45beb2bbd344fbd265eca554410f31c493b55815bcac8dbb8c30c592b40dcfe2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9a23f4949446d47321258b172e07f6c592cd9921650920f8af93a5d749a61dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:56:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1fc70d614848f0b4bc342659099f9c617db052ebb995716f9cdb1113f4f78901\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://71c3764056ab55aa9f75e0bf62c0c7eed5440e9d580087ca9ce266611dd8c292\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-10-14T09:57:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T09:56:59Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T09:57:47Z is after 2025-08-24T17:21:41Z" Oct 14 09:57:47 crc kubenswrapper[4698]: I1014 09:57:47.893297 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:57:47 crc kubenswrapper[4698]: I1014 09:57:47.893391 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:57:47 crc kubenswrapper[4698]: I1014 09:57:47.893450 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:57:47 crc kubenswrapper[4698]: I1014 09:57:47.893516 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:57:47 crc kubenswrapper[4698]: I1014 09:57:47.893591 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:57:47Z","lastTransitionTime":"2025-10-14T09:57:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 09:57:47 crc kubenswrapper[4698]: I1014 09:57:47.902360 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T09:57:47Z is after 2025-08-24T17:21:41Z" Oct 14 09:57:47 crc kubenswrapper[4698]: I1014 09:57:47.924731 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aaee81debae3e251e075e61aca075c075d32fd976f1977adecba0c834e89cabb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-10-14T09:57:47Z is after 2025-08-24T17:21:41Z" Oct 14 09:57:47 crc kubenswrapper[4698]: I1014 09:57:47.944016 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8rj7q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b3d7ebe7-24ac-4bb6-be80-db147dc1c604\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e44a1413e71cab92d1b5ee4c81cc65edc9925fa1e888d5d5670e55e1b60ee81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-95sjn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T09:57:18Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8rj7q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T09:57:47Z is after 2025-08-24T17:21:41Z" Oct 14 09:57:47 crc kubenswrapper[4698]: I1014 09:57:47.947466 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/41f5ac86-35f8-416c-bbfe-1e182975ec5c-metrics-certs\") pod \"network-metrics-daemon-jbpnj\" (UID: \"41f5ac86-35f8-416c-bbfe-1e182975ec5c\") " pod="openshift-multus/network-metrics-daemon-jbpnj" Oct 14 09:57:47 crc kubenswrapper[4698]: E1014 09:57:47.947651 4698 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Oct 14 09:57:47 crc kubenswrapper[4698]: E1014 09:57:47.947724 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/41f5ac86-35f8-416c-bbfe-1e182975ec5c-metrics-certs podName:41f5ac86-35f8-416c-bbfe-1e182975ec5c nodeName:}" failed. No retries permitted until 2025-10-14 09:58:03.947700511 +0000 UTC m=+65.644999937 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/41f5ac86-35f8-416c-bbfe-1e182975ec5c-metrics-certs") pod "network-metrics-daemon-jbpnj" (UID: "41f5ac86-35f8-416c-bbfe-1e182975ec5c") : object "openshift-multus"/"metrics-daemon-secret" not registered Oct 14 09:57:47 crc kubenswrapper[4698]: I1014 09:57:47.996960 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:57:47 crc kubenswrapper[4698]: I1014 09:57:47.997032 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:57:47 crc kubenswrapper[4698]: I1014 09:57:47.997050 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:57:47 crc kubenswrapper[4698]: I1014 09:57:47.997115 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:57:47 crc kubenswrapper[4698]: I1014 09:57:47.997135 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:57:47Z","lastTransitionTime":"2025-10-14T09:57:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 09:57:48 crc kubenswrapper[4698]: I1014 09:57:48.016616 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jbpnj" Oct 14 09:57:48 crc kubenswrapper[4698]: E1014 09:57:48.016854 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jbpnj" podUID="41f5ac86-35f8-416c-bbfe-1e182975ec5c" Oct 14 09:57:48 crc kubenswrapper[4698]: I1014 09:57:48.100094 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:57:48 crc kubenswrapper[4698]: I1014 09:57:48.100141 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:57:48 crc kubenswrapper[4698]: I1014 09:57:48.100156 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:57:48 crc kubenswrapper[4698]: I1014 09:57:48.100175 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:57:48 crc kubenswrapper[4698]: I1014 09:57:48.100187 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:57:48Z","lastTransitionTime":"2025-10-14T09:57:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:57:48 crc kubenswrapper[4698]: I1014 09:57:48.202596 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:57:48 crc kubenswrapper[4698]: I1014 09:57:48.202681 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:57:48 crc kubenswrapper[4698]: I1014 09:57:48.202698 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:57:48 crc kubenswrapper[4698]: I1014 09:57:48.202717 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:57:48 crc kubenswrapper[4698]: I1014 09:57:48.202732 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:57:48Z","lastTransitionTime":"2025-10-14T09:57:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:57:48 crc kubenswrapper[4698]: I1014 09:57:48.306043 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:57:48 crc kubenswrapper[4698]: I1014 09:57:48.306092 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:57:48 crc kubenswrapper[4698]: I1014 09:57:48.306103 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:57:48 crc kubenswrapper[4698]: I1014 09:57:48.306118 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:57:48 crc kubenswrapper[4698]: I1014 09:57:48.306129 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:57:48Z","lastTransitionTime":"2025-10-14T09:57:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:57:48 crc kubenswrapper[4698]: I1014 09:57:48.409055 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:57:48 crc kubenswrapper[4698]: I1014 09:57:48.409120 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:57:48 crc kubenswrapper[4698]: I1014 09:57:48.409137 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:57:48 crc kubenswrapper[4698]: I1014 09:57:48.409161 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:57:48 crc kubenswrapper[4698]: I1014 09:57:48.409182 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:57:48Z","lastTransitionTime":"2025-10-14T09:57:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:57:48 crc kubenswrapper[4698]: I1014 09:57:48.512462 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:57:48 crc kubenswrapper[4698]: I1014 09:57:48.512532 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:57:48 crc kubenswrapper[4698]: I1014 09:57:48.512554 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:57:48 crc kubenswrapper[4698]: I1014 09:57:48.512587 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:57:48 crc kubenswrapper[4698]: I1014 09:57:48.512608 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:57:48Z","lastTransitionTime":"2025-10-14T09:57:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:57:48 crc kubenswrapper[4698]: I1014 09:57:48.615669 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:57:48 crc kubenswrapper[4698]: I1014 09:57:48.615735 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:57:48 crc kubenswrapper[4698]: I1014 09:57:48.615753 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:57:48 crc kubenswrapper[4698]: I1014 09:57:48.615812 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:57:48 crc kubenswrapper[4698]: I1014 09:57:48.615828 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:57:48Z","lastTransitionTime":"2025-10-14T09:57:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:57:48 crc kubenswrapper[4698]: I1014 09:57:48.649558 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Oct 14 09:57:48 crc kubenswrapper[4698]: I1014 09:57:48.663686 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc"] Oct 14 09:57:48 crc kubenswrapper[4698]: I1014 09:57:48.671289 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aaee81debae3e251e075e61aca075c075d32fd976f1977adecba0c834e89cabb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\
"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T09:57:48Z is after 2025-08-24T17:21:41Z" Oct 14 09:57:48 crc kubenswrapper[4698]: I1014 09:57:48.688352 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8rj7q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b3d7ebe7-24ac-4bb6-be80-db147dc1c604\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e44a1413e71cab92d1b5ee4c81cc65edc9925fa1e888d5d5670e55e1b60ee81\\\",\\\"image\\\":\\\"quay.io/openshif
t-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-95sjn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T09:57:18Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8rj7q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T09:57:48Z is after 2025-08-24T17:21:41Z" Oct 14 09:57:48 crc kubenswrapper[4698]: I1014 09:57:48.702826 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-pfxrp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e00aa977-8736-4b4d-8d58-c3d13879c49a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4adf04967efc193a93227043cfb6bb56f0ec1a7af90e351b7a618e582ddd8c41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c9lxj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T09:57:18Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-pfxrp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T09:57:48Z is after 2025-08-24T17:21:41Z" Oct 14 09:57:48 crc kubenswrapper[4698]: I1014 09:57:48.719189 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:57:48 crc kubenswrapper[4698]: I1014 09:57:48.719267 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:57:48 crc kubenswrapper[4698]: I1014 09:57:48.719291 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:57:48 crc kubenswrapper[4698]: I1014 09:57:48.719333 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:57:48 crc kubenswrapper[4698]: I1014 09:57:48.719369 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:57:48Z","lastTransitionTime":"2025-10-14T09:57:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:57:48 crc kubenswrapper[4698]: I1014 09:57:48.727076 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5twvn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e9f983f-10a0-43b7-8590-346577a561ef\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://03bc09f64706818fd5d6b0d3eeea206f665d7cdb12ce6ae31c4706a3936dd0f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8x7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9090d92cdf10a884e39938f3d7908d63cb91ee3ea7832cb788aecaa47c70d753\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9090d92cdf10a884e39938f3d7908d63cb91ee3ea7832cb788aecaa47c70d753\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T09:57:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T09:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8x7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://20e7809019d6fa43f65e27abe934ed5dfc51a3afb0a2d9391fa4edd67fc03ac9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"
containerID\\\":\\\"cri-o://20e7809019d6fa43f65e27abe934ed5dfc51a3afb0a2d9391fa4edd67fc03ac9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T09:57:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8x7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c61e207f4234cf9a59bd787fc476ff4bc2d1abd6f299b2855970981e5783398\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0c61e207f4234cf9a59bd787fc476ff4bc2d1abd6f299b2855970981e5783398\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T09:57:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T09:57:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveRead
Only\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8x7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1abeca9df52e5628432a4cb0be0f212a643a644abfe303559ec7d90f3fd3955a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1abeca9df52e5628432a4cb0be0f212a643a644abfe303559ec7d90f3fd3955a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T09:57:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T09:57:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8x7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0654ea71914d478220611495dbefd1e47b1309aa18c6a05b21a4a45f0d8f1ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\"
:0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c0654ea71914d478220611495dbefd1e47b1309aa18c6a05b21a4a45f0d8f1ca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T09:57:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T09:57:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8x7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5307b806fcee628d7552a399a950a096745a8eb67c3f72c5a10854f042b992a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5307b806fcee628d7552a399a950a096745a8eb67c3f72c5a10854f042b992a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T09:57:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T09:57:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8x7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\"
,\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T09:57:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5twvn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T09:57:48Z is after 2025-08-24T17:21:41Z" Oct 14 09:57:48 crc kubenswrapper[4698]: I1014 09:57:48.758015 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hspfz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d02f5359-81fc-4261-b995-e58c78bcec0e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://baefa979aee6f8fbe41df9542964f3c7b2a4331ec5111d2f622ec7ca41a0d96e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjlwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://876998f4dcb79a6bc0faa35871730f5d2f9a07f94baf6da29ed9a9e794705ee5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjlwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e5759411e3d5ea6fa7515b4cd1255735c870b8c035fe586a38d2f436b752118\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjlwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6e9733c3f4d25662c9da6201d0feee797de24acebb7d6e7c67d72709aae95db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:21Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjlwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://90b4b75613b321be60e549ab6ab2eaaab57fd6b49aa01f5254a0e83ac2e0167e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjlwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://11274a26e2292cdf2b793e6290657ed3cc737d1454b39f33b8f0272f67f70f3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjlwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2fb5dca43ec621632afbbbe21069c7f48b2b98b12882cb4ff5b9bf60ddf6bce6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2fb5dca43ec621632afbbbe21069c7f48b2b98b12882cb4ff5b9bf60ddf6bce6\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-14T09:57:46Z\\\",\\\"message\\\":\\\"ketplace-operator-metrics_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-marketplace/marketplace-operator-metrics\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, 
AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.53\\\\\\\", Port:8383, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}, services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.53\\\\\\\", Port:8081, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI1014 09:57:46.069019 6340 services_controller.go:452] Built service openshift-marketplace/marketplace-operator-metrics per-node LB for network=default: []services.LB{}\\\\nI1014 09:57:46.068992 6340 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-machine-api/machine-api-controllers]} name:Service_openshift-machine-api/machine-api-controllers_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-14T09:57:45Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-hspfz_openshift-ovn-kubernetes(d02f5359-81fc-4261-b995-e58c78bcec0e)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjlwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://626a1ca2a8aa5d6c1b4078a0bc7b0aa540656abcedd8be206fbb93cdc96c1d95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjlwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f7d7f89348e04b617ba4ec3cf76f4fefdd36492c656aed8141f3775a7ca8e3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0f7d7f89348e04b617
ba4ec3cf76f4fefdd36492c656aed8141f3775a7ca8e3b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T09:57:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T09:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjlwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T09:57:18Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hspfz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T09:57:48Z is after 2025-08-24T17:21:41Z" Oct 14 09:57:48 crc kubenswrapper[4698]: I1014 09:57:48.771753 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ndfs7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dba065f3-2084-442a-9f77-a1dfb007aa0a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://41d90487b005df3b6a9b8a3cbb8860b13fcc2ff7ca56979998ab7258f10b681c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wtc5t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7bfa794859b5471a4dfe11ac227eb36674c31
36b170fafdecaf916eb54d63a4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wtc5t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T09:57:31Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-ndfs7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T09:57:48Z is after 2025-08-24T17:21:41Z" Oct 14 09:57:48 crc kubenswrapper[4698]: I1014 09:57:48.790340 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://046b339570690752cfbe63e358065d3b58ef84c6708729abc1f590ff4c880521\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-10-14T09:57:48Z is after 2025-08-24T17:21:41Z" Oct 14 09:57:48 crc kubenswrapper[4698]: I1014 09:57:48.804978 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T09:57:48Z is after 2025-08-24T17:21:41Z" Oct 14 09:57:48 crc kubenswrapper[4698]: I1014 09:57:48.822109 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:57:48 crc kubenswrapper[4698]: I1014 09:57:48.822150 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:57:48 crc kubenswrapper[4698]: I1014 09:57:48.822161 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:57:48 crc kubenswrapper[4698]: I1014 09:57:48.822178 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:57:48 crc kubenswrapper[4698]: I1014 09:57:48.822188 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:57:48Z","lastTransitionTime":"2025-10-14T09:57:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 09:57:48 crc kubenswrapper[4698]: I1014 09:57:48.822892 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6bbd953b34291d62c5b40ab6339c4709be89d1983748419cabf1adff245af77f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"conta
inerID\\\":\\\"cri-o://6b5dcb74bf1caafb12296355d176dd1cda1c06ecea8293c0cc36d517213fe6bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T09:57:48Z is after 2025-08-24T17:21:41Z" Oct 14 09:57:48 crc kubenswrapper[4698]: I1014 09:57:48.841200 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T09:57:48Z is after 2025-08-24T17:21:41Z" Oct 14 09:57:48 crc kubenswrapper[4698]: I1014 09:57:48.863098 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-b7cbk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fbf10bbc-318d-4f46-83a0-fdbad9888201\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a0b9fe188f41927a342dc4928eaa5f27d151d9901af15f33ce252bdbfbb3be6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tkghj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T09:57:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-b7cbk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T09:57:48Z is after 2025-08-24T17:21:41Z" Oct 14 09:57:48 crc kubenswrapper[4698]: I1014 09:57:48.877408 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-lp4sk" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c359a8fc-1e2f-49af-8da2-719d52bd969a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://082e0edeedb2cabe07e14812f2406a98f39fdf9923f562c33ccc9a761d0e2472\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lzl92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8068aecfbfc1
56636e404dba3daa36e0bebb92ecf3c63158eaebf9a03cdc96b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lzl92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T09:57:18Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-lp4sk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T09:57:48Z is after 2025-08-24T17:21:41Z" Oct 14 09:57:48 crc kubenswrapper[4698]: I1014 09:57:48.893459 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-jbpnj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"41f5ac86-35f8-416c-bbfe-1e182975ec5c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:32Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:32Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7f52v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7f52v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T09:57:32Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-jbpnj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T09:57:48Z is after 2025-08-24T17:21:41Z" Oct 14 09:57:48 crc 
kubenswrapper[4698]: I1014 09:57:48.912128 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"32240a01-1692-4f43-aebd-2b04d9b2435e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:56:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0fcdc7699fee72d9dbf3ad7df185ac8c884125781080739e4245128af2e41040\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d32efa8c63456d
23b3824db4a4debb51fa4d13d1fd5eb6a488b9392d96073435\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a78e7d6532fda47f50da010b729a9108961a3d5c79191517a86f00714ccb1163\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a013bd7c3d0aeb5289ee602dd8fcbff2b98aae30b99ccc8d87605732b7f469eb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"te
rminated\\\":{\\\"containerID\\\":\\\"cri-o://a8cea8367d4864478508221648a52582ee794ca4eb7b48f112ca0d78d97c2f7e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"message\\\":\\\"file observer\\\\nW1014 09:57:17.768150 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1014 09:57:17.768243 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1014 09:57:17.769073 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-133410875/tls.crt::/tmp/serving-cert-133410875/tls.key\\\\\\\"\\\\nI1014 09:57:18.101848 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1014 09:57:18.107433 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1014 09:57:18.107456 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1014 09:57:18.107479 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1014 09:57:18.107484 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1014 09:57:18.112858 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1014 09:57:18.112875 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1014 09:57:18.112883 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1014 09:57:18.112889 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1014 09:57:18.112893 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1014 09:57:18.112896 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1014 09:57:18.112900 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1014 09:57:18.112903 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1014 09:57:18.114133 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-14T09:57:12Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2d12c9c114016d1302fa97515b0a59a82a4524b77e5d22cbec155beb6a52bce\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:00Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf7a250bbbc63f0ce78ae1c27f82643a02dc264c3b83555a4cc72a788eda52d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"
,\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cf7a250bbbc63f0ce78ae1c27f82643a02dc264c3b83555a4cc72a788eda52d5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T09:56:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T09:56:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T09:56:59Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T09:57:48Z is after 2025-08-24T17:21:41Z" Oct 14 09:57:48 crc kubenswrapper[4698]: I1014 09:57:48.924890 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:57:48 crc kubenswrapper[4698]: I1014 09:57:48.924954 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:57:48 crc kubenswrapper[4698]: I1014 09:57:48.924971 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:57:48 crc kubenswrapper[4698]: I1014 09:57:48.924997 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:57:48 crc kubenswrapper[4698]: I1014 09:57:48.925015 4698 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:57:48Z","lastTransitionTime":"2025-10-14T09:57:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 09:57:48 crc kubenswrapper[4698]: I1014 09:57:48.932075 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"32a7378b-ad21-49a8-9382-e7e5fbffb2f5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:56:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:56:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45beb2bbd344fbd265eca554410f31c493b55815bcac8dbb8c30c592b40dcfe2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\
\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9a23f4949446d47321258b172e07f6c592cd9921650920f8af93a5d749a61dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:56:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1fc70d614848f0b4bc342659099f9c617db052ebb995716f9cdb1113f4f78901\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\
\":\\\"cri-o://71c3764056ab55aa9f75e0bf62c0c7eed5440e9d580087ca9ce266611dd8c292\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T09:56:59Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T09:57:48Z is after 2025-08-24T17:21:41Z" Oct 14 09:57:48 crc kubenswrapper[4698]: I1014 09:57:48.950744 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T09:57:48Z is after 2025-08-24T17:21:41Z" Oct 14 09:57:49 crc kubenswrapper[4698]: I1014 09:57:49.016334 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 14 09:57:49 crc kubenswrapper[4698]: I1014 09:57:49.016421 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 14 09:57:49 crc kubenswrapper[4698]: E1014 09:57:49.016515 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 14 09:57:49 crc kubenswrapper[4698]: I1014 09:57:49.016553 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 14 09:57:49 crc kubenswrapper[4698]: E1014 09:57:49.016690 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 14 09:57:49 crc kubenswrapper[4698]: E1014 09:57:49.016858 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 14 09:57:49 crc kubenswrapper[4698]: I1014 09:57:49.027555 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:57:49 crc kubenswrapper[4698]: I1014 09:57:49.027634 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:57:49 crc kubenswrapper[4698]: I1014 09:57:49.027660 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:57:49 crc kubenswrapper[4698]: I1014 09:57:49.027689 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:57:49 crc kubenswrapper[4698]: I1014 09:57:49.027713 4698 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:57:49Z","lastTransitionTime":"2025-10-14T09:57:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 09:57:49 crc kubenswrapper[4698]: I1014 09:57:49.040017 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"32240a01-1692-4f43-aebd-2b04d9b2435e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:56:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0fcdc7699fee72d9dbf3ad7df185ac8c884125781080739e4245128af2e41040\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"r
unning\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d32efa8c63456d23b3824db4a4debb51fa4d13d1fd5eb6a488b9392d96073435\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a78e7d6532fda47f50da010b729a9108961a3d5c79191517a86f00714ccb1163\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a
013bd7c3d0aeb5289ee602dd8fcbff2b98aae30b99ccc8d87605732b7f469eb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a8cea8367d4864478508221648a52582ee794ca4eb7b48f112ca0d78d97c2f7e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"message\\\":\\\"file observer\\\\nW1014 09:57:17.768150 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1014 09:57:17.768243 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1014 09:57:17.769073 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-133410875/tls.crt::/tmp/serving-cert-133410875/tls.key\\\\\\\"\\\\nI1014 09:57:18.101848 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1014 09:57:18.107433 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1014 09:57:18.107456 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1014 09:57:18.107479 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1014 09:57:18.107484 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1014 09:57:18.112858 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1014 09:57:18.112875 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1014 09:57:18.112883 1 secure_serving.go:69] Use of 
insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1014 09:57:18.112889 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1014 09:57:18.112893 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1014 09:57:18.112896 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1014 09:57:18.112900 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1014 09:57:18.112903 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1014 09:57:18.114133 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-14T09:57:12Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2d12c9c114016d1302fa97515b0a59a82a4524b77e5d22cbec155beb6a52bce\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:00Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\
"containerID\\\":\\\"cri-o://cf7a250bbbc63f0ce78ae1c27f82643a02dc264c3b83555a4cc72a788eda52d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cf7a250bbbc63f0ce78ae1c27f82643a02dc264c3b83555a4cc72a788eda52d5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T09:56:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T09:56:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T09:56:59Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T09:57:49Z is after 2025-08-24T17:21:41Z" Oct 14 09:57:49 crc kubenswrapper[4698]: I1014 09:57:49.057111 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"32a7378b-ad21-49a8-9382-e7e5fbffb2f5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:56:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:56:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45beb2bbd344fbd265eca554410f31c493b55815bcac8dbb8c30c592b40dcfe2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9a23f4949446d47321258b172e07f6c592cd9921650920f8af93a5d749a61dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:56:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1fc70d614848f0b4bc342659099f9c617db052ebb995716f9cdb1113f4f78901\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://71c3764056ab55aa9f75e0bf62c0c7eed5440e9d580087ca9ce266611dd8c292\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-10-14T09:57:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T09:56:59Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T09:57:49Z is after 2025-08-24T17:21:41Z" Oct 14 09:57:49 crc kubenswrapper[4698]: I1014 09:57:49.075201 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T09:57:49Z is after 2025-08-24T17:21:41Z" Oct 14 09:57:49 crc kubenswrapper[4698]: I1014 09:57:49.097640 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aaee81debae3e251e075e61aca075c075d32fd976f1977adecba0c834e89cabb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-10-14T09:57:49Z is after 2025-08-24T17:21:41Z" Oct 14 09:57:49 crc kubenswrapper[4698]: I1014 09:57:49.107066 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8rj7q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b3d7ebe7-24ac-4bb6-be80-db147dc1c604\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e44a1413e71cab92d1b5ee4c81cc65edc9925fa1e888d5d5670e55e1b60ee81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-95sjn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T09:57:18Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8rj7q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T09:57:49Z is after 2025-08-24T17:21:41Z" Oct 14 09:57:49 crc kubenswrapper[4698]: I1014 09:57:49.117294 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d453303d-af1d-4978-b4c5-ff12afadeb28\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:56:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d39574739cd1bd3498e575ad5961a3f656110963427ec6a5862160167dad9f1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/o
cp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a8b1dd92b7f979c51da79a23ddffeb82d73f170ded78b0995eca65ea14abbba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://57d68496a42ff5fb8359e4fd222a0e18e79e6ca885844d544d172e324325ee02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\
\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2fceda73199ee36e7cfe0b0bc827b42f7cc77db6b2c987828ffb57cacd1b8a6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2fceda73199ee36e7cfe0b0bc827b42f7cc77db6b2c987828ffb57cacd1b8a6b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T09:56:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T09:56:59Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T09:56:59Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T09:57:49Z is after 2025-08-24T17:21:41Z" Oct 14 09:57:49 crc kubenswrapper[4698]: I1014 09:57:49.129391 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-pfxrp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e00aa977-8736-4b4d-8d58-c3d13879c49a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4adf04967efc193a93227043cfb6bb56f0ec1a7af90e351b7a618e582ddd8c41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c9lxj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T09:57:18Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-pfxrp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T09:57:49Z is after 2025-08-24T17:21:41Z" Oct 14 09:57:49 crc kubenswrapper[4698]: I1014 09:57:49.131212 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:57:49 crc kubenswrapper[4698]: I1014 09:57:49.131238 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:57:49 crc kubenswrapper[4698]: I1014 09:57:49.131246 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:57:49 crc kubenswrapper[4698]: I1014 09:57:49.131261 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:57:49 crc kubenswrapper[4698]: I1014 09:57:49.131270 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:57:49Z","lastTransitionTime":"2025-10-14T09:57:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:57:49 crc kubenswrapper[4698]: I1014 09:57:49.144081 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5twvn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e9f983f-10a0-43b7-8590-346577a561ef\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://03bc09f64706818fd5d6b0d3eeea206f665d7cdb12ce6ae31c4706a3936dd0f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8x7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9090d92cdf10a884e39938f3d7908d63cb91ee3ea7832cb788aecaa47c70d753\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9090d92cdf10a884e39938f3d7908d63cb91ee3ea7832cb788aecaa47c70d753\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T09:57:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T09:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8x7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://20e7809019d6fa43f65e27abe934ed5dfc51a3afb0a2d9391fa4edd67fc03ac9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"
containerID\\\":\\\"cri-o://20e7809019d6fa43f65e27abe934ed5dfc51a3afb0a2d9391fa4edd67fc03ac9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T09:57:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8x7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c61e207f4234cf9a59bd787fc476ff4bc2d1abd6f299b2855970981e5783398\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0c61e207f4234cf9a59bd787fc476ff4bc2d1abd6f299b2855970981e5783398\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T09:57:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T09:57:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveRead
Only\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8x7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1abeca9df52e5628432a4cb0be0f212a643a644abfe303559ec7d90f3fd3955a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1abeca9df52e5628432a4cb0be0f212a643a644abfe303559ec7d90f3fd3955a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T09:57:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T09:57:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8x7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0654ea71914d478220611495dbefd1e47b1309aa18c6a05b21a4a45f0d8f1ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\"
:0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c0654ea71914d478220611495dbefd1e47b1309aa18c6a05b21a4a45f0d8f1ca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T09:57:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T09:57:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8x7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5307b806fcee628d7552a399a950a096745a8eb67c3f72c5a10854f042b992a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5307b806fcee628d7552a399a950a096745a8eb67c3f72c5a10854f042b992a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T09:57:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T09:57:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8x7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\"
,\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T09:57:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5twvn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T09:57:49Z is after 2025-08-24T17:21:41Z" Oct 14 09:57:49 crc kubenswrapper[4698]: I1014 09:57:49.165814 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hspfz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d02f5359-81fc-4261-b995-e58c78bcec0e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://baefa979aee6f8fbe41df9542964f3c7b2a4331ec5111d2f622ec7ca41a0d96e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjlwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://876998f4dcb79a6bc0faa35871730f5d2f9a07f94baf6da29ed9a9e794705ee5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjlwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e5759411e3d5ea6fa7515b4cd1255735c870b8c035fe586a38d2f436b752118\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjlwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6e9733c3f4d25662c9da6201d0feee797de24acebb7d6e7c67d72709aae95db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:21Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjlwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://90b4b75613b321be60e549ab6ab2eaaab57fd6b49aa01f5254a0e83ac2e0167e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjlwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://11274a26e2292cdf2b793e6290657ed3cc737d1454b39f33b8f0272f67f70f3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjlwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2fb5dca43ec621632afbbbe21069c7f48b2b98b12882cb4ff5b9bf60ddf6bce6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2fb5dca43ec621632afbbbe21069c7f48b2b98b12882cb4ff5b9bf60ddf6bce6\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-14T09:57:46Z\\\",\\\"message\\\":\\\"ketplace-operator-metrics_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-marketplace/marketplace-operator-metrics\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, 
AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.53\\\\\\\", Port:8383, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}, services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.53\\\\\\\", Port:8081, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI1014 09:57:46.069019 6340 services_controller.go:452] Built service openshift-marketplace/marketplace-operator-metrics per-node LB for network=default: []services.LB{}\\\\nI1014 09:57:46.068992 6340 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-machine-api/machine-api-controllers]} name:Service_openshift-machine-api/machine-api-controllers_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-14T09:57:45Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-hspfz_openshift-ovn-kubernetes(d02f5359-81fc-4261-b995-e58c78bcec0e)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjlwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://626a1ca2a8aa5d6c1b4078a0bc7b0aa540656abcedd8be206fbb93cdc96c1d95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjlwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f7d7f89348e04b617ba4ec3cf76f4fefdd36492c656aed8141f3775a7ca8e3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0f7d7f89348e04b617
ba4ec3cf76f4fefdd36492c656aed8141f3775a7ca8e3b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T09:57:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T09:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjlwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T09:57:18Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hspfz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T09:57:49Z is after 2025-08-24T17:21:41Z" Oct 14 09:57:49 crc kubenswrapper[4698]: I1014 09:57:49.184253 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ndfs7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dba065f3-2084-442a-9f77-a1dfb007aa0a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://41d90487b005df3b6a9b8a3cbb8860b13fcc2ff7ca56979998ab7258f10b681c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wtc5t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7bfa794859b5471a4dfe11ac227eb36674c31
36b170fafdecaf916eb54d63a4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wtc5t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T09:57:31Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-ndfs7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T09:57:49Z is after 2025-08-24T17:21:41Z" Oct 14 09:57:49 crc kubenswrapper[4698]: I1014 09:57:49.200866 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://046b339570690752cfbe63e358065d3b58ef84c6708729abc1f590ff4c880521\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-10-14T09:57:49Z is after 2025-08-24T17:21:41Z" Oct 14 09:57:49 crc kubenswrapper[4698]: I1014 09:57:49.213472 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T09:57:49Z is after 2025-08-24T17:21:41Z" Oct 14 09:57:49 crc kubenswrapper[4698]: I1014 09:57:49.231611 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6bbd953b34291d62c5b40ab6339c4709be89d1983748419cabf1adff245af77f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b5dcb74bf1caafb12296355d176dd1cda1c06ecea8293c0cc36d517213fe6bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T09:57:49Z is after 2025-08-24T17:21:41Z" Oct 14 09:57:49 crc kubenswrapper[4698]: I1014 09:57:49.233470 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:57:49 crc kubenswrapper[4698]: I1014 09:57:49.233510 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:57:49 crc kubenswrapper[4698]: I1014 09:57:49.233519 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:57:49 crc kubenswrapper[4698]: I1014 09:57:49.233540 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:57:49 crc kubenswrapper[4698]: I1014 09:57:49.233552 4698 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:57:49Z","lastTransitionTime":"2025-10-14T09:57:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 09:57:49 crc kubenswrapper[4698]: I1014 09:57:49.247947 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T09:57:49Z is after 2025-08-24T17:21:41Z" Oct 14 09:57:49 crc kubenswrapper[4698]: I1014 09:57:49.268399 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-b7cbk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fbf10bbc-318d-4f46-83a0-fdbad9888201\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a0b9fe188f41927a342dc4928eaa5f27d151d9901af15f33ce252bdbfbb3be6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tkghj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T09:57:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-b7cbk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T09:57:49Z is after 2025-08-24T17:21:41Z" Oct 14 09:57:49 crc kubenswrapper[4698]: I1014 09:57:49.284800 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-lp4sk" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c359a8fc-1e2f-49af-8da2-719d52bd969a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://082e0edeedb2cabe07e14812f2406a98f39fdf9923f562c33ccc9a761d0e2472\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lzl92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8068aecfbfc1
56636e404dba3daa36e0bebb92ecf3c63158eaebf9a03cdc96b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lzl92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T09:57:18Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-lp4sk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T09:57:49Z is after 2025-08-24T17:21:41Z" Oct 14 09:57:49 crc kubenswrapper[4698]: I1014 09:57:49.299517 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-jbpnj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"41f5ac86-35f8-416c-bbfe-1e182975ec5c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:32Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:32Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7f52v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7f52v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T09:57:32Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-jbpnj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T09:57:49Z is after 2025-08-24T17:21:41Z" Oct 14 09:57:49 crc 
kubenswrapper[4698]: I1014 09:57:49.336169 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:57:49 crc kubenswrapper[4698]: I1014 09:57:49.336227 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:57:49 crc kubenswrapper[4698]: I1014 09:57:49.336247 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:57:49 crc kubenswrapper[4698]: I1014 09:57:49.336272 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:57:49 crc kubenswrapper[4698]: I1014 09:57:49.336290 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:57:49Z","lastTransitionTime":"2025-10-14T09:57:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:57:49 crc kubenswrapper[4698]: I1014 09:57:49.439124 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:57:49 crc kubenswrapper[4698]: I1014 09:57:49.439154 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:57:49 crc kubenswrapper[4698]: I1014 09:57:49.439163 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:57:49 crc kubenswrapper[4698]: I1014 09:57:49.439177 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:57:49 crc kubenswrapper[4698]: I1014 09:57:49.439185 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:57:49Z","lastTransitionTime":"2025-10-14T09:57:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:57:49 crc kubenswrapper[4698]: I1014 09:57:49.541955 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:57:49 crc kubenswrapper[4698]: I1014 09:57:49.542029 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:57:49 crc kubenswrapper[4698]: I1014 09:57:49.542051 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:57:49 crc kubenswrapper[4698]: I1014 09:57:49.542075 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:57:49 crc kubenswrapper[4698]: I1014 09:57:49.542091 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:57:49Z","lastTransitionTime":"2025-10-14T09:57:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:57:49 crc kubenswrapper[4698]: I1014 09:57:49.645846 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:57:49 crc kubenswrapper[4698]: I1014 09:57:49.645901 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:57:49 crc kubenswrapper[4698]: I1014 09:57:49.645912 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:57:49 crc kubenswrapper[4698]: I1014 09:57:49.645938 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:57:49 crc kubenswrapper[4698]: I1014 09:57:49.645956 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:57:49Z","lastTransitionTime":"2025-10-14T09:57:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:57:49 crc kubenswrapper[4698]: I1014 09:57:49.749525 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:57:49 crc kubenswrapper[4698]: I1014 09:57:49.749612 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:57:49 crc kubenswrapper[4698]: I1014 09:57:49.749637 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:57:49 crc kubenswrapper[4698]: I1014 09:57:49.749665 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:57:49 crc kubenswrapper[4698]: I1014 09:57:49.749691 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:57:49Z","lastTransitionTime":"2025-10-14T09:57:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:57:49 crc kubenswrapper[4698]: I1014 09:57:49.852654 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:57:49 crc kubenswrapper[4698]: I1014 09:57:49.852702 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:57:49 crc kubenswrapper[4698]: I1014 09:57:49.852711 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:57:49 crc kubenswrapper[4698]: I1014 09:57:49.852727 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:57:49 crc kubenswrapper[4698]: I1014 09:57:49.852736 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:57:49Z","lastTransitionTime":"2025-10-14T09:57:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:57:49 crc kubenswrapper[4698]: I1014 09:57:49.956361 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:57:49 crc kubenswrapper[4698]: I1014 09:57:49.956434 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:57:49 crc kubenswrapper[4698]: I1014 09:57:49.956452 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:57:49 crc kubenswrapper[4698]: I1014 09:57:49.956475 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:57:49 crc kubenswrapper[4698]: I1014 09:57:49.956493 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:57:49Z","lastTransitionTime":"2025-10-14T09:57:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 09:57:50 crc kubenswrapper[4698]: I1014 09:57:50.016131 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jbpnj" Oct 14 09:57:50 crc kubenswrapper[4698]: E1014 09:57:50.016412 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-jbpnj" podUID="41f5ac86-35f8-416c-bbfe-1e182975ec5c" Oct 14 09:57:50 crc kubenswrapper[4698]: I1014 09:57:50.060091 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:57:50 crc kubenswrapper[4698]: I1014 09:57:50.060160 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:57:50 crc kubenswrapper[4698]: I1014 09:57:50.060180 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:57:50 crc kubenswrapper[4698]: I1014 09:57:50.060206 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:57:50 crc kubenswrapper[4698]: I1014 09:57:50.060225 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:57:50Z","lastTransitionTime":"2025-10-14T09:57:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:57:50 crc kubenswrapper[4698]: I1014 09:57:50.163075 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:57:50 crc kubenswrapper[4698]: I1014 09:57:50.163133 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:57:50 crc kubenswrapper[4698]: I1014 09:57:50.163149 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:57:50 crc kubenswrapper[4698]: I1014 09:57:50.163173 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:57:50 crc kubenswrapper[4698]: I1014 09:57:50.163190 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:57:50Z","lastTransitionTime":"2025-10-14T09:57:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:57:50 crc kubenswrapper[4698]: I1014 09:57:50.267138 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:57:50 crc kubenswrapper[4698]: I1014 09:57:50.267646 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:57:50 crc kubenswrapper[4698]: I1014 09:57:50.267672 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:57:50 crc kubenswrapper[4698]: I1014 09:57:50.267701 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:57:50 crc kubenswrapper[4698]: I1014 09:57:50.267725 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:57:50Z","lastTransitionTime":"2025-10-14T09:57:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:57:50 crc kubenswrapper[4698]: I1014 09:57:50.371071 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:57:50 crc kubenswrapper[4698]: I1014 09:57:50.371128 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:57:50 crc kubenswrapper[4698]: I1014 09:57:50.371142 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:57:50 crc kubenswrapper[4698]: I1014 09:57:50.371160 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:57:50 crc kubenswrapper[4698]: I1014 09:57:50.371177 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:57:50Z","lastTransitionTime":"2025-10-14T09:57:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:57:50 crc kubenswrapper[4698]: I1014 09:57:50.473611 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:57:50 crc kubenswrapper[4698]: I1014 09:57:50.473673 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:57:50 crc kubenswrapper[4698]: I1014 09:57:50.473691 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:57:50 crc kubenswrapper[4698]: I1014 09:57:50.473718 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:57:50 crc kubenswrapper[4698]: I1014 09:57:50.473736 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:57:50Z","lastTransitionTime":"2025-10-14T09:57:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:57:50 crc kubenswrapper[4698]: I1014 09:57:50.576488 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:57:50 crc kubenswrapper[4698]: I1014 09:57:50.576551 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:57:50 crc kubenswrapper[4698]: I1014 09:57:50.576574 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:57:50 crc kubenswrapper[4698]: I1014 09:57:50.576602 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:57:50 crc kubenswrapper[4698]: I1014 09:57:50.576624 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:57:50Z","lastTransitionTime":"2025-10-14T09:57:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:57:50 crc kubenswrapper[4698]: I1014 09:57:50.679699 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:57:50 crc kubenswrapper[4698]: I1014 09:57:50.679783 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:57:50 crc kubenswrapper[4698]: I1014 09:57:50.679798 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:57:50 crc kubenswrapper[4698]: I1014 09:57:50.679819 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:57:50 crc kubenswrapper[4698]: I1014 09:57:50.679833 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:57:50Z","lastTransitionTime":"2025-10-14T09:57:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:57:50 crc kubenswrapper[4698]: I1014 09:57:50.782744 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:57:50 crc kubenswrapper[4698]: I1014 09:57:50.782846 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:57:50 crc kubenswrapper[4698]: I1014 09:57:50.782871 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:57:50 crc kubenswrapper[4698]: I1014 09:57:50.782899 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:57:50 crc kubenswrapper[4698]: I1014 09:57:50.782920 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:57:50Z","lastTransitionTime":"2025-10-14T09:57:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 09:57:50 crc kubenswrapper[4698]: I1014 09:57:50.815309 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 14 09:57:50 crc kubenswrapper[4698]: E1014 09:57:50.815505 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2025-10-14 09:58:22.815470274 +0000 UTC m=+84.512769730 (durationBeforeRetry 32s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 14 09:57:50 crc kubenswrapper[4698]: I1014 09:57:50.815584 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 14 09:57:50 crc kubenswrapper[4698]: I1014 09:57:50.815660 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 14 09:57:50 crc kubenswrapper[4698]: E1014 09:57:50.815831 4698 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Oct 14 09:57:50 crc kubenswrapper[4698]: E1014 09:57:50.815901 4698 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Oct 14 09:57:50 crc kubenswrapper[4698]: E1014 09:57:50.815926 4698 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-10-14 09:58:22.815899056 +0000 UTC m=+84.513198512 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Oct 14 09:57:50 crc kubenswrapper[4698]: E1014 09:57:50.815970 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-10-14 09:58:22.815950928 +0000 UTC m=+84.513250374 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Oct 14 09:57:50 crc kubenswrapper[4698]: I1014 09:57:50.886034 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:57:50 crc kubenswrapper[4698]: I1014 09:57:50.886095 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:57:50 crc kubenswrapper[4698]: I1014 09:57:50.886112 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:57:50 crc kubenswrapper[4698]: I1014 09:57:50.886137 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:57:50 crc kubenswrapper[4698]: I1014 09:57:50.886154 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:57:50Z","lastTransitionTime":"2025-10-14T09:57:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:57:50 crc kubenswrapper[4698]: I1014 09:57:50.916587 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 14 09:57:50 crc kubenswrapper[4698]: I1014 09:57:50.916656 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 14 09:57:50 crc kubenswrapper[4698]: E1014 09:57:50.916844 4698 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Oct 14 09:57:50 crc kubenswrapper[4698]: E1014 09:57:50.916878 4698 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Oct 14 09:57:50 crc kubenswrapper[4698]: E1014 09:57:50.916914 4698 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Oct 14 09:57:50 crc kubenswrapper[4698]: E1014 09:57:50.916924 4698 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Oct 14 09:57:50 crc kubenswrapper[4698]: E1014 09:57:50.916933 4698 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl 
for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 14 09:57:50 crc kubenswrapper[4698]: E1014 09:57:50.916956 4698 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 14 09:57:50 crc kubenswrapper[4698]: E1014 09:57:50.917017 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-10-14 09:58:22.916996218 +0000 UTC m=+84.614295674 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 14 09:57:50 crc kubenswrapper[4698]: E1014 09:57:50.917043 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-10-14 09:58:22.917031509 +0000 UTC m=+84.614330955 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 14 09:57:50 crc kubenswrapper[4698]: I1014 09:57:50.989198 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:57:50 crc kubenswrapper[4698]: I1014 09:57:50.989271 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:57:50 crc kubenswrapper[4698]: I1014 09:57:50.989294 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:57:50 crc kubenswrapper[4698]: I1014 09:57:50.989323 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:57:50 crc kubenswrapper[4698]: I1014 09:57:50.989345 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:57:50Z","lastTransitionTime":"2025-10-14T09:57:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 09:57:51 crc kubenswrapper[4698]: I1014 09:57:51.016945 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 14 09:57:51 crc kubenswrapper[4698]: I1014 09:57:51.017217 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 14 09:57:51 crc kubenswrapper[4698]: I1014 09:57:51.017047 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 14 09:57:51 crc kubenswrapper[4698]: E1014 09:57:51.017599 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 14 09:57:51 crc kubenswrapper[4698]: E1014 09:57:51.017900 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 14 09:57:51 crc kubenswrapper[4698]: E1014 09:57:51.017752 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 14 09:57:51 crc kubenswrapper[4698]: I1014 09:57:51.092607 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:57:51 crc kubenswrapper[4698]: I1014 09:57:51.092673 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:57:51 crc kubenswrapper[4698]: I1014 09:57:51.092690 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:57:51 crc kubenswrapper[4698]: I1014 09:57:51.092715 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:57:51 crc kubenswrapper[4698]: I1014 09:57:51.092737 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:57:51Z","lastTransitionTime":"2025-10-14T09:57:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:57:51 crc kubenswrapper[4698]: I1014 09:57:51.195559 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:57:51 crc kubenswrapper[4698]: I1014 09:57:51.195628 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:57:51 crc kubenswrapper[4698]: I1014 09:57:51.195652 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:57:51 crc kubenswrapper[4698]: I1014 09:57:51.195681 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:57:51 crc kubenswrapper[4698]: I1014 09:57:51.195702 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:57:51Z","lastTransitionTime":"2025-10-14T09:57:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:57:51 crc kubenswrapper[4698]: I1014 09:57:51.298408 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:57:51 crc kubenswrapper[4698]: I1014 09:57:51.298464 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:57:51 crc kubenswrapper[4698]: I1014 09:57:51.298481 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:57:51 crc kubenswrapper[4698]: I1014 09:57:51.298503 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:57:51 crc kubenswrapper[4698]: I1014 09:57:51.298521 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:57:51Z","lastTransitionTime":"2025-10-14T09:57:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:57:51 crc kubenswrapper[4698]: I1014 09:57:51.402139 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:57:51 crc kubenswrapper[4698]: I1014 09:57:51.402197 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:57:51 crc kubenswrapper[4698]: I1014 09:57:51.402214 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:57:51 crc kubenswrapper[4698]: I1014 09:57:51.402241 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:57:51 crc kubenswrapper[4698]: I1014 09:57:51.402258 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:57:51Z","lastTransitionTime":"2025-10-14T09:57:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:57:51 crc kubenswrapper[4698]: I1014 09:57:51.506315 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:57:51 crc kubenswrapper[4698]: I1014 09:57:51.506388 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:57:51 crc kubenswrapper[4698]: I1014 09:57:51.506408 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:57:51 crc kubenswrapper[4698]: I1014 09:57:51.506440 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:57:51 crc kubenswrapper[4698]: I1014 09:57:51.506463 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:57:51Z","lastTransitionTime":"2025-10-14T09:57:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:57:51 crc kubenswrapper[4698]: I1014 09:57:51.610117 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:57:51 crc kubenswrapper[4698]: I1014 09:57:51.610174 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:57:51 crc kubenswrapper[4698]: I1014 09:57:51.610193 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:57:51 crc kubenswrapper[4698]: I1014 09:57:51.610215 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:57:51 crc kubenswrapper[4698]: I1014 09:57:51.610233 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:57:51Z","lastTransitionTime":"2025-10-14T09:57:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:57:51 crc kubenswrapper[4698]: I1014 09:57:51.712969 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:57:51 crc kubenswrapper[4698]: I1014 09:57:51.713038 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:57:51 crc kubenswrapper[4698]: I1014 09:57:51.713058 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:57:51 crc kubenswrapper[4698]: I1014 09:57:51.713085 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:57:51 crc kubenswrapper[4698]: I1014 09:57:51.713105 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:57:51Z","lastTransitionTime":"2025-10-14T09:57:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:57:51 crc kubenswrapper[4698]: I1014 09:57:51.817870 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:57:51 crc kubenswrapper[4698]: I1014 09:57:51.817929 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:57:51 crc kubenswrapper[4698]: I1014 09:57:51.817954 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:57:51 crc kubenswrapper[4698]: I1014 09:57:51.817983 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:57:51 crc kubenswrapper[4698]: I1014 09:57:51.818007 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:57:51Z","lastTransitionTime":"2025-10-14T09:57:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:57:51 crc kubenswrapper[4698]: I1014 09:57:51.922842 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:57:51 crc kubenswrapper[4698]: I1014 09:57:51.922964 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:57:51 crc kubenswrapper[4698]: I1014 09:57:51.922997 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:57:51 crc kubenswrapper[4698]: I1014 09:57:51.923038 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:57:51 crc kubenswrapper[4698]: I1014 09:57:51.923078 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:57:51Z","lastTransitionTime":"2025-10-14T09:57:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 09:57:52 crc kubenswrapper[4698]: I1014 09:57:52.016156 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jbpnj" Oct 14 09:57:52 crc kubenswrapper[4698]: E1014 09:57:52.016641 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-jbpnj" podUID="41f5ac86-35f8-416c-bbfe-1e182975ec5c" Oct 14 09:57:52 crc kubenswrapper[4698]: I1014 09:57:52.026393 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:57:52 crc kubenswrapper[4698]: I1014 09:57:52.026450 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:57:52 crc kubenswrapper[4698]: I1014 09:57:52.026474 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:57:52 crc kubenswrapper[4698]: I1014 09:57:52.026506 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:57:52 crc kubenswrapper[4698]: I1014 09:57:52.026527 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:57:52Z","lastTransitionTime":"2025-10-14T09:57:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:57:52 crc kubenswrapper[4698]: I1014 09:57:52.129449 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:57:52 crc kubenswrapper[4698]: I1014 09:57:52.129507 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:57:52 crc kubenswrapper[4698]: I1014 09:57:52.129524 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:57:52 crc kubenswrapper[4698]: I1014 09:57:52.129546 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:57:52 crc kubenswrapper[4698]: I1014 09:57:52.129561 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:57:52Z","lastTransitionTime":"2025-10-14T09:57:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:57:52 crc kubenswrapper[4698]: I1014 09:57:52.232293 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:57:52 crc kubenswrapper[4698]: I1014 09:57:52.232388 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:57:52 crc kubenswrapper[4698]: I1014 09:57:52.232412 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:57:52 crc kubenswrapper[4698]: I1014 09:57:52.232441 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:57:52 crc kubenswrapper[4698]: I1014 09:57:52.232461 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:57:52Z","lastTransitionTime":"2025-10-14T09:57:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:57:52 crc kubenswrapper[4698]: I1014 09:57:52.335972 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:57:52 crc kubenswrapper[4698]: I1014 09:57:52.336492 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:57:52 crc kubenswrapper[4698]: I1014 09:57:52.336555 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:57:52 crc kubenswrapper[4698]: I1014 09:57:52.336580 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:57:52 crc kubenswrapper[4698]: I1014 09:57:52.337010 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:57:52Z","lastTransitionTime":"2025-10-14T09:57:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:57:52 crc kubenswrapper[4698]: I1014 09:57:52.440100 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:57:52 crc kubenswrapper[4698]: I1014 09:57:52.440163 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:57:52 crc kubenswrapper[4698]: I1014 09:57:52.440175 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:57:52 crc kubenswrapper[4698]: I1014 09:57:52.440191 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:57:52 crc kubenswrapper[4698]: I1014 09:57:52.440203 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:57:52Z","lastTransitionTime":"2025-10-14T09:57:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:57:52 crc kubenswrapper[4698]: I1014 09:57:52.524585 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:57:52 crc kubenswrapper[4698]: I1014 09:57:52.524662 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:57:52 crc kubenswrapper[4698]: I1014 09:57:52.524681 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:57:52 crc kubenswrapper[4698]: I1014 09:57:52.524708 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:57:52 crc kubenswrapper[4698]: I1014 09:57:52.524727 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:57:52Z","lastTransitionTime":"2025-10-14T09:57:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:57:52 crc kubenswrapper[4698]: E1014 09:57:52.545225 4698 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-14T09:57:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-14T09:57:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:52Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-14T09:57:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-14T09:57:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:52Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ec98e803-8937-4fdb-8662-3488c6a305f2\\\",\\\"systemUUID\\\":\\\"8e872109-adee-4b6d-91bf-d9ced28af93f\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T09:57:52Z is after 2025-08-24T17:21:41Z" Oct 14 09:57:52 crc kubenswrapper[4698]: I1014 09:57:52.550658 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:57:52 crc kubenswrapper[4698]: I1014 09:57:52.550808 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:57:52 crc kubenswrapper[4698]: I1014 09:57:52.550844 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:57:52 crc kubenswrapper[4698]: I1014 09:57:52.550876 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:57:52 crc kubenswrapper[4698]: I1014 09:57:52.550899 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:57:52Z","lastTransitionTime":"2025-10-14T09:57:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:57:52 crc kubenswrapper[4698]: E1014 09:57:52.571153 4698 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-14T09:57:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-14T09:57:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:52Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-14T09:57:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-14T09:57:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:52Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ec98e803-8937-4fdb-8662-3488c6a305f2\\\",\\\"systemUUID\\\":\\\"8e872109-adee-4b6d-91bf-d9ced28af93f\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T09:57:52Z is after 2025-08-24T17:21:41Z" Oct 14 09:57:52 crc kubenswrapper[4698]: I1014 09:57:52.577021 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:57:52 crc kubenswrapper[4698]: I1014 09:57:52.577084 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:57:52 crc kubenswrapper[4698]: I1014 09:57:52.577106 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:57:52 crc kubenswrapper[4698]: I1014 09:57:52.577134 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:57:52 crc kubenswrapper[4698]: I1014 09:57:52.577158 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:57:52Z","lastTransitionTime":"2025-10-14T09:57:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:57:52 crc kubenswrapper[4698]: E1014 09:57:52.597832 4698 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-14T09:57:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-14T09:57:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:52Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-14T09:57:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-14T09:57:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:52Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ec98e803-8937-4fdb-8662-3488c6a305f2\\\",\\\"systemUUID\\\":\\\"8e872109-adee-4b6d-91bf-d9ced28af93f\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T09:57:52Z is after 2025-08-24T17:21:41Z" Oct 14 09:57:52 crc kubenswrapper[4698]: I1014 09:57:52.602981 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:57:52 crc kubenswrapper[4698]: I1014 09:57:52.603059 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:57:52 crc kubenswrapper[4698]: I1014 09:57:52.603078 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:57:52 crc kubenswrapper[4698]: I1014 09:57:52.603104 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:57:52 crc kubenswrapper[4698]: I1014 09:57:52.603121 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:57:52Z","lastTransitionTime":"2025-10-14T09:57:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:57:52 crc kubenswrapper[4698]: E1014 09:57:52.623031 4698 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-14T09:57:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-14T09:57:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:52Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-14T09:57:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-14T09:57:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:52Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[… identical image list, nodeInfo, and runtimeHandlers payload as in the preceding entry …]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T09:57:52Z is after 2025-08-24T17:21:41Z" Oct 14 09:57:52 crc kubenswrapper[4698]: I1014 09:57:52.628478 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:57:52 crc kubenswrapper[4698]: I1014 09:57:52.628550 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:57:52 crc kubenswrapper[4698]: I1014 09:57:52.628568 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:57:52 crc kubenswrapper[4698]: I1014 09:57:52.628594 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:57:52 crc kubenswrapper[4698]: I1014 09:57:52.628614 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:57:52Z","lastTransitionTime":"2025-10-14T09:57:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:57:52 crc kubenswrapper[4698]: E1014 09:57:52.648564 4698 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-14T09:57:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-14T09:57:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:52Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-14T09:57:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-14T09:57:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:52Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[… identical image list payload as in the preceding entries; entry truncated here in the source …]
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T09:57:52Z is after 2025-08-24T17:21:41Z" Oct 14 09:57:52 crc kubenswrapper[4698]: E1014 09:57:52.648941 4698 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Oct 14 09:57:52 crc kubenswrapper[4698]: I1014 09:57:52.650923 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:57:52 crc kubenswrapper[4698]: I1014 09:57:52.650988 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:57:52 crc kubenswrapper[4698]: I1014 09:57:52.651016 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:57:52 crc kubenswrapper[4698]: I1014 09:57:52.651048 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:57:52 crc kubenswrapper[4698]: I1014 09:57:52.651073 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:57:52Z","lastTransitionTime":"2025-10-14T09:57:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 09:57:52 crc kubenswrapper[4698]: I1014 09:57:52.753554 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:57:52 crc kubenswrapper[4698]: I1014 09:57:52.753603 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:57:52 crc kubenswrapper[4698]: I1014 09:57:52.753619 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:57:52 crc kubenswrapper[4698]: I1014 09:57:52.753641 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:57:52 crc kubenswrapper[4698]: I1014 09:57:52.753660 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:57:52Z","lastTransitionTime":"2025-10-14T09:57:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:57:52 crc kubenswrapper[4698]: I1014 09:57:52.856859 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:57:52 crc kubenswrapper[4698]: I1014 09:57:52.856932 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:57:52 crc kubenswrapper[4698]: I1014 09:57:52.856951 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:57:52 crc kubenswrapper[4698]: I1014 09:57:52.856974 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:57:52 crc kubenswrapper[4698]: I1014 09:57:52.856990 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:57:52Z","lastTransitionTime":"2025-10-14T09:57:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:57:52 crc kubenswrapper[4698]: I1014 09:57:52.960136 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:57:52 crc kubenswrapper[4698]: I1014 09:57:52.960206 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:57:52 crc kubenswrapper[4698]: I1014 09:57:52.960229 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:57:52 crc kubenswrapper[4698]: I1014 09:57:52.960258 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:57:52 crc kubenswrapper[4698]: I1014 09:57:52.960279 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:57:52Z","lastTransitionTime":"2025-10-14T09:57:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 09:57:53 crc kubenswrapper[4698]: I1014 09:57:53.016569 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 14 09:57:53 crc kubenswrapper[4698]: I1014 09:57:53.016707 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 14 09:57:53 crc kubenswrapper[4698]: E1014 09:57:53.016893 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 14 09:57:53 crc kubenswrapper[4698]: I1014 09:57:53.016927 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 14 09:57:53 crc kubenswrapper[4698]: E1014 09:57:53.017106 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 14 09:57:53 crc kubenswrapper[4698]: E1014 09:57:53.017389 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 14 09:57:53 crc kubenswrapper[4698]: I1014 09:57:53.062903 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:57:53 crc kubenswrapper[4698]: I1014 09:57:53.062972 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:57:53 crc kubenswrapper[4698]: I1014 09:57:53.062994 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:57:53 crc kubenswrapper[4698]: I1014 09:57:53.063019 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:57:53 crc kubenswrapper[4698]: I1014 09:57:53.063036 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:57:53Z","lastTransitionTime":"2025-10-14T09:57:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:57:53 crc kubenswrapper[4698]: I1014 09:57:53.166309 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:57:53 crc kubenswrapper[4698]: I1014 09:57:53.166365 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:57:53 crc kubenswrapper[4698]: I1014 09:57:53.166383 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:57:53 crc kubenswrapper[4698]: I1014 09:57:53.166405 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:57:53 crc kubenswrapper[4698]: I1014 09:57:53.166426 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:57:53Z","lastTransitionTime":"2025-10-14T09:57:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:57:53 crc kubenswrapper[4698]: I1014 09:57:53.268848 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:57:53 crc kubenswrapper[4698]: I1014 09:57:53.268922 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:57:53 crc kubenswrapper[4698]: I1014 09:57:53.268935 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:57:53 crc kubenswrapper[4698]: I1014 09:57:53.269012 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:57:53 crc kubenswrapper[4698]: I1014 09:57:53.269024 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:57:53Z","lastTransitionTime":"2025-10-14T09:57:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:57:53 crc kubenswrapper[4698]: I1014 09:57:53.375881 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:57:53 crc kubenswrapper[4698]: I1014 09:57:53.375973 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:57:53 crc kubenswrapper[4698]: I1014 09:57:53.375995 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:57:53 crc kubenswrapper[4698]: I1014 09:57:53.376173 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:57:53 crc kubenswrapper[4698]: I1014 09:57:53.376318 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:57:53Z","lastTransitionTime":"2025-10-14T09:57:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:57:53 crc kubenswrapper[4698]: I1014 09:57:53.479620 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:57:53 crc kubenswrapper[4698]: I1014 09:57:53.479678 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:57:53 crc kubenswrapper[4698]: I1014 09:57:53.479694 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:57:53 crc kubenswrapper[4698]: I1014 09:57:53.479747 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:57:53 crc kubenswrapper[4698]: I1014 09:57:53.479799 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:57:53Z","lastTransitionTime":"2025-10-14T09:57:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:57:53 crc kubenswrapper[4698]: I1014 09:57:53.582863 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:57:53 crc kubenswrapper[4698]: I1014 09:57:53.582934 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:57:53 crc kubenswrapper[4698]: I1014 09:57:53.582955 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:57:53 crc kubenswrapper[4698]: I1014 09:57:53.582984 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:57:53 crc kubenswrapper[4698]: I1014 09:57:53.583006 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:57:53Z","lastTransitionTime":"2025-10-14T09:57:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:57:53 crc kubenswrapper[4698]: I1014 09:57:53.685031 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:57:53 crc kubenswrapper[4698]: I1014 09:57:53.685083 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:57:53 crc kubenswrapper[4698]: I1014 09:57:53.685093 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:57:53 crc kubenswrapper[4698]: I1014 09:57:53.685110 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:57:53 crc kubenswrapper[4698]: I1014 09:57:53.685122 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:57:53Z","lastTransitionTime":"2025-10-14T09:57:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:57:53 crc kubenswrapper[4698]: I1014 09:57:53.788510 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:57:53 crc kubenswrapper[4698]: I1014 09:57:53.788569 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:57:53 crc kubenswrapper[4698]: I1014 09:57:53.788587 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:57:53 crc kubenswrapper[4698]: I1014 09:57:53.788609 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:57:53 crc kubenswrapper[4698]: I1014 09:57:53.788626 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:57:53Z","lastTransitionTime":"2025-10-14T09:57:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:57:53 crc kubenswrapper[4698]: I1014 09:57:53.891923 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:57:53 crc kubenswrapper[4698]: I1014 09:57:53.892111 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:57:53 crc kubenswrapper[4698]: I1014 09:57:53.892164 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:57:53 crc kubenswrapper[4698]: I1014 09:57:53.892195 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:57:53 crc kubenswrapper[4698]: I1014 09:57:53.892219 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:57:53Z","lastTransitionTime":"2025-10-14T09:57:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:57:53 crc kubenswrapper[4698]: I1014 09:57:53.995984 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:57:53 crc kubenswrapper[4698]: I1014 09:57:53.996038 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:57:53 crc kubenswrapper[4698]: I1014 09:57:53.996057 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:57:53 crc kubenswrapper[4698]: I1014 09:57:53.996082 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:57:53 crc kubenswrapper[4698]: I1014 09:57:53.996100 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:57:53Z","lastTransitionTime":"2025-10-14T09:57:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 09:57:54 crc kubenswrapper[4698]: I1014 09:57:54.016062 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jbpnj" Oct 14 09:57:54 crc kubenswrapper[4698]: E1014 09:57:54.016290 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-jbpnj" podUID="41f5ac86-35f8-416c-bbfe-1e182975ec5c" Oct 14 09:57:54 crc kubenswrapper[4698]: I1014 09:57:54.099349 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:57:54 crc kubenswrapper[4698]: I1014 09:57:54.099408 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:57:54 crc kubenswrapper[4698]: I1014 09:57:54.099424 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:57:54 crc kubenswrapper[4698]: I1014 09:57:54.099448 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:57:54 crc kubenswrapper[4698]: I1014 09:57:54.099466 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:57:54Z","lastTransitionTime":"2025-10-14T09:57:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:57:54 crc kubenswrapper[4698]: I1014 09:57:54.202037 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:57:54 crc kubenswrapper[4698]: I1014 09:57:54.202094 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:57:54 crc kubenswrapper[4698]: I1014 09:57:54.202111 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:57:54 crc kubenswrapper[4698]: I1014 09:57:54.202132 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:57:54 crc kubenswrapper[4698]: I1014 09:57:54.202149 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:57:54Z","lastTransitionTime":"2025-10-14T09:57:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:57:54 crc kubenswrapper[4698]: I1014 09:57:54.304822 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:57:54 crc kubenswrapper[4698]: I1014 09:57:54.304888 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:57:54 crc kubenswrapper[4698]: I1014 09:57:54.304909 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:57:54 crc kubenswrapper[4698]: I1014 09:57:54.304947 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:57:54 crc kubenswrapper[4698]: I1014 09:57:54.304989 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:57:54Z","lastTransitionTime":"2025-10-14T09:57:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:57:54 crc kubenswrapper[4698]: I1014 09:57:54.408582 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:57:54 crc kubenswrapper[4698]: I1014 09:57:54.408657 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:57:54 crc kubenswrapper[4698]: I1014 09:57:54.408680 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:57:54 crc kubenswrapper[4698]: I1014 09:57:54.408709 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:57:54 crc kubenswrapper[4698]: I1014 09:57:54.408730 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:57:54Z","lastTransitionTime":"2025-10-14T09:57:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:57:54 crc kubenswrapper[4698]: I1014 09:57:54.511452 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:57:54 crc kubenswrapper[4698]: I1014 09:57:54.511521 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:57:54 crc kubenswrapper[4698]: I1014 09:57:54.511547 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:57:54 crc kubenswrapper[4698]: I1014 09:57:54.511575 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:57:54 crc kubenswrapper[4698]: I1014 09:57:54.511599 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:57:54Z","lastTransitionTime":"2025-10-14T09:57:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:57:54 crc kubenswrapper[4698]: I1014 09:57:54.614071 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:57:54 crc kubenswrapper[4698]: I1014 09:57:54.614157 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:57:54 crc kubenswrapper[4698]: I1014 09:57:54.614190 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:57:54 crc kubenswrapper[4698]: I1014 09:57:54.614223 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:57:54 crc kubenswrapper[4698]: I1014 09:57:54.614243 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:57:54Z","lastTransitionTime":"2025-10-14T09:57:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:57:54 crc kubenswrapper[4698]: I1014 09:57:54.716966 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:57:54 crc kubenswrapper[4698]: I1014 09:57:54.717041 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:57:54 crc kubenswrapper[4698]: I1014 09:57:54.717069 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:57:54 crc kubenswrapper[4698]: I1014 09:57:54.717100 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:57:54 crc kubenswrapper[4698]: I1014 09:57:54.717123 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:57:54Z","lastTransitionTime":"2025-10-14T09:57:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:57:54 crc kubenswrapper[4698]: I1014 09:57:54.819995 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:57:54 crc kubenswrapper[4698]: I1014 09:57:54.820051 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:57:54 crc kubenswrapper[4698]: I1014 09:57:54.820062 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:57:54 crc kubenswrapper[4698]: I1014 09:57:54.820083 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:57:54 crc kubenswrapper[4698]: I1014 09:57:54.820095 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:57:54Z","lastTransitionTime":"2025-10-14T09:57:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:57:54 crc kubenswrapper[4698]: I1014 09:57:54.923194 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:57:54 crc kubenswrapper[4698]: I1014 09:57:54.923254 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:57:54 crc kubenswrapper[4698]: I1014 09:57:54.923276 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:57:54 crc kubenswrapper[4698]: I1014 09:57:54.923303 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:57:54 crc kubenswrapper[4698]: I1014 09:57:54.923322 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:57:54Z","lastTransitionTime":"2025-10-14T09:57:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 09:57:55 crc kubenswrapper[4698]: I1014 09:57:55.017111 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 14 09:57:55 crc kubenswrapper[4698]: I1014 09:57:55.017227 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 14 09:57:55 crc kubenswrapper[4698]: E1014 09:57:55.017288 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 14 09:57:55 crc kubenswrapper[4698]: I1014 09:57:55.017105 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 14 09:57:55 crc kubenswrapper[4698]: E1014 09:57:55.017411 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 14 09:57:55 crc kubenswrapper[4698]: E1014 09:57:55.017513 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 14 09:57:55 crc kubenswrapper[4698]: I1014 09:57:55.025822 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:57:55 crc kubenswrapper[4698]: I1014 09:57:55.025877 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:57:55 crc kubenswrapper[4698]: I1014 09:57:55.025894 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:57:55 crc kubenswrapper[4698]: I1014 09:57:55.025915 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:57:55 crc kubenswrapper[4698]: I1014 09:57:55.025932 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:57:55Z","lastTransitionTime":"2025-10-14T09:57:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:57:55 crc kubenswrapper[4698]: I1014 09:57:55.128642 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:57:55 crc kubenswrapper[4698]: I1014 09:57:55.128681 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:57:55 crc kubenswrapper[4698]: I1014 09:57:55.128692 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:57:55 crc kubenswrapper[4698]: I1014 09:57:55.128706 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:57:55 crc kubenswrapper[4698]: I1014 09:57:55.128715 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:57:55Z","lastTransitionTime":"2025-10-14T09:57:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:57:55 crc kubenswrapper[4698]: I1014 09:57:55.231619 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:57:55 crc kubenswrapper[4698]: I1014 09:57:55.231659 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:57:55 crc kubenswrapper[4698]: I1014 09:57:55.231671 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:57:55 crc kubenswrapper[4698]: I1014 09:57:55.231687 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:57:55 crc kubenswrapper[4698]: I1014 09:57:55.231698 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:57:55Z","lastTransitionTime":"2025-10-14T09:57:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:57:55 crc kubenswrapper[4698]: I1014 09:57:55.334312 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:57:55 crc kubenswrapper[4698]: I1014 09:57:55.334382 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:57:55 crc kubenswrapper[4698]: I1014 09:57:55.334404 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:57:55 crc kubenswrapper[4698]: I1014 09:57:55.334432 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:57:55 crc kubenswrapper[4698]: I1014 09:57:55.334456 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:57:55Z","lastTransitionTime":"2025-10-14T09:57:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:57:55 crc kubenswrapper[4698]: I1014 09:57:55.437280 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:57:55 crc kubenswrapper[4698]: I1014 09:57:55.437391 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:57:55 crc kubenswrapper[4698]: I1014 09:57:55.437414 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:57:55 crc kubenswrapper[4698]: I1014 09:57:55.437441 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:57:55 crc kubenswrapper[4698]: I1014 09:57:55.437463 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:57:55Z","lastTransitionTime":"2025-10-14T09:57:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:57:55 crc kubenswrapper[4698]: I1014 09:57:55.540973 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:57:55 crc kubenswrapper[4698]: I1014 09:57:55.541388 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:57:55 crc kubenswrapper[4698]: I1014 09:57:55.541581 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:57:55 crc kubenswrapper[4698]: I1014 09:57:55.541817 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:57:55 crc kubenswrapper[4698]: I1014 09:57:55.541995 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:57:55Z","lastTransitionTime":"2025-10-14T09:57:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:57:55 crc kubenswrapper[4698]: I1014 09:57:55.644850 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:57:55 crc kubenswrapper[4698]: I1014 09:57:55.644913 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:57:55 crc kubenswrapper[4698]: I1014 09:57:55.644935 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:57:55 crc kubenswrapper[4698]: I1014 09:57:55.645057 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:57:55 crc kubenswrapper[4698]: I1014 09:57:55.645085 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:57:55Z","lastTransitionTime":"2025-10-14T09:57:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:57:55 crc kubenswrapper[4698]: I1014 09:57:55.747732 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:57:55 crc kubenswrapper[4698]: I1014 09:57:55.747826 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:57:55 crc kubenswrapper[4698]: I1014 09:57:55.747850 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:57:55 crc kubenswrapper[4698]: I1014 09:57:55.747877 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:57:55 crc kubenswrapper[4698]: I1014 09:57:55.747899 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:57:55Z","lastTransitionTime":"2025-10-14T09:57:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:57:55 crc kubenswrapper[4698]: I1014 09:57:55.851211 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:57:55 crc kubenswrapper[4698]: I1014 09:57:55.851268 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:57:55 crc kubenswrapper[4698]: I1014 09:57:55.851284 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:57:55 crc kubenswrapper[4698]: I1014 09:57:55.851306 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:57:55 crc kubenswrapper[4698]: I1014 09:57:55.851321 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:57:55Z","lastTransitionTime":"2025-10-14T09:57:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:57:55 crc kubenswrapper[4698]: I1014 09:57:55.954374 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:57:55 crc kubenswrapper[4698]: I1014 09:57:55.954430 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:57:55 crc kubenswrapper[4698]: I1014 09:57:55.954448 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:57:55 crc kubenswrapper[4698]: I1014 09:57:55.954471 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:57:55 crc kubenswrapper[4698]: I1014 09:57:55.954489 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:57:55Z","lastTransitionTime":"2025-10-14T09:57:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 09:57:56 crc kubenswrapper[4698]: I1014 09:57:56.016148 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jbpnj" Oct 14 09:57:56 crc kubenswrapper[4698]: E1014 09:57:56.016400 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-jbpnj" podUID="41f5ac86-35f8-416c-bbfe-1e182975ec5c" Oct 14 09:57:56 crc kubenswrapper[4698]: I1014 09:57:56.058143 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:57:56 crc kubenswrapper[4698]: I1014 09:57:56.058203 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:57:56 crc kubenswrapper[4698]: I1014 09:57:56.058229 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:57:56 crc kubenswrapper[4698]: I1014 09:57:56.058255 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:57:56 crc kubenswrapper[4698]: I1014 09:57:56.058279 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:57:56Z","lastTransitionTime":"2025-10-14T09:57:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:57:56 crc kubenswrapper[4698]: I1014 09:57:56.161835 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:57:56 crc kubenswrapper[4698]: I1014 09:57:56.161898 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:57:56 crc kubenswrapper[4698]: I1014 09:57:56.161916 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:57:56 crc kubenswrapper[4698]: I1014 09:57:56.161938 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:57:56 crc kubenswrapper[4698]: I1014 09:57:56.161957 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:57:56Z","lastTransitionTime":"2025-10-14T09:57:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:57:56 crc kubenswrapper[4698]: I1014 09:57:56.265360 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:57:56 crc kubenswrapper[4698]: I1014 09:57:56.265425 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:57:56 crc kubenswrapper[4698]: I1014 09:57:56.265449 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:57:56 crc kubenswrapper[4698]: I1014 09:57:56.265479 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:57:56 crc kubenswrapper[4698]: I1014 09:57:56.265501 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:57:56Z","lastTransitionTime":"2025-10-14T09:57:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:57:56 crc kubenswrapper[4698]: I1014 09:57:56.368090 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:57:56 crc kubenswrapper[4698]: I1014 09:57:56.368132 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:57:56 crc kubenswrapper[4698]: I1014 09:57:56.368143 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:57:56 crc kubenswrapper[4698]: I1014 09:57:56.368158 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:57:56 crc kubenswrapper[4698]: I1014 09:57:56.368168 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:57:56Z","lastTransitionTime":"2025-10-14T09:57:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:57:56 crc kubenswrapper[4698]: I1014 09:57:56.470569 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:57:56 crc kubenswrapper[4698]: I1014 09:57:56.470608 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:57:56 crc kubenswrapper[4698]: I1014 09:57:56.470617 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:57:56 crc kubenswrapper[4698]: I1014 09:57:56.470631 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:57:56 crc kubenswrapper[4698]: I1014 09:57:56.470642 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:57:56Z","lastTransitionTime":"2025-10-14T09:57:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:57:56 crc kubenswrapper[4698]: I1014 09:57:56.573049 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:57:56 crc kubenswrapper[4698]: I1014 09:57:56.573092 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:57:56 crc kubenswrapper[4698]: I1014 09:57:56.573103 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:57:56 crc kubenswrapper[4698]: I1014 09:57:56.573119 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:57:56 crc kubenswrapper[4698]: I1014 09:57:56.573130 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:57:56Z","lastTransitionTime":"2025-10-14T09:57:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:57:56 crc kubenswrapper[4698]: I1014 09:57:56.675171 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:57:56 crc kubenswrapper[4698]: I1014 09:57:56.675234 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:57:56 crc kubenswrapper[4698]: I1014 09:57:56.675286 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:57:56 crc kubenswrapper[4698]: I1014 09:57:56.675317 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:57:56 crc kubenswrapper[4698]: I1014 09:57:56.675338 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:57:56Z","lastTransitionTime":"2025-10-14T09:57:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:57:56 crc kubenswrapper[4698]: I1014 09:57:56.778074 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:57:56 crc kubenswrapper[4698]: I1014 09:57:56.778299 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:57:56 crc kubenswrapper[4698]: I1014 09:57:56.778329 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:57:56 crc kubenswrapper[4698]: I1014 09:57:56.778359 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:57:56 crc kubenswrapper[4698]: I1014 09:57:56.778381 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:57:56Z","lastTransitionTime":"2025-10-14T09:57:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:57:56 crc kubenswrapper[4698]: I1014 09:57:56.881138 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:57:56 crc kubenswrapper[4698]: I1014 09:57:56.881183 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:57:56 crc kubenswrapper[4698]: I1014 09:57:56.881192 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:57:56 crc kubenswrapper[4698]: I1014 09:57:56.881205 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:57:56 crc kubenswrapper[4698]: I1014 09:57:56.881214 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:57:56Z","lastTransitionTime":"2025-10-14T09:57:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:57:56 crc kubenswrapper[4698]: I1014 09:57:56.984122 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:57:56 crc kubenswrapper[4698]: I1014 09:57:56.984208 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:57:56 crc kubenswrapper[4698]: I1014 09:57:56.984241 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:57:56 crc kubenswrapper[4698]: I1014 09:57:56.984271 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:57:56 crc kubenswrapper[4698]: I1014 09:57:56.984291 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:57:56Z","lastTransitionTime":"2025-10-14T09:57:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 09:57:57 crc kubenswrapper[4698]: I1014 09:57:57.016630 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 14 09:57:57 crc kubenswrapper[4698]: I1014 09:57:57.016877 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 14 09:57:57 crc kubenswrapper[4698]: E1014 09:57:57.017141 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 14 09:57:57 crc kubenswrapper[4698]: I1014 09:57:57.017216 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 14 09:57:57 crc kubenswrapper[4698]: E1014 09:57:57.017418 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 14 09:57:57 crc kubenswrapper[4698]: E1014 09:57:57.017944 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 14 09:57:57 crc kubenswrapper[4698]: I1014 09:57:57.087504 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:57:57 crc kubenswrapper[4698]: I1014 09:57:57.088057 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:57:57 crc kubenswrapper[4698]: I1014 09:57:57.088153 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:57:57 crc kubenswrapper[4698]: I1014 09:57:57.088246 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:57:57 crc kubenswrapper[4698]: I1014 09:57:57.088330 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:57:57Z","lastTransitionTime":"2025-10-14T09:57:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:57:57 crc kubenswrapper[4698]: I1014 09:57:57.191093 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:57:57 crc kubenswrapper[4698]: I1014 09:57:57.191154 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:57:57 crc kubenswrapper[4698]: I1014 09:57:57.191170 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:57:57 crc kubenswrapper[4698]: I1014 09:57:57.191194 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:57:57 crc kubenswrapper[4698]: I1014 09:57:57.191213 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:57:57Z","lastTransitionTime":"2025-10-14T09:57:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:57:57 crc kubenswrapper[4698]: I1014 09:57:57.294357 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:57:57 crc kubenswrapper[4698]: I1014 09:57:57.294703 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:57:57 crc kubenswrapper[4698]: I1014 09:57:57.294891 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:57:57 crc kubenswrapper[4698]: I1014 09:57:57.295111 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:57:57 crc kubenswrapper[4698]: I1014 09:57:57.295282 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:57:57Z","lastTransitionTime":"2025-10-14T09:57:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:57:57 crc kubenswrapper[4698]: I1014 09:57:57.398244 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:57:57 crc kubenswrapper[4698]: I1014 09:57:57.398316 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:57:57 crc kubenswrapper[4698]: I1014 09:57:57.398340 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:57:57 crc kubenswrapper[4698]: I1014 09:57:57.398370 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:57:57 crc kubenswrapper[4698]: I1014 09:57:57.398391 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:57:57Z","lastTransitionTime":"2025-10-14T09:57:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:57:57 crc kubenswrapper[4698]: I1014 09:57:57.501511 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:57:57 crc kubenswrapper[4698]: I1014 09:57:57.501567 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:57:57 crc kubenswrapper[4698]: I1014 09:57:57.501588 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:57:57 crc kubenswrapper[4698]: I1014 09:57:57.501615 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:57:57 crc kubenswrapper[4698]: I1014 09:57:57.501636 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:57:57Z","lastTransitionTime":"2025-10-14T09:57:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:57:57 crc kubenswrapper[4698]: I1014 09:57:57.604548 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:57:57 crc kubenswrapper[4698]: I1014 09:57:57.604978 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:57:57 crc kubenswrapper[4698]: I1014 09:57:57.605133 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:57:57 crc kubenswrapper[4698]: I1014 09:57:57.605287 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:57:57 crc kubenswrapper[4698]: I1014 09:57:57.605417 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:57:57Z","lastTransitionTime":"2025-10-14T09:57:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:57:57 crc kubenswrapper[4698]: I1014 09:57:57.708561 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:57:57 crc kubenswrapper[4698]: I1014 09:57:57.708611 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:57:57 crc kubenswrapper[4698]: I1014 09:57:57.708620 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:57:57 crc kubenswrapper[4698]: I1014 09:57:57.708635 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:57:57 crc kubenswrapper[4698]: I1014 09:57:57.708645 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:57:57Z","lastTransitionTime":"2025-10-14T09:57:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:57:57 crc kubenswrapper[4698]: I1014 09:57:57.811277 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:57:57 crc kubenswrapper[4698]: I1014 09:57:57.811363 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:57:57 crc kubenswrapper[4698]: I1014 09:57:57.811389 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:57:57 crc kubenswrapper[4698]: I1014 09:57:57.811421 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:57:57 crc kubenswrapper[4698]: I1014 09:57:57.811443 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:57:57Z","lastTransitionTime":"2025-10-14T09:57:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:57:57 crc kubenswrapper[4698]: I1014 09:57:57.914545 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:57:57 crc kubenswrapper[4698]: I1014 09:57:57.914610 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:57:57 crc kubenswrapper[4698]: I1014 09:57:57.914629 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:57:57 crc kubenswrapper[4698]: I1014 09:57:57.914656 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:57:57 crc kubenswrapper[4698]: I1014 09:57:57.914676 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:57:57Z","lastTransitionTime":"2025-10-14T09:57:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 09:57:58 crc kubenswrapper[4698]: I1014 09:57:58.016011 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jbpnj" Oct 14 09:57:58 crc kubenswrapper[4698]: E1014 09:57:58.016427 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-jbpnj" podUID="41f5ac86-35f8-416c-bbfe-1e182975ec5c" Oct 14 09:57:58 crc kubenswrapper[4698]: I1014 09:57:58.018618 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:57:58 crc kubenswrapper[4698]: I1014 09:57:58.018694 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:57:58 crc kubenswrapper[4698]: I1014 09:57:58.018719 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:57:58 crc kubenswrapper[4698]: I1014 09:57:58.018748 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:57:58 crc kubenswrapper[4698]: I1014 09:57:58.018806 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:57:58Z","lastTransitionTime":"2025-10-14T09:57:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:57:58 crc kubenswrapper[4698]: I1014 09:57:58.121694 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:57:58 crc kubenswrapper[4698]: I1014 09:57:58.121731 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:57:58 crc kubenswrapper[4698]: I1014 09:57:58.121742 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:57:58 crc kubenswrapper[4698]: I1014 09:57:58.121756 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:57:58 crc kubenswrapper[4698]: I1014 09:57:58.121780 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:57:58Z","lastTransitionTime":"2025-10-14T09:57:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:57:58 crc kubenswrapper[4698]: I1014 09:57:58.224116 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:57:58 crc kubenswrapper[4698]: I1014 09:57:58.224159 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:57:58 crc kubenswrapper[4698]: I1014 09:57:58.224168 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:57:58 crc kubenswrapper[4698]: I1014 09:57:58.224181 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:57:58 crc kubenswrapper[4698]: I1014 09:57:58.224189 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:57:58Z","lastTransitionTime":"2025-10-14T09:57:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:57:58 crc kubenswrapper[4698]: I1014 09:57:58.327337 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:57:58 crc kubenswrapper[4698]: I1014 09:57:58.327379 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:57:58 crc kubenswrapper[4698]: I1014 09:57:58.327390 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:57:58 crc kubenswrapper[4698]: I1014 09:57:58.327406 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:57:58 crc kubenswrapper[4698]: I1014 09:57:58.327416 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:57:58Z","lastTransitionTime":"2025-10-14T09:57:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:57:58 crc kubenswrapper[4698]: I1014 09:57:58.431010 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:57:58 crc kubenswrapper[4698]: I1014 09:57:58.431061 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:57:58 crc kubenswrapper[4698]: I1014 09:57:58.431078 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:57:58 crc kubenswrapper[4698]: I1014 09:57:58.431102 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:57:58 crc kubenswrapper[4698]: I1014 09:57:58.431120 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:57:58Z","lastTransitionTime":"2025-10-14T09:57:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:57:58 crc kubenswrapper[4698]: I1014 09:57:58.534801 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:57:58 crc kubenswrapper[4698]: I1014 09:57:58.534871 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:57:58 crc kubenswrapper[4698]: I1014 09:57:58.534890 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:57:58 crc kubenswrapper[4698]: I1014 09:57:58.534915 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:57:58 crc kubenswrapper[4698]: I1014 09:57:58.534931 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:57:58Z","lastTransitionTime":"2025-10-14T09:57:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:57:58 crc kubenswrapper[4698]: I1014 09:57:58.638006 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:57:58 crc kubenswrapper[4698]: I1014 09:57:58.638069 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:57:58 crc kubenswrapper[4698]: I1014 09:57:58.638088 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:57:58 crc kubenswrapper[4698]: I1014 09:57:58.638114 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:57:58 crc kubenswrapper[4698]: I1014 09:57:58.638132 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:57:58Z","lastTransitionTime":"2025-10-14T09:57:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:57:58 crc kubenswrapper[4698]: I1014 09:57:58.740705 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:57:58 crc kubenswrapper[4698]: I1014 09:57:58.740815 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:57:58 crc kubenswrapper[4698]: I1014 09:57:58.740833 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:57:58 crc kubenswrapper[4698]: I1014 09:57:58.740856 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:57:58 crc kubenswrapper[4698]: I1014 09:57:58.740875 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:57:58Z","lastTransitionTime":"2025-10-14T09:57:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:57:58 crc kubenswrapper[4698]: I1014 09:57:58.844101 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:57:58 crc kubenswrapper[4698]: I1014 09:57:58.844170 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:57:58 crc kubenswrapper[4698]: I1014 09:57:58.844187 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:57:58 crc kubenswrapper[4698]: I1014 09:57:58.844211 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:57:58 crc kubenswrapper[4698]: I1014 09:57:58.844230 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:57:58Z","lastTransitionTime":"2025-10-14T09:57:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:57:58 crc kubenswrapper[4698]: I1014 09:57:58.946944 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:57:58 crc kubenswrapper[4698]: I1014 09:57:58.947003 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:57:58 crc kubenswrapper[4698]: I1014 09:57:58.947020 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:57:58 crc kubenswrapper[4698]: I1014 09:57:58.947041 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:57:58 crc kubenswrapper[4698]: I1014 09:57:58.947059 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:57:58Z","lastTransitionTime":"2025-10-14T09:57:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 09:57:59 crc kubenswrapper[4698]: I1014 09:57:59.016061 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 14 09:57:59 crc kubenswrapper[4698]: I1014 09:57:59.016128 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 14 09:57:59 crc kubenswrapper[4698]: E1014 09:57:59.016227 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 14 09:57:59 crc kubenswrapper[4698]: E1014 09:57:59.016441 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 14 09:57:59 crc kubenswrapper[4698]: I1014 09:57:59.016489 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 14 09:57:59 crc kubenswrapper[4698]: E1014 09:57:59.016582 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 14 09:57:59 crc kubenswrapper[4698]: I1014 09:57:59.035967 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aaee81debae3e251e075e61aca075c075d32fd976f1977adecba0c834e89cabb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveRead
Only\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T09:57:59Z is after 2025-08-24T17:21:41Z" Oct 14 09:57:59 crc kubenswrapper[4698]: I1014 09:57:59.051803 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:57:59 crc kubenswrapper[4698]: I1014 09:57:59.051872 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:57:59 crc kubenswrapper[4698]: I1014 09:57:59.051895 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:57:59 crc kubenswrapper[4698]: I1014 09:57:59.051926 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:57:59 crc kubenswrapper[4698]: I1014 09:57:59.051948 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:57:59Z","lastTransitionTime":"2025-10-14T09:57:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:57:59 crc kubenswrapper[4698]: I1014 09:57:59.053617 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8rj7q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b3d7ebe7-24ac-4bb6-be80-db147dc1c604\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e44a1413e71cab92d1b5ee4c81cc65edc9925fa1e888d5d5670e55e1b60ee81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-95sjn\\\",\\\"re
adOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T09:57:18Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8rj7q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T09:57:59Z is after 2025-08-24T17:21:41Z" Oct 14 09:57:59 crc kubenswrapper[4698]: I1014 09:57:59.068711 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-pfxrp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e00aa977-8736-4b4d-8d58-c3d13879c49a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4adf04967efc193a93227043cfb6bb56f0ec1a7af90e351b7a618e582ddd8c41\\\",\\\"image\\\":\\\"quay.io/openshift-releas
e-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c9lxj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T09:57:18Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-pfxrp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T09:57:59Z is after 2025-08-24T17:21:41Z" Oct 14 09:57:59 crc kubenswrapper[4698]: I1014 09:57:59.092201 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5twvn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e9f983f-10a0-43b7-8590-346577a561ef\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://03bc09f64706818fd5d6b0d3eeea206f665d7cdb12ce6ae31c4706a3936dd0f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8x7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9090d92cdf10a884e39938f3d7908d63cb91ee3ea7832cb788aecaa47c70d753\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9090d92cdf10a884e39938f3d7908d63cb91ee3ea7832cb788aecaa47c70d753\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T09:57:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T09:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8x7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://20e7809019d6fa43f65e27abe934ed5dfc51a3afb0a2d9391fa4edd67fc03ac9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://20e7809019d6fa43f65e27abe934ed5dfc51a3afb0a2d9391fa4edd67fc03ac9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T09:57:20Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8x7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c61e207f4234cf9a59bd787fc476ff4bc2d1abd6f299b2855970981e5783398\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0c61e207f4234cf9a59bd787fc476ff4bc2d1abd6f299b2855970981e5783398\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T09:57:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T09:57:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8x7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1abec
a9df52e5628432a4cb0be0f212a643a644abfe303559ec7d90f3fd3955a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1abeca9df52e5628432a4cb0be0f212a643a644abfe303559ec7d90f3fd3955a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T09:57:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T09:57:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8x7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0654ea71914d478220611495dbefd1e47b1309aa18c6a05b21a4a45f0d8f1ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c0654ea71914d478220611495dbefd1e47b1309aa18c6a05b21a4a45f0d8f1ca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T09:57:23Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2025-10-14T09:57:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8x7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5307b806fcee628d7552a399a950a096745a8eb67c3f72c5a10854f042b992a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5307b806fcee628d7552a399a950a096745a8eb67c3f72c5a10854f042b992a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T09:57:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T09:57:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8x7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T09:57:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5twvn\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T09:57:59Z is after 2025-08-24T17:21:41Z" Oct 14 09:57:59 crc kubenswrapper[4698]: I1014 09:57:59.124915 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hspfz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d02f5359-81fc-4261-b995-e58c78bcec0e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://baefa979aee6f8fbe41df9542964f3c7b2a4331ec5111d2f622ec7ca41a0d96e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjlwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://876998f4dcb79a6bc0faa35871730f5d2f9a07f94baf6da29ed9a9e794705ee5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjlwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e5759411e3d5ea6fa7515b4cd1255735c870b8c035fe586a38d2f436b752118\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjlwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6e9733c3f4d25662c9da6201d0feee797de24acebb7d6e7c67d72709aae95db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:21Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjlwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://90b4b75613b321be60e549ab6ab2eaaab57fd6b49aa01f5254a0e83ac2e0167e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjlwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://11274a26e2292cdf2b793e6290657ed3cc737d1454b39f33b8f0272f67f70f3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjlwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2fb5dca43ec621632afbbbe21069c7f48b2b98b12882cb4ff5b9bf60ddf6bce6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2fb5dca43ec621632afbbbe21069c7f48b2b98b12882cb4ff5b9bf60ddf6bce6\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-14T09:57:46Z\\\",\\\"message\\\":\\\"ketplace-operator-metrics_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-marketplace/marketplace-operator-metrics\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, 
AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.53\\\\\\\", Port:8383, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}, services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.53\\\\\\\", Port:8081, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI1014 09:57:46.069019 6340 services_controller.go:452] Built service openshift-marketplace/marketplace-operator-metrics per-node LB for network=default: []services.LB{}\\\\nI1014 09:57:46.068992 6340 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-machine-api/machine-api-controllers]} name:Service_openshift-machine-api/machine-api-controllers_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-14T09:57:45Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-hspfz_openshift-ovn-kubernetes(d02f5359-81fc-4261-b995-e58c78bcec0e)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjlwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://626a1ca2a8aa5d6c1b4078a0bc7b0aa540656abcedd8be206fbb93cdc96c1d95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjlwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f7d7f89348e04b617ba4ec3cf76f4fefdd36492c656aed8141f3775a7ca8e3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0f7d7f89348e04b617
ba4ec3cf76f4fefdd36492c656aed8141f3775a7ca8e3b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T09:57:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T09:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjlwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T09:57:18Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hspfz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T09:57:59Z is after 2025-08-24T17:21:41Z" Oct 14 09:57:59 crc kubenswrapper[4698]: I1014 09:57:59.137921 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ndfs7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dba065f3-2084-442a-9f77-a1dfb007aa0a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://41d90487b005df3b6a9b8a3cbb8860b13fcc2ff7ca56979998ab7258f10b681c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wtc5t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7bfa794859b5471a4dfe11ac227eb36674c31
36b170fafdecaf916eb54d63a4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wtc5t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T09:57:31Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-ndfs7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T09:57:59Z is after 2025-08-24T17:21:41Z" Oct 14 09:57:59 crc kubenswrapper[4698]: I1014 09:57:59.151887 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d453303d-af1d-4978-b4c5-ff12afadeb28\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:56:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d39574739cd1bd3498e575ad5961a3f656110963427ec6a5862160167dad9f1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a8b1dd92b7f979c51da79a23ddffeb82d73f170ded78b0995eca65ea14abbba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://57d68496a42ff5fb8359e4fd222a0e18e79e6ca885844d544d172e324325ee02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2fceda73199ee36e7cfe0b0bc827b42f7cc77db6b2c987828ffb57cacd1b8a6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://2fceda73199ee36e7cfe0b0bc827b42f7cc77db6b2c987828ffb57cacd1b8a6b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T09:56:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T09:56:59Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T09:56:59Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T09:57:59Z is after 2025-08-24T17:21:41Z" Oct 14 09:57:59 crc kubenswrapper[4698]: I1014 09:57:59.154438 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:57:59 crc kubenswrapper[4698]: I1014 09:57:59.154510 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:57:59 crc kubenswrapper[4698]: I1014 09:57:59.154534 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:57:59 crc kubenswrapper[4698]: I1014 09:57:59.154564 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:57:59 crc kubenswrapper[4698]: I1014 09:57:59.154586 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:57:59Z","lastTransitionTime":"2025-10-14T09:57:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:57:59 crc kubenswrapper[4698]: I1014 09:57:59.165923 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T09:57:59Z is after 2025-08-24T17:21:41Z" Oct 14 09:57:59 crc kubenswrapper[4698]: I1014 09:57:59.179996 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6bbd953b34291d62c5b40ab6339c4709be89d1983748419cabf1adff245af77f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b5dcb74bf1caafb12296355d176dd1cda1c06ecea8293c0cc36d517213fe6bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T09:57:59Z is after 2025-08-24T17:21:41Z" Oct 14 09:57:59 crc kubenswrapper[4698]: I1014 09:57:59.195914 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T09:57:59Z is after 2025-08-24T17:21:41Z" Oct 14 09:57:59 crc kubenswrapper[4698]: I1014 09:57:59.211973 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-b7cbk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fbf10bbc-318d-4f46-83a0-fdbad9888201\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a0b9fe188f41927a342dc4928eaa5f27d151d9901af15f33ce252bdbfbb3be6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tkghj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T09:57:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-b7cbk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T09:57:59Z is after 2025-08-24T17:21:41Z" Oct 14 09:57:59 crc kubenswrapper[4698]: I1014 09:57:59.226382 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-lp4sk" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c359a8fc-1e2f-49af-8da2-719d52bd969a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://082e0edeedb2cabe07e14812f2406a98f39fdf9923f562c33ccc9a761d0e2472\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lzl92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8068aecfbfc1
56636e404dba3daa36e0bebb92ecf3c63158eaebf9a03cdc96b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lzl92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T09:57:18Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-lp4sk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T09:57:59Z is after 2025-08-24T17:21:41Z" Oct 14 09:57:59 crc kubenswrapper[4698]: I1014 09:57:59.242722 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-jbpnj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"41f5ac86-35f8-416c-bbfe-1e182975ec5c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:32Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:32Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7f52v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7f52v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T09:57:32Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-jbpnj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T09:57:59Z is after 2025-08-24T17:21:41Z" Oct 14 09:57:59 crc 
kubenswrapper[4698]: I1014 09:57:59.257749 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:57:59 crc kubenswrapper[4698]: I1014 09:57:59.257813 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:57:59 crc kubenswrapper[4698]: I1014 09:57:59.257824 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:57:59 crc kubenswrapper[4698]: I1014 09:57:59.257841 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:57:59 crc kubenswrapper[4698]: I1014 09:57:59.257852 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:57:59Z","lastTransitionTime":"2025-10-14T09:57:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:57:59 crc kubenswrapper[4698]: I1014 09:57:59.258374 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://046b339570690752cfbe63e358065d3b58ef84c6708729abc1f590ff4c880521\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T09:57:59Z is after 2025-08-24T17:21:41Z" Oct 14 09:57:59 crc kubenswrapper[4698]: I1014 09:57:59.276726 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"32240a01-1692-4f43-aebd-2b04d9b2435e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:56:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0fcdc7699fee72d9dbf3ad7df185ac8c884125781080739e4245128af2e41040\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"
running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d32efa8c63456d23b3824db4a4debb51fa4d13d1fd5eb6a488b9392d96073435\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a78e7d6532fda47f50da010b729a9108961a3d5c79191517a86f00714ccb1163\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://
a013bd7c3d0aeb5289ee602dd8fcbff2b98aae30b99ccc8d87605732b7f469eb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a8cea8367d4864478508221648a52582ee794ca4eb7b48f112ca0d78d97c2f7e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"message\\\":\\\"file observer\\\\nW1014 09:57:17.768150 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1014 09:57:17.768243 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1014 09:57:17.769073 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-133410875/tls.crt::/tmp/serving-cert-133410875/tls.key\\\\\\\"\\\\nI1014 09:57:18.101848 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1014 09:57:18.107433 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1014 09:57:18.107456 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1014 09:57:18.107479 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1014 09:57:18.107484 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1014 09:57:18.112858 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1014 09:57:18.112875 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1014 09:57:18.112883 1 secure_serving.go:69] Use of 
insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1014 09:57:18.112889 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1014 09:57:18.112893 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1014 09:57:18.112896 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1014 09:57:18.112900 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1014 09:57:18.112903 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1014 09:57:18.114133 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-14T09:57:12Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2d12c9c114016d1302fa97515b0a59a82a4524b77e5d22cbec155beb6a52bce\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:00Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\
"containerID\\\":\\\"cri-o://cf7a250bbbc63f0ce78ae1c27f82643a02dc264c3b83555a4cc72a788eda52d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cf7a250bbbc63f0ce78ae1c27f82643a02dc264c3b83555a4cc72a788eda52d5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T09:56:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T09:56:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T09:56:59Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T09:57:59Z is after 2025-08-24T17:21:41Z" Oct 14 09:57:59 crc kubenswrapper[4698]: I1014 09:57:59.293900 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"32a7378b-ad21-49a8-9382-e7e5fbffb2f5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:56:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:56:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45beb2bbd344fbd265eca554410f31c493b55815bcac8dbb8c30c592b40dcfe2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9a23f4949446d47321258b172e07f6c592cd9921650920f8af93a5d749a61dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:56:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1fc70d614848f0b4bc342659099f9c617db052ebb995716f9cdb1113f4f78901\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://71c3764056ab55aa9f75e0bf62c0c7eed5440e9d580087ca9ce266611dd8c292\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-10-14T09:57:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T09:56:59Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T09:57:59Z is after 2025-08-24T17:21:41Z" Oct 14 09:57:59 crc kubenswrapper[4698]: I1014 09:57:59.309083 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T09:57:59Z is after 2025-08-24T17:21:41Z" Oct 14 09:57:59 crc kubenswrapper[4698]: I1014 09:57:59.359787 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:57:59 crc kubenswrapper[4698]: I1014 09:57:59.359831 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:57:59 crc kubenswrapper[4698]: I1014 09:57:59.359862 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:57:59 crc 
kubenswrapper[4698]: I1014 09:57:59.359882 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:57:59 crc kubenswrapper[4698]: I1014 09:57:59.359894 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:57:59Z","lastTransitionTime":"2025-10-14T09:57:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 09:57:59 crc kubenswrapper[4698]: I1014 09:57:59.461732 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:57:59 crc kubenswrapper[4698]: I1014 09:57:59.461805 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:57:59 crc kubenswrapper[4698]: I1014 09:57:59.461817 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:57:59 crc kubenswrapper[4698]: I1014 09:57:59.461833 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:57:59 crc kubenswrapper[4698]: I1014 09:57:59.461843 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:57:59Z","lastTransitionTime":"2025-10-14T09:57:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:57:59 crc kubenswrapper[4698]: I1014 09:57:59.564850 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:57:59 crc kubenswrapper[4698]: I1014 09:57:59.564899 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:57:59 crc kubenswrapper[4698]: I1014 09:57:59.564916 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:57:59 crc kubenswrapper[4698]: I1014 09:57:59.564938 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:57:59 crc kubenswrapper[4698]: I1014 09:57:59.564955 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:57:59Z","lastTransitionTime":"2025-10-14T09:57:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:57:59 crc kubenswrapper[4698]: I1014 09:57:59.667571 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:57:59 crc kubenswrapper[4698]: I1014 09:57:59.667693 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:57:59 crc kubenswrapper[4698]: I1014 09:57:59.667717 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:57:59 crc kubenswrapper[4698]: I1014 09:57:59.667742 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:57:59 crc kubenswrapper[4698]: I1014 09:57:59.667787 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:57:59Z","lastTransitionTime":"2025-10-14T09:57:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:57:59 crc kubenswrapper[4698]: I1014 09:57:59.771759 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:57:59 crc kubenswrapper[4698]: I1014 09:57:59.771840 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:57:59 crc kubenswrapper[4698]: I1014 09:57:59.771851 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:57:59 crc kubenswrapper[4698]: I1014 09:57:59.771869 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:57:59 crc kubenswrapper[4698]: I1014 09:57:59.771880 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:57:59Z","lastTransitionTime":"2025-10-14T09:57:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:57:59 crc kubenswrapper[4698]: I1014 09:57:59.874595 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:57:59 crc kubenswrapper[4698]: I1014 09:57:59.874666 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:57:59 crc kubenswrapper[4698]: I1014 09:57:59.874689 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:57:59 crc kubenswrapper[4698]: I1014 09:57:59.874722 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:57:59 crc kubenswrapper[4698]: I1014 09:57:59.874744 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:57:59Z","lastTransitionTime":"2025-10-14T09:57:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:57:59 crc kubenswrapper[4698]: I1014 09:57:59.977727 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:57:59 crc kubenswrapper[4698]: I1014 09:57:59.977809 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:57:59 crc kubenswrapper[4698]: I1014 09:57:59.977825 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:57:59 crc kubenswrapper[4698]: I1014 09:57:59.977851 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:57:59 crc kubenswrapper[4698]: I1014 09:57:59.977866 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:57:59Z","lastTransitionTime":"2025-10-14T09:57:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 09:58:00 crc kubenswrapper[4698]: I1014 09:58:00.015981 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jbpnj" Oct 14 09:58:00 crc kubenswrapper[4698]: E1014 09:58:00.016844 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-jbpnj" podUID="41f5ac86-35f8-416c-bbfe-1e182975ec5c" Oct 14 09:58:00 crc kubenswrapper[4698]: I1014 09:58:00.017345 4698 scope.go:117] "RemoveContainer" containerID="2fb5dca43ec621632afbbbe21069c7f48b2b98b12882cb4ff5b9bf60ddf6bce6" Oct 14 09:58:00 crc kubenswrapper[4698]: E1014 09:58:00.017658 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-hspfz_openshift-ovn-kubernetes(d02f5359-81fc-4261-b995-e58c78bcec0e)\"" pod="openshift-ovn-kubernetes/ovnkube-node-hspfz" podUID="d02f5359-81fc-4261-b995-e58c78bcec0e" Oct 14 09:58:00 crc kubenswrapper[4698]: I1014 09:58:00.081032 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:58:00 crc kubenswrapper[4698]: I1014 09:58:00.081094 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:58:00 crc kubenswrapper[4698]: I1014 09:58:00.081111 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:58:00 crc kubenswrapper[4698]: I1014 09:58:00.081134 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:58:00 crc kubenswrapper[4698]: I1014 09:58:00.081152 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:58:00Z","lastTransitionTime":"2025-10-14T09:58:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:58:00 crc kubenswrapper[4698]: I1014 09:58:00.185513 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:58:00 crc kubenswrapper[4698]: I1014 09:58:00.185574 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:58:00 crc kubenswrapper[4698]: I1014 09:58:00.185588 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:58:00 crc kubenswrapper[4698]: I1014 09:58:00.185606 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:58:00 crc kubenswrapper[4698]: I1014 09:58:00.185618 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:58:00Z","lastTransitionTime":"2025-10-14T09:58:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:58:00 crc kubenswrapper[4698]: I1014 09:58:00.288143 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:58:00 crc kubenswrapper[4698]: I1014 09:58:00.288199 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:58:00 crc kubenswrapper[4698]: I1014 09:58:00.288213 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:58:00 crc kubenswrapper[4698]: I1014 09:58:00.288230 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:58:00 crc kubenswrapper[4698]: I1014 09:58:00.288242 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:58:00Z","lastTransitionTime":"2025-10-14T09:58:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:58:00 crc kubenswrapper[4698]: I1014 09:58:00.390960 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:58:00 crc kubenswrapper[4698]: I1014 09:58:00.391007 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:58:00 crc kubenswrapper[4698]: I1014 09:58:00.391019 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:58:00 crc kubenswrapper[4698]: I1014 09:58:00.391035 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:58:00 crc kubenswrapper[4698]: I1014 09:58:00.391048 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:58:00Z","lastTransitionTime":"2025-10-14T09:58:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:58:00 crc kubenswrapper[4698]: I1014 09:58:00.495063 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:58:00 crc kubenswrapper[4698]: I1014 09:58:00.495139 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:58:00 crc kubenswrapper[4698]: I1014 09:58:00.495158 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:58:00 crc kubenswrapper[4698]: I1014 09:58:00.495185 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:58:00 crc kubenswrapper[4698]: I1014 09:58:00.495208 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:58:00Z","lastTransitionTime":"2025-10-14T09:58:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:58:00 crc kubenswrapper[4698]: I1014 09:58:00.597252 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:58:00 crc kubenswrapper[4698]: I1014 09:58:00.597371 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:58:00 crc kubenswrapper[4698]: I1014 09:58:00.597384 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:58:00 crc kubenswrapper[4698]: I1014 09:58:00.597407 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:58:00 crc kubenswrapper[4698]: I1014 09:58:00.597418 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:58:00Z","lastTransitionTime":"2025-10-14T09:58:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:58:00 crc kubenswrapper[4698]: I1014 09:58:00.700906 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:58:00 crc kubenswrapper[4698]: I1014 09:58:00.700979 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:58:00 crc kubenswrapper[4698]: I1014 09:58:00.700998 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:58:00 crc kubenswrapper[4698]: I1014 09:58:00.701023 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:58:00 crc kubenswrapper[4698]: I1014 09:58:00.701042 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:58:00Z","lastTransitionTime":"2025-10-14T09:58:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:58:00 crc kubenswrapper[4698]: I1014 09:58:00.804128 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:58:00 crc kubenswrapper[4698]: I1014 09:58:00.804171 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:58:00 crc kubenswrapper[4698]: I1014 09:58:00.804182 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:58:00 crc kubenswrapper[4698]: I1014 09:58:00.804201 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:58:00 crc kubenswrapper[4698]: I1014 09:58:00.804213 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:58:00Z","lastTransitionTime":"2025-10-14T09:58:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:58:00 crc kubenswrapper[4698]: I1014 09:58:00.908189 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:58:00 crc kubenswrapper[4698]: I1014 09:58:00.908263 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:58:00 crc kubenswrapper[4698]: I1014 09:58:00.908283 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:58:00 crc kubenswrapper[4698]: I1014 09:58:00.908324 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:58:00 crc kubenswrapper[4698]: I1014 09:58:00.908343 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:58:00Z","lastTransitionTime":"2025-10-14T09:58:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:58:01 crc kubenswrapper[4698]: I1014 09:58:01.011937 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:58:01 crc kubenswrapper[4698]: I1014 09:58:01.012020 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:58:01 crc kubenswrapper[4698]: I1014 09:58:01.012050 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:58:01 crc kubenswrapper[4698]: I1014 09:58:01.012084 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:58:01 crc kubenswrapper[4698]: I1014 09:58:01.012108 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:58:01Z","lastTransitionTime":"2025-10-14T09:58:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 09:58:01 crc kubenswrapper[4698]: I1014 09:58:01.016275 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 14 09:58:01 crc kubenswrapper[4698]: I1014 09:58:01.016349 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 14 09:58:01 crc kubenswrapper[4698]: E1014 09:58:01.016480 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 14 09:58:01 crc kubenswrapper[4698]: I1014 09:58:01.016549 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 14 09:58:01 crc kubenswrapper[4698]: E1014 09:58:01.016739 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 14 09:58:01 crc kubenswrapper[4698]: E1014 09:58:01.016915 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 14 09:58:01 crc kubenswrapper[4698]: I1014 09:58:01.115658 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:58:01 crc kubenswrapper[4698]: I1014 09:58:01.115715 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:58:01 crc kubenswrapper[4698]: I1014 09:58:01.115729 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:58:01 crc kubenswrapper[4698]: I1014 09:58:01.115749 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:58:01 crc kubenswrapper[4698]: I1014 09:58:01.115761 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:58:01Z","lastTransitionTime":"2025-10-14T09:58:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:58:01 crc kubenswrapper[4698]: I1014 09:58:01.220001 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:58:01 crc kubenswrapper[4698]: I1014 09:58:01.220074 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:58:01 crc kubenswrapper[4698]: I1014 09:58:01.220098 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:58:01 crc kubenswrapper[4698]: I1014 09:58:01.220132 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:58:01 crc kubenswrapper[4698]: I1014 09:58:01.220154 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:58:01Z","lastTransitionTime":"2025-10-14T09:58:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:58:01 crc kubenswrapper[4698]: I1014 09:58:01.944706 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:58:01 crc kubenswrapper[4698]: I1014 09:58:01.944755 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:58:01 crc kubenswrapper[4698]: I1014 09:58:01.944827 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:58:01 crc kubenswrapper[4698]: I1014 09:58:01.944854 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:58:01 crc kubenswrapper[4698]: I1014 09:58:01.944869 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:58:01Z","lastTransitionTime":"2025-10-14T09:58:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 09:58:02 crc kubenswrapper[4698]: I1014 09:58:02.016318 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jbpnj" Oct 14 09:58:02 crc kubenswrapper[4698]: E1014 09:58:02.016472 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-jbpnj" podUID="41f5ac86-35f8-416c-bbfe-1e182975ec5c" Oct 14 09:58:02 crc kubenswrapper[4698]: I1014 09:58:02.047547 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:58:02 crc kubenswrapper[4698]: I1014 09:58:02.047745 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:58:02 crc kubenswrapper[4698]: I1014 09:58:02.047808 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:58:02 crc kubenswrapper[4698]: I1014 09:58:02.047839 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:58:02 crc kubenswrapper[4698]: I1014 09:58:02.047860 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:58:02Z","lastTransitionTime":"2025-10-14T09:58:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:58:02 crc kubenswrapper[4698]: E1014 09:58:02.815675 4698 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-14T09:58:02Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-14T09:58:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-14T09:58:02Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-14T09:58:02Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-14T09:58:02Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-14T09:58:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-14T09:58:02Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-14T09:58:02Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ec98e803-8937-4fdb-8662-3488c6a305f2\\\",\\\"systemUUID\\\":\\\"8e872109-adee-4b6d-91bf-d9ced28af93f\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T09:58:02Z is after 2025-08-24T17:21:41Z" Oct 14 09:58:02 crc kubenswrapper[4698]: I1014 09:58:02.820278 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:58:02 crc kubenswrapper[4698]: I1014 09:58:02.820363 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:58:02 crc kubenswrapper[4698]: I1014 09:58:02.820379 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:58:02 crc kubenswrapper[4698]: I1014 09:58:02.820402 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:58:02 crc kubenswrapper[4698]: I1014 09:58:02.820419 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:58:02Z","lastTransitionTime":"2025-10-14T09:58:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:58:02 crc kubenswrapper[4698]: E1014 09:58:02.837088 4698 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-14T09:58:02Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-14T09:58:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-14T09:58:02Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-14T09:58:02Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-14T09:58:02Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-14T09:58:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-14T09:58:02Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-14T09:58:02Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ec98e803-8937-4fdb-8662-3488c6a305f2\\\",\\\"systemUUID\\\":\\\"8e872109-adee-4b6d-91bf-d9ced28af93f\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T09:58:02Z is after 2025-08-24T17:21:41Z" Oct 14 09:58:02 crc kubenswrapper[4698]: I1014 09:58:02.842007 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:58:02 crc kubenswrapper[4698]: I1014 09:58:02.842090 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:58:02 crc kubenswrapper[4698]: I1014 09:58:02.842112 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:58:02 crc kubenswrapper[4698]: I1014 09:58:02.842137 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:58:02 crc kubenswrapper[4698]: I1014 09:58:02.842154 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:58:02Z","lastTransitionTime":"2025-10-14T09:58:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:58:02 crc kubenswrapper[4698]: E1014 09:58:02.865578 4698 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-14T09:58:02Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-14T09:58:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-14T09:58:02Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-14T09:58:02Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-14T09:58:02Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-14T09:58:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-14T09:58:02Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-14T09:58:02Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ec98e803-8937-4fdb-8662-3488c6a305f2\\\",\\\"systemUUID\\\":\\\"8e872109-adee-4b6d-91bf-d9ced28af93f\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T09:58:02Z is after 2025-08-24T17:21:41Z" Oct 14 09:58:02 crc kubenswrapper[4698]: I1014 09:58:02.870422 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:58:02 crc kubenswrapper[4698]: I1014 09:58:02.870490 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:58:02 crc kubenswrapper[4698]: I1014 09:58:02.870512 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:58:02 crc kubenswrapper[4698]: I1014 09:58:02.870538 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:58:02 crc kubenswrapper[4698]: I1014 09:58:02.870593 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:58:02Z","lastTransitionTime":"2025-10-14T09:58:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:58:02 crc kubenswrapper[4698]: E1014 09:58:02.893922 4698 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-14T09:58:02Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-14T09:58:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-14T09:58:02Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-14T09:58:02Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-14T09:58:02Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-14T09:58:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-14T09:58:02Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-14T09:58:02Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ec98e803-8937-4fdb-8662-3488c6a305f2\\\",\\\"systemUUID\\\":\\\"8e872109-adee-4b6d-91bf-d9ced28af93f\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T09:58:02Z is after 2025-08-24T17:21:41Z" Oct 14 09:58:02 crc kubenswrapper[4698]: I1014 09:58:02.899940 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:58:02 crc kubenswrapper[4698]: I1014 09:58:02.900006 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:58:02 crc kubenswrapper[4698]: I1014 09:58:02.900027 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:58:02 crc kubenswrapper[4698]: I1014 09:58:02.900052 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:58:02 crc kubenswrapper[4698]: I1014 09:58:02.900069 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:58:02Z","lastTransitionTime":"2025-10-14T09:58:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:58:02 crc kubenswrapper[4698]: E1014 09:58:02.918594 4698 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-14T09:58:02Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-14T09:58:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-14T09:58:02Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-14T09:58:02Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-14T09:58:02Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-14T09:58:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-14T09:58:02Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-14T09:58:02Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ec98e803-8937-4fdb-8662-3488c6a305f2\\\",\\\"systemUUID\\\":\\\"8e872109-adee-4b6d-91bf-d9ced28af93f\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T09:58:02Z is after 2025-08-24T17:21:41Z" Oct 14 09:58:02 crc kubenswrapper[4698]: E1014 09:58:02.918852 4698 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Oct 14 09:58:02 crc kubenswrapper[4698]: I1014 09:58:02.920974 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:58:02 crc kubenswrapper[4698]: I1014 09:58:02.921026 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:58:02 crc kubenswrapper[4698]: I1014 09:58:02.921043 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:58:02 crc kubenswrapper[4698]: I1014 09:58:02.921067 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:58:02 crc kubenswrapper[4698]: I1014 09:58:02.921084 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:58:02Z","lastTransitionTime":"2025-10-14T09:58:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 09:58:03 crc kubenswrapper[4698]: I1014 09:58:03.016384 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 14 09:58:03 crc kubenswrapper[4698]: I1014 09:58:03.016392 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 14 09:58:03 crc kubenswrapper[4698]: I1014 09:58:03.016526 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 14 09:58:03 crc kubenswrapper[4698]: E1014 09:58:03.016498 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 14 09:58:03 crc kubenswrapper[4698]: E1014 09:58:03.016565 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 14 09:58:03 crc kubenswrapper[4698]: E1014 09:58:03.016620 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 14 09:58:03 crc kubenswrapper[4698]: I1014 09:58:03.029311 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:58:03 crc kubenswrapper[4698]: I1014 09:58:03.029349 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:58:03 crc kubenswrapper[4698]: I1014 09:58:03.029360 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:58:03 crc kubenswrapper[4698]: I1014 09:58:03.029380 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:58:03 crc kubenswrapper[4698]: I1014 09:58:03.029390 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:58:03Z","lastTransitionTime":"2025-10-14T09:58:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:58:03 crc kubenswrapper[4698]: I1014 09:58:03.132226 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:58:03 crc kubenswrapper[4698]: I1014 09:58:03.132268 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:58:03 crc kubenswrapper[4698]: I1014 09:58:03.132278 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:58:03 crc kubenswrapper[4698]: I1014 09:58:03.132294 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:58:03 crc kubenswrapper[4698]: I1014 09:58:03.132303 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:58:03Z","lastTransitionTime":"2025-10-14T09:58:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:58:03 crc kubenswrapper[4698]: I1014 09:58:03.235612 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:58:03 crc kubenswrapper[4698]: I1014 09:58:03.235703 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:58:03 crc kubenswrapper[4698]: I1014 09:58:03.235726 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:58:03 crc kubenswrapper[4698]: I1014 09:58:03.235756 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:58:03 crc kubenswrapper[4698]: I1014 09:58:03.235821 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:58:03Z","lastTransitionTime":"2025-10-14T09:58:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:58:03 crc kubenswrapper[4698]: I1014 09:58:03.339272 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:58:03 crc kubenswrapper[4698]: I1014 09:58:03.339339 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:58:03 crc kubenswrapper[4698]: I1014 09:58:03.339360 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:58:03 crc kubenswrapper[4698]: I1014 09:58:03.339388 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:58:03 crc kubenswrapper[4698]: I1014 09:58:03.339410 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:58:03Z","lastTransitionTime":"2025-10-14T09:58:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:58:03 crc kubenswrapper[4698]: I1014 09:58:03.442750 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:58:03 crc kubenswrapper[4698]: I1014 09:58:03.442828 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:58:03 crc kubenswrapper[4698]: I1014 09:58:03.442845 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:58:03 crc kubenswrapper[4698]: I1014 09:58:03.442867 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:58:03 crc kubenswrapper[4698]: I1014 09:58:03.442883 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:58:03Z","lastTransitionTime":"2025-10-14T09:58:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:58:03 crc kubenswrapper[4698]: I1014 09:58:03.545975 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:58:03 crc kubenswrapper[4698]: I1014 09:58:03.546063 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:58:03 crc kubenswrapper[4698]: I1014 09:58:03.546088 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:58:03 crc kubenswrapper[4698]: I1014 09:58:03.546124 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:58:03 crc kubenswrapper[4698]: I1014 09:58:03.546147 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:58:03Z","lastTransitionTime":"2025-10-14T09:58:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:58:03 crc kubenswrapper[4698]: I1014 09:58:03.649650 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:58:03 crc kubenswrapper[4698]: I1014 09:58:03.649734 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:58:03 crc kubenswrapper[4698]: I1014 09:58:03.649759 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:58:03 crc kubenswrapper[4698]: I1014 09:58:03.649835 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:58:03 crc kubenswrapper[4698]: I1014 09:58:03.649859 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:58:03Z","lastTransitionTime":"2025-10-14T09:58:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:58:03 crc kubenswrapper[4698]: I1014 09:58:03.752596 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:58:03 crc kubenswrapper[4698]: I1014 09:58:03.752645 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:58:03 crc kubenswrapper[4698]: I1014 09:58:03.752662 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:58:03 crc kubenswrapper[4698]: I1014 09:58:03.752685 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:58:03 crc kubenswrapper[4698]: I1014 09:58:03.752703 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:58:03Z","lastTransitionTime":"2025-10-14T09:58:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:58:03 crc kubenswrapper[4698]: I1014 09:58:03.855869 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:58:03 crc kubenswrapper[4698]: I1014 09:58:03.855917 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:58:03 crc kubenswrapper[4698]: I1014 09:58:03.855934 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:58:03 crc kubenswrapper[4698]: I1014 09:58:03.855959 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:58:03 crc kubenswrapper[4698]: I1014 09:58:03.855979 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:58:03Z","lastTransitionTime":"2025-10-14T09:58:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:58:03 crc kubenswrapper[4698]: I1014 09:58:03.958538 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:58:03 crc kubenswrapper[4698]: I1014 09:58:03.958611 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:58:03 crc kubenswrapper[4698]: I1014 09:58:03.958639 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:58:03 crc kubenswrapper[4698]: I1014 09:58:03.958662 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:58:03 crc kubenswrapper[4698]: I1014 09:58:03.958680 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:58:03Z","lastTransitionTime":"2025-10-14T09:58:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:58:03 crc kubenswrapper[4698]: I1014 09:58:03.978861 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/41f5ac86-35f8-416c-bbfe-1e182975ec5c-metrics-certs\") pod \"network-metrics-daemon-jbpnj\" (UID: \"41f5ac86-35f8-416c-bbfe-1e182975ec5c\") " pod="openshift-multus/network-metrics-daemon-jbpnj" Oct 14 09:58:03 crc kubenswrapper[4698]: E1014 09:58:03.979030 4698 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Oct 14 09:58:03 crc kubenswrapper[4698]: E1014 09:58:03.979146 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/41f5ac86-35f8-416c-bbfe-1e182975ec5c-metrics-certs podName:41f5ac86-35f8-416c-bbfe-1e182975ec5c nodeName:}" failed. No retries permitted until 2025-10-14 09:58:35.979121025 +0000 UTC m=+97.676420451 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/41f5ac86-35f8-416c-bbfe-1e182975ec5c-metrics-certs") pod "network-metrics-daemon-jbpnj" (UID: "41f5ac86-35f8-416c-bbfe-1e182975ec5c") : object "openshift-multus"/"metrics-daemon-secret" not registered Oct 14 09:58:04 crc kubenswrapper[4698]: I1014 09:58:04.016128 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jbpnj" Oct 14 09:58:04 crc kubenswrapper[4698]: E1014 09:58:04.016360 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-jbpnj" podUID="41f5ac86-35f8-416c-bbfe-1e182975ec5c" Oct 14 09:58:04 crc kubenswrapper[4698]: I1014 09:58:04.062252 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:58:04 crc kubenswrapper[4698]: I1014 09:58:04.062410 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:58:04 crc kubenswrapper[4698]: I1014 09:58:04.062439 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:58:04 crc kubenswrapper[4698]: I1014 09:58:04.062515 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:58:04 crc kubenswrapper[4698]: I1014 09:58:04.062542 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:58:04Z","lastTransitionTime":"2025-10-14T09:58:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:58:04 crc kubenswrapper[4698]: I1014 09:58:04.165114 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:58:04 crc kubenswrapper[4698]: I1014 09:58:04.165152 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:58:04 crc kubenswrapper[4698]: I1014 09:58:04.165162 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:58:04 crc kubenswrapper[4698]: I1014 09:58:04.165177 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:58:04 crc kubenswrapper[4698]: I1014 09:58:04.165187 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:58:04Z","lastTransitionTime":"2025-10-14T09:58:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:58:04 crc kubenswrapper[4698]: I1014 09:58:04.268212 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:58:04 crc kubenswrapper[4698]: I1014 09:58:04.268274 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:58:04 crc kubenswrapper[4698]: I1014 09:58:04.268291 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:58:04 crc kubenswrapper[4698]: I1014 09:58:04.268318 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:58:04 crc kubenswrapper[4698]: I1014 09:58:04.268335 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:58:04Z","lastTransitionTime":"2025-10-14T09:58:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:58:04 crc kubenswrapper[4698]: I1014 09:58:04.371246 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:58:04 crc kubenswrapper[4698]: I1014 09:58:04.371308 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:58:04 crc kubenswrapper[4698]: I1014 09:58:04.371321 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:58:04 crc kubenswrapper[4698]: I1014 09:58:04.371339 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:58:04 crc kubenswrapper[4698]: I1014 09:58:04.371349 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:58:04Z","lastTransitionTime":"2025-10-14T09:58:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:58:04 crc kubenswrapper[4698]: I1014 09:58:04.474619 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:58:04 crc kubenswrapper[4698]: I1014 09:58:04.474698 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:58:04 crc kubenswrapper[4698]: I1014 09:58:04.474717 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:58:04 crc kubenswrapper[4698]: I1014 09:58:04.474744 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:58:04 crc kubenswrapper[4698]: I1014 09:58:04.474847 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:58:04Z","lastTransitionTime":"2025-10-14T09:58:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:58:04 crc kubenswrapper[4698]: I1014 09:58:04.578319 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:58:04 crc kubenswrapper[4698]: I1014 09:58:04.578564 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:58:04 crc kubenswrapper[4698]: I1014 09:58:04.578953 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:58:04 crc kubenswrapper[4698]: I1014 09:58:04.579248 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:58:04 crc kubenswrapper[4698]: I1014 09:58:04.579399 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:58:04Z","lastTransitionTime":"2025-10-14T09:58:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:58:04 crc kubenswrapper[4698]: I1014 09:58:04.683107 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:58:04 crc kubenswrapper[4698]: I1014 09:58:04.683155 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:58:04 crc kubenswrapper[4698]: I1014 09:58:04.683164 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:58:04 crc kubenswrapper[4698]: I1014 09:58:04.683179 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:58:04 crc kubenswrapper[4698]: I1014 09:58:04.683190 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:58:04Z","lastTransitionTime":"2025-10-14T09:58:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:58:04 crc kubenswrapper[4698]: I1014 09:58:04.786482 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:58:04 crc kubenswrapper[4698]: I1014 09:58:04.786528 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:58:04 crc kubenswrapper[4698]: I1014 09:58:04.786538 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:58:04 crc kubenswrapper[4698]: I1014 09:58:04.786557 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:58:04 crc kubenswrapper[4698]: I1014 09:58:04.786571 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:58:04Z","lastTransitionTime":"2025-10-14T09:58:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:58:04 crc kubenswrapper[4698]: I1014 09:58:04.889276 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:58:04 crc kubenswrapper[4698]: I1014 09:58:04.889323 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:58:04 crc kubenswrapper[4698]: I1014 09:58:04.889334 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:58:04 crc kubenswrapper[4698]: I1014 09:58:04.889361 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:58:04 crc kubenswrapper[4698]: I1014 09:58:04.889373 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:58:04Z","lastTransitionTime":"2025-10-14T09:58:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:58:04 crc kubenswrapper[4698]: I1014 09:58:04.991310 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:58:04 crc kubenswrapper[4698]: I1014 09:58:04.991355 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:58:04 crc kubenswrapper[4698]: I1014 09:58:04.991363 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:58:04 crc kubenswrapper[4698]: I1014 09:58:04.991378 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:58:04 crc kubenswrapper[4698]: I1014 09:58:04.991394 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:58:04Z","lastTransitionTime":"2025-10-14T09:58:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 09:58:05 crc kubenswrapper[4698]: I1014 09:58:05.016760 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 14 09:58:05 crc kubenswrapper[4698]: E1014 09:58:05.016888 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 14 09:58:05 crc kubenswrapper[4698]: I1014 09:58:05.016949 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 14 09:58:05 crc kubenswrapper[4698]: I1014 09:58:05.016993 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 14 09:58:05 crc kubenswrapper[4698]: E1014 09:58:05.017082 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 14 09:58:05 crc kubenswrapper[4698]: E1014 09:58:05.017182 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 14 09:58:05 crc kubenswrapper[4698]: I1014 09:58:05.093297 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:58:05 crc kubenswrapper[4698]: I1014 09:58:05.093339 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:58:05 crc kubenswrapper[4698]: I1014 09:58:05.093351 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:58:05 crc kubenswrapper[4698]: I1014 09:58:05.093368 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:58:05 crc kubenswrapper[4698]: I1014 09:58:05.093379 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:58:05Z","lastTransitionTime":"2025-10-14T09:58:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:58:05 crc kubenswrapper[4698]: I1014 09:58:05.195442 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:58:05 crc kubenswrapper[4698]: I1014 09:58:05.195507 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:58:05 crc kubenswrapper[4698]: I1014 09:58:05.195519 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:58:05 crc kubenswrapper[4698]: I1014 09:58:05.195537 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:58:05 crc kubenswrapper[4698]: I1014 09:58:05.195550 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:58:05Z","lastTransitionTime":"2025-10-14T09:58:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:58:05 crc kubenswrapper[4698]: I1014 09:58:05.298435 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:58:05 crc kubenswrapper[4698]: I1014 09:58:05.298504 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:58:05 crc kubenswrapper[4698]: I1014 09:58:05.298518 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:58:05 crc kubenswrapper[4698]: I1014 09:58:05.298535 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:58:05 crc kubenswrapper[4698]: I1014 09:58:05.298544 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:58:05Z","lastTransitionTime":"2025-10-14T09:58:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:58:05 crc kubenswrapper[4698]: I1014 09:58:05.400407 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:58:05 crc kubenswrapper[4698]: I1014 09:58:05.400463 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:58:05 crc kubenswrapper[4698]: I1014 09:58:05.400475 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:58:05 crc kubenswrapper[4698]: I1014 09:58:05.400494 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:58:05 crc kubenswrapper[4698]: I1014 09:58:05.400504 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:58:05Z","lastTransitionTime":"2025-10-14T09:58:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:58:05 crc kubenswrapper[4698]: I1014 09:58:05.502418 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:58:05 crc kubenswrapper[4698]: I1014 09:58:05.502455 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:58:05 crc kubenswrapper[4698]: I1014 09:58:05.502464 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:58:05 crc kubenswrapper[4698]: I1014 09:58:05.502477 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:58:05 crc kubenswrapper[4698]: I1014 09:58:05.502487 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:58:05Z","lastTransitionTime":"2025-10-14T09:58:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:58:05 crc kubenswrapper[4698]: I1014 09:58:05.605418 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:58:05 crc kubenswrapper[4698]: I1014 09:58:05.605474 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:58:05 crc kubenswrapper[4698]: I1014 09:58:05.605490 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:58:05 crc kubenswrapper[4698]: I1014 09:58:05.605514 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:58:05 crc kubenswrapper[4698]: I1014 09:58:05.605534 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:58:05Z","lastTransitionTime":"2025-10-14T09:58:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:58:05 crc kubenswrapper[4698]: I1014 09:58:05.708837 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:58:05 crc kubenswrapper[4698]: I1014 09:58:05.708906 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:58:05 crc kubenswrapper[4698]: I1014 09:58:05.708920 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:58:05 crc kubenswrapper[4698]: I1014 09:58:05.708942 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:58:05 crc kubenswrapper[4698]: I1014 09:58:05.708955 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:58:05Z","lastTransitionTime":"2025-10-14T09:58:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:58:05 crc kubenswrapper[4698]: I1014 09:58:05.812266 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:58:05 crc kubenswrapper[4698]: I1014 09:58:05.812318 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:58:05 crc kubenswrapper[4698]: I1014 09:58:05.812329 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:58:05 crc kubenswrapper[4698]: I1014 09:58:05.812347 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:58:05 crc kubenswrapper[4698]: I1014 09:58:05.812359 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:58:05Z","lastTransitionTime":"2025-10-14T09:58:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:58:05 crc kubenswrapper[4698]: I1014 09:58:05.915928 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:58:05 crc kubenswrapper[4698]: I1014 09:58:05.916007 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:58:05 crc kubenswrapper[4698]: I1014 09:58:05.916033 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:58:05 crc kubenswrapper[4698]: I1014 09:58:05.916067 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:58:05 crc kubenswrapper[4698]: I1014 09:58:05.916091 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:58:05Z","lastTransitionTime":"2025-10-14T09:58:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 09:58:06 crc kubenswrapper[4698]: I1014 09:58:06.016532 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jbpnj" Oct 14 09:58:06 crc kubenswrapper[4698]: E1014 09:58:06.016924 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-jbpnj" podUID="41f5ac86-35f8-416c-bbfe-1e182975ec5c" Oct 14 09:58:06 crc kubenswrapper[4698]: I1014 09:58:06.018818 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:58:06 crc kubenswrapper[4698]: I1014 09:58:06.018888 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:58:06 crc kubenswrapper[4698]: I1014 09:58:06.018903 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:58:06 crc kubenswrapper[4698]: I1014 09:58:06.018947 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:58:06 crc kubenswrapper[4698]: I1014 09:58:06.018961 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:58:06Z","lastTransitionTime":"2025-10-14T09:58:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:58:06 crc kubenswrapper[4698]: I1014 09:58:06.121620 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:58:06 crc kubenswrapper[4698]: I1014 09:58:06.122080 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:58:06 crc kubenswrapper[4698]: I1014 09:58:06.122237 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:58:06 crc kubenswrapper[4698]: I1014 09:58:06.122380 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:58:06 crc kubenswrapper[4698]: I1014 09:58:06.122511 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:58:06Z","lastTransitionTime":"2025-10-14T09:58:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:58:06 crc kubenswrapper[4698]: I1014 09:58:06.226031 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:58:06 crc kubenswrapper[4698]: I1014 09:58:06.226092 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:58:06 crc kubenswrapper[4698]: I1014 09:58:06.226108 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:58:06 crc kubenswrapper[4698]: I1014 09:58:06.226138 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:58:06 crc kubenswrapper[4698]: I1014 09:58:06.226157 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:58:06Z","lastTransitionTime":"2025-10-14T09:58:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:58:06 crc kubenswrapper[4698]: I1014 09:58:06.328901 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:58:06 crc kubenswrapper[4698]: I1014 09:58:06.329239 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:58:06 crc kubenswrapper[4698]: I1014 09:58:06.329378 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:58:06 crc kubenswrapper[4698]: I1014 09:58:06.329524 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:58:06 crc kubenswrapper[4698]: I1014 09:58:06.329661 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:58:06Z","lastTransitionTime":"2025-10-14T09:58:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:58:06 crc kubenswrapper[4698]: I1014 09:58:06.433403 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:58:06 crc kubenswrapper[4698]: I1014 09:58:06.433458 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:58:06 crc kubenswrapper[4698]: I1014 09:58:06.433472 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:58:06 crc kubenswrapper[4698]: I1014 09:58:06.433490 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:58:06 crc kubenswrapper[4698]: I1014 09:58:06.433502 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:58:06Z","lastTransitionTime":"2025-10-14T09:58:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:58:06 crc kubenswrapper[4698]: I1014 09:58:06.471101 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-b7cbk_fbf10bbc-318d-4f46-83a0-fdbad9888201/kube-multus/0.log" Oct 14 09:58:06 crc kubenswrapper[4698]: I1014 09:58:06.471151 4698 generic.go:334] "Generic (PLEG): container finished" podID="fbf10bbc-318d-4f46-83a0-fdbad9888201" containerID="4a0b9fe188f41927a342dc4928eaa5f27d151d9901af15f33ce252bdbfbb3be6" exitCode=1 Oct 14 09:58:06 crc kubenswrapper[4698]: I1014 09:58:06.471177 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-b7cbk" event={"ID":"fbf10bbc-318d-4f46-83a0-fdbad9888201","Type":"ContainerDied","Data":"4a0b9fe188f41927a342dc4928eaa5f27d151d9901af15f33ce252bdbfbb3be6"} Oct 14 09:58:06 crc kubenswrapper[4698]: I1014 09:58:06.471533 4698 scope.go:117] "RemoveContainer" containerID="4a0b9fe188f41927a342dc4928eaa5f27d151d9901af15f33ce252bdbfbb3be6" Oct 14 09:58:06 crc kubenswrapper[4698]: I1014 09:58:06.491649 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"32240a01-1692-4f43-aebd-2b04d9b2435e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:56:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0fcdc7699fee72d9dbf3ad7df185ac8c884125781080739e4245128af2e41040\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d32efa8c63456d23b3824db4a4debb51fa4d13d1fd5eb6a488b9392d96073435\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a78e7d6532fda47f50da010b729a9108961a3d5c79191517a86f00714ccb1163\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a013bd7c3d0aeb5289ee602dd8fcbff2b98aae30b99ccc8d87605732b7f469eb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a8cea8367d4864478508221648a52582ee794ca4eb7b48f112ca0d78d97c2f7e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-14T09:57:18Z\\\"
,\\\"message\\\":\\\"file observer\\\\nW1014 09:57:17.768150 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1014 09:57:17.768243 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1014 09:57:17.769073 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-133410875/tls.crt::/tmp/serving-cert-133410875/tls.key\\\\\\\"\\\\nI1014 09:57:18.101848 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1014 09:57:18.107433 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1014 09:57:18.107456 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1014 09:57:18.107479 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1014 09:57:18.107484 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1014 09:57:18.112858 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1014 09:57:18.112875 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1014 09:57:18.112883 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1014 09:57:18.112889 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1014 09:57:18.112893 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1014 09:57:18.112896 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1014 09:57:18.112900 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1014 09:57:18.112903 1 secure_serving.go:69] Use of 
insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1014 09:57:18.114133 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-14T09:57:12Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2d12c9c114016d1302fa97515b0a59a82a4524b77e5d22cbec155beb6a52bce\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:00Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf7a250bbbc63f0ce78ae1c27f82643a02dc264c3b83555a4cc72a788eda52d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cf7a250bbbc63f0ce78ae1c27f82643a02d
c264c3b83555a4cc72a788eda52d5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T09:56:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T09:56:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T09:56:59Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T09:58:06Z is after 2025-08-24T17:21:41Z" Oct 14 09:58:06 crc kubenswrapper[4698]: I1014 09:58:06.503009 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"32a7378b-ad21-49a8-9382-e7e5fbffb2f5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:56:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:56:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45beb2bbd344fbd265eca554410f31c493b55815bcac8dbb8c30c592b40dcfe2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9a23f4949446d47321258b172e07f6c592cd9921650920f8af93a5d749a61dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:56:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1fc70d614848f0b4bc342659099f9c617db052ebb995716f9cdb1113f4f78901\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://71c3764056ab55aa9f75e0bf62c0c7eed5440e9d580087ca9ce266611dd8c292\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-10-14T09:57:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T09:56:59Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T09:58:06Z is after 2025-08-24T17:21:41Z" Oct 14 09:58:06 crc kubenswrapper[4698]: I1014 09:58:06.518921 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T09:58:06Z is after 2025-08-24T17:21:41Z" Oct 14 09:58:06 crc kubenswrapper[4698]: I1014 09:58:06.534063 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aaee81debae3e251e075e61aca075c075d32fd976f1977adecba0c834e89cabb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-10-14T09:58:06Z is after 2025-08-24T17:21:41Z" Oct 14 09:58:06 crc kubenswrapper[4698]: I1014 09:58:06.538100 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:58:06 crc kubenswrapper[4698]: I1014 09:58:06.538296 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:58:06 crc kubenswrapper[4698]: I1014 09:58:06.538461 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:58:06 crc kubenswrapper[4698]: I1014 09:58:06.538598 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:58:06 crc kubenswrapper[4698]: I1014 09:58:06.538731 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:58:06Z","lastTransitionTime":"2025-10-14T09:58:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:58:06 crc kubenswrapper[4698]: I1014 09:58:06.548828 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8rj7q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b3d7ebe7-24ac-4bb6-be80-db147dc1c604\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e44a1413e71cab92d1b5ee4c81cc65edc9925fa1e888d5d5670e55e1b60ee81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-95sjn\\\",\\\"re
adOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T09:57:18Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8rj7q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T09:58:06Z is after 2025-08-24T17:21:41Z" Oct 14 09:58:06 crc kubenswrapper[4698]: I1014 09:58:06.564356 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d453303d-af1d-4978-b4c5-ff12afadeb28\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:56:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d39574739cd1bd3498e575ad5961a3f656110963427ec6a5862160167dad9f1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4b
a8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a8b1dd92b7f979c51da79a23ddffeb82d73f170ded78b0995eca65ea14abbba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://57d68496a42ff5fb8359e4fd222a0e18e79e6ca885844d544d172e324325ee02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:
57:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2fceda73199ee36e7cfe0b0bc827b42f7cc77db6b2c987828ffb57cacd1b8a6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2fceda73199ee36e7cfe0b0bc827b42f7cc77db6b2c987828ffb57cacd1b8a6b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T09:56:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T09:56:59Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T09:56:59Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T09:58:06Z is after 2025-08-24T17:21:41Z" Oct 14 09:58:06 crc kubenswrapper[4698]: I1014 09:58:06.577111 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-pfxrp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e00aa977-8736-4b4d-8d58-c3d13879c49a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4adf04967efc193a93227043cfb6bb56f0ec1a7af90e351b7a618e582ddd8c41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c9lxj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T09:57:18Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-pfxrp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T09:58:06Z is after 2025-08-24T17:21:41Z" Oct 14 09:58:06 crc kubenswrapper[4698]: I1014 09:58:06.590045 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5twvn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e9f983f-10a0-43b7-8590-346577a561ef\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://03bc09f64706818fd5d6b0d3eeea206f665d7cdb12ce6ae31c4706a3936dd0f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release
-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8x7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9090d92cdf10a884e39938f3d7908d63cb91ee3ea7832cb788aecaa47c70d753\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9090d92cdf10a884e39938f3d7908d63cb91ee3ea7832cb788aecaa47c70d753\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T09:57:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T09:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8x7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://20e7809019d6
fa43f65e27abe934ed5dfc51a3afb0a2d9391fa4edd67fc03ac9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://20e7809019d6fa43f65e27abe934ed5dfc51a3afb0a2d9391fa4edd67fc03ac9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T09:57:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8x7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c61e207f4234cf9a59bd787fc476ff4bc2d1abd6f299b2855970981e5783398\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0c61e207f423
4cf9a59bd787fc476ff4bc2d1abd6f299b2855970981e5783398\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T09:57:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T09:57:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8x7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1abeca9df52e5628432a4cb0be0f212a643a644abfe303559ec7d90f3fd3955a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1abeca9df52e5628432a4cb0be0f212a643a644abfe303559ec7d90f3fd3955a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T09:57:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T09:57:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8x7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\
\\"}]},{\\\"containerID\\\":\\\"cri-o://c0654ea71914d478220611495dbefd1e47b1309aa18c6a05b21a4a45f0d8f1ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c0654ea71914d478220611495dbefd1e47b1309aa18c6a05b21a4a45f0d8f1ca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T09:57:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T09:57:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8x7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5307b806fcee628d7552a399a950a096745a8eb67c3f72c5a10854f042b992a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5307b806fcee628d7552a399a950a096745a8eb67c3f72c5a10854f042b992a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"202
5-10-14T09:57:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T09:57:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8x7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T09:57:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5twvn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T09:58:06Z is after 2025-08-24T17:21:41Z" Oct 14 09:58:06 crc kubenswrapper[4698]: I1014 09:58:06.619155 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hspfz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d02f5359-81fc-4261-b995-e58c78bcec0e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"message\\\":\\\"containers with 
unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://baefa979aee6f8fbe41df9542964f3c7b2a4331ec5111d2f622ec7ca41a0d96e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjlwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://876998f4dcb79a6bc0faa35871730f5d2f9a07f94baf6da29ed9a9e794705ee5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\
\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjlwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e5759411e3d5ea6fa7515b4cd1255735c870b8c035fe586a38d2f436b752118\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjlwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6e9733c3f4d25662c9da6201d0feee797de24acebb7d6e7c67d72709aae95db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd
47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjlwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://90b4b75613b321be60e549ab6ab2eaaab57fd6b49aa01f5254a0e83ac2e0167e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjlwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://11274a26e2292cdf2b793e6290657ed3cc737d1454b39f33b8f0272f67f70f3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev
@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjlwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2fb5dca43ec621632afbbbe21069c7f48b2b98b12882cb4ff5b9bf60ddf6bce6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2fb5dca43ec621632afbbbe21069c7f48b2b98b12882cb4ff5b9bf60ddf6bce6\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-14T09:57:46Z\\\",\\\"message\\\":\\\"ketplace-operator-metrics_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", 
Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-marketplace/marketplace-operator-metrics\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.53\\\\\\\", Port:8383, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}, services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.53\\\\\\\", Port:8081, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI1014 09:57:46.069019 6340 services_controller.go:452] Built service openshift-marketplace/marketplace-operator-metrics per-node LB for network=default: []services.LB{}\\\\nI1014 09:57:46.068992 6340 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-machine-api/machine-api-controllers]} name:Service_openshift-machine-api/machine-api-controllers_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-14T09:57:45Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-hspfz_openshift-ovn-kubernetes(d02f5359-81fc-4261-b995-e58c78bcec0e)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjlwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://626a1ca2a8aa5d6c1b4078a0bc7b0aa540656abcedd8be206fbb93cdc96c1d95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjlwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f7d7f89348e04b617ba4ec3cf76f4fefdd36492c656aed8141f3775a7ca8e3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0f7d7f89348e04b617
ba4ec3cf76f4fefdd36492c656aed8141f3775a7ca8e3b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T09:57:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T09:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjlwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T09:57:18Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hspfz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T09:58:06Z is after 2025-08-24T17:21:41Z" Oct 14 09:58:06 crc kubenswrapper[4698]: I1014 09:58:06.628911 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ndfs7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dba065f3-2084-442a-9f77-a1dfb007aa0a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://41d90487b005df3b6a9b8a3cbb8860b13fcc2ff7ca56979998ab7258f10b681c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wtc5t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7bfa794859b5471a4dfe11ac227eb36674c31
36b170fafdecaf916eb54d63a4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wtc5t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T09:57:31Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-ndfs7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T09:58:06Z is after 2025-08-24T17:21:41Z" Oct 14 09:58:06 crc kubenswrapper[4698]: I1014 09:58:06.640243 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://046b339570690752cfbe63e358065d3b58ef84c6708729abc1f590ff4c880521\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-10-14T09:58:06Z is after 2025-08-24T17:21:41Z" Oct 14 09:58:06 crc kubenswrapper[4698]: I1014 09:58:06.641955 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:58:06 crc kubenswrapper[4698]: I1014 09:58:06.642011 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:58:06 crc kubenswrapper[4698]: I1014 09:58:06.642033 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:58:06 crc kubenswrapper[4698]: I1014 09:58:06.642063 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:58:06 crc kubenswrapper[4698]: I1014 09:58:06.642085 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:58:06Z","lastTransitionTime":"2025-10-14T09:58:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:58:06 crc kubenswrapper[4698]: I1014 09:58:06.652155 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T09:58:06Z is after 2025-08-24T17:21:41Z" Oct 14 09:58:06 crc kubenswrapper[4698]: I1014 09:58:06.667219 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6bbd953b34291d62c5b40ab6339c4709be89d1983748419cabf1adff245af77f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b5dcb74bf1caafb12296355d176dd1cda1c06ecea8293c0cc36d517213fe6bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T09:58:06Z is after 2025-08-24T17:21:41Z" Oct 14 09:58:06 crc kubenswrapper[4698]: I1014 09:58:06.680146 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T09:58:06Z is after 2025-08-24T17:21:41Z" Oct 14 09:58:06 crc kubenswrapper[4698]: I1014 09:58:06.693205 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-b7cbk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fbf10bbc-318d-4f46-83a0-fdbad9888201\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:58:06Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:58:06Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a0b9fe188f41927a342dc4928eaa5f27d151d9901af15f33ce252bdbfbb3be6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4a0b9fe188f41927a342dc4928eaa5f27d151d9901af15f33ce252bdbfbb3be6\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-14T09:58:06Z\\\",\\\"message\\\":\\\"2025-10-14T09:57:20+00:00 [cnibincopy] Successfully copied files in 
/usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_f1354d92-2895-4535-acf9-93f36eacbc00\\\\n2025-10-14T09:57:20+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_f1354d92-2895-4535-acf9-93f36eacbc00 to /host/opt/cni/bin/\\\\n2025-10-14T09:57:21Z [verbose] multus-daemon started\\\\n2025-10-14T09:57:21Z [verbose] Readiness Indicator file check\\\\n2025-10-14T09:58:06Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-14T09:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/c
ni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tkghj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T09:57:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-b7cbk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T09:58:06Z is after 2025-08-24T17:21:41Z" Oct 14 09:58:06 crc kubenswrapper[4698]: I1014 09:58:06.705242 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-lp4sk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c359a8fc-1e2f-49af-8da2-719d52bd969a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://082e0edeedb2cabe07e14812f2406a98f39fdf9923f562c33ccc9a761d0e2472\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lzl92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8068aecfbfc156636e404dba3daa36e0bebb92ec
f3c63158eaebf9a03cdc96b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lzl92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T09:57:18Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-lp4sk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T09:58:06Z is after 2025-08-24T17:21:41Z" Oct 14 09:58:06 crc kubenswrapper[4698]: I1014 09:58:06.714981 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-jbpnj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"41f5ac86-35f8-416c-bbfe-1e182975ec5c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:32Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:32Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7f52v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7f52v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T09:57:32Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-jbpnj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T09:58:06Z is after 2025-08-24T17:21:41Z" Oct 14 09:58:06 crc 
kubenswrapper[4698]: I1014 09:58:06.744873 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:58:06 crc kubenswrapper[4698]: I1014 09:58:06.744909 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:58:06 crc kubenswrapper[4698]: I1014 09:58:06.744920 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:58:06 crc kubenswrapper[4698]: I1014 09:58:06.744935 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:58:06 crc kubenswrapper[4698]: I1014 09:58:06.744944 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:58:06Z","lastTransitionTime":"2025-10-14T09:58:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:58:06 crc kubenswrapper[4698]: I1014 09:58:06.846757 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:58:06 crc kubenswrapper[4698]: I1014 09:58:06.846807 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:58:06 crc kubenswrapper[4698]: I1014 09:58:06.846815 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:58:06 crc kubenswrapper[4698]: I1014 09:58:06.846826 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:58:06 crc kubenswrapper[4698]: I1014 09:58:06.846835 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:58:06Z","lastTransitionTime":"2025-10-14T09:58:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:58:06 crc kubenswrapper[4698]: I1014 09:58:06.948845 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:58:06 crc kubenswrapper[4698]: I1014 09:58:06.948900 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:58:06 crc kubenswrapper[4698]: I1014 09:58:06.948908 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:58:06 crc kubenswrapper[4698]: I1014 09:58:06.948927 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:58:06 crc kubenswrapper[4698]: I1014 09:58:06.948938 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:58:06Z","lastTransitionTime":"2025-10-14T09:58:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 09:58:07 crc kubenswrapper[4698]: I1014 09:58:07.016946 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 14 09:58:07 crc kubenswrapper[4698]: I1014 09:58:07.016994 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 14 09:58:07 crc kubenswrapper[4698]: I1014 09:58:07.016963 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 14 09:58:07 crc kubenswrapper[4698]: E1014 09:58:07.017091 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 14 09:58:07 crc kubenswrapper[4698]: E1014 09:58:07.017202 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 14 09:58:07 crc kubenswrapper[4698]: E1014 09:58:07.017393 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 14 09:58:07 crc kubenswrapper[4698]: I1014 09:58:07.051242 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:58:07 crc kubenswrapper[4698]: I1014 09:58:07.051266 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:58:07 crc kubenswrapper[4698]: I1014 09:58:07.051274 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:58:07 crc kubenswrapper[4698]: I1014 09:58:07.051287 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:58:07 crc kubenswrapper[4698]: I1014 09:58:07.051295 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:58:07Z","lastTransitionTime":"2025-10-14T09:58:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:58:07 crc kubenswrapper[4698]: I1014 09:58:07.153874 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:58:07 crc kubenswrapper[4698]: I1014 09:58:07.153924 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:58:07 crc kubenswrapper[4698]: I1014 09:58:07.153936 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:58:07 crc kubenswrapper[4698]: I1014 09:58:07.153954 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:58:07 crc kubenswrapper[4698]: I1014 09:58:07.153966 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:58:07Z","lastTransitionTime":"2025-10-14T09:58:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:58:07 crc kubenswrapper[4698]: I1014 09:58:07.257010 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:58:07 crc kubenswrapper[4698]: I1014 09:58:07.257084 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:58:07 crc kubenswrapper[4698]: I1014 09:58:07.257096 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:58:07 crc kubenswrapper[4698]: I1014 09:58:07.257124 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:58:07 crc kubenswrapper[4698]: I1014 09:58:07.257136 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:58:07Z","lastTransitionTime":"2025-10-14T09:58:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:58:07 crc kubenswrapper[4698]: I1014 09:58:07.360412 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:58:07 crc kubenswrapper[4698]: I1014 09:58:07.360468 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:58:07 crc kubenswrapper[4698]: I1014 09:58:07.360489 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:58:07 crc kubenswrapper[4698]: I1014 09:58:07.360521 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:58:07 crc kubenswrapper[4698]: I1014 09:58:07.360544 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:58:07Z","lastTransitionTime":"2025-10-14T09:58:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:58:07 crc kubenswrapper[4698]: I1014 09:58:07.463711 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:58:07 crc kubenswrapper[4698]: I1014 09:58:07.463831 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:58:07 crc kubenswrapper[4698]: I1014 09:58:07.463852 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:58:07 crc kubenswrapper[4698]: I1014 09:58:07.463879 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:58:07 crc kubenswrapper[4698]: I1014 09:58:07.463895 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:58:07Z","lastTransitionTime":"2025-10-14T09:58:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:58:07 crc kubenswrapper[4698]: I1014 09:58:07.476801 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-b7cbk_fbf10bbc-318d-4f46-83a0-fdbad9888201/kube-multus/0.log" Oct 14 09:58:07 crc kubenswrapper[4698]: I1014 09:58:07.476904 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-b7cbk" event={"ID":"fbf10bbc-318d-4f46-83a0-fdbad9888201","Type":"ContainerStarted","Data":"52c9a8ccad3eed5af66c0178544ca46fabcfaab76d88471b2a62606f2f860522"} Oct 14 09:58:07 crc kubenswrapper[4698]: I1014 09:58:07.488673 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aaee81debae3e251e075e61aca075c075d32fd976f1977adecba0c834e89cabb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt
\\\":\\\"2025-10-14T09:57:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T09:58:07Z is after 2025-08-24T17:21:41Z" Oct 14 09:58:07 crc kubenswrapper[4698]: I1014 09:58:07.499165 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8rj7q" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b3d7ebe7-24ac-4bb6-be80-db147dc1c604\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e44a1413e71cab92d1b5ee4c81cc65edc9925fa1e888d5d5670e55e1b60ee81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-95sjn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T09:57:18Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8rj7q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T09:58:07Z is after 2025-08-24T17:21:41Z" Oct 14 09:58:07 crc kubenswrapper[4698]: I1014 09:58:07.514517 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d453303d-af1d-4978-b4c5-ff12afadeb28\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:56:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d39574739cd1bd3498e575ad5961a3f656110963427ec6a5862160167dad9f1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler
\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a8b1dd92b7f979c51da79a23ddffeb82d73f170ded78b0995eca65ea14abbba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://57d68496a42ff5fb8359e4fd222a0e18e79e6ca885844d544d172e324325ee02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\
\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2fceda73199ee36e7cfe0b0bc827b42f7cc77db6b2c987828ffb57cacd1b8a6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2fceda73199ee36e7cfe0b0bc827b42f7cc77db6b2c987828ffb57cacd1b8a6b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T09:56:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T09:56:59Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T09:56:59Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T09:58:07Z is after 2025-08-24T17:21:41Z" Oct 14 09:58:07 crc kubenswrapper[4698]: I1014 09:58:07.524285 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-pfxrp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e00aa977-8736-4b4d-8d58-c3d13879c49a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4adf04967efc193a93227043cfb6bb56f0ec1a7af90e351b7a618e582ddd8c41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c9lxj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T09:57:18Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-pfxrp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T09:58:07Z is after 2025-08-24T17:21:41Z" Oct 14 09:58:07 crc kubenswrapper[4698]: I1014 09:58:07.539799 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5twvn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e9f983f-10a0-43b7-8590-346577a561ef\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://03bc09f64706818fd5d6b0d3eeea206f665d7cdb12ce6ae31c4706a3936dd0f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release
-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8x7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9090d92cdf10a884e39938f3d7908d63cb91ee3ea7832cb788aecaa47c70d753\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9090d92cdf10a884e39938f3d7908d63cb91ee3ea7832cb788aecaa47c70d753\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T09:57:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T09:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8x7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://20e7809019d6
fa43f65e27abe934ed5dfc51a3afb0a2d9391fa4edd67fc03ac9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://20e7809019d6fa43f65e27abe934ed5dfc51a3afb0a2d9391fa4edd67fc03ac9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T09:57:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8x7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c61e207f4234cf9a59bd787fc476ff4bc2d1abd6f299b2855970981e5783398\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0c61e207f423
4cf9a59bd787fc476ff4bc2d1abd6f299b2855970981e5783398\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T09:57:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T09:57:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8x7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1abeca9df52e5628432a4cb0be0f212a643a644abfe303559ec7d90f3fd3955a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1abeca9df52e5628432a4cb0be0f212a643a644abfe303559ec7d90f3fd3955a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T09:57:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T09:57:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8x7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\
\\"}]},{\\\"containerID\\\":\\\"cri-o://c0654ea71914d478220611495dbefd1e47b1309aa18c6a05b21a4a45f0d8f1ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c0654ea71914d478220611495dbefd1e47b1309aa18c6a05b21a4a45f0d8f1ca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T09:57:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T09:57:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8x7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5307b806fcee628d7552a399a950a096745a8eb67c3f72c5a10854f042b992a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5307b806fcee628d7552a399a950a096745a8eb67c3f72c5a10854f042b992a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"202
5-10-14T09:57:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T09:57:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8x7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T09:57:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5twvn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T09:58:07Z is after 2025-08-24T17:21:41Z" Oct 14 09:58:07 crc kubenswrapper[4698]: I1014 09:58:07.565257 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hspfz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d02f5359-81fc-4261-b995-e58c78bcec0e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"message\\\":\\\"containers with 
unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://baefa979aee6f8fbe41df9542964f3c7b2a4331ec5111d2f622ec7ca41a0d96e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjlwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://876998f4dcb79a6bc0faa35871730f5d2f9a07f94baf6da29ed9a9e794705ee5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\
\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjlwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e5759411e3d5ea6fa7515b4cd1255735c870b8c035fe586a38d2f436b752118\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjlwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6e9733c3f4d25662c9da6201d0feee797de24acebb7d6e7c67d72709aae95db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd
47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjlwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://90b4b75613b321be60e549ab6ab2eaaab57fd6b49aa01f5254a0e83ac2e0167e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjlwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://11274a26e2292cdf2b793e6290657ed3cc737d1454b39f33b8f0272f67f70f3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev
@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjlwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2fb5dca43ec621632afbbbe21069c7f48b2b98b12882cb4ff5b9bf60ddf6bce6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2fb5dca43ec621632afbbbe21069c7f48b2b98b12882cb4ff5b9bf60ddf6bce6\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-14T09:57:46Z\\\",\\\"message\\\":\\\"ketplace-operator-metrics_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", 
Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-marketplace/marketplace-operator-metrics\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.53\\\\\\\", Port:8383, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}, services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.53\\\\\\\", Port:8081, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI1014 09:57:46.069019 6340 services_controller.go:452] Built service openshift-marketplace/marketplace-operator-metrics per-node LB for network=default: []services.LB{}\\\\nI1014 09:57:46.068992 6340 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-machine-api/machine-api-controllers]} name:Service_openshift-machine-api/machine-api-controllers_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-14T09:57:45Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-hspfz_openshift-ovn-kubernetes(d02f5359-81fc-4261-b995-e58c78bcec0e)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjlwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://626a1ca2a8aa5d6c1b4078a0bc7b0aa540656abcedd8be206fbb93cdc96c1d95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjlwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f7d7f89348e04b617ba4ec3cf76f4fefdd36492c656aed8141f3775a7ca8e3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0f7d7f89348e04b617
ba4ec3cf76f4fefdd36492c656aed8141f3775a7ca8e3b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T09:57:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T09:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjlwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T09:57:18Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hspfz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T09:58:07Z is after 2025-08-24T17:21:41Z" Oct 14 09:58:07 crc kubenswrapper[4698]: I1014 09:58:07.566896 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:58:07 crc kubenswrapper[4698]: I1014 09:58:07.566934 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:58:07 crc kubenswrapper[4698]: I1014 09:58:07.566952 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:58:07 crc kubenswrapper[4698]: I1014 09:58:07.566972 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:58:07 crc kubenswrapper[4698]: I1014 09:58:07.566984 4698 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:58:07Z","lastTransitionTime":"2025-10-14T09:58:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 09:58:07 crc kubenswrapper[4698]: I1014 09:58:07.578316 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ndfs7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dba065f3-2084-442a-9f77-a1dfb007aa0a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://41d90487b005df3b6a9b8a3cbb8860b13fcc2ff7ca56979998ab7258f10b681c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kub
e-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wtc5t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7bfa794859b5471a4dfe11ac227eb36674c3136b170fafdecaf916eb54d63a4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wtc5t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T09:57:31Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-ndfs7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T09:58:07Z is after 2025-08-24T17:21:41Z" Oct 14 09:58:07 crc kubenswrapper[4698]: I1014 09:58:07.597651 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://046b339570690752cfbe63e358065d3b58ef84c6708729abc1f590ff4c880521\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/ser
viceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T09:58:07Z is after 2025-08-24T17:21:41Z" Oct 14 09:58:07 crc kubenswrapper[4698]: I1014 09:58:07.612249 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T09:58:07Z is after 2025-08-24T17:21:41Z" Oct 14 09:58:07 crc kubenswrapper[4698]: I1014 09:58:07.624802 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6bbd953b34291d62c5b40ab6339c4709be89d1983748419cabf1adff245af77f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b5dcb74bf1caafb12296355d176dd1cda1c06ecea8293c0cc36d517213fe6bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T09:58:07Z is after 2025-08-24T17:21:41Z" Oct 14 09:58:07 crc kubenswrapper[4698]: I1014 09:58:07.641012 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T09:58:07Z is after 2025-08-24T17:21:41Z" Oct 14 09:58:07 crc kubenswrapper[4698]: I1014 09:58:07.652950 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-b7cbk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fbf10bbc-318d-4f46-83a0-fdbad9888201\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:58:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:58:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://52c9a8ccad3eed5af66c0178544ca46fabcfaab76d88471b2a62606f2f860522\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4a0b9fe188f41927a342dc4928eaa5f27d151d9901af15f33ce252bdbfbb3be6\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-14T09:58:06Z\\\",\\\"message\\\":\\\"2025-10-14T09:57:20+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_f1354d92-2895-4535-acf9-93f36eacbc00\\\\n2025-10-14T09:57:20+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_f1354d92-2895-4535-acf9-93f36eacbc00 to /host/opt/cni/bin/\\\\n2025-10-14T09:57:21Z [verbose] multus-daemon started\\\\n2025-10-14T09:57:21Z [verbose] 
Readiness Indicator file check\\\\n2025-10-14T09:58:06Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-14T09:57:19Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:58:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tkghj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T09:57:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-b7cbk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T09:58:07Z is after 2025-08-24T17:21:41Z" Oct 14 09:58:07 crc kubenswrapper[4698]: I1014 09:58:07.664825 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-lp4sk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c359a8fc-1e2f-49af-8da2-719d52bd969a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://082e0edeedb2cabe07e14812f2406a98f39fdf9923f562c33ccc9a761d0e2472\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lzl92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8068aecfbfc156636e404dba3daa36e0bebb92ec
f3c63158eaebf9a03cdc96b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lzl92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T09:57:18Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-lp4sk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T09:58:07Z is after 2025-08-24T17:21:41Z" Oct 14 09:58:07 crc kubenswrapper[4698]: I1014 09:58:07.669515 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:58:07 crc kubenswrapper[4698]: I1014 09:58:07.669552 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:58:07 crc kubenswrapper[4698]: I1014 09:58:07.669565 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:58:07 crc 
kubenswrapper[4698]: I1014 09:58:07.669583 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:58:07 crc kubenswrapper[4698]: I1014 09:58:07.669594 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:58:07Z","lastTransitionTime":"2025-10-14T09:58:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 09:58:07 crc kubenswrapper[4698]: I1014 09:58:07.681014 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-jbpnj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"41f5ac86-35f8-416c-bbfe-1e182975ec5c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:32Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:32Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7f52v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7f52v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T09:57:32Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-jbpnj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T09:58:07Z is after 2025-08-24T17:21:41Z" Oct 14 09:58:07 crc 
kubenswrapper[4698]: I1014 09:58:07.700849 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"32240a01-1692-4f43-aebd-2b04d9b2435e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:56:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0fcdc7699fee72d9dbf3ad7df185ac8c884125781080739e4245128af2e41040\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d32efa8c63456d
23b3824db4a4debb51fa4d13d1fd5eb6a488b9392d96073435\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a78e7d6532fda47f50da010b729a9108961a3d5c79191517a86f00714ccb1163\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a013bd7c3d0aeb5289ee602dd8fcbff2b98aae30b99ccc8d87605732b7f469eb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"te
rminated\\\":{\\\"containerID\\\":\\\"cri-o://a8cea8367d4864478508221648a52582ee794ca4eb7b48f112ca0d78d97c2f7e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"message\\\":\\\"file observer\\\\nW1014 09:57:17.768150 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1014 09:57:17.768243 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1014 09:57:17.769073 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-133410875/tls.crt::/tmp/serving-cert-133410875/tls.key\\\\\\\"\\\\nI1014 09:57:18.101848 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1014 09:57:18.107433 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1014 09:57:18.107456 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1014 09:57:18.107479 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1014 09:57:18.107484 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1014 09:57:18.112858 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1014 09:57:18.112875 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1014 09:57:18.112883 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1014 09:57:18.112889 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1014 09:57:18.112893 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1014 09:57:18.112896 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1014 09:57:18.112900 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1014 09:57:18.112903 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1014 09:57:18.114133 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-14T09:57:12Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2d12c9c114016d1302fa97515b0a59a82a4524b77e5d22cbec155beb6a52bce\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:00Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf7a250bbbc63f0ce78ae1c27f82643a02dc264c3b83555a4cc72a788eda52d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"
,\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cf7a250bbbc63f0ce78ae1c27f82643a02dc264c3b83555a4cc72a788eda52d5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T09:56:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T09:56:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T09:56:59Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T09:58:07Z is after 2025-08-24T17:21:41Z" Oct 14 09:58:07 crc kubenswrapper[4698]: I1014 09:58:07.719230 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"32a7378b-ad21-49a8-9382-e7e5fbffb2f5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:56:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:56:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45beb2bbd344fbd265eca554410f31c493b55815bcac8dbb8c30c592b40dcfe2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9a23f4949446d47321258b172e07f6c592cd9921650920f8af93a5d749a61dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:56:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1fc70d614848f0b4bc342659099f9c617db052ebb995716f9cdb1113f4f78901\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://71c3764056ab55aa9f75e0bf62c0c7eed5440e9d580087ca9ce266611dd8c292\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-10-14T09:57:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T09:56:59Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T09:58:07Z is after 2025-08-24T17:21:41Z" Oct 14 09:58:07 crc kubenswrapper[4698]: I1014 09:58:07.737351 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T09:58:07Z is after 2025-08-24T17:21:41Z" Oct 14 09:58:07 crc kubenswrapper[4698]: I1014 09:58:07.772836 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:58:07 crc kubenswrapper[4698]: I1014 09:58:07.772900 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:58:07 crc kubenswrapper[4698]: I1014 09:58:07.772912 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:58:07 crc 
kubenswrapper[4698]: I1014 09:58:07.772931 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:58:07 crc kubenswrapper[4698]: I1014 09:58:07.772944 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:58:07Z","lastTransitionTime":"2025-10-14T09:58:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 09:58:07 crc kubenswrapper[4698]: I1014 09:58:07.875258 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:58:07 crc kubenswrapper[4698]: I1014 09:58:07.875323 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:58:07 crc kubenswrapper[4698]: I1014 09:58:07.875335 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:58:07 crc kubenswrapper[4698]: I1014 09:58:07.875355 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:58:07 crc kubenswrapper[4698]: I1014 09:58:07.875368 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:58:07Z","lastTransitionTime":"2025-10-14T09:58:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:58:07 crc kubenswrapper[4698]: I1014 09:58:07.978491 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:58:07 crc kubenswrapper[4698]: I1014 09:58:07.978556 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:58:07 crc kubenswrapper[4698]: I1014 09:58:07.978572 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:58:07 crc kubenswrapper[4698]: I1014 09:58:07.978596 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:58:07 crc kubenswrapper[4698]: I1014 09:58:07.978614 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:58:07Z","lastTransitionTime":"2025-10-14T09:58:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 09:58:08 crc kubenswrapper[4698]: I1014 09:58:08.016759 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jbpnj" Oct 14 09:58:08 crc kubenswrapper[4698]: E1014 09:58:08.017014 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-jbpnj" podUID="41f5ac86-35f8-416c-bbfe-1e182975ec5c" Oct 14 09:58:08 crc kubenswrapper[4698]: I1014 09:58:08.080926 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:58:08 crc kubenswrapper[4698]: I1014 09:58:08.080991 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:58:08 crc kubenswrapper[4698]: I1014 09:58:08.081011 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:58:08 crc kubenswrapper[4698]: I1014 09:58:08.081034 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:58:08 crc kubenswrapper[4698]: I1014 09:58:08.081051 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:58:08Z","lastTransitionTime":"2025-10-14T09:58:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:58:08 crc kubenswrapper[4698]: I1014 09:58:08.184378 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:58:08 crc kubenswrapper[4698]: I1014 09:58:08.184829 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:58:08 crc kubenswrapper[4698]: I1014 09:58:08.185015 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:58:08 crc kubenswrapper[4698]: I1014 09:58:08.185201 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:58:08 crc kubenswrapper[4698]: I1014 09:58:08.185437 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:58:08Z","lastTransitionTime":"2025-10-14T09:58:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:58:08 crc kubenswrapper[4698]: I1014 09:58:08.288038 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:58:08 crc kubenswrapper[4698]: I1014 09:58:08.288106 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:58:08 crc kubenswrapper[4698]: I1014 09:58:08.288117 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:58:08 crc kubenswrapper[4698]: I1014 09:58:08.288132 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:58:08 crc kubenswrapper[4698]: I1014 09:58:08.288144 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:58:08Z","lastTransitionTime":"2025-10-14T09:58:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:58:08 crc kubenswrapper[4698]: I1014 09:58:08.391305 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:58:08 crc kubenswrapper[4698]: I1014 09:58:08.391330 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:58:08 crc kubenswrapper[4698]: I1014 09:58:08.391339 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:58:08 crc kubenswrapper[4698]: I1014 09:58:08.391349 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:58:08 crc kubenswrapper[4698]: I1014 09:58:08.391358 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:58:08Z","lastTransitionTime":"2025-10-14T09:58:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:58:08 crc kubenswrapper[4698]: I1014 09:58:08.493569 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:58:08 crc kubenswrapper[4698]: I1014 09:58:08.493620 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:58:08 crc kubenswrapper[4698]: I1014 09:58:08.493631 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:58:08 crc kubenswrapper[4698]: I1014 09:58:08.493650 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:58:08 crc kubenswrapper[4698]: I1014 09:58:08.493663 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:58:08Z","lastTransitionTime":"2025-10-14T09:58:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:58:08 crc kubenswrapper[4698]: I1014 09:58:08.596533 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:58:08 crc kubenswrapper[4698]: I1014 09:58:08.596585 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:58:08 crc kubenswrapper[4698]: I1014 09:58:08.596599 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:58:08 crc kubenswrapper[4698]: I1014 09:58:08.596621 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:58:08 crc kubenswrapper[4698]: I1014 09:58:08.596637 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:58:08Z","lastTransitionTime":"2025-10-14T09:58:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:58:08 crc kubenswrapper[4698]: I1014 09:58:08.699602 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:58:08 crc kubenswrapper[4698]: I1014 09:58:08.699652 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:58:08 crc kubenswrapper[4698]: I1014 09:58:08.699662 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:58:08 crc kubenswrapper[4698]: I1014 09:58:08.699681 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:58:08 crc kubenswrapper[4698]: I1014 09:58:08.699693 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:58:08Z","lastTransitionTime":"2025-10-14T09:58:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:58:08 crc kubenswrapper[4698]: I1014 09:58:08.802618 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:58:08 crc kubenswrapper[4698]: I1014 09:58:08.802679 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:58:08 crc kubenswrapper[4698]: I1014 09:58:08.802696 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:58:08 crc kubenswrapper[4698]: I1014 09:58:08.802724 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:58:08 crc kubenswrapper[4698]: I1014 09:58:08.802742 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:58:08Z","lastTransitionTime":"2025-10-14T09:58:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:58:08 crc kubenswrapper[4698]: I1014 09:58:08.905374 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:58:08 crc kubenswrapper[4698]: I1014 09:58:08.905850 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:58:08 crc kubenswrapper[4698]: I1014 09:58:08.906074 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:58:08 crc kubenswrapper[4698]: I1014 09:58:08.906265 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:58:08 crc kubenswrapper[4698]: I1014 09:58:08.906455 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:58:08Z","lastTransitionTime":"2025-10-14T09:58:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:58:09 crc kubenswrapper[4698]: I1014 09:58:09.009082 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:58:09 crc kubenswrapper[4698]: I1014 09:58:09.009188 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:58:09 crc kubenswrapper[4698]: I1014 09:58:09.009210 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:58:09 crc kubenswrapper[4698]: I1014 09:58:09.009231 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:58:09 crc kubenswrapper[4698]: I1014 09:58:09.009242 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:58:09Z","lastTransitionTime":"2025-10-14T09:58:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 09:58:09 crc kubenswrapper[4698]: I1014 09:58:09.016916 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 14 09:58:09 crc kubenswrapper[4698]: I1014 09:58:09.016986 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 14 09:58:09 crc kubenswrapper[4698]: E1014 09:58:09.017078 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 14 09:58:09 crc kubenswrapper[4698]: I1014 09:58:09.017117 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 14 09:58:09 crc kubenswrapper[4698]: E1014 09:58:09.017207 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 14 09:58:09 crc kubenswrapper[4698]: E1014 09:58:09.017319 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 14 09:58:09 crc kubenswrapper[4698]: I1014 09:58:09.038522 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aaee81debae3e251e075e61aca075c075d32fd976f1977adecba0c834e89cabb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveRead
Only\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T09:58:09Z is after 2025-08-24T17:21:41Z" Oct 14 09:58:09 crc kubenswrapper[4698]: I1014 09:58:09.050107 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8rj7q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b3d7ebe7-24ac-4bb6-be80-db147dc1c604\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e44a1413e71cab92d1b5ee4c81cc65edc9925fa1e888d5d5670e55e1b60ee81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"d
ns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-95sjn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T09:57:18Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8rj7q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T09:58:09Z is after 2025-08-24T17:21:41Z" Oct 14 09:58:09 crc kubenswrapper[4698]: I1014 09:58:09.059973 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ndfs7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dba065f3-2084-442a-9f77-a1dfb007aa0a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://41d90487b005df3b6a9b8a3cbb8860b13fcc2ff7ca56979998ab7258f10b681c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wtc5t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7bfa794859b5471a4dfe11ac227eb36674c31
36b170fafdecaf916eb54d63a4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wtc5t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T09:57:31Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-ndfs7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T09:58:09Z is after 2025-08-24T17:21:41Z" Oct 14 09:58:09 crc kubenswrapper[4698]: I1014 09:58:09.070544 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d453303d-af1d-4978-b4c5-ff12afadeb28\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:56:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d39574739cd1bd3498e575ad5961a3f656110963427ec6a5862160167dad9f1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a8b1dd92b7f979c51da79a23ddffeb82d73f170ded78b0995eca65ea14abbba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://57d68496a42ff5fb8359e4fd222a0e18e79e6ca885844d544d172e324325ee02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2fceda73199ee36e7cfe0b0bc827b42f7cc77db6b2c987828ffb57cacd1b8a6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://2fceda73199ee36e7cfe0b0bc827b42f7cc77db6b2c987828ffb57cacd1b8a6b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T09:56:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T09:56:59Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T09:56:59Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T09:58:09Z is after 2025-08-24T17:21:41Z" Oct 14 09:58:09 crc kubenswrapper[4698]: I1014 09:58:09.081110 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-pfxrp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e00aa977-8736-4b4d-8d58-c3d13879c49a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ad
f04967efc193a93227043cfb6bb56f0ec1a7af90e351b7a618e582ddd8c41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c9lxj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T09:57:18Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-pfxrp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T09:58:09Z is after 2025-08-24T17:21:41Z" Oct 14 09:58:09 crc kubenswrapper[4698]: I1014 09:58:09.097331 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5twvn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e9f983f-10a0-43b7-8590-346577a561ef\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://03bc09f64706818fd5d6b0d3eeea206f665d7cdb12ce6ae31c4706a3936dd0f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8x7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9090d92cdf10a884e39938f3d7908d63cb91ee3ea7832cb788aecaa47c70d753\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9090d92cdf10a884e39938f3d7908d63cb91ee3ea7832cb788aecaa47c70d753\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T09:57:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T09:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8x7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://20e7809019d6fa43f65e27abe934ed5dfc51a3afb0a2d9391fa4edd67fc03ac9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://20e7809019d6fa43f65e27abe934ed5dfc51a3afb0a2d9391fa4edd67fc03ac9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T09:57:20Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8x7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c61e207f4234cf9a59bd787fc476ff4bc2d1abd6f299b2855970981e5783398\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0c61e207f4234cf9a59bd787fc476ff4bc2d1abd6f299b2855970981e5783398\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T09:57:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T09:57:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8x7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1abec
a9df52e5628432a4cb0be0f212a643a644abfe303559ec7d90f3fd3955a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1abeca9df52e5628432a4cb0be0f212a643a644abfe303559ec7d90f3fd3955a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T09:57:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T09:57:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8x7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0654ea71914d478220611495dbefd1e47b1309aa18c6a05b21a4a45f0d8f1ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c0654ea71914d478220611495dbefd1e47b1309aa18c6a05b21a4a45f0d8f1ca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T09:57:23Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2025-10-14T09:57:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8x7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5307b806fcee628d7552a399a950a096745a8eb67c3f72c5a10854f042b992a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5307b806fcee628d7552a399a950a096745a8eb67c3f72c5a10854f042b992a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T09:57:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T09:57:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8x7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T09:57:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5twvn\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T09:58:09Z is after 2025-08-24T17:21:41Z" Oct 14 09:58:09 crc kubenswrapper[4698]: I1014 09:58:09.111212 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:58:09 crc kubenswrapper[4698]: I1014 09:58:09.111268 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:58:09 crc kubenswrapper[4698]: I1014 09:58:09.111279 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:58:09 crc kubenswrapper[4698]: I1014 09:58:09.111296 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:58:09 crc kubenswrapper[4698]: I1014 09:58:09.111307 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:58:09Z","lastTransitionTime":"2025-10-14T09:58:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:58:09 crc kubenswrapper[4698]: I1014 09:58:09.119011 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hspfz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d02f5359-81fc-4261-b995-e58c78bcec0e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://baefa979aee6f8fbe41df9542964f3c7b2a4331ec5111d2f622ec7ca41a0d96e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjlwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://876998f4dcb79a6bc0faa35871730f5d2f9a07f94baf6da29ed9a9e794705ee5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjlwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e5759411e3d5ea6fa7515b4cd1255735c870b8c035fe586a38d2f436b752118\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjlwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6e9733c3f4d25662c9da6201d0feee797de24acebb7d6e7c67d72709aae95db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:21Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjlwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://90b4b75613b321be60e549ab6ab2eaaab57fd6b49aa01f5254a0e83ac2e0167e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjlwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://11274a26e2292cdf2b793e6290657ed3cc737d1454b39f33b8f0272f67f70f3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjlwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2fb5dca43ec621632afbbbe21069c7f48b2b98b12882cb4ff5b9bf60ddf6bce6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2fb5dca43ec621632afbbbe21069c7f48b2b98b12882cb4ff5b9bf60ddf6bce6\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-14T09:57:46Z\\\",\\\"message\\\":\\\"ketplace-operator-metrics_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-marketplace/marketplace-operator-metrics\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, 
AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.53\\\\\\\", Port:8383, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}, services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.53\\\\\\\", Port:8081, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI1014 09:57:46.069019 6340 services_controller.go:452] Built service openshift-marketplace/marketplace-operator-metrics per-node LB for network=default: []services.LB{}\\\\nI1014 09:57:46.068992 6340 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-machine-api/machine-api-controllers]} name:Service_openshift-machine-api/machine-api-controllers_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-14T09:57:45Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-hspfz_openshift-ovn-kubernetes(d02f5359-81fc-4261-b995-e58c78bcec0e)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjlwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://626a1ca2a8aa5d6c1b4078a0bc7b0aa540656abcedd8be206fbb93cdc96c1d95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjlwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f7d7f89348e04b617ba4ec3cf76f4fefdd36492c656aed8141f3775a7ca8e3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0f7d7f89348e04b617
ba4ec3cf76f4fefdd36492c656aed8141f3775a7ca8e3b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T09:57:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T09:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjlwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T09:57:18Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hspfz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T09:58:09Z is after 2025-08-24T17:21:41Z" Oct 14 09:58:09 crc kubenswrapper[4698]: I1014 09:58:09.131370 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-b7cbk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fbf10bbc-318d-4f46-83a0-fdbad9888201\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:58:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:58:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://52c9a8ccad3eed5af66c0178544ca46fabcfaab76d88471b2a62606f2f860522\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4a0b9fe188f41927a342dc4928eaa5f27d151d9901af15f33ce252bdbfbb3be6\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-14T09:58:06Z\\\",\\\"message\\\":\\\"2025-10-14T09:57:20+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_f1354d92-2895-4535-acf9-93f36eacbc00\\\\n2025-10-14T09:57:20+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_f1354d92-2895-4535-acf9-93f36eacbc00 to /host/opt/cni/bin/\\\\n2025-10-14T09:57:21Z [verbose] multus-daemon started\\\\n2025-10-14T09:57:21Z [verbose] 
Readiness Indicator file check\\\\n2025-10-14T09:58:06Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-14T09:57:19Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:58:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tkghj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T09:57:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-b7cbk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T09:58:09Z is after 2025-08-24T17:21:41Z" Oct 14 09:58:09 crc kubenswrapper[4698]: I1014 09:58:09.142466 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-lp4sk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c359a8fc-1e2f-49af-8da2-719d52bd969a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://082e0edeedb2cabe07e14812f2406a98f39fdf9923f562c33ccc9a761d0e2472\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lzl92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8068aecfbfc156636e404dba3daa36e0bebb92ec
f3c63158eaebf9a03cdc96b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lzl92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T09:57:18Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-lp4sk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T09:58:09Z is after 2025-08-24T17:21:41Z" Oct 14 09:58:09 crc kubenswrapper[4698]: I1014 09:58:09.152318 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-jbpnj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"41f5ac86-35f8-416c-bbfe-1e182975ec5c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:32Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:32Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7f52v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7f52v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T09:57:32Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-jbpnj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T09:58:09Z is after 2025-08-24T17:21:41Z" Oct 14 09:58:09 crc 
kubenswrapper[4698]: I1014 09:58:09.164598 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://046b339570690752cfbe63e358065d3b58ef84c6708729abc1f590ff4c880521\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T09:58:09Z is after 2025-08-24T17:21:41Z" Oct 14 09:58:09 crc kubenswrapper[4698]: I1014 09:58:09.175909 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T09:58:09Z is after 2025-08-24T17:21:41Z" Oct 14 09:58:09 crc kubenswrapper[4698]: I1014 09:58:09.188593 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6bbd953b34291d62c5b40ab6339c4709be89d1983748419cabf1adff245af77f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b5dcb74bf1caafb12296355d176dd1cda1c06ecea8293c0cc36d517213fe6bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T09:58:09Z is after 2025-08-24T17:21:41Z" Oct 14 09:58:09 crc kubenswrapper[4698]: I1014 09:58:09.200960 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T09:58:09Z is after 2025-08-24T17:21:41Z" Oct 14 09:58:09 crc kubenswrapper[4698]: I1014 09:58:09.213685 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"32240a01-1692-4f43-aebd-2b04d9b2435e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:56:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0fcdc7699fee72d9dbf3ad7df185ac8c884125781080739e4245128af2e41040\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d32efa8c63456d23b3824db4a4debb51fa4d13d1fd5eb6a488b9392d96073435\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a78e7d6532fda47f50da010b729a9108961a3d5c79191517a86f00714ccb1163\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a013bd7c3d0aeb5289ee602dd8fcbff2b98aae30b99ccc8d87605732b7f469eb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a8cea8367d4864478508221648a52582ee794ca4eb7b48f112ca0d78d97c2f7e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-14T09:57:18Z\\\"
,\\\"message\\\":\\\"file observer\\\\nW1014 09:57:17.768150 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1014 09:57:17.768243 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1014 09:57:17.769073 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-133410875/tls.crt::/tmp/serving-cert-133410875/tls.key\\\\\\\"\\\\nI1014 09:57:18.101848 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1014 09:57:18.107433 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1014 09:57:18.107456 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1014 09:57:18.107479 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1014 09:57:18.107484 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1014 09:57:18.112858 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1014 09:57:18.112875 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1014 09:57:18.112883 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1014 09:57:18.112889 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1014 09:57:18.112893 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1014 09:57:18.112896 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1014 09:57:18.112900 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1014 09:57:18.112903 1 secure_serving.go:69] Use of 
insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1014 09:57:18.114133 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-14T09:57:12Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2d12c9c114016d1302fa97515b0a59a82a4524b77e5d22cbec155beb6a52bce\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:00Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf7a250bbbc63f0ce78ae1c27f82643a02dc264c3b83555a4cc72a788eda52d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cf7a250bbbc63f0ce78ae1c27f82643a02d
c264c3b83555a4cc72a788eda52d5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T09:56:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T09:56:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T09:56:59Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T09:58:09Z is after 2025-08-24T17:21:41Z" Oct 14 09:58:09 crc kubenswrapper[4698]: I1014 09:58:09.214046 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:58:09 crc kubenswrapper[4698]: I1014 09:58:09.214089 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:58:09 crc kubenswrapper[4698]: I1014 09:58:09.214100 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:58:09 crc kubenswrapper[4698]: I1014 09:58:09.214115 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:58:09 crc kubenswrapper[4698]: I1014 09:58:09.214127 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:58:09Z","lastTransitionTime":"2025-10-14T09:58:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:58:09 crc kubenswrapper[4698]: I1014 09:58:09.225297 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"32a7378b-ad21-49a8-9382-e7e5fbffb2f5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:56:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:56:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45beb2bbd344fbd265eca554410f31c493b55815bcac8dbb8c30c592b40dcfe2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9a23f49494
46d47321258b172e07f6c592cd9921650920f8af93a5d749a61dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:56:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1fc70d614848f0b4bc342659099f9c617db052ebb995716f9cdb1113f4f78901\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://71c3764056ab55aa9f75e0bf62c0c7eed5440e9d580087ca9ce266611dd8c292\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T09:56:59Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T09:58:09Z is after 2025-08-24T17:21:41Z" Oct 14 09:58:09 crc kubenswrapper[4698]: I1014 09:58:09.237313 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T09:58:09Z is after 2025-08-24T17:21:41Z" Oct 14 09:58:09 crc kubenswrapper[4698]: I1014 09:58:09.316552 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:58:09 crc kubenswrapper[4698]: I1014 09:58:09.316587 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Oct 14 09:58:09 crc kubenswrapper[4698]: I1014 09:58:09.316596 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:58:09 crc kubenswrapper[4698]: I1014 09:58:09.316608 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:58:09 crc kubenswrapper[4698]: I1014 09:58:09.316616 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:58:09Z","lastTransitionTime":"2025-10-14T09:58:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 09:58:09 crc kubenswrapper[4698]: I1014 09:58:09.418918 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:58:09 crc kubenswrapper[4698]: I1014 09:58:09.418957 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:58:09 crc kubenswrapper[4698]: I1014 09:58:09.418969 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:58:09 crc kubenswrapper[4698]: I1014 09:58:09.418985 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:58:09 crc kubenswrapper[4698]: I1014 09:58:09.418996 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:58:09Z","lastTransitionTime":"2025-10-14T09:58:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 09:58:09 crc kubenswrapper[4698]: I1014 09:58:09.520702 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:58:09 crc kubenswrapper[4698]: I1014 09:58:09.520737 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:58:09 crc kubenswrapper[4698]: I1014 09:58:09.520747 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:58:09 crc kubenswrapper[4698]: I1014 09:58:09.520780 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:58:09 crc kubenswrapper[4698]: I1014 09:58:09.520792 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:58:09Z","lastTransitionTime":"2025-10-14T09:58:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:58:09 crc kubenswrapper[4698]: I1014 09:58:09.622852 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:58:09 crc kubenswrapper[4698]: I1014 09:58:09.622892 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:58:09 crc kubenswrapper[4698]: I1014 09:58:09.622902 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:58:09 crc kubenswrapper[4698]: I1014 09:58:09.622915 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:58:09 crc kubenswrapper[4698]: I1014 09:58:09.622926 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:58:09Z","lastTransitionTime":"2025-10-14T09:58:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:58:09 crc kubenswrapper[4698]: I1014 09:58:09.724391 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:58:09 crc kubenswrapper[4698]: I1014 09:58:09.724420 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:58:09 crc kubenswrapper[4698]: I1014 09:58:09.724428 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:58:09 crc kubenswrapper[4698]: I1014 09:58:09.724439 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:58:09 crc kubenswrapper[4698]: I1014 09:58:09.724446 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:58:09Z","lastTransitionTime":"2025-10-14T09:58:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:58:09 crc kubenswrapper[4698]: I1014 09:58:09.827017 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:58:09 crc kubenswrapper[4698]: I1014 09:58:09.827078 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:58:09 crc kubenswrapper[4698]: I1014 09:58:09.827091 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:58:09 crc kubenswrapper[4698]: I1014 09:58:09.827108 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:58:09 crc kubenswrapper[4698]: I1014 09:58:09.827120 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:58:09Z","lastTransitionTime":"2025-10-14T09:58:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:58:09 crc kubenswrapper[4698]: I1014 09:58:09.929725 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:58:09 crc kubenswrapper[4698]: I1014 09:58:09.929812 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:58:09 crc kubenswrapper[4698]: I1014 09:58:09.929835 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:58:09 crc kubenswrapper[4698]: I1014 09:58:09.929860 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:58:09 crc kubenswrapper[4698]: I1014 09:58:09.929882 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:58:09Z","lastTransitionTime":"2025-10-14T09:58:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 09:58:10 crc kubenswrapper[4698]: I1014 09:58:10.017007 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jbpnj" Oct 14 09:58:10 crc kubenswrapper[4698]: E1014 09:58:10.017221 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-jbpnj" podUID="41f5ac86-35f8-416c-bbfe-1e182975ec5c" Oct 14 09:58:10 crc kubenswrapper[4698]: I1014 09:58:10.032844 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:58:10 crc kubenswrapper[4698]: I1014 09:58:10.032905 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:58:10 crc kubenswrapper[4698]: I1014 09:58:10.032916 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:58:10 crc kubenswrapper[4698]: I1014 09:58:10.032939 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:58:10 crc kubenswrapper[4698]: I1014 09:58:10.032951 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:58:10Z","lastTransitionTime":"2025-10-14T09:58:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:58:10 crc kubenswrapper[4698]: I1014 09:58:10.135724 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:58:10 crc kubenswrapper[4698]: I1014 09:58:10.135835 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:58:10 crc kubenswrapper[4698]: I1014 09:58:10.135860 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:58:10 crc kubenswrapper[4698]: I1014 09:58:10.135890 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:58:10 crc kubenswrapper[4698]: I1014 09:58:10.135918 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:58:10Z","lastTransitionTime":"2025-10-14T09:58:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:58:10 crc kubenswrapper[4698]: I1014 09:58:10.238038 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:58:10 crc kubenswrapper[4698]: I1014 09:58:10.238098 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:58:10 crc kubenswrapper[4698]: I1014 09:58:10.238115 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:58:10 crc kubenswrapper[4698]: I1014 09:58:10.238139 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:58:10 crc kubenswrapper[4698]: I1014 09:58:10.238155 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:58:10Z","lastTransitionTime":"2025-10-14T09:58:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:58:10 crc kubenswrapper[4698]: I1014 09:58:10.340742 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:58:10 crc kubenswrapper[4698]: I1014 09:58:10.340808 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:58:10 crc kubenswrapper[4698]: I1014 09:58:10.340820 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:58:10 crc kubenswrapper[4698]: I1014 09:58:10.340835 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:58:10 crc kubenswrapper[4698]: I1014 09:58:10.340845 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:58:10Z","lastTransitionTime":"2025-10-14T09:58:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:58:10 crc kubenswrapper[4698]: I1014 09:58:10.443664 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:58:10 crc kubenswrapper[4698]: I1014 09:58:10.443732 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:58:10 crc kubenswrapper[4698]: I1014 09:58:10.443750 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:58:10 crc kubenswrapper[4698]: I1014 09:58:10.443812 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:58:10 crc kubenswrapper[4698]: I1014 09:58:10.443834 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:58:10Z","lastTransitionTime":"2025-10-14T09:58:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:58:10 crc kubenswrapper[4698]: I1014 09:58:10.547353 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:58:10 crc kubenswrapper[4698]: I1014 09:58:10.547424 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:58:10 crc kubenswrapper[4698]: I1014 09:58:10.547451 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:58:10 crc kubenswrapper[4698]: I1014 09:58:10.547485 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:58:10 crc kubenswrapper[4698]: I1014 09:58:10.547510 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:58:10Z","lastTransitionTime":"2025-10-14T09:58:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:58:10 crc kubenswrapper[4698]: I1014 09:58:10.650293 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:58:10 crc kubenswrapper[4698]: I1014 09:58:10.650353 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:58:10 crc kubenswrapper[4698]: I1014 09:58:10.650369 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:58:10 crc kubenswrapper[4698]: I1014 09:58:10.650389 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:58:10 crc kubenswrapper[4698]: I1014 09:58:10.650404 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:58:10Z","lastTransitionTime":"2025-10-14T09:58:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:58:10 crc kubenswrapper[4698]: I1014 09:58:10.752899 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:58:10 crc kubenswrapper[4698]: I1014 09:58:10.752936 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:58:10 crc kubenswrapper[4698]: I1014 09:58:10.752949 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:58:10 crc kubenswrapper[4698]: I1014 09:58:10.752963 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:58:10 crc kubenswrapper[4698]: I1014 09:58:10.752974 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:58:10Z","lastTransitionTime":"2025-10-14T09:58:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:58:10 crc kubenswrapper[4698]: I1014 09:58:10.855079 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:58:10 crc kubenswrapper[4698]: I1014 09:58:10.855116 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:58:10 crc kubenswrapper[4698]: I1014 09:58:10.855125 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:58:10 crc kubenswrapper[4698]: I1014 09:58:10.855143 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:58:10 crc kubenswrapper[4698]: I1014 09:58:10.855152 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:58:10Z","lastTransitionTime":"2025-10-14T09:58:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:58:10 crc kubenswrapper[4698]: I1014 09:58:10.957402 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:58:10 crc kubenswrapper[4698]: I1014 09:58:10.957460 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:58:10 crc kubenswrapper[4698]: I1014 09:58:10.957476 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:58:10 crc kubenswrapper[4698]: I1014 09:58:10.957497 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:58:10 crc kubenswrapper[4698]: I1014 09:58:10.957514 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:58:10Z","lastTransitionTime":"2025-10-14T09:58:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 09:58:11 crc kubenswrapper[4698]: I1014 09:58:11.016611 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 14 09:58:11 crc kubenswrapper[4698]: I1014 09:58:11.016698 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 14 09:58:11 crc kubenswrapper[4698]: I1014 09:58:11.016611 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 14 09:58:11 crc kubenswrapper[4698]: E1014 09:58:11.016844 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 14 09:58:11 crc kubenswrapper[4698]: E1014 09:58:11.016987 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 14 09:58:11 crc kubenswrapper[4698]: E1014 09:58:11.017155 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 14 09:58:11 crc kubenswrapper[4698]: I1014 09:58:11.059401 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:58:11 crc kubenswrapper[4698]: I1014 09:58:11.059436 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:58:11 crc kubenswrapper[4698]: I1014 09:58:11.059444 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:58:11 crc kubenswrapper[4698]: I1014 09:58:11.059456 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:58:11 crc kubenswrapper[4698]: I1014 09:58:11.059466 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:58:11Z","lastTransitionTime":"2025-10-14T09:58:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:58:11 crc kubenswrapper[4698]: I1014 09:58:11.161343 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:58:11 crc kubenswrapper[4698]: I1014 09:58:11.161390 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:58:11 crc kubenswrapper[4698]: I1014 09:58:11.161401 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:58:11 crc kubenswrapper[4698]: I1014 09:58:11.161415 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:58:11 crc kubenswrapper[4698]: I1014 09:58:11.161424 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:58:11Z","lastTransitionTime":"2025-10-14T09:58:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:58:11 crc kubenswrapper[4698]: I1014 09:58:11.266662 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:58:11 crc kubenswrapper[4698]: I1014 09:58:11.266722 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:58:11 crc kubenswrapper[4698]: I1014 09:58:11.266744 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:58:11 crc kubenswrapper[4698]: I1014 09:58:11.266805 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:58:11 crc kubenswrapper[4698]: I1014 09:58:11.266840 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:58:11Z","lastTransitionTime":"2025-10-14T09:58:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:58:11 crc kubenswrapper[4698]: I1014 09:58:11.369213 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:58:11 crc kubenswrapper[4698]: I1014 09:58:11.369259 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:58:11 crc kubenswrapper[4698]: I1014 09:58:11.369269 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:58:11 crc kubenswrapper[4698]: I1014 09:58:11.369284 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:58:11 crc kubenswrapper[4698]: I1014 09:58:11.369294 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:58:11Z","lastTransitionTime":"2025-10-14T09:58:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:58:11 crc kubenswrapper[4698]: I1014 09:58:11.471798 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:58:11 crc kubenswrapper[4698]: I1014 09:58:11.471844 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:58:11 crc kubenswrapper[4698]: I1014 09:58:11.471859 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:58:11 crc kubenswrapper[4698]: I1014 09:58:11.471881 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:58:11 crc kubenswrapper[4698]: I1014 09:58:11.471901 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:58:11Z","lastTransitionTime":"2025-10-14T09:58:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:58:11 crc kubenswrapper[4698]: I1014 09:58:11.611854 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:58:11 crc kubenswrapper[4698]: I1014 09:58:11.611940 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:58:11 crc kubenswrapper[4698]: I1014 09:58:11.611972 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:58:11 crc kubenswrapper[4698]: I1014 09:58:11.612004 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:58:11 crc kubenswrapper[4698]: I1014 09:58:11.612026 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:58:11Z","lastTransitionTime":"2025-10-14T09:58:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:58:11 crc kubenswrapper[4698]: I1014 09:58:11.715210 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:58:11 crc kubenswrapper[4698]: I1014 09:58:11.715292 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:58:11 crc kubenswrapper[4698]: I1014 09:58:11.715316 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:58:11 crc kubenswrapper[4698]: I1014 09:58:11.715342 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:58:11 crc kubenswrapper[4698]: I1014 09:58:11.715366 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:58:11Z","lastTransitionTime":"2025-10-14T09:58:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:58:11 crc kubenswrapper[4698]: I1014 09:58:11.819165 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:58:11 crc kubenswrapper[4698]: I1014 09:58:11.819224 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:58:11 crc kubenswrapper[4698]: I1014 09:58:11.819241 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:58:11 crc kubenswrapper[4698]: I1014 09:58:11.819272 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:58:11 crc kubenswrapper[4698]: I1014 09:58:11.819289 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:58:11Z","lastTransitionTime":"2025-10-14T09:58:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:58:11 crc kubenswrapper[4698]: I1014 09:58:11.921859 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:58:11 crc kubenswrapper[4698]: I1014 09:58:11.921899 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:58:11 crc kubenswrapper[4698]: I1014 09:58:11.921910 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:58:11 crc kubenswrapper[4698]: I1014 09:58:11.921926 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:58:11 crc kubenswrapper[4698]: I1014 09:58:11.921940 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:58:11Z","lastTransitionTime":"2025-10-14T09:58:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 09:58:12 crc kubenswrapper[4698]: I1014 09:58:12.016314 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jbpnj" Oct 14 09:58:12 crc kubenswrapper[4698]: E1014 09:58:12.016468 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-jbpnj" podUID="41f5ac86-35f8-416c-bbfe-1e182975ec5c" Oct 14 09:58:12 crc kubenswrapper[4698]: I1014 09:58:12.024298 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:58:12 crc kubenswrapper[4698]: I1014 09:58:12.024342 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:58:12 crc kubenswrapper[4698]: I1014 09:58:12.024357 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:58:12 crc kubenswrapper[4698]: I1014 09:58:12.024379 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:58:12 crc kubenswrapper[4698]: I1014 09:58:12.024393 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:58:12Z","lastTransitionTime":"2025-10-14T09:58:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:58:12 crc kubenswrapper[4698]: I1014 09:58:12.127793 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:58:12 crc kubenswrapper[4698]: I1014 09:58:12.127822 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:58:12 crc kubenswrapper[4698]: I1014 09:58:12.127829 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:58:12 crc kubenswrapper[4698]: I1014 09:58:12.127843 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:58:12 crc kubenswrapper[4698]: I1014 09:58:12.127851 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:58:12Z","lastTransitionTime":"2025-10-14T09:58:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:58:12 crc kubenswrapper[4698]: I1014 09:58:12.230507 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:58:12 crc kubenswrapper[4698]: I1014 09:58:12.230607 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:58:12 crc kubenswrapper[4698]: I1014 09:58:12.230625 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:58:12 crc kubenswrapper[4698]: I1014 09:58:12.230643 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:58:12 crc kubenswrapper[4698]: I1014 09:58:12.230656 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:58:12Z","lastTransitionTime":"2025-10-14T09:58:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:58:12 crc kubenswrapper[4698]: I1014 09:58:12.333311 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:58:12 crc kubenswrapper[4698]: I1014 09:58:12.333360 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:58:12 crc kubenswrapper[4698]: I1014 09:58:12.333369 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:58:12 crc kubenswrapper[4698]: I1014 09:58:12.333384 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:58:12 crc kubenswrapper[4698]: I1014 09:58:12.333393 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:58:12Z","lastTransitionTime":"2025-10-14T09:58:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:58:12 crc kubenswrapper[4698]: I1014 09:58:12.436018 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:58:12 crc kubenswrapper[4698]: I1014 09:58:12.436099 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:58:12 crc kubenswrapper[4698]: I1014 09:58:12.436123 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:58:12 crc kubenswrapper[4698]: I1014 09:58:12.436153 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:58:12 crc kubenswrapper[4698]: I1014 09:58:12.436176 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:58:12Z","lastTransitionTime":"2025-10-14T09:58:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:58:12 crc kubenswrapper[4698]: I1014 09:58:12.539546 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:58:12 crc kubenswrapper[4698]: I1014 09:58:12.539596 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:58:12 crc kubenswrapper[4698]: I1014 09:58:12.539607 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:58:12 crc kubenswrapper[4698]: I1014 09:58:12.539623 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:58:12 crc kubenswrapper[4698]: I1014 09:58:12.539633 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:58:12Z","lastTransitionTime":"2025-10-14T09:58:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:58:12 crc kubenswrapper[4698]: I1014 09:58:12.644212 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:58:12 crc kubenswrapper[4698]: I1014 09:58:12.644294 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:58:12 crc kubenswrapper[4698]: I1014 09:58:12.644310 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:58:12 crc kubenswrapper[4698]: I1014 09:58:12.644334 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:58:12 crc kubenswrapper[4698]: I1014 09:58:12.645303 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:58:12Z","lastTransitionTime":"2025-10-14T09:58:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Oct 14 09:58:12 crc kubenswrapper[4698]: I1014 09:58:12.772850 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 14 09:58:12 crc kubenswrapper[4698]: I1014 09:58:12.772902 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 14 09:58:12 crc kubenswrapper[4698]: I1014 09:58:12.772915 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 14 09:58:12 crc kubenswrapper[4698]: I1014 09:58:12.772932 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 14 09:58:12 crc kubenswrapper[4698]: I1014 09:58:12.772945 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:58:12Z","lastTransitionTime":"2025-10-14T09:58:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 14 09:58:12 crc kubenswrapper[4698]: I1014 09:58:12.876529 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 14 09:58:12 crc kubenswrapper[4698]: I1014 09:58:12.876588 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 14 09:58:12 crc kubenswrapper[4698]: I1014 09:58:12.876607 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 14 09:58:12 crc kubenswrapper[4698]: I1014 09:58:12.876633 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 14 09:58:12 crc kubenswrapper[4698]: I1014 09:58:12.876652 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:58:12Z","lastTransitionTime":"2025-10-14T09:58:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 14 09:58:12 crc kubenswrapper[4698]: I1014 09:58:12.979397 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 14 09:58:12 crc kubenswrapper[4698]: I1014 09:58:12.979448 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 14 09:58:12 crc kubenswrapper[4698]: I1014 09:58:12.979466 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 14 09:58:12 crc kubenswrapper[4698]: I1014 09:58:12.979490 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 14 09:58:12 crc kubenswrapper[4698]: I1014 09:58:12.979509 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:58:12Z","lastTransitionTime":"2025-10-14T09:58:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 14 09:58:13 crc kubenswrapper[4698]: I1014 09:58:13.016591 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Oct 14 09:58:13 crc kubenswrapper[4698]: I1014 09:58:13.016677 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Oct 14 09:58:13 crc kubenswrapper[4698]: E1014 09:58:13.016834 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Oct 14 09:58:13 crc kubenswrapper[4698]: I1014 09:58:13.016992 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Oct 14 09:58:13 crc kubenswrapper[4698]: E1014 09:58:13.017150 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Oct 14 09:58:13 crc kubenswrapper[4698]: E1014 09:58:13.017271 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Oct 14 09:58:13 crc kubenswrapper[4698]: I1014 09:58:13.043405 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 14 09:58:13 crc kubenswrapper[4698]: I1014 09:58:13.043471 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 14 09:58:13 crc kubenswrapper[4698]: I1014 09:58:13.043495 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 14 09:58:13 crc kubenswrapper[4698]: I1014 09:58:13.043530 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 14 09:58:13 crc kubenswrapper[4698]: I1014 09:58:13.043556 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:58:13Z","lastTransitionTime":"2025-10-14T09:58:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:58:13 crc kubenswrapper[4698]: E1014 09:58:13.058389 4698 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-14T09:58:13Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-14T09:58:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-14T09:58:13Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-14T09:58:13Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-14T09:58:13Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-14T09:58:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-14T09:58:13Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-14T09:58:13Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ec98e803-8937-4fdb-8662-3488c6a305f2\\\",\\\"systemUUID\\\":\\\"8e872109-adee-4b6d-91bf-d9ced28af93f\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T09:58:13Z is after 2025-08-24T17:21:41Z"
Oct 14 09:58:13 crc kubenswrapper[4698]: I1014 09:58:13.062818 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 14 09:58:13 crc kubenswrapper[4698]: I1014 09:58:13.062877 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 14 09:58:13 crc kubenswrapper[4698]: I1014 09:58:13.062893 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 14 09:58:13 crc kubenswrapper[4698]: I1014 09:58:13.062921 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 14 09:58:13 crc kubenswrapper[4698]: I1014 09:58:13.062938 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:58:13Z","lastTransitionTime":"2025-10-14T09:58:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:58:13 crc kubenswrapper[4698]: E1014 09:58:13.076794 4698 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-14T09:58:13Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-14T09:58:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-14T09:58:13Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-14T09:58:13Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-14T09:58:13Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-14T09:58:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-14T09:58:13Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-14T09:58:13Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ec98e803-8937-4fdb-8662-3488c6a305f2\\\",\\\"systemUUID\\\":\\\"8e872109-adee-4b6d-91bf-d9ced28af93f\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T09:58:13Z is after 2025-08-24T17:21:41Z"
Oct 14 09:58:13 crc kubenswrapper[4698]: I1014 09:58:13.081482 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 14 09:58:13 crc kubenswrapper[4698]: I1014 09:58:13.081550 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 14 09:58:13 crc kubenswrapper[4698]: I1014 09:58:13.081572 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 14 09:58:13 crc kubenswrapper[4698]: I1014 09:58:13.081602 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 14 09:58:13 crc kubenswrapper[4698]: I1014 09:58:13.081626 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:58:13Z","lastTransitionTime":"2025-10-14T09:58:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:58:13 crc kubenswrapper[4698]: E1014 09:58:13.098096 4698 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-14T09:58:13Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-14T09:58:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-14T09:58:13Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-14T09:58:13Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-14T09:58:13Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-14T09:58:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-14T09:58:13Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-14T09:58:13Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ec98e803-8937-4fdb-8662-3488c6a305f2\\\",\\\"systemUUID\\\":\\\"8e872109-adee-4b6d-91bf-d9ced28af93f\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T09:58:13Z is after 2025-08-24T17:21:41Z" Oct 14 09:58:13 crc kubenswrapper[4698]: I1014 09:58:13.102402 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:58:13 crc kubenswrapper[4698]: I1014 09:58:13.102445 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:58:13 crc kubenswrapper[4698]: I1014 09:58:13.102459 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:58:13 crc kubenswrapper[4698]: I1014 09:58:13.102480 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:58:13 crc kubenswrapper[4698]: I1014 09:58:13.102496 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:58:13Z","lastTransitionTime":"2025-10-14T09:58:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:58:13 crc kubenswrapper[4698]: I1014 09:58:13.122523 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:58:13 crc kubenswrapper[4698]: I1014 09:58:13.122567 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:58:13 crc kubenswrapper[4698]: I1014 09:58:13.122585 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:58:13 crc kubenswrapper[4698]: I1014 09:58:13.122605 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:58:13 crc kubenswrapper[4698]: I1014 09:58:13.122618 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:58:13Z","lastTransitionTime":"2025-10-14T09:58:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:58:13 crc kubenswrapper[4698]: E1014 09:58:13.137091 4698 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-14T09:58:13Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-14T09:58:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-14T09:58:13Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-14T09:58:13Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-14T09:58:13Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-14T09:58:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-14T09:58:13Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-14T09:58:13Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ec98e803-8937-4fdb-8662-3488c6a305f2\\\",\\\"systemUUID\\\":\\\"8e872109-adee-4b6d-91bf-d9ced28af93f\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T09:58:13Z is after 2025-08-24T17:21:41Z" Oct 14 09:58:13 crc kubenswrapper[4698]: E1014 09:58:13.137245 4698 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Oct 14 09:58:13 crc kubenswrapper[4698]: I1014 09:58:13.139141 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:58:13 crc kubenswrapper[4698]: I1014 09:58:13.139175 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:58:13 crc kubenswrapper[4698]: I1014 09:58:13.139191 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:58:13 crc kubenswrapper[4698]: I1014 09:58:13.139210 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:58:13 crc kubenswrapper[4698]: I1014 09:58:13.139223 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:58:13Z","lastTransitionTime":"2025-10-14T09:58:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:58:13 crc kubenswrapper[4698]: I1014 09:58:13.242969 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:58:13 crc kubenswrapper[4698]: I1014 09:58:13.243043 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:58:13 crc kubenswrapper[4698]: I1014 09:58:13.243060 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:58:13 crc kubenswrapper[4698]: I1014 09:58:13.243085 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:58:13 crc kubenswrapper[4698]: I1014 09:58:13.243101 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:58:13Z","lastTransitionTime":"2025-10-14T09:58:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:58:13 crc kubenswrapper[4698]: I1014 09:58:13.346625 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:58:13 crc kubenswrapper[4698]: I1014 09:58:13.346711 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:58:13 crc kubenswrapper[4698]: I1014 09:58:13.346737 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:58:13 crc kubenswrapper[4698]: I1014 09:58:13.346808 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:58:13 crc kubenswrapper[4698]: I1014 09:58:13.346835 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:58:13Z","lastTransitionTime":"2025-10-14T09:58:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:58:13 crc kubenswrapper[4698]: I1014 09:58:13.450529 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:58:13 crc kubenswrapper[4698]: I1014 09:58:13.450582 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:58:13 crc kubenswrapper[4698]: I1014 09:58:13.450596 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:58:13 crc kubenswrapper[4698]: I1014 09:58:13.450617 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:58:13 crc kubenswrapper[4698]: I1014 09:58:13.450629 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:58:13Z","lastTransitionTime":"2025-10-14T09:58:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:58:13 crc kubenswrapper[4698]: I1014 09:58:13.554200 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:58:13 crc kubenswrapper[4698]: I1014 09:58:13.554249 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:58:13 crc kubenswrapper[4698]: I1014 09:58:13.554259 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:58:13 crc kubenswrapper[4698]: I1014 09:58:13.554276 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:58:13 crc kubenswrapper[4698]: I1014 09:58:13.554290 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:58:13Z","lastTransitionTime":"2025-10-14T09:58:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:58:13 crc kubenswrapper[4698]: I1014 09:58:13.657845 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:58:13 crc kubenswrapper[4698]: I1014 09:58:13.657899 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:58:13 crc kubenswrapper[4698]: I1014 09:58:13.657937 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:58:13 crc kubenswrapper[4698]: I1014 09:58:13.657959 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:58:13 crc kubenswrapper[4698]: I1014 09:58:13.657972 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:58:13Z","lastTransitionTime":"2025-10-14T09:58:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:58:13 crc kubenswrapper[4698]: I1014 09:58:13.761446 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:58:13 crc kubenswrapper[4698]: I1014 09:58:13.761890 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:58:13 crc kubenswrapper[4698]: I1014 09:58:13.761993 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:58:13 crc kubenswrapper[4698]: I1014 09:58:13.762086 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:58:13 crc kubenswrapper[4698]: I1014 09:58:13.762383 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:58:13Z","lastTransitionTime":"2025-10-14T09:58:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:58:13 crc kubenswrapper[4698]: I1014 09:58:13.866072 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:58:13 crc kubenswrapper[4698]: I1014 09:58:13.866116 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:58:13 crc kubenswrapper[4698]: I1014 09:58:13.866132 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:58:13 crc kubenswrapper[4698]: I1014 09:58:13.866150 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:58:13 crc kubenswrapper[4698]: I1014 09:58:13.866160 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:58:13Z","lastTransitionTime":"2025-10-14T09:58:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:58:13 crc kubenswrapper[4698]: I1014 09:58:13.969740 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:58:13 crc kubenswrapper[4698]: I1014 09:58:13.969825 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:58:13 crc kubenswrapper[4698]: I1014 09:58:13.969840 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:58:13 crc kubenswrapper[4698]: I1014 09:58:13.969860 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:58:13 crc kubenswrapper[4698]: I1014 09:58:13.969872 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:58:13Z","lastTransitionTime":"2025-10-14T09:58:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 09:58:14 crc kubenswrapper[4698]: I1014 09:58:14.016894 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jbpnj" Oct 14 09:58:14 crc kubenswrapper[4698]: E1014 09:58:14.017018 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-jbpnj" podUID="41f5ac86-35f8-416c-bbfe-1e182975ec5c" Oct 14 09:58:14 crc kubenswrapper[4698]: I1014 09:58:14.073343 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:58:14 crc kubenswrapper[4698]: I1014 09:58:14.073383 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:58:14 crc kubenswrapper[4698]: I1014 09:58:14.073392 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:58:14 crc kubenswrapper[4698]: I1014 09:58:14.073407 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:58:14 crc kubenswrapper[4698]: I1014 09:58:14.073416 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:58:14Z","lastTransitionTime":"2025-10-14T09:58:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:58:14 crc kubenswrapper[4698]: I1014 09:58:14.177980 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:58:14 crc kubenswrapper[4698]: I1014 09:58:14.178416 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:58:14 crc kubenswrapper[4698]: I1014 09:58:14.178499 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:58:14 crc kubenswrapper[4698]: I1014 09:58:14.178580 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:58:14 crc kubenswrapper[4698]: I1014 09:58:14.178644 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:58:14Z","lastTransitionTime":"2025-10-14T09:58:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:58:14 crc kubenswrapper[4698]: I1014 09:58:14.281919 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:58:14 crc kubenswrapper[4698]: I1014 09:58:14.281968 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:58:14 crc kubenswrapper[4698]: I1014 09:58:14.281985 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:58:14 crc kubenswrapper[4698]: I1014 09:58:14.282007 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:58:14 crc kubenswrapper[4698]: I1014 09:58:14.282035 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:58:14Z","lastTransitionTime":"2025-10-14T09:58:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:58:14 crc kubenswrapper[4698]: I1014 09:58:14.385803 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:58:14 crc kubenswrapper[4698]: I1014 09:58:14.385885 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:58:14 crc kubenswrapper[4698]: I1014 09:58:14.385913 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:58:14 crc kubenswrapper[4698]: I1014 09:58:14.385945 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:58:14 crc kubenswrapper[4698]: I1014 09:58:14.385963 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:58:14Z","lastTransitionTime":"2025-10-14T09:58:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:58:14 crc kubenswrapper[4698]: I1014 09:58:14.489281 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:58:14 crc kubenswrapper[4698]: I1014 09:58:14.489716 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:58:14 crc kubenswrapper[4698]: I1014 09:58:14.489937 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:58:14 crc kubenswrapper[4698]: I1014 09:58:14.490093 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:58:14 crc kubenswrapper[4698]: I1014 09:58:14.490250 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:58:14Z","lastTransitionTime":"2025-10-14T09:58:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:58:14 crc kubenswrapper[4698]: I1014 09:58:14.593498 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:58:14 crc kubenswrapper[4698]: I1014 09:58:14.593572 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:58:14 crc kubenswrapper[4698]: I1014 09:58:14.593600 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:58:14 crc kubenswrapper[4698]: I1014 09:58:14.593636 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:58:14 crc kubenswrapper[4698]: I1014 09:58:14.593662 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:58:14Z","lastTransitionTime":"2025-10-14T09:58:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:58:14 crc kubenswrapper[4698]: I1014 09:58:14.696819 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:58:14 crc kubenswrapper[4698]: I1014 09:58:14.697725 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:58:14 crc kubenswrapper[4698]: I1014 09:58:14.697965 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:58:14 crc kubenswrapper[4698]: I1014 09:58:14.698162 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:58:14 crc kubenswrapper[4698]: I1014 09:58:14.698353 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:58:14Z","lastTransitionTime":"2025-10-14T09:58:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:58:14 crc kubenswrapper[4698]: I1014 09:58:14.801427 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:58:14 crc kubenswrapper[4698]: I1014 09:58:14.801486 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:58:14 crc kubenswrapper[4698]: I1014 09:58:14.801503 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:58:14 crc kubenswrapper[4698]: I1014 09:58:14.801526 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:58:14 crc kubenswrapper[4698]: I1014 09:58:14.801544 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:58:14Z","lastTransitionTime":"2025-10-14T09:58:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:58:14 crc kubenswrapper[4698]: I1014 09:58:14.904696 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:58:14 crc kubenswrapper[4698]: I1014 09:58:14.904879 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:58:14 crc kubenswrapper[4698]: I1014 09:58:14.904917 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:58:14 crc kubenswrapper[4698]: I1014 09:58:14.904951 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:58:14 crc kubenswrapper[4698]: I1014 09:58:14.904975 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:58:14Z","lastTransitionTime":"2025-10-14T09:58:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:58:15 crc kubenswrapper[4698]: I1014 09:58:15.007753 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:58:15 crc kubenswrapper[4698]: I1014 09:58:15.007836 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:58:15 crc kubenswrapper[4698]: I1014 09:58:15.007853 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:58:15 crc kubenswrapper[4698]: I1014 09:58:15.007875 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:58:15 crc kubenswrapper[4698]: I1014 09:58:15.007891 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:58:15Z","lastTransitionTime":"2025-10-14T09:58:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 09:58:15 crc kubenswrapper[4698]: I1014 09:58:15.017116 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 14 09:58:15 crc kubenswrapper[4698]: I1014 09:58:15.017201 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 14 09:58:15 crc kubenswrapper[4698]: E1014 09:58:15.017467 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 14 09:58:15 crc kubenswrapper[4698]: I1014 09:58:15.017513 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 14 09:58:15 crc kubenswrapper[4698]: E1014 09:58:15.017665 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 14 09:58:15 crc kubenswrapper[4698]: E1014 09:58:15.017837 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 14 09:58:15 crc kubenswrapper[4698]: I1014 09:58:15.018869 4698 scope.go:117] "RemoveContainer" containerID="2fb5dca43ec621632afbbbe21069c7f48b2b98b12882cb4ff5b9bf60ddf6bce6" Oct 14 09:58:15 crc kubenswrapper[4698]: I1014 09:58:15.110299 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:58:15 crc kubenswrapper[4698]: I1014 09:58:15.111047 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:58:15 crc kubenswrapper[4698]: I1014 09:58:15.111084 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:58:15 crc kubenswrapper[4698]: I1014 09:58:15.111114 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:58:15 crc kubenswrapper[4698]: I1014 09:58:15.111135 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:58:15Z","lastTransitionTime":"2025-10-14T09:58:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:58:15 crc kubenswrapper[4698]: I1014 09:58:15.214359 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:58:15 crc kubenswrapper[4698]: I1014 09:58:15.214434 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:58:15 crc kubenswrapper[4698]: I1014 09:58:15.214463 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:58:15 crc kubenswrapper[4698]: I1014 09:58:15.214499 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:58:15 crc kubenswrapper[4698]: I1014 09:58:15.214526 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:58:15Z","lastTransitionTime":"2025-10-14T09:58:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:58:15 crc kubenswrapper[4698]: I1014 09:58:15.317370 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:58:15 crc kubenswrapper[4698]: I1014 09:58:15.317419 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:58:15 crc kubenswrapper[4698]: I1014 09:58:15.317433 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:58:15 crc kubenswrapper[4698]: I1014 09:58:15.317459 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:58:15 crc kubenswrapper[4698]: I1014 09:58:15.317472 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:58:15Z","lastTransitionTime":"2025-10-14T09:58:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:58:15 crc kubenswrapper[4698]: I1014 09:58:15.420383 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:58:15 crc kubenswrapper[4698]: I1014 09:58:15.420455 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:58:15 crc kubenswrapper[4698]: I1014 09:58:15.420473 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:58:15 crc kubenswrapper[4698]: I1014 09:58:15.420503 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:58:15 crc kubenswrapper[4698]: I1014 09:58:15.420525 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:58:15Z","lastTransitionTime":"2025-10-14T09:58:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:58:15 crc kubenswrapper[4698]: I1014 09:58:15.509667 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-hspfz_d02f5359-81fc-4261-b995-e58c78bcec0e/ovnkube-controller/2.log" Oct 14 09:58:15 crc kubenswrapper[4698]: I1014 09:58:15.512831 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hspfz" event={"ID":"d02f5359-81fc-4261-b995-e58c78bcec0e","Type":"ContainerStarted","Data":"6c7231fbb7fd473f2878319c720ec92ea8eade012d5912e8ad80355542ffa3d2"} Oct 14 09:58:15 crc kubenswrapper[4698]: I1014 09:58:15.513915 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-hspfz" Oct 14 09:58:15 crc kubenswrapper[4698]: I1014 09:58:15.534418 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:58:15 crc kubenswrapper[4698]: I1014 09:58:15.534474 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:58:15 crc kubenswrapper[4698]: I1014 09:58:15.535438 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:58:15 crc kubenswrapper[4698]: I1014 09:58:15.535489 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:58:15 crc kubenswrapper[4698]: I1014 09:58:15.535516 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:58:15Z","lastTransitionTime":"2025-10-14T09:58:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:58:15 crc kubenswrapper[4698]: I1014 09:58:15.538410 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aaee81debae3e251e075e61aca075c075d32fd976f1977adecba0c834e89cabb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T09:58:15Z is after 2025-08-24T17:21:41Z" Oct 14 09:58:15 crc kubenswrapper[4698]: I1014 09:58:15.557086 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8rj7q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b3d7ebe7-24ac-4bb6-be80-db147dc1c604\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e44a1413e71cab92d1b5ee4c81cc65edc9925fa1e888d5d5670e55e1b60ee81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\
\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-95sjn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T09:57:18Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8rj7q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T09:58:15Z is after 2025-08-24T17:21:41Z" Oct 14 09:58:15 crc kubenswrapper[4698]: I1014 09:58:15.570192 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d453303d-af1d-4978-b4c5-ff12afadeb28\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:56:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d39574739cd1bd3498e575ad5961a3f656110963427ec6a5862160167dad9f1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a8b1dd92b7f979c51da79a23ddffeb82d73f170ded78b0995eca65ea14abbba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://57d68496a42ff5fb8359e4fd222a0e18e79e6ca885844d544d172e324325ee02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2fceda73199ee36e7cfe0b0bc827b42f7cc77db6b2c987828ffb57cacd1b8a6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://2fceda73199ee36e7cfe0b0bc827b42f7cc77db6b2c987828ffb57cacd1b8a6b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T09:56:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T09:56:59Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T09:56:59Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T09:58:15Z is after 2025-08-24T17:21:41Z" Oct 14 09:58:15 crc kubenswrapper[4698]: I1014 09:58:15.581213 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-pfxrp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e00aa977-8736-4b4d-8d58-c3d13879c49a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ad
f04967efc193a93227043cfb6bb56f0ec1a7af90e351b7a618e582ddd8c41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c9lxj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T09:57:18Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-pfxrp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T09:58:15Z is after 2025-08-24T17:21:41Z" Oct 14 09:58:15 crc kubenswrapper[4698]: I1014 09:58:15.593924 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5twvn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e9f983f-10a0-43b7-8590-346577a561ef\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://03bc09f64706818fd5d6b0d3eeea206f665d7cdb12ce6ae31c4706a3936dd0f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8x7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9090d92cdf10a884e39938f3d7908d63cb91ee3ea7832cb788aecaa47c70d753\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9090d92cdf10a884e39938f3d7908d63cb91ee3ea7832cb788aecaa47c70d753\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T09:57:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T09:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8x7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://20e7809019d6fa43f65e27abe934ed5dfc51a3afb0a2d9391fa4edd67fc03ac9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://20e7809019d6fa43f65e27abe934ed5dfc51a3afb0a2d9391fa4edd67fc03ac9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T09:57:20Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8x7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c61e207f4234cf9a59bd787fc476ff4bc2d1abd6f299b2855970981e5783398\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0c61e207f4234cf9a59bd787fc476ff4bc2d1abd6f299b2855970981e5783398\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T09:57:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T09:57:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8x7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1abec
a9df52e5628432a4cb0be0f212a643a644abfe303559ec7d90f3fd3955a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1abeca9df52e5628432a4cb0be0f212a643a644abfe303559ec7d90f3fd3955a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T09:57:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T09:57:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8x7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0654ea71914d478220611495dbefd1e47b1309aa18c6a05b21a4a45f0d8f1ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c0654ea71914d478220611495dbefd1e47b1309aa18c6a05b21a4a45f0d8f1ca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T09:57:23Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2025-10-14T09:57:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8x7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5307b806fcee628d7552a399a950a096745a8eb67c3f72c5a10854f042b992a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5307b806fcee628d7552a399a950a096745a8eb67c3f72c5a10854f042b992a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T09:57:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T09:57:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8x7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T09:57:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5twvn\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T09:58:15Z is after 2025-08-24T17:21:41Z" Oct 14 09:58:15 crc kubenswrapper[4698]: I1014 09:58:15.613969 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hspfz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d02f5359-81fc-4261-b995-e58c78bcec0e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://baefa979aee6f8fbe41df9542964f3c7b2a4331ec5111d2f622ec7ca41a0d96e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjlwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://876998f4dcb79a6bc0faa35871730f5d2f9a07f94baf6da29ed9a9e794705ee5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjlwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e5759411e3d5ea6fa7515b4cd1255735c870b8c035fe586a38d2f436b752118\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjlwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6e9733c3f4d25662c9da6201d0feee797de24acebb7d6e7c67d72709aae95db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:21Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjlwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://90b4b75613b321be60e549ab6ab2eaaab57fd6b49aa01f5254a0e83ac2e0167e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjlwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://11274a26e2292cdf2b793e6290657ed3cc737d1454b39f33b8f0272f67f70f3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjlwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c7231fbb7fd473f2878319c720ec92ea8eade012d5912e8ad80355542ffa3d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2fb5dca43ec621632afbbbe21069c7f48b2b98b12882cb4ff5b9bf60ddf6bce6\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-14T09:57:46Z\\\",\\\"message\\\":\\\"ketplace-operator-metrics_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-marketplace/marketplace-operator-metrics\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, 
AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.53\\\\\\\", Port:8383, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}, services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.53\\\\\\\", Port:8081, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI1014 09:57:46.069019 6340 services_controller.go:452] Built service openshift-marketplace/marketplace-operator-metrics per-node LB for network=default: []services.LB{}\\\\nI1014 09:57:46.068992 6340 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-machine-api/machine-api-controllers]} name:Service_openshift-machine-api/machine-api-controllers_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} 
vips:{\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-14T09:57:45Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:58:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"nam
e\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjlwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://626a1ca2a8aa5d6c1b4078a0bc7b0aa540656abcedd8be206fbb93cdc96c1d95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjlwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f7d7f89348e04b617ba4ec3cf76f4fefdd36492c656aed8141f3775a7ca8e3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"rea
dy\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0f7d7f89348e04b617ba4ec3cf76f4fefdd36492c656aed8141f3775a7ca8e3b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T09:57:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T09:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjlwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T09:57:18Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hspfz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T09:58:15Z is after 2025-08-24T17:21:41Z" Oct 14 09:58:15 crc kubenswrapper[4698]: I1014 09:58:15.627454 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ndfs7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dba065f3-2084-442a-9f77-a1dfb007aa0a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://41d90487b005df3b6a9b8a3cbb8860b13fcc2ff7ca56979998ab7258f10b681c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wtc5t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7bfa794859b5471a4dfe11ac227eb36674c31
36b170fafdecaf916eb54d63a4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wtc5t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T09:57:31Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-ndfs7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T09:58:15Z is after 2025-08-24T17:21:41Z" Oct 14 09:58:15 crc kubenswrapper[4698]: I1014 09:58:15.638450 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:58:15 crc kubenswrapper[4698]: I1014 09:58:15.638478 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:58:15 crc kubenswrapper[4698]: I1014 09:58:15.638490 4698 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:58:15 crc kubenswrapper[4698]: I1014 09:58:15.638509 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:58:15 crc kubenswrapper[4698]: I1014 09:58:15.638523 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:58:15Z","lastTransitionTime":"2025-10-14T09:58:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 09:58:15 crc kubenswrapper[4698]: I1014 09:58:15.641525 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://046b339570690752cfbe63e358065d3b58ef84c6708729abc1f590ff4c880521\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf8
6d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T09:58:15Z is after 2025-08-24T17:21:41Z" Oct 14 09:58:15 crc kubenswrapper[4698]: I1014 09:58:15.656417 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"message\\\":\\\"containers 
with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T09:58:15Z is after 2025-08-24T17:21:41Z" Oct 14 09:58:15 crc kubenswrapper[4698]: I1014 09:58:15.670348 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6bbd953b34291d62c5b40ab6339c4709be89d1983748419cabf1adff245af77f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b5dcb74bf1caafb12296355d176dd1cda1c06ecea8293c0cc36d517213fe6bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T09:58:15Z is after 2025-08-24T17:21:41Z" Oct 14 09:58:15 crc kubenswrapper[4698]: I1014 09:58:15.685819 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T09:58:15Z is after 2025-08-24T17:21:41Z" Oct 14 09:58:15 crc kubenswrapper[4698]: I1014 09:58:15.702016 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-b7cbk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fbf10bbc-318d-4f46-83a0-fdbad9888201\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:58:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:58:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://52c9a8ccad3eed5af66c0178544ca46fabcfaab76d88471b2a62606f2f860522\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4a0b9fe188f41927a342dc4928eaa5f27d151d9901af15f33ce252bdbfbb3be6\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-14T09:58:06Z\\\",\\\"message\\\":\\\"2025-10-14T09:57:20+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_f1354d92-2895-4535-acf9-93f36eacbc00\\\\n2025-10-14T09:57:20+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_f1354d92-2895-4535-acf9-93f36eacbc00 to /host/opt/cni/bin/\\\\n2025-10-14T09:57:21Z [verbose] multus-daemon started\\\\n2025-10-14T09:57:21Z [verbose] 
Readiness Indicator file check\\\\n2025-10-14T09:58:06Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-14T09:57:19Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:58:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tkghj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T09:57:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-b7cbk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T09:58:15Z is after 2025-08-24T17:21:41Z" Oct 14 09:58:15 crc kubenswrapper[4698]: I1014 09:58:15.716732 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-lp4sk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c359a8fc-1e2f-49af-8da2-719d52bd969a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://082e0edeedb2cabe07e14812f2406a98f39fdf9923f562c33ccc9a761d0e2472\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lzl92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8068aecfbfc156636e404dba3daa36e0bebb92ec
f3c63158eaebf9a03cdc96b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lzl92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T09:57:18Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-lp4sk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T09:58:15Z is after 2025-08-24T17:21:41Z" Oct 14 09:58:15 crc kubenswrapper[4698]: I1014 09:58:15.730123 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-jbpnj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"41f5ac86-35f8-416c-bbfe-1e182975ec5c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:32Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:32Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7f52v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7f52v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T09:57:32Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-jbpnj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T09:58:15Z is after 2025-08-24T17:21:41Z" Oct 14 09:58:15 crc 
kubenswrapper[4698]: I1014 09:58:15.741665 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:58:15 crc kubenswrapper[4698]: I1014 09:58:15.741713 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:58:15 crc kubenswrapper[4698]: I1014 09:58:15.741728 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:58:15 crc kubenswrapper[4698]: I1014 09:58:15.741749 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:58:15 crc kubenswrapper[4698]: I1014 09:58:15.741762 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:58:15Z","lastTransitionTime":"2025-10-14T09:58:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:58:15 crc kubenswrapper[4698]: I1014 09:58:15.750097 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"32240a01-1692-4f43-aebd-2b04d9b2435e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:56:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0fcdc7699fee72d9dbf3ad7df185ac8c884125781080739e4245128af2e41040\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-d
ir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d32efa8c63456d23b3824db4a4debb51fa4d13d1fd5eb6a488b9392d96073435\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a78e7d6532fda47f50da010b729a9108961a3d5c79191517a86f00714ccb1163\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a013bd7c3d0aeb5289ee602dd8fcbff2b98aae30b99ccc8d87605732b7f469eb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945
c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a8cea8367d4864478508221648a52582ee794ca4eb7b48f112ca0d78d97c2f7e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"message\\\":\\\"file observer\\\\nW1014 09:57:17.768150 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1014 09:57:17.768243 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1014 09:57:17.769073 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-133410875/tls.crt::/tmp/serving-cert-133410875/tls.key\\\\\\\"\\\\nI1014 09:57:18.101848 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1014 09:57:18.107433 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1014 09:57:18.107456 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1014 09:57:18.107479 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1014 09:57:18.107484 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1014 09:57:18.112858 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1014 09:57:18.112875 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1014 09:57:18.112883 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1014 09:57:18.112889 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1014 09:57:18.112893 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1014 09:57:18.112896 1 
secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1014 09:57:18.112900 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1014 09:57:18.112903 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1014 09:57:18.114133 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-14T09:57:12Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2d12c9c114016d1302fa97515b0a59a82a4524b77e5d22cbec155beb6a52bce\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:00Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf7a250bbbc63f0ce78ae1c27f82643a02dc264c3b83555a4cc72a788eda52d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d347
20243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cf7a250bbbc63f0ce78ae1c27f82643a02dc264c3b83555a4cc72a788eda52d5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T09:56:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T09:56:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T09:56:59Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T09:58:15Z is after 2025-08-24T17:21:41Z" Oct 14 09:58:15 crc kubenswrapper[4698]: I1014 09:58:15.770159 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"32a7378b-ad21-49a8-9382-e7e5fbffb2f5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:56:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:56:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45beb2bbd344fbd265eca554410f31c493b55815bcac8dbb8c30c592b40dcfe2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9a23f4949446d47321258b172e07f6c592cd9921650920f8af93a5d749a61dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:56:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1fc70d614848f0b4bc342659099f9c617db052ebb995716f9cdb1113f4f78901\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://71c3764056ab55aa9f75e0bf62c0c7eed5440e9d580087ca9ce266611dd8c292\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-10-14T09:57:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T09:56:59Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T09:58:15Z is after 2025-08-24T17:21:41Z" Oct 14 09:58:15 crc kubenswrapper[4698]: I1014 09:58:15.787741 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T09:58:15Z is after 2025-08-24T17:21:41Z" Oct 14 09:58:15 crc kubenswrapper[4698]: I1014 09:58:15.844972 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:58:15 crc kubenswrapper[4698]: I1014 09:58:15.845034 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:58:15 crc kubenswrapper[4698]: I1014 09:58:15.845051 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:58:15 crc 
kubenswrapper[4698]: I1014 09:58:15.845080 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:58:15 crc kubenswrapper[4698]: I1014 09:58:15.845099 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:58:15Z","lastTransitionTime":"2025-10-14T09:58:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 09:58:15 crc kubenswrapper[4698]: I1014 09:58:15.948498 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:58:15 crc kubenswrapper[4698]: I1014 09:58:15.948570 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:58:15 crc kubenswrapper[4698]: I1014 09:58:15.948593 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:58:15 crc kubenswrapper[4698]: I1014 09:58:15.948622 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:58:15 crc kubenswrapper[4698]: I1014 09:58:15.948644 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:58:15Z","lastTransitionTime":"2025-10-14T09:58:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 09:58:16 crc kubenswrapper[4698]: I1014 09:58:16.016870 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-jbpnj" Oct 14 09:58:16 crc kubenswrapper[4698]: E1014 09:58:16.017123 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jbpnj" podUID="41f5ac86-35f8-416c-bbfe-1e182975ec5c" Oct 14 09:58:16 crc kubenswrapper[4698]: I1014 09:58:16.051870 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:58:16 crc kubenswrapper[4698]: I1014 09:58:16.051922 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:58:16 crc kubenswrapper[4698]: I1014 09:58:16.051933 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:58:16 crc kubenswrapper[4698]: I1014 09:58:16.051954 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:58:16 crc kubenswrapper[4698]: I1014 09:58:16.051966 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:58:16Z","lastTransitionTime":"2025-10-14T09:58:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:58:16 crc kubenswrapper[4698]: I1014 09:58:16.154891 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:58:16 crc kubenswrapper[4698]: I1014 09:58:16.154964 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:58:16 crc kubenswrapper[4698]: I1014 09:58:16.154983 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:58:16 crc kubenswrapper[4698]: I1014 09:58:16.155009 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:58:16 crc kubenswrapper[4698]: I1014 09:58:16.155027 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:58:16Z","lastTransitionTime":"2025-10-14T09:58:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:58:16 crc kubenswrapper[4698]: I1014 09:58:16.258923 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:58:16 crc kubenswrapper[4698]: I1014 09:58:16.258994 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:58:16 crc kubenswrapper[4698]: I1014 09:58:16.259014 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:58:16 crc kubenswrapper[4698]: I1014 09:58:16.259041 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:58:16 crc kubenswrapper[4698]: I1014 09:58:16.259060 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:58:16Z","lastTransitionTime":"2025-10-14T09:58:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:58:16 crc kubenswrapper[4698]: I1014 09:58:16.362757 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:58:16 crc kubenswrapper[4698]: I1014 09:58:16.362863 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:58:16 crc kubenswrapper[4698]: I1014 09:58:16.362877 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:58:16 crc kubenswrapper[4698]: I1014 09:58:16.362902 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:58:16 crc kubenswrapper[4698]: I1014 09:58:16.362921 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:58:16Z","lastTransitionTime":"2025-10-14T09:58:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:58:16 crc kubenswrapper[4698]: I1014 09:58:16.466632 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:58:16 crc kubenswrapper[4698]: I1014 09:58:16.466716 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:58:16 crc kubenswrapper[4698]: I1014 09:58:16.466735 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:58:16 crc kubenswrapper[4698]: I1014 09:58:16.466794 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:58:16 crc kubenswrapper[4698]: I1014 09:58:16.466835 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:58:16Z","lastTransitionTime":"2025-10-14T09:58:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:58:16 crc kubenswrapper[4698]: I1014 09:58:16.519484 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-hspfz_d02f5359-81fc-4261-b995-e58c78bcec0e/ovnkube-controller/3.log" Oct 14 09:58:16 crc kubenswrapper[4698]: I1014 09:58:16.520607 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-hspfz_d02f5359-81fc-4261-b995-e58c78bcec0e/ovnkube-controller/2.log" Oct 14 09:58:16 crc kubenswrapper[4698]: I1014 09:58:16.524967 4698 generic.go:334] "Generic (PLEG): container finished" podID="d02f5359-81fc-4261-b995-e58c78bcec0e" containerID="6c7231fbb7fd473f2878319c720ec92ea8eade012d5912e8ad80355542ffa3d2" exitCode=1 Oct 14 09:58:16 crc kubenswrapper[4698]: I1014 09:58:16.525024 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hspfz" event={"ID":"d02f5359-81fc-4261-b995-e58c78bcec0e","Type":"ContainerDied","Data":"6c7231fbb7fd473f2878319c720ec92ea8eade012d5912e8ad80355542ffa3d2"} Oct 14 09:58:16 crc kubenswrapper[4698]: I1014 09:58:16.525081 4698 scope.go:117] "RemoveContainer" containerID="2fb5dca43ec621632afbbbe21069c7f48b2b98b12882cb4ff5b9bf60ddf6bce6" Oct 14 09:58:16 crc kubenswrapper[4698]: I1014 09:58:16.526403 4698 scope.go:117] "RemoveContainer" containerID="6c7231fbb7fd473f2878319c720ec92ea8eade012d5912e8ad80355542ffa3d2" Oct 14 09:58:16 crc kubenswrapper[4698]: E1014 09:58:16.526721 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-hspfz_openshift-ovn-kubernetes(d02f5359-81fc-4261-b995-e58c78bcec0e)\"" pod="openshift-ovn-kubernetes/ovnkube-node-hspfz" podUID="d02f5359-81fc-4261-b995-e58c78bcec0e" Oct 14 09:58:16 crc kubenswrapper[4698]: I1014 09:58:16.552232 4698 status_manager.go:875] "Failed to update 
status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"32240a01-1692-4f43-aebd-2b04d9b2435e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:56:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0fcdc7699fee72d9dbf3ad7df185ac8c884125781080739e4245128af2e41040\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d32efa8c63456d23b3824db4a4debb51fa4d13d1fd5eb6a488b9392d96073435\\\",\\\"image\\\":\\\"quay.io/crcont/o
penshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a78e7d6532fda47f50da010b729a9108961a3d5c79191517a86f00714ccb1163\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a013bd7c3d0aeb5289ee602dd8fcbff2b98aae30b99ccc8d87605732b7f469eb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a8cea8367d4864478508221648a52582ee794ca4eb7
b48f112ca0d78d97c2f7e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"message\\\":\\\"file observer\\\\nW1014 09:57:17.768150 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1014 09:57:17.768243 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1014 09:57:17.769073 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-133410875/tls.crt::/tmp/serving-cert-133410875/tls.key\\\\\\\"\\\\nI1014 09:57:18.101848 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1014 09:57:18.107433 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1014 09:57:18.107456 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1014 09:57:18.107479 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1014 09:57:18.107484 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1014 09:57:18.112858 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1014 09:57:18.112875 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1014 09:57:18.112883 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1014 09:57:18.112889 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1014 09:57:18.112893 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1014 09:57:18.112896 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1014 09:57:18.112900 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1014 09:57:18.112903 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1014 09:57:18.114133 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-14T09:57:12Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2d12c9c114016d1302fa97515b0a59a82a4524b77e5d22cbec155beb6a52bce\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:00Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf7a250bbbc63f0ce78ae1c27f82643a02dc264c3b83555a4cc72a788eda52d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"
state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cf7a250bbbc63f0ce78ae1c27f82643a02dc264c3b83555a4cc72a788eda52d5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T09:56:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T09:56:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T09:56:59Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T09:58:16Z is after 2025-08-24T17:21:41Z" Oct 14 09:58:16 crc kubenswrapper[4698]: I1014 09:58:16.568718 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:58:16 crc kubenswrapper[4698]: I1014 09:58:16.568825 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:58:16 crc kubenswrapper[4698]: I1014 09:58:16.568849 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:58:16 crc kubenswrapper[4698]: I1014 09:58:16.568881 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:58:16 crc kubenswrapper[4698]: I1014 09:58:16.568905 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:58:16Z","lastTransitionTime":"2025-10-14T09:58:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 09:58:16 crc kubenswrapper[4698]: I1014 09:58:16.571687 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"32a7378b-ad21-49a8-9382-e7e5fbffb2f5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:56:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:56:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45beb2bbd344fbd265eca554410f31c493b55815bcac8dbb8c30c592b40dcfe2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/sta
tic-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9a23f4949446d47321258b172e07f6c592cd9921650920f8af93a5d749a61dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:56:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1fc70d614848f0b4bc342659099f9c617db052ebb995716f9cdb1113f4f78901\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://71c3764056ab55aa9f75e0bf62c0c7eed5440e9d580087ca9ce266611dd8c292\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"i
mageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T09:56:59Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T09:58:16Z is after 2025-08-24T17:21:41Z" Oct 14 09:58:16 crc kubenswrapper[4698]: I1014 09:58:16.591146 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready 
status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T09:58:16Z is after 2025-08-24T17:21:41Z" Oct 14 09:58:16 crc kubenswrapper[4698]: I1014 09:58:16.608285 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aaee81debae3e251e075e61aca075c075d32fd976f1977adecba0c834e89cabb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-10-14T09:58:16Z is after 2025-08-24T17:21:41Z" Oct 14 09:58:16 crc kubenswrapper[4698]: I1014 09:58:16.625223 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8rj7q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b3d7ebe7-24ac-4bb6-be80-db147dc1c604\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e44a1413e71cab92d1b5ee4c81cc65edc9925fa1e888d5d5670e55e1b60ee81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-95sjn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T09:57:18Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8rj7q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T09:58:16Z is after 2025-08-24T17:21:41Z" Oct 14 09:58:16 crc kubenswrapper[4698]: I1014 09:58:16.640496 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-pfxrp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e00aa977-8736-4b4d-8d58-c3d13879c49a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4adf04967efc193a93227043cfb6bb56f0ec1a7af90e351b7a618e
582ddd8c41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c9lxj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T09:57:18Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-pfxrp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T09:58:16Z is after 2025-08-24T17:21:41Z" Oct 14 09:58:16 crc kubenswrapper[4698]: I1014 09:58:16.664901 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5twvn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e9f983f-10a0-43b7-8590-346577a561ef\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://03bc09f64706818fd5d6b0d3eeea206f665d7cdb12ce6ae31c4706a3936dd0f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8x7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9090d92cdf10a884e39938f3d7908d63cb91ee3ea7832cb788aecaa47c70d753\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9090d92cdf10a884e39938f3d7908d63cb91ee3ea7832cb788aecaa47c70d753\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T09:57:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T09:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8x7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://20e7809019d6fa43f65e27abe934ed5dfc51a3afb0a2d9391fa4edd67fc03ac9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://20e7809019d6fa43f65e27abe934ed5dfc51a3afb0a2d9391fa4edd67fc03ac9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T09:57:20Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8x7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c61e207f4234cf9a59bd787fc476ff4bc2d1abd6f299b2855970981e5783398\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0c61e207f4234cf9a59bd787fc476ff4bc2d1abd6f299b2855970981e5783398\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T09:57:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T09:57:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8x7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1abec
a9df52e5628432a4cb0be0f212a643a644abfe303559ec7d90f3fd3955a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1abeca9df52e5628432a4cb0be0f212a643a644abfe303559ec7d90f3fd3955a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T09:57:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T09:57:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8x7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0654ea71914d478220611495dbefd1e47b1309aa18c6a05b21a4a45f0d8f1ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c0654ea71914d478220611495dbefd1e47b1309aa18c6a05b21a4a45f0d8f1ca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T09:57:23Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2025-10-14T09:57:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8x7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5307b806fcee628d7552a399a950a096745a8eb67c3f72c5a10854f042b992a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5307b806fcee628d7552a399a950a096745a8eb67c3f72c5a10854f042b992a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T09:57:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T09:57:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8x7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T09:57:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5twvn\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T09:58:16Z is after 2025-08-24T17:21:41Z" Oct 14 09:58:16 crc kubenswrapper[4698]: I1014 09:58:16.671786 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:58:16 crc kubenswrapper[4698]: I1014 09:58:16.671850 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:58:16 crc kubenswrapper[4698]: I1014 09:58:16.671868 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:58:16 crc kubenswrapper[4698]: I1014 09:58:16.671889 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:58:16 crc kubenswrapper[4698]: I1014 09:58:16.671910 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:58:16Z","lastTransitionTime":"2025-10-14T09:58:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:58:16 crc kubenswrapper[4698]: I1014 09:58:16.694177 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hspfz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d02f5359-81fc-4261-b995-e58c78bcec0e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://baefa979aee6f8fbe41df9542964f3c7b2a4331ec5111d2f622ec7ca41a0d96e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjlwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://876998f4dcb79a6bc0faa35871730f5d2f9a07f94baf6da29ed9a9e794705ee5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjlwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e5759411e3d5ea6fa7515b4cd1255735c870b8c035fe586a38d2f436b752118\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjlwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6e9733c3f4d25662c9da6201d0feee797de24acebb7d6e7c67d72709aae95db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:21Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjlwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://90b4b75613b321be60e549ab6ab2eaaab57fd6b49aa01f5254a0e83ac2e0167e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjlwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://11274a26e2292cdf2b793e6290657ed3cc737d1454b39f33b8f0272f67f70f3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjlwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c7231fbb7fd473f2878319c720ec92ea8eade012d5912e8ad80355542ffa3d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2fb5dca43ec621632afbbbe21069c7f48b2b98b12882cb4ff5b9bf60ddf6bce6\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-14T09:57:46Z\\\",\\\"message\\\":\\\"ketplace-operator-metrics_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-marketplace/marketplace-operator-metrics\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, 
AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.53\\\\\\\", Port:8383, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}, services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.53\\\\\\\", Port:8081, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI1014 09:57:46.069019 6340 services_controller.go:452] Built service openshift-marketplace/marketplace-operator-metrics per-node LB for network=default: []services.LB{}\\\\nI1014 09:57:46.068992 6340 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-machine-api/machine-api-controllers]} name:Service_openshift-machine-api/machine-api-controllers_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-14T09:57:45Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6c7231fbb7fd473f2878319c720ec92ea8eade012d5912e8ad80355542ffa3d2\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-14T09:58:16Z\\\",\\\"message\\\":\\\"handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post 
\\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T09:58:15Z is after 2025-08-24T17:21:41Z]\\\\nI1014 09:58:16.059589 6703 services_controller.go:473] Services do not match for network=default, existing lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-machine-config-operator/machine-config-daemon_TCP_cluster\\\\\\\", UUID:\\\\\\\"a36f6289-d09f-43f8-8a8a-c9d2cc11eb0d\\\\\\\", Protocol:\\\\\\\"tcp\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-machine-config-operator/machine-config-daemon\\\\\\\"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}, built lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-machine-config-operator/machine-config-daemon_TCP_cluster\\\\\\\", 
UUID:\\\\\\\"\\\\\\\",\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-14T09:58:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\
",\\\"name\\\":\\\"kube-api-access-pjlwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://626a1ca2a8aa5d6c1b4078a0bc7b0aa540656abcedd8be206fbb93cdc96c1d95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjlwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f7d7f89348e04b617ba4ec3cf76f4fefdd36492c656aed8141f3775a7ca8e3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0f7d7f89348e04b617ba4ec3cf76f4fefdd36492c656a
ed8141f3775a7ca8e3b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T09:57:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T09:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjlwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T09:57:18Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hspfz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T09:58:16Z is after 2025-08-24T17:21:41Z" Oct 14 09:58:16 crc kubenswrapper[4698]: I1014 09:58:16.706202 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ndfs7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dba065f3-2084-442a-9f77-a1dfb007aa0a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://41d90487b005df3b6a9b8a3cbb8860b13fcc2ff7ca56979998ab7258f10b681c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wtc5t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7bfa794859b5471a4dfe11ac227eb36674c31
36b170fafdecaf916eb54d63a4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wtc5t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T09:57:31Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-ndfs7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T09:58:16Z is after 2025-08-24T17:21:41Z" Oct 14 09:58:16 crc kubenswrapper[4698]: I1014 09:58:16.719068 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d453303d-af1d-4978-b4c5-ff12afadeb28\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:56:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d39574739cd1bd3498e575ad5961a3f656110963427ec6a5862160167dad9f1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a8b1dd92b7f979c51da79a23ddffeb82d73f170ded78b0995eca65ea14abbba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://57d68496a42ff5fb8359e4fd222a0e18e79e6ca885844d544d172e324325ee02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2fceda73199ee36e7cfe0b0bc827b42f7cc77db6b2c987828ffb57cacd1b8a6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://2fceda73199ee36e7cfe0b0bc827b42f7cc77db6b2c987828ffb57cacd1b8a6b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T09:56:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T09:56:59Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T09:56:59Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T09:58:16Z is after 2025-08-24T17:21:41Z" Oct 14 09:58:16 crc kubenswrapper[4698]: I1014 09:58:16.733043 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T09:58:16Z is after 2025-08-24T17:21:41Z" Oct 14 09:58:16 crc kubenswrapper[4698]: I1014 09:58:16.746066 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6bbd953b34291d62c5b40ab6339c4709be89d1983748419cabf1adff245af77f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b5dcb74bf1caafb12296355d176dd1cda1c06ecea8293c0cc36d517213fe6bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T09:58:16Z is after 2025-08-24T17:21:41Z" Oct 14 09:58:16 crc kubenswrapper[4698]: I1014 09:58:16.758219 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T09:58:16Z is after 2025-08-24T17:21:41Z" Oct 14 09:58:16 crc kubenswrapper[4698]: I1014 09:58:16.769596 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-b7cbk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fbf10bbc-318d-4f46-83a0-fdbad9888201\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:58:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:58:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://52c9a8ccad3eed5af66c0178544ca46fabcfaab76d88471b2a62606f2f860522\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4a0b9fe188f41927a342dc4928eaa5f27d151d9901af15f33ce252bdbfbb3be6\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-14T09:58:06Z\\\",\\\"message\\\":\\\"2025-10-14T09:57:20+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_f1354d92-2895-4535-acf9-93f36eacbc00\\\\n2025-10-14T09:57:20+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_f1354d92-2895-4535-acf9-93f36eacbc00 to /host/opt/cni/bin/\\\\n2025-10-14T09:57:21Z [verbose] multus-daemon started\\\\n2025-10-14T09:57:21Z [verbose] 
Readiness Indicator file check\\\\n2025-10-14T09:58:06Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-14T09:57:19Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:58:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tkghj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T09:57:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-b7cbk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T09:58:16Z is after 2025-08-24T17:21:41Z" Oct 14 09:58:16 crc kubenswrapper[4698]: I1014 09:58:16.774239 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:58:16 crc kubenswrapper[4698]: I1014 09:58:16.774277 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:58:16 crc kubenswrapper[4698]: I1014 09:58:16.774289 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:58:16 crc kubenswrapper[4698]: I1014 09:58:16.774306 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:58:16 crc kubenswrapper[4698]: I1014 09:58:16.774319 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:58:16Z","lastTransitionTime":"2025-10-14T09:58:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:58:16 crc kubenswrapper[4698]: I1014 09:58:16.781469 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-lp4sk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c359a8fc-1e2f-49af-8da2-719d52bd969a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://082e0edeedb2cabe07e14812f2406a98f39fdf9923f562c33ccc9a761d0e2472\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"}
,{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lzl92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8068aecfbfc156636e404dba3daa36e0bebb92ecf3c63158eaebf9a03cdc96b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lzl92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T09:57:18Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-lp4sk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T09:58:16Z is after 2025-08-24T17:21:41Z" Oct 14 09:58:16 crc kubenswrapper[4698]: I1014 09:58:16.791876 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-jbpnj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"41f5ac86-35f8-416c-bbfe-1e182975ec5c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:32Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:32Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7f52v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7f52v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T09:57:32Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-jbpnj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T09:58:16Z is after 2025-08-24T17:21:41Z" Oct 14 09:58:16 crc 
kubenswrapper[4698]: I1014 09:58:16.804829 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://046b339570690752cfbe63e358065d3b58ef84c6708729abc1f590ff4c880521\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T09:58:16Z is after 2025-08-24T17:21:41Z" Oct 14 09:58:16 crc kubenswrapper[4698]: I1014 09:58:16.878227 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:58:16 crc kubenswrapper[4698]: I1014 09:58:16.878300 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:58:16 crc kubenswrapper[4698]: I1014 09:58:16.878322 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:58:16 crc kubenswrapper[4698]: I1014 09:58:16.878543 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:58:16 crc kubenswrapper[4698]: I1014 09:58:16.878559 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:58:16Z","lastTransitionTime":"2025-10-14T09:58:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:58:16 crc kubenswrapper[4698]: I1014 09:58:16.980985 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:58:16 crc kubenswrapper[4698]: I1014 09:58:16.981045 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:58:16 crc kubenswrapper[4698]: I1014 09:58:16.981057 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:58:16 crc kubenswrapper[4698]: I1014 09:58:16.981076 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:58:16 crc kubenswrapper[4698]: I1014 09:58:16.981088 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:58:16Z","lastTransitionTime":"2025-10-14T09:58:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 09:58:17 crc kubenswrapper[4698]: I1014 09:58:17.016011 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 14 09:58:17 crc kubenswrapper[4698]: I1014 09:58:17.016106 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 14 09:58:17 crc kubenswrapper[4698]: I1014 09:58:17.016109 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 14 09:58:17 crc kubenswrapper[4698]: E1014 09:58:17.016286 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 14 09:58:17 crc kubenswrapper[4698]: E1014 09:58:17.016448 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 14 09:58:17 crc kubenswrapper[4698]: E1014 09:58:17.016628 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 14 09:58:17 crc kubenswrapper[4698]: I1014 09:58:17.083149 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:58:17 crc kubenswrapper[4698]: I1014 09:58:17.083214 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:58:17 crc kubenswrapper[4698]: I1014 09:58:17.083233 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:58:17 crc kubenswrapper[4698]: I1014 09:58:17.083260 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:58:17 crc kubenswrapper[4698]: I1014 09:58:17.083277 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:58:17Z","lastTransitionTime":"2025-10-14T09:58:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:58:17 crc kubenswrapper[4698]: I1014 09:58:17.185946 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:58:17 crc kubenswrapper[4698]: I1014 09:58:17.186019 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:58:17 crc kubenswrapper[4698]: I1014 09:58:17.186036 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:58:17 crc kubenswrapper[4698]: I1014 09:58:17.186059 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:58:17 crc kubenswrapper[4698]: I1014 09:58:17.186077 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:58:17Z","lastTransitionTime":"2025-10-14T09:58:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:58:17 crc kubenswrapper[4698]: I1014 09:58:17.289859 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:58:17 crc kubenswrapper[4698]: I1014 09:58:17.289915 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:58:17 crc kubenswrapper[4698]: I1014 09:58:17.289931 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:58:17 crc kubenswrapper[4698]: I1014 09:58:17.289953 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:58:17 crc kubenswrapper[4698]: I1014 09:58:17.289970 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:58:17Z","lastTransitionTime":"2025-10-14T09:58:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:58:17 crc kubenswrapper[4698]: I1014 09:58:17.392843 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:58:17 crc kubenswrapper[4698]: I1014 09:58:17.392913 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:58:17 crc kubenswrapper[4698]: I1014 09:58:17.392930 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:58:17 crc kubenswrapper[4698]: I1014 09:58:17.392953 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:58:17 crc kubenswrapper[4698]: I1014 09:58:17.392970 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:58:17Z","lastTransitionTime":"2025-10-14T09:58:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:58:17 crc kubenswrapper[4698]: I1014 09:58:17.496075 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:58:17 crc kubenswrapper[4698]: I1014 09:58:17.496151 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:58:17 crc kubenswrapper[4698]: I1014 09:58:17.496174 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:58:17 crc kubenswrapper[4698]: I1014 09:58:17.496202 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:58:17 crc kubenswrapper[4698]: I1014 09:58:17.496226 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:58:17Z","lastTransitionTime":"2025-10-14T09:58:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:58:17 crc kubenswrapper[4698]: I1014 09:58:17.531860 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-hspfz_d02f5359-81fc-4261-b995-e58c78bcec0e/ovnkube-controller/3.log" Oct 14 09:58:17 crc kubenswrapper[4698]: I1014 09:58:17.538563 4698 scope.go:117] "RemoveContainer" containerID="6c7231fbb7fd473f2878319c720ec92ea8eade012d5912e8ad80355542ffa3d2" Oct 14 09:58:17 crc kubenswrapper[4698]: E1014 09:58:17.538919 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-hspfz_openshift-ovn-kubernetes(d02f5359-81fc-4261-b995-e58c78bcec0e)\"" pod="openshift-ovn-kubernetes/ovnkube-node-hspfz" podUID="d02f5359-81fc-4261-b995-e58c78bcec0e" Oct 14 09:58:17 crc kubenswrapper[4698]: I1014 09:58:17.556556 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d453303d-af1d-4978-b4c5-ff12afadeb28\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:56:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d39574739cd1bd3498e575ad5961a3f656110963427ec6a5862160167dad9f1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a8b1dd92b7f979c51da79a23ddffeb82d73f170ded78b0995eca65ea14abbba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://57d68496a42ff5fb8359e4fd222a0e18e79e6ca885844d544d172e324325ee02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2fceda73199ee36e7cfe0b0bc827b42f7cc77db6b2c987828ffb57cacd1b8a6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://2fceda73199ee36e7cfe0b0bc827b42f7cc77db6b2c987828ffb57cacd1b8a6b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T09:56:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T09:56:59Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T09:56:59Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T09:58:17Z is after 2025-08-24T17:21:41Z" Oct 14 09:58:17 crc kubenswrapper[4698]: I1014 09:58:17.573371 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-pfxrp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e00aa977-8736-4b4d-8d58-c3d13879c49a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ad
f04967efc193a93227043cfb6bb56f0ec1a7af90e351b7a618e582ddd8c41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c9lxj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T09:57:18Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-pfxrp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T09:58:17Z is after 2025-08-24T17:21:41Z" Oct 14 09:58:17 crc kubenswrapper[4698]: I1014 09:58:17.598176 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5twvn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e9f983f-10a0-43b7-8590-346577a561ef\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://03bc09f64706818fd5d6b0d3eeea206f665d7cdb12ce6ae31c4706a3936dd0f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8x7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9090d92cdf10a884e39938f3d7908d63cb91ee3ea7832cb788aecaa47c70d753\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9090d92cdf10a884e39938f3d7908d63cb91ee3ea7832cb788aecaa47c70d753\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T09:57:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T09:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8x7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://20e7809019d6fa43f65e27abe934ed5dfc51a3afb0a2d9391fa4edd67fc03ac9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://20e7809019d6fa43f65e27abe934ed5dfc51a3afb0a2d9391fa4edd67fc03ac9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T09:57:20Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8x7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c61e207f4234cf9a59bd787fc476ff4bc2d1abd6f299b2855970981e5783398\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0c61e207f4234cf9a59bd787fc476ff4bc2d1abd6f299b2855970981e5783398\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T09:57:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T09:57:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8x7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1abec
a9df52e5628432a4cb0be0f212a643a644abfe303559ec7d90f3fd3955a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1abeca9df52e5628432a4cb0be0f212a643a644abfe303559ec7d90f3fd3955a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T09:57:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T09:57:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8x7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0654ea71914d478220611495dbefd1e47b1309aa18c6a05b21a4a45f0d8f1ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c0654ea71914d478220611495dbefd1e47b1309aa18c6a05b21a4a45f0d8f1ca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T09:57:23Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2025-10-14T09:57:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8x7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5307b806fcee628d7552a399a950a096745a8eb67c3f72c5a10854f042b992a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5307b806fcee628d7552a399a950a096745a8eb67c3f72c5a10854f042b992a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T09:57:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T09:57:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8x7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T09:57:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5twvn\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T09:58:17Z is after 2025-08-24T17:21:41Z" Oct 14 09:58:17 crc kubenswrapper[4698]: I1014 09:58:17.599052 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:58:17 crc kubenswrapper[4698]: I1014 09:58:17.599117 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:58:17 crc kubenswrapper[4698]: I1014 09:58:17.599135 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:58:17 crc kubenswrapper[4698]: I1014 09:58:17.599160 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:58:17 crc kubenswrapper[4698]: I1014 09:58:17.599177 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:58:17Z","lastTransitionTime":"2025-10-14T09:58:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:58:17 crc kubenswrapper[4698]: I1014 09:58:17.634115 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hspfz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d02f5359-81fc-4261-b995-e58c78bcec0e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://baefa979aee6f8fbe41df9542964f3c7b2a4331ec5111d2f622ec7ca41a0d96e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjlwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://876998f4dcb79a6bc0faa35871730f5d2f9a07f94baf6da29ed9a9e794705ee5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjlwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e5759411e3d5ea6fa7515b4cd1255735c870b8c035fe586a38d2f436b752118\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjlwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6e9733c3f4d25662c9da6201d0feee797de24acebb7d6e7c67d72709aae95db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:21Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjlwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://90b4b75613b321be60e549ab6ab2eaaab57fd6b49aa01f5254a0e83ac2e0167e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjlwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://11274a26e2292cdf2b793e6290657ed3cc737d1454b39f33b8f0272f67f70f3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjlwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c7231fbb7fd473f2878319c720ec92ea8eade012d5912e8ad80355542ffa3d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6c7231fbb7fd473f2878319c720ec92ea8eade012d5912e8ad80355542ffa3d2\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-14T09:58:16Z\\\",\\\"message\\\":\\\"handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to 
call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T09:58:15Z is after 2025-08-24T17:21:41Z]\\\\nI1014 09:58:16.059589 6703 services_controller.go:473] Services do not match for network=default, existing lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-machine-config-operator/machine-config-daemon_TCP_cluster\\\\\\\", UUID:\\\\\\\"a36f6289-d09f-43f8-8a8a-c9d2cc11eb0d\\\\\\\", Protocol:\\\\\\\"tcp\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-machine-config-operator/machine-config-daemon\\\\\\\"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}, built lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-machine-config-operator/machine-config-daemon_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\",\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-14T09:58:15Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-hspfz_openshift-ovn-kubernetes(d02f5359-81fc-4261-b995-e58c78bcec0e)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjlwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://626a1ca2a8aa5d6c1b4078a0bc7b0aa540656abcedd8be206fbb93cdc96c1d95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjlwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f7d7f89348e04b617ba4ec3cf76f4fefdd36492c656aed8141f3775a7ca8e3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0f7d7f89348e04b617
ba4ec3cf76f4fefdd36492c656aed8141f3775a7ca8e3b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T09:57:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T09:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjlwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T09:57:18Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hspfz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T09:58:17Z is after 2025-08-24T17:21:41Z" Oct 14 09:58:17 crc kubenswrapper[4698]: I1014 09:58:17.653643 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ndfs7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dba065f3-2084-442a-9f77-a1dfb007aa0a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://41d90487b005df3b6a9b8a3cbb8860b13fcc2ff7ca56979998ab7258f10b681c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wtc5t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7bfa794859b5471a4dfe11ac227eb36674c31
36b170fafdecaf916eb54d63a4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wtc5t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T09:57:31Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-ndfs7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T09:58:17Z is after 2025-08-24T17:21:41Z" Oct 14 09:58:17 crc kubenswrapper[4698]: I1014 09:58:17.670238 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-lp4sk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c359a8fc-1e2f-49af-8da2-719d52bd969a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://082e0edeedb2cabe07e14812f2406a98f39fdf9923f562c33ccc9a761d0e2472\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lzl92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8068aecfbfc156636e404dba3daa36e0bebb92ec
f3c63158eaebf9a03cdc96b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lzl92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T09:57:18Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-lp4sk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T09:58:17Z is after 2025-08-24T17:21:41Z" Oct 14 09:58:17 crc kubenswrapper[4698]: I1014 09:58:17.683151 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-jbpnj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"41f5ac86-35f8-416c-bbfe-1e182975ec5c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:32Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:32Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7f52v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7f52v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T09:57:32Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-jbpnj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T09:58:17Z is after 2025-08-24T17:21:41Z" Oct 14 09:58:17 crc 
kubenswrapper[4698]: I1014 09:58:17.704946 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:58:17 crc kubenswrapper[4698]: I1014 09:58:17.704986 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:58:17 crc kubenswrapper[4698]: I1014 09:58:17.704998 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:58:17 crc kubenswrapper[4698]: I1014 09:58:17.705051 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:58:17 crc kubenswrapper[4698]: I1014 09:58:17.705065 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:58:17Z","lastTransitionTime":"2025-10-14T09:58:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:58:17 crc kubenswrapper[4698]: I1014 09:58:17.719102 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://046b339570690752cfbe63e358065d3b58ef84c6708729abc1f590ff4c880521\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T09:58:17Z is after 2025-08-24T17:21:41Z" Oct 14 09:58:17 crc kubenswrapper[4698]: I1014 09:58:17.748900 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T09:58:17Z is after 2025-08-24T17:21:41Z" Oct 14 09:58:17 crc kubenswrapper[4698]: I1014 09:58:17.766390 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6bbd953b34291d62c5b40ab6339c4709be89d1983748419cabf1adff245af77f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b5dcb74bf1caafb12296355d176dd1cda1c06ecea8293c0cc36d517213fe6bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T09:58:17Z is after 2025-08-24T17:21:41Z" Oct 14 09:58:17 crc kubenswrapper[4698]: I1014 09:58:17.782307 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T09:58:17Z is after 2025-08-24T17:21:41Z" Oct 14 09:58:17 crc kubenswrapper[4698]: I1014 09:58:17.796187 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-b7cbk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fbf10bbc-318d-4f46-83a0-fdbad9888201\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:58:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:58:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://52c9a8ccad3eed5af66c0178544ca46fabcfaab76d88471b2a62606f2f860522\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4a0b9fe188f41927a342dc4928eaa5f27d151d9901af15f33ce252bdbfbb3be6\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-14T09:58:06Z\\\",\\\"message\\\":\\\"2025-10-14T09:57:20+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_f1354d92-2895-4535-acf9-93f36eacbc00\\\\n2025-10-14T09:57:20+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_f1354d92-2895-4535-acf9-93f36eacbc00 to /host/opt/cni/bin/\\\\n2025-10-14T09:57:21Z [verbose] multus-daemon started\\\\n2025-10-14T09:57:21Z [verbose] 
Readiness Indicator file check\\\\n2025-10-14T09:58:06Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-14T09:57:19Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:58:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tkghj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T09:57:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-b7cbk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T09:58:17Z is after 2025-08-24T17:21:41Z" Oct 14 09:58:17 crc kubenswrapper[4698]: I1014 09:58:17.810628 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:58:17 crc kubenswrapper[4698]: I1014 09:58:17.810673 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:58:17 crc kubenswrapper[4698]: I1014 09:58:17.810687 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:58:17 crc kubenswrapper[4698]: I1014 09:58:17.810708 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:58:17 crc kubenswrapper[4698]: I1014 09:58:17.810721 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:58:17Z","lastTransitionTime":"2025-10-14T09:58:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:58:17 crc kubenswrapper[4698]: I1014 09:58:17.814561 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"32240a01-1692-4f43-aebd-2b04d9b2435e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:56:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0fcdc7699fee72d9dbf3ad7df185ac8c884125781080739e4245128af2e41040\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-d
ir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d32efa8c63456d23b3824db4a4debb51fa4d13d1fd5eb6a488b9392d96073435\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a78e7d6532fda47f50da010b729a9108961a3d5c79191517a86f00714ccb1163\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a013bd7c3d0aeb5289ee602dd8fcbff2b98aae30b99ccc8d87605732b7f469eb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945
c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a8cea8367d4864478508221648a52582ee794ca4eb7b48f112ca0d78d97c2f7e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"message\\\":\\\"file observer\\\\nW1014 09:57:17.768150 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1014 09:57:17.768243 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1014 09:57:17.769073 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-133410875/tls.crt::/tmp/serving-cert-133410875/tls.key\\\\\\\"\\\\nI1014 09:57:18.101848 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1014 09:57:18.107433 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1014 09:57:18.107456 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1014 09:57:18.107479 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1014 09:57:18.107484 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1014 09:57:18.112858 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1014 09:57:18.112875 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1014 09:57:18.112883 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1014 09:57:18.112889 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1014 09:57:18.112893 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1014 09:57:18.112896 1 
secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1014 09:57:18.112900 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1014 09:57:18.112903 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1014 09:57:18.114133 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-14T09:57:12Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2d12c9c114016d1302fa97515b0a59a82a4524b77e5d22cbec155beb6a52bce\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:00Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf7a250bbbc63f0ce78ae1c27f82643a02dc264c3b83555a4cc72a788eda52d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d347
20243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cf7a250bbbc63f0ce78ae1c27f82643a02dc264c3b83555a4cc72a788eda52d5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T09:56:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T09:56:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T09:56:59Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T09:58:17Z is after 2025-08-24T17:21:41Z" Oct 14 09:58:17 crc kubenswrapper[4698]: I1014 09:58:17.831351 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"32a7378b-ad21-49a8-9382-e7e5fbffb2f5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:56:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:56:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45beb2bbd344fbd265eca554410f31c493b55815bcac8dbb8c30c592b40dcfe2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9a23f4949446d47321258b172e07f6c592cd9921650920f8af93a5d749a61dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:56:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1fc70d614848f0b4bc342659099f9c617db052ebb995716f9cdb1113f4f78901\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://71c3764056ab55aa9f75e0bf62c0c7eed5440e9d580087ca9ce266611dd8c292\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-10-14T09:57:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T09:56:59Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T09:58:17Z is after 2025-08-24T17:21:41Z" Oct 14 09:58:17 crc kubenswrapper[4698]: I1014 09:58:17.846287 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T09:58:17Z is after 2025-08-24T17:21:41Z" Oct 14 09:58:17 crc kubenswrapper[4698]: I1014 09:58:17.860074 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aaee81debae3e251e075e61aca075c075d32fd976f1977adecba0c834e89cabb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-10-14T09:58:17Z is after 2025-08-24T17:21:41Z" Oct 14 09:58:17 crc kubenswrapper[4698]: I1014 09:58:17.872080 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8rj7q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b3d7ebe7-24ac-4bb6-be80-db147dc1c604\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e44a1413e71cab92d1b5ee4c81cc65edc9925fa1e888d5d5670e55e1b60ee81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-95sjn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T09:57:18Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8rj7q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T09:58:17Z is after 2025-08-24T17:21:41Z" Oct 14 09:58:17 crc kubenswrapper[4698]: I1014 09:58:17.913012 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:58:17 crc kubenswrapper[4698]: I1014 09:58:17.913087 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:58:17 crc kubenswrapper[4698]: I1014 09:58:17.913111 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:58:17 crc kubenswrapper[4698]: I1014 09:58:17.913141 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:58:17 crc kubenswrapper[4698]: I1014 09:58:17.913158 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:58:17Z","lastTransitionTime":"2025-10-14T09:58:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:58:18 crc kubenswrapper[4698]: I1014 09:58:18.015873 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jbpnj" Oct 14 09:58:18 crc kubenswrapper[4698]: E1014 09:58:18.016045 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jbpnj" podUID="41f5ac86-35f8-416c-bbfe-1e182975ec5c" Oct 14 09:58:18 crc kubenswrapper[4698]: I1014 09:58:18.016560 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:58:18 crc kubenswrapper[4698]: I1014 09:58:18.016626 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:58:18 crc kubenswrapper[4698]: I1014 09:58:18.016642 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:58:18 crc kubenswrapper[4698]: I1014 09:58:18.016666 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:58:18 crc kubenswrapper[4698]: I1014 09:58:18.016684 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:58:18Z","lastTransitionTime":"2025-10-14T09:58:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:58:18 crc kubenswrapper[4698]: I1014 09:58:18.119668 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:58:18 crc kubenswrapper[4698]: I1014 09:58:18.119710 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:58:18 crc kubenswrapper[4698]: I1014 09:58:18.119718 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:58:18 crc kubenswrapper[4698]: I1014 09:58:18.119735 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:58:18 crc kubenswrapper[4698]: I1014 09:58:18.119744 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:58:18Z","lastTransitionTime":"2025-10-14T09:58:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:58:18 crc kubenswrapper[4698]: I1014 09:58:18.222340 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:58:18 crc kubenswrapper[4698]: I1014 09:58:18.222389 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:58:18 crc kubenswrapper[4698]: I1014 09:58:18.222398 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:58:18 crc kubenswrapper[4698]: I1014 09:58:18.222413 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:58:18 crc kubenswrapper[4698]: I1014 09:58:18.222422 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:58:18Z","lastTransitionTime":"2025-10-14T09:58:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:58:18 crc kubenswrapper[4698]: I1014 09:58:18.325379 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:58:18 crc kubenswrapper[4698]: I1014 09:58:18.325463 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:58:18 crc kubenswrapper[4698]: I1014 09:58:18.325481 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:58:18 crc kubenswrapper[4698]: I1014 09:58:18.325507 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:58:18 crc kubenswrapper[4698]: I1014 09:58:18.325525 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:58:18Z","lastTransitionTime":"2025-10-14T09:58:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:58:18 crc kubenswrapper[4698]: I1014 09:58:18.428981 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:58:18 crc kubenswrapper[4698]: I1014 09:58:18.429062 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:58:18 crc kubenswrapper[4698]: I1014 09:58:18.429088 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:58:18 crc kubenswrapper[4698]: I1014 09:58:18.429116 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:58:18 crc kubenswrapper[4698]: I1014 09:58:18.429138 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:58:18Z","lastTransitionTime":"2025-10-14T09:58:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:58:18 crc kubenswrapper[4698]: I1014 09:58:18.532819 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:58:18 crc kubenswrapper[4698]: I1014 09:58:18.532914 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:58:18 crc kubenswrapper[4698]: I1014 09:58:18.532938 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:58:18 crc kubenswrapper[4698]: I1014 09:58:18.532962 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:58:18 crc kubenswrapper[4698]: I1014 09:58:18.532978 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:58:18Z","lastTransitionTime":"2025-10-14T09:58:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:58:18 crc kubenswrapper[4698]: I1014 09:58:18.635684 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:58:18 crc kubenswrapper[4698]: I1014 09:58:18.635834 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:58:18 crc kubenswrapper[4698]: I1014 09:58:18.636116 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:58:18 crc kubenswrapper[4698]: I1014 09:58:18.636438 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:58:18 crc kubenswrapper[4698]: I1014 09:58:18.636503 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:58:18Z","lastTransitionTime":"2025-10-14T09:58:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:58:18 crc kubenswrapper[4698]: I1014 09:58:18.739531 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:58:18 crc kubenswrapper[4698]: I1014 09:58:18.739621 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:58:18 crc kubenswrapper[4698]: I1014 09:58:18.739639 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:58:18 crc kubenswrapper[4698]: I1014 09:58:18.739662 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:58:18 crc kubenswrapper[4698]: I1014 09:58:18.739678 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:58:18Z","lastTransitionTime":"2025-10-14T09:58:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:58:18 crc kubenswrapper[4698]: I1014 09:58:18.842442 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:58:18 crc kubenswrapper[4698]: I1014 09:58:18.842497 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:58:18 crc kubenswrapper[4698]: I1014 09:58:18.842517 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:58:18 crc kubenswrapper[4698]: I1014 09:58:18.842539 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:58:18 crc kubenswrapper[4698]: I1014 09:58:18.842556 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:58:18Z","lastTransitionTime":"2025-10-14T09:58:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:58:18 crc kubenswrapper[4698]: I1014 09:58:18.945906 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:58:18 crc kubenswrapper[4698]: I1014 09:58:18.945980 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:58:18 crc kubenswrapper[4698]: I1014 09:58:18.946002 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:58:18 crc kubenswrapper[4698]: I1014 09:58:18.946031 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:58:18 crc kubenswrapper[4698]: I1014 09:58:18.946053 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:58:18Z","lastTransitionTime":"2025-10-14T09:58:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 09:58:19 crc kubenswrapper[4698]: I1014 09:58:19.016729 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 14 09:58:19 crc kubenswrapper[4698]: I1014 09:58:19.017025 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 14 09:58:19 crc kubenswrapper[4698]: E1014 09:58:19.017239 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 14 09:58:19 crc kubenswrapper[4698]: I1014 09:58:19.017353 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 14 09:58:19 crc kubenswrapper[4698]: E1014 09:58:19.017443 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 14 09:58:19 crc kubenswrapper[4698]: E1014 09:58:19.017718 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 14 09:58:19 crc kubenswrapper[4698]: I1014 09:58:19.035585 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ndfs7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dba065f3-2084-442a-9f77-a1dfb007aa0a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://41d90487b005df3b6a9b8a3cbb8860b13fcc2ff7ca56979998ab7258f10b681c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-
plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wtc5t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7bfa794859b5471a4dfe11ac227eb36674c3136b170fafdecaf916eb54d63a4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wtc5t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T09:57:31Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-ndfs7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T09:58:19Z is after 2025-08-24T17:21:41Z" Oct 14 09:58:19 crc kubenswrapper[4698]: I1014 09:58:19.048846 4698 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:58:19 crc kubenswrapper[4698]: I1014 09:58:19.048935 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:58:19 crc kubenswrapper[4698]: I1014 09:58:19.048955 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:58:19 crc kubenswrapper[4698]: I1014 09:58:19.049390 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:58:19 crc kubenswrapper[4698]: I1014 09:58:19.049828 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:58:19Z","lastTransitionTime":"2025-10-14T09:58:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:58:19 crc kubenswrapper[4698]: I1014 09:58:19.055654 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d453303d-af1d-4978-b4c5-ff12afadeb28\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:56:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d39574739cd1bd3498e575ad5961a3f656110963427ec6a5862160167dad9f1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a8b1dd92b7f979c51da79a23ddffe
b82d73f170ded78b0995eca65ea14abbba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://57d68496a42ff5fb8359e4fd222a0e18e79e6ca885844d544d172e324325ee02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2fceda73199ee36e7cfe0b0bc827b42f7cc77db6b2c987828ffb57cacd1b8a6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2fceda73199ee36e7cfe0b0bc827b42f7cc77db6b2c987828ffb57cacd1b8a6b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T09:56:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T09:56:59Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T09:56:59Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T09:58:19Z is after 2025-08-24T17:21:41Z" Oct 14 09:58:19 crc kubenswrapper[4698]: I1014 09:58:19.073077 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-pfxrp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e00aa977-8736-4b4d-8d58-c3d13879c49a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4adf04967efc193a93227043cfb6bb56f0ec1a7af90e351b7a618e582ddd8c41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c9lxj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T09:57:18Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-pfxrp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T09:58:19Z is after 2025-08-24T17:21:41Z" Oct 14 09:58:19 crc kubenswrapper[4698]: I1014 09:58:19.097304 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5twvn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e9f983f-10a0-43b7-8590-346577a561ef\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://03bc09f64706818fd5d6b0d3eeea206f665d7cdb12ce6ae31c4706a3936dd0f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release
-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8x7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9090d92cdf10a884e39938f3d7908d63cb91ee3ea7832cb788aecaa47c70d753\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9090d92cdf10a884e39938f3d7908d63cb91ee3ea7832cb788aecaa47c70d753\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T09:57:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T09:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8x7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://20e7809019d6
fa43f65e27abe934ed5dfc51a3afb0a2d9391fa4edd67fc03ac9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://20e7809019d6fa43f65e27abe934ed5dfc51a3afb0a2d9391fa4edd67fc03ac9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T09:57:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8x7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c61e207f4234cf9a59bd787fc476ff4bc2d1abd6f299b2855970981e5783398\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0c61e207f423
4cf9a59bd787fc476ff4bc2d1abd6f299b2855970981e5783398\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T09:57:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T09:57:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8x7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1abeca9df52e5628432a4cb0be0f212a643a644abfe303559ec7d90f3fd3955a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1abeca9df52e5628432a4cb0be0f212a643a644abfe303559ec7d90f3fd3955a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T09:57:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T09:57:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8x7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\
\\"}]},{\\\"containerID\\\":\\\"cri-o://c0654ea71914d478220611495dbefd1e47b1309aa18c6a05b21a4a45f0d8f1ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c0654ea71914d478220611495dbefd1e47b1309aa18c6a05b21a4a45f0d8f1ca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T09:57:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T09:57:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8x7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5307b806fcee628d7552a399a950a096745a8eb67c3f72c5a10854f042b992a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5307b806fcee628d7552a399a950a096745a8eb67c3f72c5a10854f042b992a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"202
5-10-14T09:57:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T09:57:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8x7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T09:57:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5twvn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T09:58:19Z is after 2025-08-24T17:21:41Z" Oct 14 09:58:19 crc kubenswrapper[4698]: I1014 09:58:19.130993 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hspfz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d02f5359-81fc-4261-b995-e58c78bcec0e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"message\\\":\\\"containers with 
unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://baefa979aee6f8fbe41df9542964f3c7b2a4331ec5111d2f622ec7ca41a0d96e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjlwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://876998f4dcb79a6bc0faa35871730f5d2f9a07f94baf6da29ed9a9e794705ee5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\
\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjlwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e5759411e3d5ea6fa7515b4cd1255735c870b8c035fe586a38d2f436b752118\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjlwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6e9733c3f4d25662c9da6201d0feee797de24acebb7d6e7c67d72709aae95db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd
47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjlwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://90b4b75613b321be60e549ab6ab2eaaab57fd6b49aa01f5254a0e83ac2e0167e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjlwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://11274a26e2292cdf2b793e6290657ed3cc737d1454b39f33b8f0272f67f70f3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev
@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjlwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c7231fbb7fd473f2878319c720ec92ea8eade012d5912e8ad80355542ffa3d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6c7231fbb7fd473f2878319c720ec92ea8eade012d5912e8ad80355542ffa3d2\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-14T09:58:16Z\\\",\\\"message\\\":\\\"handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer 
because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T09:58:15Z is after 2025-08-24T17:21:41Z]\\\\nI1014 09:58:16.059589 6703 services_controller.go:473] Services do not match for network=default, existing lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-machine-config-operator/machine-config-daemon_TCP_cluster\\\\\\\", UUID:\\\\\\\"a36f6289-d09f-43f8-8a8a-c9d2cc11eb0d\\\\\\\", Protocol:\\\\\\\"tcp\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-machine-config-operator/machine-config-daemon\\\\\\\"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}, built lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-machine-config-operator/machine-config-daemon_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\",\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-14T09:58:15Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-hspfz_openshift-ovn-kubernetes(d02f5359-81fc-4261-b995-e58c78bcec0e)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjlwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://626a1ca2a8aa5d6c1b4078a0bc7b0aa540656abcedd8be206fbb93cdc96c1d95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjlwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f7d7f89348e04b617ba4ec3cf76f4fefdd36492c656aed8141f3775a7ca8e3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0f7d7f89348e04b617
ba4ec3cf76f4fefdd36492c656aed8141f3775a7ca8e3b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T09:57:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T09:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjlwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T09:57:18Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hspfz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T09:58:19Z is after 2025-08-24T17:21:41Z" Oct 14 09:58:19 crc kubenswrapper[4698]: I1014 09:58:19.151564 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-b7cbk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fbf10bbc-318d-4f46-83a0-fdbad9888201\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:58:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:58:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://52c9a8ccad3eed5af66c0178544ca46fabcfaab76d88471b2a62606f2f860522\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4a0b9fe188f41927a342dc4928eaa5f27d151d9901af15f33ce252bdbfbb3be6\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-14T09:58:06Z\\\",\\\"message\\\":\\\"2025-10-14T09:57:20+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_f1354d92-2895-4535-acf9-93f36eacbc00\\\\n2025-10-14T09:57:20+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_f1354d92-2895-4535-acf9-93f36eacbc00 to /host/opt/cni/bin/\\\\n2025-10-14T09:57:21Z [verbose] multus-daemon started\\\\n2025-10-14T09:57:21Z [verbose] 
Readiness Indicator file check\\\\n2025-10-14T09:58:06Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-14T09:57:19Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:58:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tkghj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T09:57:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-b7cbk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T09:58:19Z is after 2025-08-24T17:21:41Z" Oct 14 09:58:19 crc kubenswrapper[4698]: I1014 09:58:19.153911 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:58:19 crc kubenswrapper[4698]: I1014 09:58:19.153961 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:58:19 crc kubenswrapper[4698]: I1014 09:58:19.153978 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:58:19 crc kubenswrapper[4698]: I1014 09:58:19.154001 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:58:19 crc kubenswrapper[4698]: I1014 09:58:19.154017 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:58:19Z","lastTransitionTime":"2025-10-14T09:58:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:58:19 crc kubenswrapper[4698]: I1014 09:58:19.169815 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-lp4sk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c359a8fc-1e2f-49af-8da2-719d52bd969a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://082e0edeedb2cabe07e14812f2406a98f39fdf9923f562c33ccc9a761d0e2472\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"}
,{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lzl92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8068aecfbfc156636e404dba3daa36e0bebb92ecf3c63158eaebf9a03cdc96b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lzl92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T09:57:18Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-lp4sk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T09:58:19Z is after 2025-08-24T17:21:41Z" Oct 14 09:58:19 crc kubenswrapper[4698]: I1014 09:58:19.188058 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-jbpnj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"41f5ac86-35f8-416c-bbfe-1e182975ec5c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:32Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:32Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7f52v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7f52v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T09:57:32Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-jbpnj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T09:58:19Z is after 2025-08-24T17:21:41Z" Oct 14 09:58:19 crc 
kubenswrapper[4698]: I1014 09:58:19.208227 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://046b339570690752cfbe63e358065d3b58ef84c6708729abc1f590ff4c880521\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T09:58:19Z is after 2025-08-24T17:21:41Z" Oct 14 09:58:19 crc kubenswrapper[4698]: I1014 09:58:19.228068 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T09:58:19Z is after 2025-08-24T17:21:41Z" Oct 14 09:58:19 crc kubenswrapper[4698]: I1014 09:58:19.248739 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6bbd953b34291d62c5b40ab6339c4709be89d1983748419cabf1adff245af77f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b5dcb74bf1caafb12296355d176dd1cda1c06ecea8293c0cc36d517213fe6bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T09:58:19Z is after 2025-08-24T17:21:41Z" Oct 14 09:58:19 crc kubenswrapper[4698]: I1014 09:58:19.257170 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:58:19 crc kubenswrapper[4698]: I1014 09:58:19.257230 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:58:19 crc kubenswrapper[4698]: I1014 09:58:19.257246 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:58:19 crc kubenswrapper[4698]: I1014 09:58:19.257271 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:58:19 crc kubenswrapper[4698]: I1014 09:58:19.257288 4698 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:58:19Z","lastTransitionTime":"2025-10-14T09:58:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 09:58:19 crc kubenswrapper[4698]: I1014 09:58:19.267141 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T09:58:19Z is after 2025-08-24T17:21:41Z" Oct 14 09:58:19 crc kubenswrapper[4698]: I1014 09:58:19.283355 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"32240a01-1692-4f43-aebd-2b04d9b2435e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:56:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0fcdc7699fee72d9dbf3ad7df185ac8c884125781080739e4245128af2e41040\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d32efa8c63456d23b3824db4a4debb51fa4d13d1fd5eb6a488b9392d96073435\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a78e7d6532fda47f50da010b729a9108961a3d5c79191517a86f00714ccb1163\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a013bd7c3d0aeb5289ee602dd8fcbff2b98aae30b99ccc8d87605732b7f469eb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a8cea8367d4864478508221648a52582ee794ca4eb7b48f112ca0d78d97c2f7e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-14T09:57:18Z\\\"
,\\\"message\\\":\\\"file observer\\\\nW1014 09:57:17.768150 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1014 09:57:17.768243 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1014 09:57:17.769073 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-133410875/tls.crt::/tmp/serving-cert-133410875/tls.key\\\\\\\"\\\\nI1014 09:57:18.101848 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1014 09:57:18.107433 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1014 09:57:18.107456 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1014 09:57:18.107479 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1014 09:57:18.107484 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1014 09:57:18.112858 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1014 09:57:18.112875 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1014 09:57:18.112883 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1014 09:57:18.112889 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1014 09:57:18.112893 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1014 09:57:18.112896 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1014 09:57:18.112900 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1014 09:57:18.112903 1 secure_serving.go:69] Use of 
insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1014 09:57:18.114133 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-14T09:57:12Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2d12c9c114016d1302fa97515b0a59a82a4524b77e5d22cbec155beb6a52bce\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:00Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf7a250bbbc63f0ce78ae1c27f82643a02dc264c3b83555a4cc72a788eda52d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cf7a250bbbc63f0ce78ae1c27f82643a02d
c264c3b83555a4cc72a788eda52d5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T09:56:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T09:56:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T09:56:59Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T09:58:19Z is after 2025-08-24T17:21:41Z" Oct 14 09:58:19 crc kubenswrapper[4698]: I1014 09:58:19.300506 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"32a7378b-ad21-49a8-9382-e7e5fbffb2f5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:56:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:56:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45beb2bbd344fbd265eca554410f31c493b55815bcac8dbb8c30c592b40dcfe2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9a23f4949446d47321258b172e07f6c592cd9921650920f8af93a5d749a61dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:56:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1fc70d614848f0b4bc342659099f9c617db052ebb995716f9cdb1113f4f78901\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://71c3764056ab55aa9f75e0bf62c0c7eed5440e9d580087ca9ce266611dd8c292\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-10-14T09:57:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T09:56:59Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T09:58:19Z is after 2025-08-24T17:21:41Z" Oct 14 09:58:19 crc kubenswrapper[4698]: I1014 09:58:19.318534 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T09:58:19Z is after 2025-08-24T17:21:41Z" Oct 14 09:58:19 crc kubenswrapper[4698]: I1014 09:58:19.334967 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aaee81debae3e251e075e61aca075c075d32fd976f1977adecba0c834e89cabb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-10-14T09:58:19Z is after 2025-08-24T17:21:41Z" Oct 14 09:58:19 crc kubenswrapper[4698]: I1014 09:58:19.349294 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8rj7q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b3d7ebe7-24ac-4bb6-be80-db147dc1c604\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e44a1413e71cab92d1b5ee4c81cc65edc9925fa1e888d5d5670e55e1b60ee81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-95sjn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T09:57:18Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8rj7q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T09:58:19Z is after 2025-08-24T17:21:41Z" Oct 14 09:58:19 crc kubenswrapper[4698]: I1014 09:58:19.361188 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:58:19 crc kubenswrapper[4698]: I1014 09:58:19.361229 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:58:19 crc kubenswrapper[4698]: I1014 09:58:19.361241 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:58:19 crc kubenswrapper[4698]: I1014 09:58:19.361259 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:58:19 crc kubenswrapper[4698]: I1014 09:58:19.361270 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:58:19Z","lastTransitionTime":"2025-10-14T09:58:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:58:19 crc kubenswrapper[4698]: I1014 09:58:19.463280 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:58:19 crc kubenswrapper[4698]: I1014 09:58:19.464034 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:58:19 crc kubenswrapper[4698]: I1014 09:58:19.464175 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:58:19 crc kubenswrapper[4698]: I1014 09:58:19.464310 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:58:19 crc kubenswrapper[4698]: I1014 09:58:19.464430 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:58:19Z","lastTransitionTime":"2025-10-14T09:58:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:58:19 crc kubenswrapper[4698]: I1014 09:58:19.568567 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:58:19 crc kubenswrapper[4698]: I1014 09:58:19.568907 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:58:19 crc kubenswrapper[4698]: I1014 09:58:19.569087 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:58:19 crc kubenswrapper[4698]: I1014 09:58:19.569271 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:58:19 crc kubenswrapper[4698]: I1014 09:58:19.569445 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:58:19Z","lastTransitionTime":"2025-10-14T09:58:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:58:19 crc kubenswrapper[4698]: I1014 09:58:19.673073 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:58:19 crc kubenswrapper[4698]: I1014 09:58:19.673150 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:58:19 crc kubenswrapper[4698]: I1014 09:58:19.673170 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:58:19 crc kubenswrapper[4698]: I1014 09:58:19.673199 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:58:19 crc kubenswrapper[4698]: I1014 09:58:19.673217 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:58:19Z","lastTransitionTime":"2025-10-14T09:58:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:58:19 crc kubenswrapper[4698]: I1014 09:58:19.776261 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:58:19 crc kubenswrapper[4698]: I1014 09:58:19.776337 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:58:19 crc kubenswrapper[4698]: I1014 09:58:19.776359 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:58:19 crc kubenswrapper[4698]: I1014 09:58:19.776383 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:58:19 crc kubenswrapper[4698]: I1014 09:58:19.776400 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:58:19Z","lastTransitionTime":"2025-10-14T09:58:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:58:19 crc kubenswrapper[4698]: I1014 09:58:19.879530 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:58:19 crc kubenswrapper[4698]: I1014 09:58:19.879594 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:58:19 crc kubenswrapper[4698]: I1014 09:58:19.879612 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:58:19 crc kubenswrapper[4698]: I1014 09:58:19.879637 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:58:19 crc kubenswrapper[4698]: I1014 09:58:19.879654 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:58:19Z","lastTransitionTime":"2025-10-14T09:58:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:58:19 crc kubenswrapper[4698]: I1014 09:58:19.983249 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:58:19 crc kubenswrapper[4698]: I1014 09:58:19.983313 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:58:19 crc kubenswrapper[4698]: I1014 09:58:19.983333 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:58:19 crc kubenswrapper[4698]: I1014 09:58:19.983357 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:58:19 crc kubenswrapper[4698]: I1014 09:58:19.983376 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:58:19Z","lastTransitionTime":"2025-10-14T09:58:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 09:58:20 crc kubenswrapper[4698]: I1014 09:58:20.016387 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jbpnj" Oct 14 09:58:20 crc kubenswrapper[4698]: E1014 09:58:20.016578 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-jbpnj" podUID="41f5ac86-35f8-416c-bbfe-1e182975ec5c" Oct 14 09:58:20 crc kubenswrapper[4698]: I1014 09:58:20.085893 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:58:20 crc kubenswrapper[4698]: I1014 09:58:20.086274 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:58:20 crc kubenswrapper[4698]: I1014 09:58:20.086298 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:58:20 crc kubenswrapper[4698]: I1014 09:58:20.086319 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:58:20 crc kubenswrapper[4698]: I1014 09:58:20.086333 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:58:20Z","lastTransitionTime":"2025-10-14T09:58:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:58:20 crc kubenswrapper[4698]: I1014 09:58:20.189188 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:58:20 crc kubenswrapper[4698]: I1014 09:58:20.189251 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:58:20 crc kubenswrapper[4698]: I1014 09:58:20.189273 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:58:20 crc kubenswrapper[4698]: I1014 09:58:20.189306 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:58:20 crc kubenswrapper[4698]: I1014 09:58:20.189331 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:58:20Z","lastTransitionTime":"2025-10-14T09:58:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:58:20 crc kubenswrapper[4698]: I1014 09:58:20.292934 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:58:20 crc kubenswrapper[4698]: I1014 09:58:20.293009 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:58:20 crc kubenswrapper[4698]: I1014 09:58:20.293022 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:58:20 crc kubenswrapper[4698]: I1014 09:58:20.293040 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:58:20 crc kubenswrapper[4698]: I1014 09:58:20.293052 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:58:20Z","lastTransitionTime":"2025-10-14T09:58:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:58:20 crc kubenswrapper[4698]: I1014 09:58:20.395967 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:58:20 crc kubenswrapper[4698]: I1014 09:58:20.396044 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:58:20 crc kubenswrapper[4698]: I1014 09:58:20.396065 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:58:20 crc kubenswrapper[4698]: I1014 09:58:20.396096 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:58:20 crc kubenswrapper[4698]: I1014 09:58:20.396119 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:58:20Z","lastTransitionTime":"2025-10-14T09:58:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:58:20 crc kubenswrapper[4698]: I1014 09:58:20.499293 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:58:20 crc kubenswrapper[4698]: I1014 09:58:20.499335 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:58:20 crc kubenswrapper[4698]: I1014 09:58:20.499343 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:58:20 crc kubenswrapper[4698]: I1014 09:58:20.499356 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:58:20 crc kubenswrapper[4698]: I1014 09:58:20.499364 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:58:20Z","lastTransitionTime":"2025-10-14T09:58:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:58:20 crc kubenswrapper[4698]: I1014 09:58:20.602075 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:58:20 crc kubenswrapper[4698]: I1014 09:58:20.602132 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:58:20 crc kubenswrapper[4698]: I1014 09:58:20.602146 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:58:20 crc kubenswrapper[4698]: I1014 09:58:20.602165 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:58:20 crc kubenswrapper[4698]: I1014 09:58:20.602177 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:58:20Z","lastTransitionTime":"2025-10-14T09:58:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:58:20 crc kubenswrapper[4698]: I1014 09:58:20.705643 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:58:20 crc kubenswrapper[4698]: I1014 09:58:20.705700 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:58:20 crc kubenswrapper[4698]: I1014 09:58:20.705715 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:58:20 crc kubenswrapper[4698]: I1014 09:58:20.705736 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:58:20 crc kubenswrapper[4698]: I1014 09:58:20.705752 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:58:20Z","lastTransitionTime":"2025-10-14T09:58:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:58:20 crc kubenswrapper[4698]: I1014 09:58:20.809795 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:58:20 crc kubenswrapper[4698]: I1014 09:58:20.809878 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:58:20 crc kubenswrapper[4698]: I1014 09:58:20.809898 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:58:20 crc kubenswrapper[4698]: I1014 09:58:20.809923 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:58:20 crc kubenswrapper[4698]: I1014 09:58:20.809940 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:58:20Z","lastTransitionTime":"2025-10-14T09:58:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:58:20 crc kubenswrapper[4698]: I1014 09:58:20.912273 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:58:20 crc kubenswrapper[4698]: I1014 09:58:20.912306 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:58:20 crc kubenswrapper[4698]: I1014 09:58:20.912315 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:58:20 crc kubenswrapper[4698]: I1014 09:58:20.912327 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:58:20 crc kubenswrapper[4698]: I1014 09:58:20.912337 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:58:20Z","lastTransitionTime":"2025-10-14T09:58:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:58:21 crc kubenswrapper[4698]: I1014 09:58:21.016021 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:58:21 crc kubenswrapper[4698]: I1014 09:58:21.016092 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:58:21 crc kubenswrapper[4698]: I1014 09:58:21.016106 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:58:21 crc kubenswrapper[4698]: I1014 09:58:21.016121 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:58:21 crc kubenswrapper[4698]: I1014 09:58:21.016132 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:58:21Z","lastTransitionTime":"2025-10-14T09:58:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 09:58:21 crc kubenswrapper[4698]: I1014 09:58:21.016258 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 14 09:58:21 crc kubenswrapper[4698]: I1014 09:58:21.016289 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 14 09:58:21 crc kubenswrapper[4698]: I1014 09:58:21.016356 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 14 09:58:21 crc kubenswrapper[4698]: E1014 09:58:21.016457 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 14 09:58:21 crc kubenswrapper[4698]: E1014 09:58:21.016597 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 14 09:58:21 crc kubenswrapper[4698]: E1014 09:58:21.016695 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 14 09:58:21 crc kubenswrapper[4698]: I1014 09:58:21.119272 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:58:21 crc kubenswrapper[4698]: I1014 09:58:21.119320 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:58:21 crc kubenswrapper[4698]: I1014 09:58:21.119332 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:58:21 crc kubenswrapper[4698]: I1014 09:58:21.119350 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:58:21 crc kubenswrapper[4698]: I1014 09:58:21.119362 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:58:21Z","lastTransitionTime":"2025-10-14T09:58:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:58:21 crc kubenswrapper[4698]: I1014 09:58:21.223095 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:58:21 crc kubenswrapper[4698]: I1014 09:58:21.223185 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:58:21 crc kubenswrapper[4698]: I1014 09:58:21.223210 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:58:21 crc kubenswrapper[4698]: I1014 09:58:21.223248 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:58:21 crc kubenswrapper[4698]: I1014 09:58:21.223272 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:58:21Z","lastTransitionTime":"2025-10-14T09:58:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:58:21 crc kubenswrapper[4698]: I1014 09:58:21.326729 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:58:21 crc kubenswrapper[4698]: I1014 09:58:21.326856 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:58:21 crc kubenswrapper[4698]: I1014 09:58:21.326887 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:58:21 crc kubenswrapper[4698]: I1014 09:58:21.326922 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:58:21 crc kubenswrapper[4698]: I1014 09:58:21.326949 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:58:21Z","lastTransitionTime":"2025-10-14T09:58:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:58:21 crc kubenswrapper[4698]: I1014 09:58:21.431596 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:58:21 crc kubenswrapper[4698]: I1014 09:58:21.431662 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:58:21 crc kubenswrapper[4698]: I1014 09:58:21.431681 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:58:21 crc kubenswrapper[4698]: I1014 09:58:21.431711 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:58:21 crc kubenswrapper[4698]: I1014 09:58:21.431731 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:58:21Z","lastTransitionTime":"2025-10-14T09:58:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:58:21 crc kubenswrapper[4698]: I1014 09:58:21.534047 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:58:21 crc kubenswrapper[4698]: I1014 09:58:21.534112 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:58:21 crc kubenswrapper[4698]: I1014 09:58:21.534130 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:58:21 crc kubenswrapper[4698]: I1014 09:58:21.534161 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:58:21 crc kubenswrapper[4698]: I1014 09:58:21.534180 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:58:21Z","lastTransitionTime":"2025-10-14T09:58:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:58:21 crc kubenswrapper[4698]: I1014 09:58:21.637540 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:58:21 crc kubenswrapper[4698]: I1014 09:58:21.637614 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:58:21 crc kubenswrapper[4698]: I1014 09:58:21.637636 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:58:21 crc kubenswrapper[4698]: I1014 09:58:21.637666 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:58:21 crc kubenswrapper[4698]: I1014 09:58:21.637688 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:58:21Z","lastTransitionTime":"2025-10-14T09:58:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:58:21 crc kubenswrapper[4698]: I1014 09:58:21.741530 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:58:21 crc kubenswrapper[4698]: I1014 09:58:21.741597 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:58:21 crc kubenswrapper[4698]: I1014 09:58:21.741617 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:58:21 crc kubenswrapper[4698]: I1014 09:58:21.741645 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:58:21 crc kubenswrapper[4698]: I1014 09:58:21.741667 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:58:21Z","lastTransitionTime":"2025-10-14T09:58:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:58:21 crc kubenswrapper[4698]: I1014 09:58:21.844784 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:58:21 crc kubenswrapper[4698]: I1014 09:58:21.844841 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:58:21 crc kubenswrapper[4698]: I1014 09:58:21.844856 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:58:21 crc kubenswrapper[4698]: I1014 09:58:21.844883 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:58:21 crc kubenswrapper[4698]: I1014 09:58:21.844901 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:58:21Z","lastTransitionTime":"2025-10-14T09:58:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:58:21 crc kubenswrapper[4698]: I1014 09:58:21.948509 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:58:21 crc kubenswrapper[4698]: I1014 09:58:21.948591 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:58:21 crc kubenswrapper[4698]: I1014 09:58:21.948612 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:58:21 crc kubenswrapper[4698]: I1014 09:58:21.948645 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:58:21 crc kubenswrapper[4698]: I1014 09:58:21.948667 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:58:21Z","lastTransitionTime":"2025-10-14T09:58:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 09:58:22 crc kubenswrapper[4698]: I1014 09:58:22.017000 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jbpnj" Oct 14 09:58:22 crc kubenswrapper[4698]: E1014 09:58:22.017296 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-jbpnj" podUID="41f5ac86-35f8-416c-bbfe-1e182975ec5c" Oct 14 09:58:22 crc kubenswrapper[4698]: I1014 09:58:22.053083 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:58:22 crc kubenswrapper[4698]: I1014 09:58:22.053151 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:58:22 crc kubenswrapper[4698]: I1014 09:58:22.053168 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:58:22 crc kubenswrapper[4698]: I1014 09:58:22.053232 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:58:22 crc kubenswrapper[4698]: I1014 09:58:22.053255 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:58:22Z","lastTransitionTime":"2025-10-14T09:58:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:58:22 crc kubenswrapper[4698]: I1014 09:58:22.157233 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:58:22 crc kubenswrapper[4698]: I1014 09:58:22.157288 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:58:22 crc kubenswrapper[4698]: I1014 09:58:22.157301 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:58:22 crc kubenswrapper[4698]: I1014 09:58:22.157321 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:58:22 crc kubenswrapper[4698]: I1014 09:58:22.157333 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:58:22Z","lastTransitionTime":"2025-10-14T09:58:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:58:22 crc kubenswrapper[4698]: I1014 09:58:22.260877 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:58:22 crc kubenswrapper[4698]: I1014 09:58:22.260943 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:58:22 crc kubenswrapper[4698]: I1014 09:58:22.260955 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:58:22 crc kubenswrapper[4698]: I1014 09:58:22.260975 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:58:22 crc kubenswrapper[4698]: I1014 09:58:22.260988 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:58:22Z","lastTransitionTime":"2025-10-14T09:58:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:58:22 crc kubenswrapper[4698]: I1014 09:58:22.364361 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:58:22 crc kubenswrapper[4698]: I1014 09:58:22.364449 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:58:22 crc kubenswrapper[4698]: I1014 09:58:22.364473 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:58:22 crc kubenswrapper[4698]: I1014 09:58:22.364499 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:58:22 crc kubenswrapper[4698]: I1014 09:58:22.364521 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:58:22Z","lastTransitionTime":"2025-10-14T09:58:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:58:22 crc kubenswrapper[4698]: I1014 09:58:22.474478 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:58:22 crc kubenswrapper[4698]: I1014 09:58:22.474538 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:58:22 crc kubenswrapper[4698]: I1014 09:58:22.474555 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:58:22 crc kubenswrapper[4698]: I1014 09:58:22.474581 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:58:22 crc kubenswrapper[4698]: I1014 09:58:22.474600 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:58:22Z","lastTransitionTime":"2025-10-14T09:58:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:58:22 crc kubenswrapper[4698]: I1014 09:58:22.578344 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:58:22 crc kubenswrapper[4698]: I1014 09:58:22.578407 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:58:22 crc kubenswrapper[4698]: I1014 09:58:22.578427 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:58:22 crc kubenswrapper[4698]: I1014 09:58:22.578451 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:58:22 crc kubenswrapper[4698]: I1014 09:58:22.578469 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:58:22Z","lastTransitionTime":"2025-10-14T09:58:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:58:22 crc kubenswrapper[4698]: I1014 09:58:22.681480 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:58:22 crc kubenswrapper[4698]: I1014 09:58:22.681533 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:58:22 crc kubenswrapper[4698]: I1014 09:58:22.681545 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:58:22 crc kubenswrapper[4698]: I1014 09:58:22.681562 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:58:22 crc kubenswrapper[4698]: I1014 09:58:22.681577 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:58:22Z","lastTransitionTime":"2025-10-14T09:58:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:58:22 crc kubenswrapper[4698]: I1014 09:58:22.784214 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:58:22 crc kubenswrapper[4698]: I1014 09:58:22.784289 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:58:22 crc kubenswrapper[4698]: I1014 09:58:22.784301 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:58:22 crc kubenswrapper[4698]: I1014 09:58:22.784316 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:58:22 crc kubenswrapper[4698]: I1014 09:58:22.784327 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:58:22Z","lastTransitionTime":"2025-10-14T09:58:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:58:22 crc kubenswrapper[4698]: I1014 09:58:22.887139 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 14 09:58:22 crc kubenswrapper[4698]: I1014 09:58:22.887276 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 14 09:58:22 crc kubenswrapper[4698]: I1014 09:58:22.887310 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 14 09:58:22 crc kubenswrapper[4698]: E1014 09:58:22.887412 4698 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Oct 14 09:58:22 crc kubenswrapper[4698]: E1014 09:58:22.887461 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-10-14 09:59:26.887443129 +0000 UTC m=+148.584742545 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Oct 14 09:58:22 crc kubenswrapper[4698]: E1014 09:58:22.887724 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-14 09:59:26.887713267 +0000 UTC m=+148.585012683 (durationBeforeRetry 1m4s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 14 09:58:22 crc kubenswrapper[4698]: E1014 09:58:22.887807 4698 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Oct 14 09:58:22 crc kubenswrapper[4698]: E1014 09:58:22.887835 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-10-14 09:59:26.88782665 +0000 UTC m=+148.585126066 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Oct 14 09:58:22 crc kubenswrapper[4698]: I1014 09:58:22.887990 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:58:22 crc kubenswrapper[4698]: I1014 09:58:22.888012 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:58:22 crc kubenswrapper[4698]: I1014 09:58:22.888021 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:58:22 crc kubenswrapper[4698]: I1014 09:58:22.888036 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:58:22 crc kubenswrapper[4698]: I1014 09:58:22.888046 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:58:22Z","lastTransitionTime":"2025-10-14T09:58:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:58:22 crc kubenswrapper[4698]: I1014 09:58:22.988355 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 14 09:58:22 crc kubenswrapper[4698]: I1014 09:58:22.988432 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 14 09:58:22 crc kubenswrapper[4698]: E1014 09:58:22.988631 4698 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Oct 14 09:58:22 crc kubenswrapper[4698]: E1014 09:58:22.988667 4698 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Oct 14 09:58:22 crc kubenswrapper[4698]: E1014 09:58:22.988680 4698 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 14 09:58:22 crc kubenswrapper[4698]: E1014 09:58:22.988714 4698 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Oct 14 09:58:22 crc 
kubenswrapper[4698]: E1014 09:58:22.988742 4698 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Oct 14 09:58:22 crc kubenswrapper[4698]: E1014 09:58:22.988742 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-10-14 09:59:26.988723391 +0000 UTC m=+148.686022807 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 14 09:58:22 crc kubenswrapper[4698]: E1014 09:58:22.988972 4698 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 14 09:58:22 crc kubenswrapper[4698]: E1014 09:58:22.989141 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-10-14 09:59:26.989113002 +0000 UTC m=+148.686412458 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 14 09:58:22 crc kubenswrapper[4698]: I1014 09:58:22.990148 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:58:22 crc kubenswrapper[4698]: I1014 09:58:22.990182 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:58:22 crc kubenswrapper[4698]: I1014 09:58:22.990195 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:58:22 crc kubenswrapper[4698]: I1014 09:58:22.990212 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:58:22 crc kubenswrapper[4698]: I1014 09:58:22.990223 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:58:22Z","lastTransitionTime":"2025-10-14T09:58:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 09:58:23 crc kubenswrapper[4698]: I1014 09:58:23.016924 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 14 09:58:23 crc kubenswrapper[4698]: I1014 09:58:23.016928 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 14 09:58:23 crc kubenswrapper[4698]: I1014 09:58:23.016958 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 14 09:58:23 crc kubenswrapper[4698]: E1014 09:58:23.017032 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 14 09:58:23 crc kubenswrapper[4698]: E1014 09:58:23.017093 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 14 09:58:23 crc kubenswrapper[4698]: E1014 09:58:23.017181 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 14 09:58:23 crc kubenswrapper[4698]: I1014 09:58:23.092447 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:58:23 crc kubenswrapper[4698]: I1014 09:58:23.092547 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:58:23 crc kubenswrapper[4698]: I1014 09:58:23.092569 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:58:23 crc kubenswrapper[4698]: I1014 09:58:23.092596 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:58:23 crc kubenswrapper[4698]: I1014 09:58:23.092615 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:58:23Z","lastTransitionTime":"2025-10-14T09:58:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:58:23 crc kubenswrapper[4698]: I1014 09:58:23.171131 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:58:23 crc kubenswrapper[4698]: I1014 09:58:23.171198 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:58:23 crc kubenswrapper[4698]: I1014 09:58:23.171217 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:58:23 crc kubenswrapper[4698]: I1014 09:58:23.171242 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:58:23 crc kubenswrapper[4698]: I1014 09:58:23.171260 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:58:23Z","lastTransitionTime":"2025-10-14T09:58:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:58:23 crc kubenswrapper[4698]: E1014 09:58:23.190671 4698 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-14T09:58:23Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-14T09:58:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-14T09:58:23Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-14T09:58:23Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-14T09:58:23Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-14T09:58:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-14T09:58:23Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-14T09:58:23Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ec98e803-8937-4fdb-8662-3488c6a305f2\\\",\\\"systemUUID\\\":\\\"8e872109-adee-4b6d-91bf-d9ced28af93f\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T09:58:23Z is after 2025-08-24T17:21:41Z" Oct 14 09:58:23 crc kubenswrapper[4698]: I1014 09:58:23.195626 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:58:23 crc kubenswrapper[4698]: I1014 09:58:23.195660 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:58:23 crc kubenswrapper[4698]: I1014 09:58:23.195669 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:58:23 crc kubenswrapper[4698]: I1014 09:58:23.195685 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:58:23 crc kubenswrapper[4698]: I1014 09:58:23.195695 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:58:23Z","lastTransitionTime":"2025-10-14T09:58:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:58:23 crc kubenswrapper[4698]: E1014 09:58:23.209349 4698 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-14T09:58:23Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-14T09:58:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-14T09:58:23Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-14T09:58:23Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-14T09:58:23Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-14T09:58:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-14T09:58:23Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-14T09:58:23Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ec98e803-8937-4fdb-8662-3488c6a305f2\\\",\\\"systemUUID\\\":\\\"8e872109-adee-4b6d-91bf-d9ced28af93f\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T09:58:23Z is after 2025-08-24T17:21:41Z" Oct 14 09:58:23 crc kubenswrapper[4698]: I1014 09:58:23.214099 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:58:23 crc kubenswrapper[4698]: I1014 09:58:23.214132 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:58:23 crc kubenswrapper[4698]: I1014 09:58:23.214143 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:58:23 crc kubenswrapper[4698]: I1014 09:58:23.214161 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:58:23 crc kubenswrapper[4698]: I1014 09:58:23.214173 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:58:23Z","lastTransitionTime":"2025-10-14T09:58:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:58:23 crc kubenswrapper[4698]: E1014 09:58:23.230743 4698 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-14T09:58:23Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-14T09:58:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-14T09:58:23Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-14T09:58:23Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-14T09:58:23Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-14T09:58:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-14T09:58:23Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-14T09:58:23Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ec98e803-8937-4fdb-8662-3488c6a305f2\\\",\\\"systemUUID\\\":\\\"8e872109-adee-4b6d-91bf-d9ced28af93f\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T09:58:23Z is after 2025-08-24T17:21:41Z" Oct 14 09:58:23 crc kubenswrapper[4698]: I1014 09:58:23.235108 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:58:23 crc kubenswrapper[4698]: I1014 09:58:23.235146 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:58:23 crc kubenswrapper[4698]: I1014 09:58:23.235157 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:58:23 crc kubenswrapper[4698]: I1014 09:58:23.235175 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:58:23 crc kubenswrapper[4698]: I1014 09:58:23.235187 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:58:23Z","lastTransitionTime":"2025-10-14T09:58:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:58:23 crc kubenswrapper[4698]: E1014 09:58:23.249413 4698 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-14T09:58:23Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-14T09:58:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-14T09:58:23Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-14T09:58:23Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-14T09:58:23Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-14T09:58:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-14T09:58:23Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-14T09:58:23Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ec98e803-8937-4fdb-8662-3488c6a305f2\\\",\\\"systemUUID\\\":\\\"8e872109-adee-4b6d-91bf-d9ced28af93f\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T09:58:23Z is after 2025-08-24T17:21:41Z" Oct 14 09:58:23 crc kubenswrapper[4698]: I1014 09:58:23.253224 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:58:23 crc kubenswrapper[4698]: I1014 09:58:23.253242 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:58:23 crc kubenswrapper[4698]: I1014 09:58:23.253249 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:58:23 crc kubenswrapper[4698]: I1014 09:58:23.253260 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:58:23 crc kubenswrapper[4698]: I1014 09:58:23.253267 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:58:23Z","lastTransitionTime":"2025-10-14T09:58:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:58:23 crc kubenswrapper[4698]: E1014 09:58:23.263357 4698 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-14T09:58:23Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-14T09:58:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-14T09:58:23Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-14T09:58:23Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-14T09:58:23Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-14T09:58:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-14T09:58:23Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-14T09:58:23Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ec98e803-8937-4fdb-8662-3488c6a305f2\\\",\\\"systemUUID\\\":\\\"8e872109-adee-4b6d-91bf-d9ced28af93f\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T09:58:23Z is after 2025-08-24T17:21:41Z" Oct 14 09:58:23 crc kubenswrapper[4698]: E1014 09:58:23.263455 4698 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Oct 14 09:58:23 crc kubenswrapper[4698]: I1014 09:58:23.264949 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:58:23 crc kubenswrapper[4698]: I1014 09:58:23.264968 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:58:23 crc kubenswrapper[4698]: I1014 09:58:23.264977 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:58:23 crc kubenswrapper[4698]: I1014 09:58:23.264987 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:58:23 crc kubenswrapper[4698]: I1014 09:58:23.264994 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:58:23Z","lastTransitionTime":"2025-10-14T09:58:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 09:58:23 crc kubenswrapper[4698]: I1014 09:58:23.368368 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:58:23 crc kubenswrapper[4698]: I1014 09:58:23.368430 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:58:23 crc kubenswrapper[4698]: I1014 09:58:23.368453 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:58:23 crc kubenswrapper[4698]: I1014 09:58:23.368485 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:58:23 crc kubenswrapper[4698]: I1014 09:58:23.368508 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:58:23Z","lastTransitionTime":"2025-10-14T09:58:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:58:23 crc kubenswrapper[4698]: I1014 09:58:23.470990 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:58:23 crc kubenswrapper[4698]: I1014 09:58:23.471069 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:58:23 crc kubenswrapper[4698]: I1014 09:58:23.471086 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:58:23 crc kubenswrapper[4698]: I1014 09:58:23.471111 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:58:23 crc kubenswrapper[4698]: I1014 09:58:23.471129 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:58:23Z","lastTransitionTime":"2025-10-14T09:58:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:58:23 crc kubenswrapper[4698]: I1014 09:58:23.574674 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:58:23 crc kubenswrapper[4698]: I1014 09:58:23.574742 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:58:23 crc kubenswrapper[4698]: I1014 09:58:23.574789 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:58:23 crc kubenswrapper[4698]: I1014 09:58:23.574822 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:58:23 crc kubenswrapper[4698]: I1014 09:58:23.574847 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:58:23Z","lastTransitionTime":"2025-10-14T09:58:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:58:23 crc kubenswrapper[4698]: I1014 09:58:23.677788 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:58:23 crc kubenswrapper[4698]: I1014 09:58:23.677839 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:58:23 crc kubenswrapper[4698]: I1014 09:58:23.677856 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:58:23 crc kubenswrapper[4698]: I1014 09:58:23.677878 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:58:23 crc kubenswrapper[4698]: I1014 09:58:23.677895 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:58:23Z","lastTransitionTime":"2025-10-14T09:58:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:58:23 crc kubenswrapper[4698]: I1014 09:58:23.781943 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:58:23 crc kubenswrapper[4698]: I1014 09:58:23.782084 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:58:23 crc kubenswrapper[4698]: I1014 09:58:23.782102 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:58:23 crc kubenswrapper[4698]: I1014 09:58:23.782125 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:58:23 crc kubenswrapper[4698]: I1014 09:58:23.782142 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:58:23Z","lastTransitionTime":"2025-10-14T09:58:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:58:23 crc kubenswrapper[4698]: I1014 09:58:23.886688 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:58:23 crc kubenswrapper[4698]: I1014 09:58:23.887156 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:58:23 crc kubenswrapper[4698]: I1014 09:58:23.887345 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:58:23 crc kubenswrapper[4698]: I1014 09:58:23.888131 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:58:23 crc kubenswrapper[4698]: I1014 09:58:23.888333 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:58:23Z","lastTransitionTime":"2025-10-14T09:58:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:58:23 crc kubenswrapper[4698]: I1014 09:58:23.993163 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:58:23 crc kubenswrapper[4698]: I1014 09:58:23.993222 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:58:23 crc kubenswrapper[4698]: I1014 09:58:23.993242 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:58:23 crc kubenswrapper[4698]: I1014 09:58:23.993272 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:58:23 crc kubenswrapper[4698]: I1014 09:58:23.993290 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:58:23Z","lastTransitionTime":"2025-10-14T09:58:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 09:58:24 crc kubenswrapper[4698]: I1014 09:58:24.016031 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jbpnj" Oct 14 09:58:24 crc kubenswrapper[4698]: E1014 09:58:24.016196 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-jbpnj" podUID="41f5ac86-35f8-416c-bbfe-1e182975ec5c" Oct 14 09:58:24 crc kubenswrapper[4698]: I1014 09:58:24.033051 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc"] Oct 14 09:58:24 crc kubenswrapper[4698]: I1014 09:58:24.096702 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:58:24 crc kubenswrapper[4698]: I1014 09:58:24.096795 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:58:24 crc kubenswrapper[4698]: I1014 09:58:24.096812 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:58:24 crc kubenswrapper[4698]: I1014 09:58:24.096836 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:58:24 crc kubenswrapper[4698]: I1014 09:58:24.096855 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:58:24Z","lastTransitionTime":"2025-10-14T09:58:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:58:24 crc kubenswrapper[4698]: I1014 09:58:24.199150 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:58:24 crc kubenswrapper[4698]: I1014 09:58:24.199229 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:58:24 crc kubenswrapper[4698]: I1014 09:58:24.199253 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:58:24 crc kubenswrapper[4698]: I1014 09:58:24.199389 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:58:24 crc kubenswrapper[4698]: I1014 09:58:24.199487 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:58:24Z","lastTransitionTime":"2025-10-14T09:58:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:58:24 crc kubenswrapper[4698]: I1014 09:58:24.302811 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:58:24 crc kubenswrapper[4698]: I1014 09:58:24.302875 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:58:24 crc kubenswrapper[4698]: I1014 09:58:24.302898 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:58:24 crc kubenswrapper[4698]: I1014 09:58:24.302927 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:58:24 crc kubenswrapper[4698]: I1014 09:58:24.302948 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:58:24Z","lastTransitionTime":"2025-10-14T09:58:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:58:24 crc kubenswrapper[4698]: I1014 09:58:24.405956 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:58:24 crc kubenswrapper[4698]: I1014 09:58:24.406028 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:58:24 crc kubenswrapper[4698]: I1014 09:58:24.406050 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:58:24 crc kubenswrapper[4698]: I1014 09:58:24.406080 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:58:24 crc kubenswrapper[4698]: I1014 09:58:24.406102 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:58:24Z","lastTransitionTime":"2025-10-14T09:58:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:58:24 crc kubenswrapper[4698]: I1014 09:58:24.508541 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:58:24 crc kubenswrapper[4698]: I1014 09:58:24.508607 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:58:24 crc kubenswrapper[4698]: I1014 09:58:24.508630 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:58:24 crc kubenswrapper[4698]: I1014 09:58:24.508663 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:58:24 crc kubenswrapper[4698]: I1014 09:58:24.508688 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:58:24Z","lastTransitionTime":"2025-10-14T09:58:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:58:24 crc kubenswrapper[4698]: I1014 09:58:24.612282 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:58:24 crc kubenswrapper[4698]: I1014 09:58:24.612363 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:58:24 crc kubenswrapper[4698]: I1014 09:58:24.612387 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:58:24 crc kubenswrapper[4698]: I1014 09:58:24.612418 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:58:24 crc kubenswrapper[4698]: I1014 09:58:24.612443 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:58:24Z","lastTransitionTime":"2025-10-14T09:58:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:58:24 crc kubenswrapper[4698]: I1014 09:58:24.715435 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:58:24 crc kubenswrapper[4698]: I1014 09:58:24.715527 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:58:24 crc kubenswrapper[4698]: I1014 09:58:24.715556 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:58:24 crc kubenswrapper[4698]: I1014 09:58:24.715587 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:58:24 crc kubenswrapper[4698]: I1014 09:58:24.715605 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:58:24Z","lastTransitionTime":"2025-10-14T09:58:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:58:24 crc kubenswrapper[4698]: I1014 09:58:24.818808 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:58:24 crc kubenswrapper[4698]: I1014 09:58:24.818882 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:58:24 crc kubenswrapper[4698]: I1014 09:58:24.818906 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:58:24 crc kubenswrapper[4698]: I1014 09:58:24.818934 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:58:24 crc kubenswrapper[4698]: I1014 09:58:24.818951 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:58:24Z","lastTransitionTime":"2025-10-14T09:58:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:58:24 crc kubenswrapper[4698]: I1014 09:58:24.926579 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:58:24 crc kubenswrapper[4698]: I1014 09:58:24.926638 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:58:24 crc kubenswrapper[4698]: I1014 09:58:24.926659 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:58:24 crc kubenswrapper[4698]: I1014 09:58:24.926686 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:58:24 crc kubenswrapper[4698]: I1014 09:58:24.926707 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:58:24Z","lastTransitionTime":"2025-10-14T09:58:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 09:58:25 crc kubenswrapper[4698]: I1014 09:58:25.016439 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 14 09:58:25 crc kubenswrapper[4698]: E1014 09:58:25.016620 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 14 09:58:25 crc kubenswrapper[4698]: I1014 09:58:25.016593 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 14 09:58:25 crc kubenswrapper[4698]: I1014 09:58:25.016660 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 14 09:58:25 crc kubenswrapper[4698]: E1014 09:58:25.016951 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 14 09:58:25 crc kubenswrapper[4698]: E1014 09:58:25.017036 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 14 09:58:25 crc kubenswrapper[4698]: I1014 09:58:25.030020 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:58:25 crc kubenswrapper[4698]: I1014 09:58:25.030076 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:58:25 crc kubenswrapper[4698]: I1014 09:58:25.030094 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:58:25 crc kubenswrapper[4698]: I1014 09:58:25.030117 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:58:25 crc kubenswrapper[4698]: I1014 09:58:25.030135 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:58:25Z","lastTransitionTime":"2025-10-14T09:58:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:58:25 crc kubenswrapper[4698]: I1014 09:58:25.133554 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:58:25 crc kubenswrapper[4698]: I1014 09:58:25.133620 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:58:25 crc kubenswrapper[4698]: I1014 09:58:25.133640 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:58:25 crc kubenswrapper[4698]: I1014 09:58:25.133664 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:58:25 crc kubenswrapper[4698]: I1014 09:58:25.133683 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:58:25Z","lastTransitionTime":"2025-10-14T09:58:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:58:25 crc kubenswrapper[4698]: I1014 09:58:25.236666 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:58:25 crc kubenswrapper[4698]: I1014 09:58:25.236718 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:58:25 crc kubenswrapper[4698]: I1014 09:58:25.236736 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:58:25 crc kubenswrapper[4698]: I1014 09:58:25.236760 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:58:25 crc kubenswrapper[4698]: I1014 09:58:25.236803 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:58:25Z","lastTransitionTime":"2025-10-14T09:58:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:58:25 crc kubenswrapper[4698]: I1014 09:58:25.340321 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:58:25 crc kubenswrapper[4698]: I1014 09:58:25.340397 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:58:25 crc kubenswrapper[4698]: I1014 09:58:25.340417 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:58:25 crc kubenswrapper[4698]: I1014 09:58:25.340443 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:58:25 crc kubenswrapper[4698]: I1014 09:58:25.340463 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:58:25Z","lastTransitionTime":"2025-10-14T09:58:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:58:25 crc kubenswrapper[4698]: I1014 09:58:25.443680 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:58:25 crc kubenswrapper[4698]: I1014 09:58:25.443737 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:58:25 crc kubenswrapper[4698]: I1014 09:58:25.443756 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:58:25 crc kubenswrapper[4698]: I1014 09:58:25.443812 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:58:25 crc kubenswrapper[4698]: I1014 09:58:25.443830 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:58:25Z","lastTransitionTime":"2025-10-14T09:58:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:58:25 crc kubenswrapper[4698]: I1014 09:58:25.547117 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:58:25 crc kubenswrapper[4698]: I1014 09:58:25.547166 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:58:25 crc kubenswrapper[4698]: I1014 09:58:25.547181 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:58:25 crc kubenswrapper[4698]: I1014 09:58:25.547210 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:58:25 crc kubenswrapper[4698]: I1014 09:58:25.547227 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:58:25Z","lastTransitionTime":"2025-10-14T09:58:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:58:25 crc kubenswrapper[4698]: I1014 09:58:25.651662 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:58:25 crc kubenswrapper[4698]: I1014 09:58:25.651729 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:58:25 crc kubenswrapper[4698]: I1014 09:58:25.651750 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:58:25 crc kubenswrapper[4698]: I1014 09:58:25.651817 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:58:25 crc kubenswrapper[4698]: I1014 09:58:25.651842 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:58:25Z","lastTransitionTime":"2025-10-14T09:58:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:58:25 crc kubenswrapper[4698]: I1014 09:58:25.755377 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:58:25 crc kubenswrapper[4698]: I1014 09:58:25.755448 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:58:25 crc kubenswrapper[4698]: I1014 09:58:25.755464 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:58:25 crc kubenswrapper[4698]: I1014 09:58:25.755491 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:58:25 crc kubenswrapper[4698]: I1014 09:58:25.755508 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:58:25Z","lastTransitionTime":"2025-10-14T09:58:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:58:25 crc kubenswrapper[4698]: I1014 09:58:25.858041 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:58:25 crc kubenswrapper[4698]: I1014 09:58:25.858090 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:58:25 crc kubenswrapper[4698]: I1014 09:58:25.858110 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:58:25 crc kubenswrapper[4698]: I1014 09:58:25.858134 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:58:25 crc kubenswrapper[4698]: I1014 09:58:25.858154 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:58:25Z","lastTransitionTime":"2025-10-14T09:58:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:58:25 crc kubenswrapper[4698]: I1014 09:58:25.960755 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:58:25 crc kubenswrapper[4698]: I1014 09:58:25.960851 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:58:25 crc kubenswrapper[4698]: I1014 09:58:25.960863 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:58:25 crc kubenswrapper[4698]: I1014 09:58:25.960878 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:58:25 crc kubenswrapper[4698]: I1014 09:58:25.960889 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:58:25Z","lastTransitionTime":"2025-10-14T09:58:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 09:58:26 crc kubenswrapper[4698]: I1014 09:58:26.016457 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jbpnj" Oct 14 09:58:26 crc kubenswrapper[4698]: E1014 09:58:26.016839 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-jbpnj" podUID="41f5ac86-35f8-416c-bbfe-1e182975ec5c" Oct 14 09:58:26 crc kubenswrapper[4698]: I1014 09:58:26.063251 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:58:26 crc kubenswrapper[4698]: I1014 09:58:26.063314 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:58:26 crc kubenswrapper[4698]: I1014 09:58:26.063333 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:58:26 crc kubenswrapper[4698]: I1014 09:58:26.063355 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:58:26 crc kubenswrapper[4698]: I1014 09:58:26.063371 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:58:26Z","lastTransitionTime":"2025-10-14T09:58:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:58:26 crc kubenswrapper[4698]: I1014 09:58:26.166956 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:58:26 crc kubenswrapper[4698]: I1014 09:58:26.167014 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:58:26 crc kubenswrapper[4698]: I1014 09:58:26.167031 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:58:26 crc kubenswrapper[4698]: I1014 09:58:26.167057 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:58:26 crc kubenswrapper[4698]: I1014 09:58:26.167080 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:58:26Z","lastTransitionTime":"2025-10-14T09:58:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:58:26 crc kubenswrapper[4698]: I1014 09:58:26.270032 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:58:26 crc kubenswrapper[4698]: I1014 09:58:26.270139 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:58:26 crc kubenswrapper[4698]: I1014 09:58:26.270191 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:58:26 crc kubenswrapper[4698]: I1014 09:58:26.270219 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:58:26 crc kubenswrapper[4698]: I1014 09:58:26.270237 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:58:26Z","lastTransitionTime":"2025-10-14T09:58:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:58:26 crc kubenswrapper[4698]: I1014 09:58:26.373156 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:58:26 crc kubenswrapper[4698]: I1014 09:58:26.373225 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:58:26 crc kubenswrapper[4698]: I1014 09:58:26.373243 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:58:26 crc kubenswrapper[4698]: I1014 09:58:26.373268 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:58:26 crc kubenswrapper[4698]: I1014 09:58:26.373285 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:58:26Z","lastTransitionTime":"2025-10-14T09:58:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:58:26 crc kubenswrapper[4698]: I1014 09:58:26.476574 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:58:26 crc kubenswrapper[4698]: I1014 09:58:26.476647 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:58:26 crc kubenswrapper[4698]: I1014 09:58:26.476664 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:58:26 crc kubenswrapper[4698]: I1014 09:58:26.476691 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:58:26 crc kubenswrapper[4698]: I1014 09:58:26.476715 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:58:26Z","lastTransitionTime":"2025-10-14T09:58:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:58:26 crc kubenswrapper[4698]: I1014 09:58:26.579048 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:58:26 crc kubenswrapper[4698]: I1014 09:58:26.579103 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:58:26 crc kubenswrapper[4698]: I1014 09:58:26.579117 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:58:26 crc kubenswrapper[4698]: I1014 09:58:26.579136 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:58:26 crc kubenswrapper[4698]: I1014 09:58:26.579151 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:58:26Z","lastTransitionTime":"2025-10-14T09:58:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:58:26 crc kubenswrapper[4698]: I1014 09:58:26.682234 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:58:26 crc kubenswrapper[4698]: I1014 09:58:26.682289 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:58:26 crc kubenswrapper[4698]: I1014 09:58:26.682300 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:58:26 crc kubenswrapper[4698]: I1014 09:58:26.682320 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:58:26 crc kubenswrapper[4698]: I1014 09:58:26.682331 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:58:26Z","lastTransitionTime":"2025-10-14T09:58:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:58:26 crc kubenswrapper[4698]: I1014 09:58:26.784158 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:58:26 crc kubenswrapper[4698]: I1014 09:58:26.784196 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:58:26 crc kubenswrapper[4698]: I1014 09:58:26.784204 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:58:26 crc kubenswrapper[4698]: I1014 09:58:26.784217 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:58:26 crc kubenswrapper[4698]: I1014 09:58:26.784226 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:58:26Z","lastTransitionTime":"2025-10-14T09:58:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:58:26 crc kubenswrapper[4698]: I1014 09:58:26.887676 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:58:26 crc kubenswrapper[4698]: I1014 09:58:26.887733 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:58:26 crc kubenswrapper[4698]: I1014 09:58:26.887752 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:58:26 crc kubenswrapper[4698]: I1014 09:58:26.887815 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:58:26 crc kubenswrapper[4698]: I1014 09:58:26.887840 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:58:26Z","lastTransitionTime":"2025-10-14T09:58:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:58:26 crc kubenswrapper[4698]: I1014 09:58:26.990624 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:58:26 crc kubenswrapper[4698]: I1014 09:58:26.990667 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:58:26 crc kubenswrapper[4698]: I1014 09:58:26.990678 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:58:26 crc kubenswrapper[4698]: I1014 09:58:26.990693 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:58:26 crc kubenswrapper[4698]: I1014 09:58:26.990704 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:58:26Z","lastTransitionTime":"2025-10-14T09:58:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 09:58:27 crc kubenswrapper[4698]: I1014 09:58:27.016301 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 14 09:58:27 crc kubenswrapper[4698]: I1014 09:58:27.016404 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 14 09:58:27 crc kubenswrapper[4698]: E1014 09:58:27.016577 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 14 09:58:27 crc kubenswrapper[4698]: E1014 09:58:27.016928 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 14 09:58:27 crc kubenswrapper[4698]: I1014 09:58:27.017125 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 14 09:58:27 crc kubenswrapper[4698]: E1014 09:58:27.017428 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 14 09:58:27 crc kubenswrapper[4698]: I1014 09:58:27.093265 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:58:27 crc kubenswrapper[4698]: I1014 09:58:27.093325 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:58:27 crc kubenswrapper[4698]: I1014 09:58:27.093341 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:58:27 crc kubenswrapper[4698]: I1014 09:58:27.093365 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:58:27 crc kubenswrapper[4698]: I1014 09:58:27.093382 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:58:27Z","lastTransitionTime":"2025-10-14T09:58:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:58:27 crc kubenswrapper[4698]: I1014 09:58:27.196749 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:58:27 crc kubenswrapper[4698]: I1014 09:58:27.196812 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:58:27 crc kubenswrapper[4698]: I1014 09:58:27.196824 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:58:27 crc kubenswrapper[4698]: I1014 09:58:27.196840 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:58:27 crc kubenswrapper[4698]: I1014 09:58:27.196853 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:58:27Z","lastTransitionTime":"2025-10-14T09:58:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:58:27 crc kubenswrapper[4698]: I1014 09:58:27.300566 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:58:27 crc kubenswrapper[4698]: I1014 09:58:27.300618 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:58:27 crc kubenswrapper[4698]: I1014 09:58:27.300630 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:58:27 crc kubenswrapper[4698]: I1014 09:58:27.300649 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:58:27 crc kubenswrapper[4698]: I1014 09:58:27.300661 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:58:27Z","lastTransitionTime":"2025-10-14T09:58:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:58:27 crc kubenswrapper[4698]: I1014 09:58:27.403397 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:58:27 crc kubenswrapper[4698]: I1014 09:58:27.403675 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:58:27 crc kubenswrapper[4698]: I1014 09:58:27.403785 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:58:27 crc kubenswrapper[4698]: I1014 09:58:27.403895 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:58:27 crc kubenswrapper[4698]: I1014 09:58:27.403986 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:58:27Z","lastTransitionTime":"2025-10-14T09:58:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:58:27 crc kubenswrapper[4698]: I1014 09:58:27.506516 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:58:27 crc kubenswrapper[4698]: I1014 09:58:27.506606 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:58:27 crc kubenswrapper[4698]: I1014 09:58:27.506625 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:58:27 crc kubenswrapper[4698]: I1014 09:58:27.506648 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:58:27 crc kubenswrapper[4698]: I1014 09:58:27.506694 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:58:27Z","lastTransitionTime":"2025-10-14T09:58:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:58:27 crc kubenswrapper[4698]: I1014 09:58:27.609377 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:58:27 crc kubenswrapper[4698]: I1014 09:58:27.609458 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:58:27 crc kubenswrapper[4698]: I1014 09:58:27.609482 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:58:27 crc kubenswrapper[4698]: I1014 09:58:27.609513 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:58:27 crc kubenswrapper[4698]: I1014 09:58:27.609538 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:58:27Z","lastTransitionTime":"2025-10-14T09:58:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:58:27 crc kubenswrapper[4698]: I1014 09:58:27.711513 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:58:27 crc kubenswrapper[4698]: I1014 09:58:27.711595 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:58:27 crc kubenswrapper[4698]: I1014 09:58:27.711617 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:58:27 crc kubenswrapper[4698]: I1014 09:58:27.711650 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:58:27 crc kubenswrapper[4698]: I1014 09:58:27.711672 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:58:27Z","lastTransitionTime":"2025-10-14T09:58:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:58:27 crc kubenswrapper[4698]: I1014 09:58:27.814311 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:58:27 crc kubenswrapper[4698]: I1014 09:58:27.814373 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:58:27 crc kubenswrapper[4698]: I1014 09:58:27.814393 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:58:27 crc kubenswrapper[4698]: I1014 09:58:27.814414 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:58:27 crc kubenswrapper[4698]: I1014 09:58:27.814428 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:58:27Z","lastTransitionTime":"2025-10-14T09:58:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:58:27 crc kubenswrapper[4698]: I1014 09:58:27.917325 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:58:27 crc kubenswrapper[4698]: I1014 09:58:27.917390 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:58:27 crc kubenswrapper[4698]: I1014 09:58:27.917403 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:58:27 crc kubenswrapper[4698]: I1014 09:58:27.917419 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:58:27 crc kubenswrapper[4698]: I1014 09:58:27.917428 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:58:27Z","lastTransitionTime":"2025-10-14T09:58:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 09:58:28 crc kubenswrapper[4698]: I1014 09:58:28.015941 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jbpnj" Oct 14 09:58:28 crc kubenswrapper[4698]: E1014 09:58:28.016117 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-jbpnj" podUID="41f5ac86-35f8-416c-bbfe-1e182975ec5c" Oct 14 09:58:28 crc kubenswrapper[4698]: I1014 09:58:28.020309 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:58:28 crc kubenswrapper[4698]: I1014 09:58:28.020345 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:58:28 crc kubenswrapper[4698]: I1014 09:58:28.020357 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:58:28 crc kubenswrapper[4698]: I1014 09:58:28.020371 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:58:28 crc kubenswrapper[4698]: I1014 09:58:28.020386 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:58:28Z","lastTransitionTime":"2025-10-14T09:58:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:58:28 crc kubenswrapper[4698]: I1014 09:58:28.123450 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:58:28 crc kubenswrapper[4698]: I1014 09:58:28.123518 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:58:28 crc kubenswrapper[4698]: I1014 09:58:28.123529 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:58:28 crc kubenswrapper[4698]: I1014 09:58:28.123557 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:58:28 crc kubenswrapper[4698]: I1014 09:58:28.123567 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:58:28Z","lastTransitionTime":"2025-10-14T09:58:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:58:28 crc kubenswrapper[4698]: I1014 09:58:28.226072 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:58:28 crc kubenswrapper[4698]: I1014 09:58:28.226122 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:58:28 crc kubenswrapper[4698]: I1014 09:58:28.226133 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:58:28 crc kubenswrapper[4698]: I1014 09:58:28.226152 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:58:28 crc kubenswrapper[4698]: I1014 09:58:28.226163 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:58:28Z","lastTransitionTime":"2025-10-14T09:58:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:58:28 crc kubenswrapper[4698]: I1014 09:58:28.329499 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:58:28 crc kubenswrapper[4698]: I1014 09:58:28.329559 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:58:28 crc kubenswrapper[4698]: I1014 09:58:28.329575 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:58:28 crc kubenswrapper[4698]: I1014 09:58:28.329599 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:58:28 crc kubenswrapper[4698]: I1014 09:58:28.329617 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:58:28Z","lastTransitionTime":"2025-10-14T09:58:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:58:28 crc kubenswrapper[4698]: I1014 09:58:28.433203 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:58:28 crc kubenswrapper[4698]: I1014 09:58:28.433272 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:58:28 crc kubenswrapper[4698]: I1014 09:58:28.433290 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:58:28 crc kubenswrapper[4698]: I1014 09:58:28.433315 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:58:28 crc kubenswrapper[4698]: I1014 09:58:28.433335 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:58:28Z","lastTransitionTime":"2025-10-14T09:58:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:58:28 crc kubenswrapper[4698]: I1014 09:58:28.535995 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:58:28 crc kubenswrapper[4698]: I1014 09:58:28.536117 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:58:28 crc kubenswrapper[4698]: I1014 09:58:28.536138 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:58:28 crc kubenswrapper[4698]: I1014 09:58:28.536163 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:58:28 crc kubenswrapper[4698]: I1014 09:58:28.536220 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:58:28Z","lastTransitionTime":"2025-10-14T09:58:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:58:28 crc kubenswrapper[4698]: I1014 09:58:28.638342 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:58:28 crc kubenswrapper[4698]: I1014 09:58:28.638403 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:58:28 crc kubenswrapper[4698]: I1014 09:58:28.638428 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:58:28 crc kubenswrapper[4698]: I1014 09:58:28.638459 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:58:28 crc kubenswrapper[4698]: I1014 09:58:28.638483 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:58:28Z","lastTransitionTime":"2025-10-14T09:58:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:58:28 crc kubenswrapper[4698]: I1014 09:58:28.740360 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:58:28 crc kubenswrapper[4698]: I1014 09:58:28.740427 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:58:28 crc kubenswrapper[4698]: I1014 09:58:28.740437 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:58:28 crc kubenswrapper[4698]: I1014 09:58:28.740473 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:58:28 crc kubenswrapper[4698]: I1014 09:58:28.740483 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:58:28Z","lastTransitionTime":"2025-10-14T09:58:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:58:28 crc kubenswrapper[4698]: I1014 09:58:28.844422 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:58:28 crc kubenswrapper[4698]: I1014 09:58:28.844467 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:58:28 crc kubenswrapper[4698]: I1014 09:58:28.844475 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:58:28 crc kubenswrapper[4698]: I1014 09:58:28.844490 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:58:28 crc kubenswrapper[4698]: I1014 09:58:28.844499 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:58:28Z","lastTransitionTime":"2025-10-14T09:58:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:58:28 crc kubenswrapper[4698]: I1014 09:58:28.948542 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:58:28 crc kubenswrapper[4698]: I1014 09:58:28.948596 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:58:28 crc kubenswrapper[4698]: I1014 09:58:28.948613 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:58:28 crc kubenswrapper[4698]: I1014 09:58:28.948638 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:58:28 crc kubenswrapper[4698]: I1014 09:58:28.948657 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:58:28Z","lastTransitionTime":"2025-10-14T09:58:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 09:58:29 crc kubenswrapper[4698]: I1014 09:58:29.017316 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 14 09:58:29 crc kubenswrapper[4698]: I1014 09:58:29.017365 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 14 09:58:29 crc kubenswrapper[4698]: I1014 09:58:29.017414 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 14 09:58:29 crc kubenswrapper[4698]: E1014 09:58:29.018581 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 14 09:58:29 crc kubenswrapper[4698]: E1014 09:58:29.018683 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 14 09:58:29 crc kubenswrapper[4698]: E1014 09:58:29.019020 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 14 09:58:29 crc kubenswrapper[4698]: I1014 09:58:29.031750 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://046b339570690752cfbe63e358065d3b58ef84c6708729abc1f590ff4c880521\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\
":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T09:58:29Z is after 2025-08-24T17:21:41Z" Oct 14 09:58:29 crc kubenswrapper[4698]: I1014 09:58:29.045746 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T09:58:29Z is after 2025-08-24T17:21:41Z" Oct 14 09:58:29 crc kubenswrapper[4698]: I1014 09:58:29.051190 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:58:29 crc kubenswrapper[4698]: I1014 09:58:29.051230 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:58:29 crc kubenswrapper[4698]: I1014 09:58:29.051241 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:58:29 crc kubenswrapper[4698]: I1014 09:58:29.051256 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:58:29 crc kubenswrapper[4698]: I1014 09:58:29.051266 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:58:29Z","lastTransitionTime":"2025-10-14T09:58:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 09:58:29 crc kubenswrapper[4698]: I1014 09:58:29.060480 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6bbd953b34291d62c5b40ab6339c4709be89d1983748419cabf1adff245af77f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"conta
inerID\\\":\\\"cri-o://6b5dcb74bf1caafb12296355d176dd1cda1c06ecea8293c0cc36d517213fe6bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T09:58:29Z is after 2025-08-24T17:21:41Z" Oct 14 09:58:29 crc kubenswrapper[4698]: I1014 09:58:29.074649 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T09:58:29Z is after 2025-08-24T17:21:41Z" Oct 14 09:58:29 crc kubenswrapper[4698]: I1014 09:58:29.093526 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-b7cbk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fbf10bbc-318d-4f46-83a0-fdbad9888201\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:58:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:58:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://52c9a8ccad3eed5af66c0178544ca46fabcfaab76d88471b2a62606f2f860522\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4a0b9fe188f41927a342dc4928eaa5f27d151d9901af15f33ce252bdbfbb3be6\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-14T09:58:06Z\\\",\\\"message\\\":\\\"2025-10-14T09:57:20+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_f1354d92-2895-4535-acf9-93f36eacbc00\\\\n2025-10-14T09:57:20+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_f1354d92-2895-4535-acf9-93f36eacbc00 to /host/opt/cni/bin/\\\\n2025-10-14T09:57:21Z [verbose] multus-daemon started\\\\n2025-10-14T09:57:21Z [verbose] 
Readiness Indicator file check\\\\n2025-10-14T09:58:06Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-14T09:57:19Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:58:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tkghj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T09:57:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-b7cbk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T09:58:29Z is after 2025-08-24T17:21:41Z" Oct 14 09:58:29 crc kubenswrapper[4698]: I1014 09:58:29.110510 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-lp4sk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c359a8fc-1e2f-49af-8da2-719d52bd969a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://082e0edeedb2cabe07e14812f2406a98f39fdf9923f562c33ccc9a761d0e2472\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lzl92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8068aecfbfc156636e404dba3daa36e0bebb92ec
f3c63158eaebf9a03cdc96b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lzl92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T09:57:18Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-lp4sk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T09:58:29Z is after 2025-08-24T17:21:41Z" Oct 14 09:58:29 crc kubenswrapper[4698]: I1014 09:58:29.123319 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-jbpnj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"41f5ac86-35f8-416c-bbfe-1e182975ec5c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:32Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:32Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7f52v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7f52v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T09:57:32Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-jbpnj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T09:58:29Z is after 2025-08-24T17:21:41Z" Oct 14 09:58:29 crc 
kubenswrapper[4698]: I1014 09:58:29.137417 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"232cefb9-89e8-4928-ba55-bd28d5dee277\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:56:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7fbbcdbffb44a782f00ba7a7f2c25834a8c584a58be76e6452d90107abd977c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\
":\\\"cri-o://2827359bbda868834d7af92972382d98a93c2acfc73571418bf244abcd47c2c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2827359bbda868834d7af92972382d98a93c2acfc73571418bf244abcd47c2c8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T09:56:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T09:56:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T09:56:59Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T09:58:29Z is after 2025-08-24T17:21:41Z" Oct 14 09:58:29 crc kubenswrapper[4698]: I1014 09:58:29.152830 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"32240a01-1692-4f43-aebd-2b04d9b2435e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:56:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0fcdc7699fee72d9dbf3ad7df185ac8c884125781080739e4245128af2e41040\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d32efa8c63456d23b3824db4a4debb51fa4d13d1fd5eb6a488b9392d96073435\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a78e7d6532fda47f50da010b729a9108961a3d5c79191517a86f00714ccb1163\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a013bd7c3d0aeb5289ee602dd8fcbff2b98aae30b99ccc8d87605732b7f469eb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a8cea8367d4864478508221648a52582ee794ca4eb7b48f112ca0d78d97c2f7e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-14T09:57:18Z\\\"
,\\\"message\\\":\\\"file observer\\\\nW1014 09:57:17.768150 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1014 09:57:17.768243 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1014 09:57:17.769073 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-133410875/tls.crt::/tmp/serving-cert-133410875/tls.key\\\\\\\"\\\\nI1014 09:57:18.101848 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1014 09:57:18.107433 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1014 09:57:18.107456 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1014 09:57:18.107479 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1014 09:57:18.107484 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1014 09:57:18.112858 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1014 09:57:18.112875 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1014 09:57:18.112883 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1014 09:57:18.112889 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1014 09:57:18.112893 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1014 09:57:18.112896 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1014 09:57:18.112900 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1014 09:57:18.112903 1 secure_serving.go:69] Use of 
insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1014 09:57:18.114133 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-14T09:57:12Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2d12c9c114016d1302fa97515b0a59a82a4524b77e5d22cbec155beb6a52bce\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:00Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf7a250bbbc63f0ce78ae1c27f82643a02dc264c3b83555a4cc72a788eda52d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cf7a250bbbc63f0ce78ae1c27f82643a02d
c264c3b83555a4cc72a788eda52d5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T09:56:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T09:56:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T09:56:59Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T09:58:29Z is after 2025-08-24T17:21:41Z" Oct 14 09:58:29 crc kubenswrapper[4698]: I1014 09:58:29.155350 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:58:29 crc kubenswrapper[4698]: I1014 09:58:29.155394 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:58:29 crc kubenswrapper[4698]: I1014 09:58:29.155405 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:58:29 crc kubenswrapper[4698]: I1014 09:58:29.155423 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:58:29 crc kubenswrapper[4698]: I1014 09:58:29.155435 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:58:29Z","lastTransitionTime":"2025-10-14T09:58:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:58:29 crc kubenswrapper[4698]: I1014 09:58:29.169815 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"32a7378b-ad21-49a8-9382-e7e5fbffb2f5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:56:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:56:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45beb2bbd344fbd265eca554410f31c493b55815bcac8dbb8c30c592b40dcfe2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9a23f49494
46d47321258b172e07f6c592cd9921650920f8af93a5d749a61dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:56:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1fc70d614848f0b4bc342659099f9c617db052ebb995716f9cdb1113f4f78901\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://71c3764056ab55aa9f75e0bf62c0c7eed5440e9d580087ca9ce266611dd8c292\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T09:56:59Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T09:58:29Z is after 2025-08-24T17:21:41Z" Oct 14 09:58:29 crc kubenswrapper[4698]: I1014 09:58:29.186654 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T09:58:29Z is after 2025-08-24T17:21:41Z" Oct 14 09:58:29 crc kubenswrapper[4698]: I1014 09:58:29.203354 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aaee81debae3e251e075e61aca075c075d32fd976f1977adecba0c834e89cabb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-10-14T09:58:29Z is after 2025-08-24T17:21:41Z" Oct 14 09:58:29 crc kubenswrapper[4698]: I1014 09:58:29.216988 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8rj7q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b3d7ebe7-24ac-4bb6-be80-db147dc1c604\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e44a1413e71cab92d1b5ee4c81cc65edc9925fa1e888d5d5670e55e1b60ee81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-95sjn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T09:57:18Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8rj7q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T09:58:29Z is after 2025-08-24T17:21:41Z" Oct 14 09:58:29 crc kubenswrapper[4698]: I1014 09:58:29.233599 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d453303d-af1d-4978-b4c5-ff12afadeb28\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:56:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d39574739cd1bd3498e575ad5961a3f656110963427ec6a5862160167dad9f1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/o
cp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a8b1dd92b7f979c51da79a23ddffeb82d73f170ded78b0995eca65ea14abbba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://57d68496a42ff5fb8359e4fd222a0e18e79e6ca885844d544d172e324325ee02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\
\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2fceda73199ee36e7cfe0b0bc827b42f7cc77db6b2c987828ffb57cacd1b8a6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2fceda73199ee36e7cfe0b0bc827b42f7cc77db6b2c987828ffb57cacd1b8a6b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T09:56:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T09:56:59Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T09:56:59Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T09:58:29Z is after 2025-08-24T17:21:41Z" Oct 14 09:58:29 crc kubenswrapper[4698]: I1014 09:58:29.244174 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-pfxrp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e00aa977-8736-4b4d-8d58-c3d13879c49a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4adf04967efc193a93227043cfb6bb56f0ec1a7af90e351b7a618e582ddd8c41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c9lxj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T09:57:18Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-pfxrp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T09:58:29Z is after 2025-08-24T17:21:41Z" Oct 14 09:58:29 crc kubenswrapper[4698]: I1014 09:58:29.258429 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:58:29 crc kubenswrapper[4698]: I1014 09:58:29.258473 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:58:29 crc kubenswrapper[4698]: I1014 09:58:29.258484 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:58:29 crc kubenswrapper[4698]: I1014 09:58:29.258499 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:58:29 crc kubenswrapper[4698]: I1014 09:58:29.258512 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:58:29Z","lastTransitionTime":"2025-10-14T09:58:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:58:29 crc kubenswrapper[4698]: I1014 09:58:29.266752 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5twvn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e9f983f-10a0-43b7-8590-346577a561ef\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://03bc09f64706818fd5d6b0d3eeea206f665d7cdb12ce6ae31c4706a3936dd0f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8x7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9090d92cdf10a884e39938f3d7908d63cb91ee3ea7832cb788aecaa47c70d753\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9090d92cdf10a884e39938f3d7908d63cb91ee3ea7832cb788aecaa47c70d753\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T09:57:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T09:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8x7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://20e7809019d6fa43f65e27abe934ed5dfc51a3afb0a2d9391fa4edd67fc03ac9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"
containerID\\\":\\\"cri-o://20e7809019d6fa43f65e27abe934ed5dfc51a3afb0a2d9391fa4edd67fc03ac9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T09:57:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8x7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c61e207f4234cf9a59bd787fc476ff4bc2d1abd6f299b2855970981e5783398\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0c61e207f4234cf9a59bd787fc476ff4bc2d1abd6f299b2855970981e5783398\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T09:57:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T09:57:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveRead
Only\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8x7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1abeca9df52e5628432a4cb0be0f212a643a644abfe303559ec7d90f3fd3955a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1abeca9df52e5628432a4cb0be0f212a643a644abfe303559ec7d90f3fd3955a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T09:57:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T09:57:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8x7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0654ea71914d478220611495dbefd1e47b1309aa18c6a05b21a4a45f0d8f1ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\"
:0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c0654ea71914d478220611495dbefd1e47b1309aa18c6a05b21a4a45f0d8f1ca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T09:57:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T09:57:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8x7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5307b806fcee628d7552a399a950a096745a8eb67c3f72c5a10854f042b992a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5307b806fcee628d7552a399a950a096745a8eb67c3f72c5a10854f042b992a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T09:57:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T09:57:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8x7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\"
,\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T09:57:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5twvn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T09:58:29Z is after 2025-08-24T17:21:41Z" Oct 14 09:58:29 crc kubenswrapper[4698]: I1014 09:58:29.288111 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hspfz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d02f5359-81fc-4261-b995-e58c78bcec0e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:18Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://baefa979aee6f8fbe41df9542964f3c7b2a4331ec5111d2f622ec7ca41a0d96e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjlwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://876998f4dcb79a6bc0faa35871730f5d2f9a07f94baf6da29ed9a9e794705ee5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjlwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e5759411e3d5ea6fa7515b4cd1255735c870b8c035fe586a38d2f436b752118\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjlwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6e9733c3f4d25662c9da6201d0feee797de24acebb7d6e7c67d72709aae95db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:21Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjlwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://90b4b75613b321be60e549ab6ab2eaaab57fd6b49aa01f5254a0e83ac2e0167e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjlwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://11274a26e2292cdf2b793e6290657ed3cc737d1454b39f33b8f0272f67f70f3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjlwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c7231fbb7fd473f2878319c720ec92ea8eade012d5912e8ad80355542ffa3d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6c7231fbb7fd473f2878319c720ec92ea8eade012d5912e8ad80355542ffa3d2\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-14T09:58:16Z\\\",\\\"message\\\":\\\"handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to 
call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T09:58:15Z is after 2025-08-24T17:21:41Z]\\\\nI1014 09:58:16.059589 6703 services_controller.go:473] Services do not match for network=default, existing lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-machine-config-operator/machine-config-daemon_TCP_cluster\\\\\\\", UUID:\\\\\\\"a36f6289-d09f-43f8-8a8a-c9d2cc11eb0d\\\\\\\", Protocol:\\\\\\\"tcp\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-machine-config-operator/machine-config-daemon\\\\\\\"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}, built lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-machine-config-operator/machine-config-daemon_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\",\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-14T09:58:15Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-hspfz_openshift-ovn-kubernetes(d02f5359-81fc-4261-b995-e58c78bcec0e)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjlwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://626a1ca2a8aa5d6c1b4078a0bc7b0aa540656abcedd8be206fbb93cdc96c1d95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjlwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f7d7f89348e04b617ba4ec3cf76f4fefdd36492c656aed8141f3775a7ca8e3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0f7d7f89348e04b617
ba4ec3cf76f4fefdd36492c656aed8141f3775a7ca8e3b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T09:57:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T09:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjlwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T09:57:18Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hspfz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T09:58:29Z is after 2025-08-24T17:21:41Z" Oct 14 09:58:29 crc kubenswrapper[4698]: I1014 09:58:29.300381 4698 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ndfs7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dba065f3-2084-442a-9f77-a1dfb007aa0a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T09:57:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://41d90487b005df3b6a9b8a3cbb8860b13fcc2ff7ca56979998ab7258f10b681c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wtc5t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7bfa794859b5471a4dfe11ac227eb36674c31
36b170fafdecaf916eb54d63a4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T09:57:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wtc5t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T09:57:31Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-ndfs7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T09:58:29Z is after 2025-08-24T17:21:41Z" Oct 14 09:58:29 crc kubenswrapper[4698]: I1014 09:58:29.365890 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:58:29 crc kubenswrapper[4698]: I1014 09:58:29.365931 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:58:29 crc kubenswrapper[4698]: I1014 09:58:29.365943 4698 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:58:29 crc kubenswrapper[4698]: I1014 09:58:29.365959 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:58:29 crc kubenswrapper[4698]: I1014 09:58:29.365972 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:58:29Z","lastTransitionTime":"2025-10-14T09:58:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 09:58:29 crc kubenswrapper[4698]: I1014 09:58:29.468078 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:58:29 crc kubenswrapper[4698]: I1014 09:58:29.468113 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:58:29 crc kubenswrapper[4698]: I1014 09:58:29.468121 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:58:29 crc kubenswrapper[4698]: I1014 09:58:29.468134 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:58:29 crc kubenswrapper[4698]: I1014 09:58:29.468144 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:58:29Z","lastTransitionTime":"2025-10-14T09:58:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:58:29 crc kubenswrapper[4698]: I1014 09:58:29.571694 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:58:29 crc kubenswrapper[4698]: I1014 09:58:29.571729 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:58:29 crc kubenswrapper[4698]: I1014 09:58:29.571737 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:58:29 crc kubenswrapper[4698]: I1014 09:58:29.571751 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:58:29 crc kubenswrapper[4698]: I1014 09:58:29.571760 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:58:29Z","lastTransitionTime":"2025-10-14T09:58:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:58:29 crc kubenswrapper[4698]: I1014 09:58:29.675018 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:58:29 crc kubenswrapper[4698]: I1014 09:58:29.675066 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:58:29 crc kubenswrapper[4698]: I1014 09:58:29.675076 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:58:29 crc kubenswrapper[4698]: I1014 09:58:29.675093 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:58:29 crc kubenswrapper[4698]: I1014 09:58:29.675105 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:58:29Z","lastTransitionTime":"2025-10-14T09:58:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:58:29 crc kubenswrapper[4698]: I1014 09:58:29.777798 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:58:29 crc kubenswrapper[4698]: I1014 09:58:29.777864 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:58:29 crc kubenswrapper[4698]: I1014 09:58:29.777885 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:58:29 crc kubenswrapper[4698]: I1014 09:58:29.777917 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:58:29 crc kubenswrapper[4698]: I1014 09:58:29.777938 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:58:29Z","lastTransitionTime":"2025-10-14T09:58:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:58:29 crc kubenswrapper[4698]: I1014 09:58:29.881295 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:58:29 crc kubenswrapper[4698]: I1014 09:58:29.881344 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:58:29 crc kubenswrapper[4698]: I1014 09:58:29.881356 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:58:29 crc kubenswrapper[4698]: I1014 09:58:29.881373 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:58:29 crc kubenswrapper[4698]: I1014 09:58:29.881386 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:58:29Z","lastTransitionTime":"2025-10-14T09:58:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:58:29 crc kubenswrapper[4698]: I1014 09:58:29.983693 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:58:29 crc kubenswrapper[4698]: I1014 09:58:29.983731 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:58:29 crc kubenswrapper[4698]: I1014 09:58:29.983741 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:58:29 crc kubenswrapper[4698]: I1014 09:58:29.983755 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:58:29 crc kubenswrapper[4698]: I1014 09:58:29.983786 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:58:29Z","lastTransitionTime":"2025-10-14T09:58:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 09:58:30 crc kubenswrapper[4698]: I1014 09:58:30.016623 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jbpnj" Oct 14 09:58:30 crc kubenswrapper[4698]: E1014 09:58:30.016885 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-jbpnj" podUID="41f5ac86-35f8-416c-bbfe-1e182975ec5c" Oct 14 09:58:30 crc kubenswrapper[4698]: I1014 09:58:30.086245 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:58:30 crc kubenswrapper[4698]: I1014 09:58:30.086320 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:58:30 crc kubenswrapper[4698]: I1014 09:58:30.086342 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:58:30 crc kubenswrapper[4698]: I1014 09:58:30.086372 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:58:30 crc kubenswrapper[4698]: I1014 09:58:30.086389 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:58:30Z","lastTransitionTime":"2025-10-14T09:58:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:58:30 crc kubenswrapper[4698]: I1014 09:58:30.189889 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:58:30 crc kubenswrapper[4698]: I1014 09:58:30.189934 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:58:30 crc kubenswrapper[4698]: I1014 09:58:30.189944 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:58:30 crc kubenswrapper[4698]: I1014 09:58:30.189964 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:58:30 crc kubenswrapper[4698]: I1014 09:58:30.189977 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:58:30Z","lastTransitionTime":"2025-10-14T09:58:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:58:30 crc kubenswrapper[4698]: I1014 09:58:30.292661 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:58:30 crc kubenswrapper[4698]: I1014 09:58:30.292700 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:58:30 crc kubenswrapper[4698]: I1014 09:58:30.292708 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:58:30 crc kubenswrapper[4698]: I1014 09:58:30.292724 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:58:30 crc kubenswrapper[4698]: I1014 09:58:30.292735 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:58:30Z","lastTransitionTime":"2025-10-14T09:58:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:58:30 crc kubenswrapper[4698]: I1014 09:58:30.395296 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:58:30 crc kubenswrapper[4698]: I1014 09:58:30.395337 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:58:30 crc kubenswrapper[4698]: I1014 09:58:30.395345 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:58:30 crc kubenswrapper[4698]: I1014 09:58:30.395362 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:58:30 crc kubenswrapper[4698]: I1014 09:58:30.395372 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:58:30Z","lastTransitionTime":"2025-10-14T09:58:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:58:30 crc kubenswrapper[4698]: I1014 09:58:30.498661 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:58:30 crc kubenswrapper[4698]: I1014 09:58:30.498730 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:58:30 crc kubenswrapper[4698]: I1014 09:58:30.498747 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:58:30 crc kubenswrapper[4698]: I1014 09:58:30.498807 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:58:30 crc kubenswrapper[4698]: I1014 09:58:30.498832 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:58:30Z","lastTransitionTime":"2025-10-14T09:58:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:58:30 crc kubenswrapper[4698]: I1014 09:58:30.601422 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:58:30 crc kubenswrapper[4698]: I1014 09:58:30.601473 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:58:30 crc kubenswrapper[4698]: I1014 09:58:30.601486 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:58:30 crc kubenswrapper[4698]: I1014 09:58:30.601509 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:58:30 crc kubenswrapper[4698]: I1014 09:58:30.601526 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:58:30Z","lastTransitionTime":"2025-10-14T09:58:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:58:30 crc kubenswrapper[4698]: I1014 09:58:30.704144 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:58:30 crc kubenswrapper[4698]: I1014 09:58:30.704214 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:58:30 crc kubenswrapper[4698]: I1014 09:58:30.704232 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:58:30 crc kubenswrapper[4698]: I1014 09:58:30.704253 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:58:30 crc kubenswrapper[4698]: I1014 09:58:30.704265 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:58:30Z","lastTransitionTime":"2025-10-14T09:58:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:58:30 crc kubenswrapper[4698]: I1014 09:58:30.807470 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:58:30 crc kubenswrapper[4698]: I1014 09:58:30.807530 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:58:30 crc kubenswrapper[4698]: I1014 09:58:30.807549 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:58:30 crc kubenswrapper[4698]: I1014 09:58:30.807571 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:58:30 crc kubenswrapper[4698]: I1014 09:58:30.807588 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:58:30Z","lastTransitionTime":"2025-10-14T09:58:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:58:30 crc kubenswrapper[4698]: I1014 09:58:30.910309 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:58:30 crc kubenswrapper[4698]: I1014 09:58:30.910370 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:58:30 crc kubenswrapper[4698]: I1014 09:58:30.910387 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:58:30 crc kubenswrapper[4698]: I1014 09:58:30.910414 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:58:30 crc kubenswrapper[4698]: I1014 09:58:30.910432 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:58:30Z","lastTransitionTime":"2025-10-14T09:58:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:58:31 crc kubenswrapper[4698]: I1014 09:58:31.013628 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:58:31 crc kubenswrapper[4698]: I1014 09:58:31.013683 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:58:31 crc kubenswrapper[4698]: I1014 09:58:31.013696 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:58:31 crc kubenswrapper[4698]: I1014 09:58:31.013717 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:58:31 crc kubenswrapper[4698]: I1014 09:58:31.013730 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:58:31Z","lastTransitionTime":"2025-10-14T09:58:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 09:58:31 crc kubenswrapper[4698]: I1014 09:58:31.016313 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 14 09:58:31 crc kubenswrapper[4698]: I1014 09:58:31.016455 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 14 09:58:31 crc kubenswrapper[4698]: I1014 09:58:31.016848 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 14 09:58:31 crc kubenswrapper[4698]: E1014 09:58:31.016964 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 14 09:58:31 crc kubenswrapper[4698]: E1014 09:58:31.017081 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 14 09:58:31 crc kubenswrapper[4698]: I1014 09:58:31.017193 4698 scope.go:117] "RemoveContainer" containerID="6c7231fbb7fd473f2878319c720ec92ea8eade012d5912e8ad80355542ffa3d2" Oct 14 09:58:31 crc kubenswrapper[4698]: E1014 09:58:31.017189 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 14 09:58:31 crc kubenswrapper[4698]: E1014 09:58:31.018005 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-hspfz_openshift-ovn-kubernetes(d02f5359-81fc-4261-b995-e58c78bcec0e)\"" pod="openshift-ovn-kubernetes/ovnkube-node-hspfz" podUID="d02f5359-81fc-4261-b995-e58c78bcec0e" Oct 14 09:58:31 crc kubenswrapper[4698]: I1014 09:58:31.116983 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:58:31 crc kubenswrapper[4698]: I1014 09:58:31.117058 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:58:31 crc kubenswrapper[4698]: I1014 09:58:31.117081 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:58:31 crc kubenswrapper[4698]: I1014 09:58:31.117110 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:58:31 crc kubenswrapper[4698]: I1014 09:58:31.117131 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:58:31Z","lastTransitionTime":"2025-10-14T09:58:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:58:31 crc kubenswrapper[4698]: I1014 09:58:31.219443 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:58:31 crc kubenswrapper[4698]: I1014 09:58:31.219523 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:58:31 crc kubenswrapper[4698]: I1014 09:58:31.219545 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:58:31 crc kubenswrapper[4698]: I1014 09:58:31.219578 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:58:31 crc kubenswrapper[4698]: I1014 09:58:31.219600 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:58:31Z","lastTransitionTime":"2025-10-14T09:58:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:58:31 crc kubenswrapper[4698]: I1014 09:58:31.322808 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:58:31 crc kubenswrapper[4698]: I1014 09:58:31.322878 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:58:31 crc kubenswrapper[4698]: I1014 09:58:31.322894 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:58:31 crc kubenswrapper[4698]: I1014 09:58:31.322920 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:58:31 crc kubenswrapper[4698]: I1014 09:58:31.322937 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:58:31Z","lastTransitionTime":"2025-10-14T09:58:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:58:31 crc kubenswrapper[4698]: I1014 09:58:31.426029 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:58:31 crc kubenswrapper[4698]: I1014 09:58:31.426108 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:58:31 crc kubenswrapper[4698]: I1014 09:58:31.426130 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:58:31 crc kubenswrapper[4698]: I1014 09:58:31.426171 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:58:31 crc kubenswrapper[4698]: I1014 09:58:31.426193 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:58:31Z","lastTransitionTime":"2025-10-14T09:58:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:58:31 crc kubenswrapper[4698]: I1014 09:58:31.528757 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:58:31 crc kubenswrapper[4698]: I1014 09:58:31.528903 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:58:31 crc kubenswrapper[4698]: I1014 09:58:31.528933 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:58:31 crc kubenswrapper[4698]: I1014 09:58:31.528964 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:58:31 crc kubenswrapper[4698]: I1014 09:58:31.528985 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:58:31Z","lastTransitionTime":"2025-10-14T09:58:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:58:31 crc kubenswrapper[4698]: I1014 09:58:31.632148 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:58:31 crc kubenswrapper[4698]: I1014 09:58:31.632222 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:58:31 crc kubenswrapper[4698]: I1014 09:58:31.632241 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:58:31 crc kubenswrapper[4698]: I1014 09:58:31.632264 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:58:31 crc kubenswrapper[4698]: I1014 09:58:31.632283 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:58:31Z","lastTransitionTime":"2025-10-14T09:58:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:58:31 crc kubenswrapper[4698]: I1014 09:58:31.735407 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:58:31 crc kubenswrapper[4698]: I1014 09:58:31.735522 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:58:31 crc kubenswrapper[4698]: I1014 09:58:31.735547 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:58:31 crc kubenswrapper[4698]: I1014 09:58:31.735583 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:58:31 crc kubenswrapper[4698]: I1014 09:58:31.735609 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:58:31Z","lastTransitionTime":"2025-10-14T09:58:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:58:31 crc kubenswrapper[4698]: I1014 09:58:31.838602 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:58:31 crc kubenswrapper[4698]: I1014 09:58:31.838637 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:58:31 crc kubenswrapper[4698]: I1014 09:58:31.838644 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:58:31 crc kubenswrapper[4698]: I1014 09:58:31.838657 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:58:31 crc kubenswrapper[4698]: I1014 09:58:31.838667 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:58:31Z","lastTransitionTime":"2025-10-14T09:58:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:58:31 crc kubenswrapper[4698]: I1014 09:58:31.941594 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:58:31 crc kubenswrapper[4698]: I1014 09:58:31.941672 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:58:31 crc kubenswrapper[4698]: I1014 09:58:31.941690 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:58:31 crc kubenswrapper[4698]: I1014 09:58:31.941716 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:58:31 crc kubenswrapper[4698]: I1014 09:58:31.941733 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:58:31Z","lastTransitionTime":"2025-10-14T09:58:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 09:58:32 crc kubenswrapper[4698]: I1014 09:58:32.015935 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jbpnj" Oct 14 09:58:32 crc kubenswrapper[4698]: E1014 09:58:32.016108 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-jbpnj" podUID="41f5ac86-35f8-416c-bbfe-1e182975ec5c" Oct 14 09:58:32 crc kubenswrapper[4698]: I1014 09:58:32.044488 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:58:32 crc kubenswrapper[4698]: I1014 09:58:32.044539 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:58:32 crc kubenswrapper[4698]: I1014 09:58:32.044555 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:58:32 crc kubenswrapper[4698]: I1014 09:58:32.044574 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:58:32 crc kubenswrapper[4698]: I1014 09:58:32.044585 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:58:32Z","lastTransitionTime":"2025-10-14T09:58:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:58:32 crc kubenswrapper[4698]: I1014 09:58:32.146597 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:58:32 crc kubenswrapper[4698]: I1014 09:58:32.146652 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:58:32 crc kubenswrapper[4698]: I1014 09:58:32.146663 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:58:32 crc kubenswrapper[4698]: I1014 09:58:32.146678 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:58:32 crc kubenswrapper[4698]: I1014 09:58:32.146688 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:58:32Z","lastTransitionTime":"2025-10-14T09:58:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:58:32 crc kubenswrapper[4698]: I1014 09:58:32.249253 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:58:32 crc kubenswrapper[4698]: I1014 09:58:32.249321 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:58:32 crc kubenswrapper[4698]: I1014 09:58:32.249337 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:58:32 crc kubenswrapper[4698]: I1014 09:58:32.249358 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:58:32 crc kubenswrapper[4698]: I1014 09:58:32.249375 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:58:32Z","lastTransitionTime":"2025-10-14T09:58:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:58:32 crc kubenswrapper[4698]: I1014 09:58:32.352943 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:58:32 crc kubenswrapper[4698]: I1014 09:58:32.353018 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:58:32 crc kubenswrapper[4698]: I1014 09:58:32.353038 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:58:32 crc kubenswrapper[4698]: I1014 09:58:32.353061 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:58:32 crc kubenswrapper[4698]: I1014 09:58:32.353078 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:58:32Z","lastTransitionTime":"2025-10-14T09:58:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:58:32 crc kubenswrapper[4698]: I1014 09:58:32.457657 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:58:32 crc kubenswrapper[4698]: I1014 09:58:32.457741 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:58:32 crc kubenswrapper[4698]: I1014 09:58:32.457805 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:58:32 crc kubenswrapper[4698]: I1014 09:58:32.457837 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:58:32 crc kubenswrapper[4698]: I1014 09:58:32.457859 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:58:32Z","lastTransitionTime":"2025-10-14T09:58:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:58:32 crc kubenswrapper[4698]: I1014 09:58:32.560794 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:58:32 crc kubenswrapper[4698]: I1014 09:58:32.560846 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:58:32 crc kubenswrapper[4698]: I1014 09:58:32.560859 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:58:32 crc kubenswrapper[4698]: I1014 09:58:32.560876 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:58:32 crc kubenswrapper[4698]: I1014 09:58:32.560889 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:58:32Z","lastTransitionTime":"2025-10-14T09:58:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:58:32 crc kubenswrapper[4698]: I1014 09:58:32.663783 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:58:32 crc kubenswrapper[4698]: I1014 09:58:32.663857 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:58:32 crc kubenswrapper[4698]: I1014 09:58:32.663875 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:58:32 crc kubenswrapper[4698]: I1014 09:58:32.663903 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:58:32 crc kubenswrapper[4698]: I1014 09:58:32.663919 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:58:32Z","lastTransitionTime":"2025-10-14T09:58:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:58:32 crc kubenswrapper[4698]: I1014 09:58:32.767580 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:58:32 crc kubenswrapper[4698]: I1014 09:58:32.767642 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:58:32 crc kubenswrapper[4698]: I1014 09:58:32.767654 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:58:32 crc kubenswrapper[4698]: I1014 09:58:32.767677 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:58:32 crc kubenswrapper[4698]: I1014 09:58:32.767694 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:58:32Z","lastTransitionTime":"2025-10-14T09:58:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:58:32 crc kubenswrapper[4698]: I1014 09:58:32.871417 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:58:32 crc kubenswrapper[4698]: I1014 09:58:32.871475 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:58:32 crc kubenswrapper[4698]: I1014 09:58:32.871486 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:58:32 crc kubenswrapper[4698]: I1014 09:58:32.871509 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:58:32 crc kubenswrapper[4698]: I1014 09:58:32.871525 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:58:32Z","lastTransitionTime":"2025-10-14T09:58:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:58:32 crc kubenswrapper[4698]: I1014 09:58:32.975057 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:58:32 crc kubenswrapper[4698]: I1014 09:58:32.975158 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:58:32 crc kubenswrapper[4698]: I1014 09:58:32.975193 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:58:32 crc kubenswrapper[4698]: I1014 09:58:32.975235 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:58:32 crc kubenswrapper[4698]: I1014 09:58:32.975265 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:58:32Z","lastTransitionTime":"2025-10-14T09:58:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 09:58:33 crc kubenswrapper[4698]: I1014 09:58:33.016386 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 14 09:58:33 crc kubenswrapper[4698]: I1014 09:58:33.016544 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 14 09:58:33 crc kubenswrapper[4698]: I1014 09:58:33.016544 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 14 09:58:33 crc kubenswrapper[4698]: E1014 09:58:33.016743 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 14 09:58:33 crc kubenswrapper[4698]: E1014 09:58:33.016954 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 14 09:58:33 crc kubenswrapper[4698]: E1014 09:58:33.017144 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 14 09:58:33 crc kubenswrapper[4698]: I1014 09:58:33.077868 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:58:33 crc kubenswrapper[4698]: I1014 09:58:33.077932 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:58:33 crc kubenswrapper[4698]: I1014 09:58:33.077943 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:58:33 crc kubenswrapper[4698]: I1014 09:58:33.077964 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:58:33 crc kubenswrapper[4698]: I1014 09:58:33.077976 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:58:33Z","lastTransitionTime":"2025-10-14T09:58:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:58:33 crc kubenswrapper[4698]: I1014 09:58:33.181567 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:58:33 crc kubenswrapper[4698]: I1014 09:58:33.181679 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:58:33 crc kubenswrapper[4698]: I1014 09:58:33.181708 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:58:33 crc kubenswrapper[4698]: I1014 09:58:33.181751 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:58:33 crc kubenswrapper[4698]: I1014 09:58:33.181818 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:58:33Z","lastTransitionTime":"2025-10-14T09:58:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:58:33 crc kubenswrapper[4698]: I1014 09:58:33.285719 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:58:33 crc kubenswrapper[4698]: I1014 09:58:33.285829 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:58:33 crc kubenswrapper[4698]: I1014 09:58:33.285855 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:58:33 crc kubenswrapper[4698]: I1014 09:58:33.285889 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:58:33 crc kubenswrapper[4698]: I1014 09:58:33.285912 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:58:33Z","lastTransitionTime":"2025-10-14T09:58:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:58:33 crc kubenswrapper[4698]: I1014 09:58:33.381168 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:58:33 crc kubenswrapper[4698]: I1014 09:58:33.381259 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:58:33 crc kubenswrapper[4698]: I1014 09:58:33.381281 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:58:33 crc kubenswrapper[4698]: I1014 09:58:33.381311 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:58:33 crc kubenswrapper[4698]: I1014 09:58:33.381333 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:58:33Z","lastTransitionTime":"2025-10-14T09:58:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:58:33 crc kubenswrapper[4698]: E1014 09:58:33.401850 4698 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-14T09:58:33Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-14T09:58:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-14T09:58:33Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-14T09:58:33Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-14T09:58:33Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-14T09:58:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-14T09:58:33Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-14T09:58:33Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ec98e803-8937-4fdb-8662-3488c6a305f2\\\",\\\"systemUUID\\\":\\\"8e872109-adee-4b6d-91bf-d9ced28af93f\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T09:58:33Z is after 2025-08-24T17:21:41Z" Oct 14 09:58:33 crc kubenswrapper[4698]: I1014 09:58:33.407104 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:58:33 crc kubenswrapper[4698]: I1014 09:58:33.407161 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:58:33 crc kubenswrapper[4698]: I1014 09:58:33.407256 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:58:33 crc kubenswrapper[4698]: I1014 09:58:33.407329 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:58:33 crc kubenswrapper[4698]: I1014 09:58:33.407361 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:58:33Z","lastTransitionTime":"2025-10-14T09:58:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:58:33 crc kubenswrapper[4698]: E1014 09:58:33.430495 4698 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-14T09:58:33Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-14T09:58:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-14T09:58:33Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-14T09:58:33Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-14T09:58:33Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-14T09:58:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-14T09:58:33Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-14T09:58:33Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ec98e803-8937-4fdb-8662-3488c6a305f2\\\",\\\"systemUUID\\\":\\\"8e872109-adee-4b6d-91bf-d9ced28af93f\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T09:58:33Z is after 2025-08-24T17:21:41Z" Oct 14 09:58:33 crc kubenswrapper[4698]: I1014 09:58:33.436393 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:58:33 crc kubenswrapper[4698]: I1014 09:58:33.436475 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:58:33 crc kubenswrapper[4698]: I1014 09:58:33.436492 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:58:33 crc kubenswrapper[4698]: I1014 09:58:33.436518 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:58:33 crc kubenswrapper[4698]: I1014 09:58:33.436535 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:58:33Z","lastTransitionTime":"2025-10-14T09:58:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:58:33 crc kubenswrapper[4698]: E1014 09:58:33.459261 4698 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-14T09:58:33Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-14T09:58:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-14T09:58:33Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-14T09:58:33Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-14T09:58:33Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-14T09:58:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-14T09:58:33Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-14T09:58:33Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ec98e803-8937-4fdb-8662-3488c6a305f2\\\",\\\"systemUUID\\\":\\\"8e872109-adee-4b6d-91bf-d9ced28af93f\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T09:58:33Z is after 2025-08-24T17:21:41Z" Oct 14 09:58:33 crc kubenswrapper[4698]: I1014 09:58:33.464595 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:58:33 crc kubenswrapper[4698]: I1014 09:58:33.464740 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:58:33 crc kubenswrapper[4698]: I1014 09:58:33.464833 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:58:33 crc kubenswrapper[4698]: I1014 09:58:33.464879 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:58:33 crc kubenswrapper[4698]: I1014 09:58:33.465116 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:58:33Z","lastTransitionTime":"2025-10-14T09:58:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:58:33 crc kubenswrapper[4698]: E1014 09:58:33.487358 4698 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{...}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T09:58:33Z is after 2025-08-24T17:21:41Z" Oct 14 09:58:33 crc kubenswrapper[4698]: I1014 09:58:33.493830 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:58:33 crc kubenswrapper[4698]: I1014 09:58:33.493933 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:58:33 crc kubenswrapper[4698]: I1014 09:58:33.493951 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:58:33 crc kubenswrapper[4698]: I1014 09:58:33.494011 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:58:33 crc kubenswrapper[4698]: I1014 09:58:33.494148 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:58:33Z","lastTransitionTime":"2025-10-14T09:58:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:58:33 crc kubenswrapper[4698]: E1014 09:58:33.518532 4698 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{...}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T09:58:33Z is after 2025-08-24T17:21:41Z" Oct 14 09:58:33 crc kubenswrapper[4698]: E1014 09:58:33.518799 4698 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Oct 14 09:58:33 crc kubenswrapper[4698]: I1014 09:58:33.522088 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:58:33 crc kubenswrapper[4698]: I1014 09:58:33.522149 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:58:33 crc kubenswrapper[4698]: I1014 09:58:33.522168 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:58:33 crc kubenswrapper[4698]: I1014 09:58:33.522200 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:58:33 crc kubenswrapper[4698]: I1014 09:58:33.522220 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:58:33Z","lastTransitionTime":"2025-10-14T09:58:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:58:33 crc kubenswrapper[4698]: I1014 09:58:33.625845 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:58:33 crc kubenswrapper[4698]: I1014 09:58:33.625923 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:58:33 crc kubenswrapper[4698]: I1014 09:58:33.625943 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:58:33 crc kubenswrapper[4698]: I1014 09:58:33.626011 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:58:33 crc kubenswrapper[4698]: I1014 09:58:33.626031 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:58:33Z","lastTransitionTime":"2025-10-14T09:58:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:58:33 crc kubenswrapper[4698]: I1014 09:58:33.729596 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:58:33 crc kubenswrapper[4698]: I1014 09:58:33.729658 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:58:33 crc kubenswrapper[4698]: I1014 09:58:33.729675 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:58:33 crc kubenswrapper[4698]: I1014 09:58:33.729700 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:58:33 crc kubenswrapper[4698]: I1014 09:58:33.729715 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:58:33Z","lastTransitionTime":"2025-10-14T09:58:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:58:33 crc kubenswrapper[4698]: I1014 09:58:33.832574 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:58:33 crc kubenswrapper[4698]: I1014 09:58:33.832616 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:58:33 crc kubenswrapper[4698]: I1014 09:58:33.832627 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:58:33 crc kubenswrapper[4698]: I1014 09:58:33.832643 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:58:33 crc kubenswrapper[4698]: I1014 09:58:33.832654 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:58:33Z","lastTransitionTime":"2025-10-14T09:58:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:58:33 crc kubenswrapper[4698]: I1014 09:58:33.935459 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:58:33 crc kubenswrapper[4698]: I1014 09:58:33.935542 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:58:33 crc kubenswrapper[4698]: I1014 09:58:33.935578 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:58:33 crc kubenswrapper[4698]: I1014 09:58:33.935609 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:58:33 crc kubenswrapper[4698]: I1014 09:58:33.935629 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:58:33Z","lastTransitionTime":"2025-10-14T09:58:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 09:58:34 crc kubenswrapper[4698]: I1014 09:58:34.016847 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jbpnj" Oct 14 09:58:34 crc kubenswrapper[4698]: E1014 09:58:34.017083 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-jbpnj" podUID="41f5ac86-35f8-416c-bbfe-1e182975ec5c" Oct 14 09:58:34 crc kubenswrapper[4698]: I1014 09:58:34.039387 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:58:34 crc kubenswrapper[4698]: I1014 09:58:34.039519 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:58:34 crc kubenswrapper[4698]: I1014 09:58:34.039539 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:58:34 crc kubenswrapper[4698]: I1014 09:58:34.039568 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:58:34 crc kubenswrapper[4698]: I1014 09:58:34.039893 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:58:34Z","lastTransitionTime":"2025-10-14T09:58:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:58:34 crc kubenswrapper[4698]: I1014 09:58:34.143380 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:58:34 crc kubenswrapper[4698]: I1014 09:58:34.143448 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:58:34 crc kubenswrapper[4698]: I1014 09:58:34.143466 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:58:34 crc kubenswrapper[4698]: I1014 09:58:34.143491 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:58:34 crc kubenswrapper[4698]: I1014 09:58:34.143510 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:58:34Z","lastTransitionTime":"2025-10-14T09:58:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:58:34 crc kubenswrapper[4698]: I1014 09:58:34.246210 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:58:34 crc kubenswrapper[4698]: I1014 09:58:34.246263 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:58:34 crc kubenswrapper[4698]: I1014 09:58:34.246277 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:58:34 crc kubenswrapper[4698]: I1014 09:58:34.246299 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:58:34 crc kubenswrapper[4698]: I1014 09:58:34.246314 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:58:34Z","lastTransitionTime":"2025-10-14T09:58:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:58:34 crc kubenswrapper[4698]: I1014 09:58:34.348572 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:58:34 crc kubenswrapper[4698]: I1014 09:58:34.348605 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:58:34 crc kubenswrapper[4698]: I1014 09:58:34.348614 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:58:34 crc kubenswrapper[4698]: I1014 09:58:34.348626 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:58:34 crc kubenswrapper[4698]: I1014 09:58:34.348634 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:58:34Z","lastTransitionTime":"2025-10-14T09:58:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:58:34 crc kubenswrapper[4698]: I1014 09:58:34.452135 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:58:34 crc kubenswrapper[4698]: I1014 09:58:34.452610 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:58:34 crc kubenswrapper[4698]: I1014 09:58:34.452634 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:58:34 crc kubenswrapper[4698]: I1014 09:58:34.452665 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:58:34 crc kubenswrapper[4698]: I1014 09:58:34.452682 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:58:34Z","lastTransitionTime":"2025-10-14T09:58:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:58:34 crc kubenswrapper[4698]: I1014 09:58:34.556623 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:58:34 crc kubenswrapper[4698]: I1014 09:58:34.556746 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:58:34 crc kubenswrapper[4698]: I1014 09:58:34.556806 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:58:34 crc kubenswrapper[4698]: I1014 09:58:34.556841 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:58:34 crc kubenswrapper[4698]: I1014 09:58:34.556863 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:58:34Z","lastTransitionTime":"2025-10-14T09:58:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:58:34 crc kubenswrapper[4698]: I1014 09:58:34.660835 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:58:34 crc kubenswrapper[4698]: I1014 09:58:34.660904 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:58:34 crc kubenswrapper[4698]: I1014 09:58:34.660925 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:58:34 crc kubenswrapper[4698]: I1014 09:58:34.660951 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:58:34 crc kubenswrapper[4698]: I1014 09:58:34.660968 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:58:34Z","lastTransitionTime":"2025-10-14T09:58:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:58:34 crc kubenswrapper[4698]: I1014 09:58:34.764060 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:58:34 crc kubenswrapper[4698]: I1014 09:58:34.764151 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:58:34 crc kubenswrapper[4698]: I1014 09:58:34.764168 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:58:34 crc kubenswrapper[4698]: I1014 09:58:34.764191 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:58:34 crc kubenswrapper[4698]: I1014 09:58:34.764204 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:58:34Z","lastTransitionTime":"2025-10-14T09:58:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:58:34 crc kubenswrapper[4698]: I1014 09:58:34.867146 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:58:34 crc kubenswrapper[4698]: I1014 09:58:34.867227 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:58:34 crc kubenswrapper[4698]: I1014 09:58:34.867246 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:58:34 crc kubenswrapper[4698]: I1014 09:58:34.867270 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:58:34 crc kubenswrapper[4698]: I1014 09:58:34.867288 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:58:34Z","lastTransitionTime":"2025-10-14T09:58:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:58:34 crc kubenswrapper[4698]: I1014 09:58:34.970856 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:58:34 crc kubenswrapper[4698]: I1014 09:58:34.970921 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:58:34 crc kubenswrapper[4698]: I1014 09:58:34.970934 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:58:34 crc kubenswrapper[4698]: I1014 09:58:34.970952 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:58:34 crc kubenswrapper[4698]: I1014 09:58:34.970965 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:58:34Z","lastTransitionTime":"2025-10-14T09:58:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 09:58:35 crc kubenswrapper[4698]: I1014 09:58:35.016853 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 14 09:58:35 crc kubenswrapper[4698]: I1014 09:58:35.016883 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 14 09:58:35 crc kubenswrapper[4698]: E1014 09:58:35.017040 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 14 09:58:35 crc kubenswrapper[4698]: I1014 09:58:35.017127 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 14 09:58:35 crc kubenswrapper[4698]: E1014 09:58:35.017263 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 14 09:58:35 crc kubenswrapper[4698]: E1014 09:58:35.017381 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 14 09:58:35 crc kubenswrapper[4698]: I1014 09:58:35.074093 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:58:35 crc kubenswrapper[4698]: I1014 09:58:35.074156 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:58:35 crc kubenswrapper[4698]: I1014 09:58:35.074172 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:58:35 crc kubenswrapper[4698]: I1014 09:58:35.074193 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:58:35 crc kubenswrapper[4698]: I1014 09:58:35.074210 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:58:35Z","lastTransitionTime":"2025-10-14T09:58:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:58:35 crc kubenswrapper[4698]: I1014 09:58:35.178204 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:58:35 crc kubenswrapper[4698]: I1014 09:58:35.178485 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:58:35 crc kubenswrapper[4698]: I1014 09:58:35.178509 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:58:35 crc kubenswrapper[4698]: I1014 09:58:35.178535 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:58:35 crc kubenswrapper[4698]: I1014 09:58:35.178556 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:58:35Z","lastTransitionTime":"2025-10-14T09:58:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:58:35 crc kubenswrapper[4698]: I1014 09:58:35.281958 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:58:35 crc kubenswrapper[4698]: I1014 09:58:35.282044 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:58:35 crc kubenswrapper[4698]: I1014 09:58:35.282066 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:58:35 crc kubenswrapper[4698]: I1014 09:58:35.282090 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:58:35 crc kubenswrapper[4698]: I1014 09:58:35.282107 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:58:35Z","lastTransitionTime":"2025-10-14T09:58:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:58:35 crc kubenswrapper[4698]: I1014 09:58:35.384535 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:58:35 crc kubenswrapper[4698]: I1014 09:58:35.384585 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:58:35 crc kubenswrapper[4698]: I1014 09:58:35.384597 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:58:35 crc kubenswrapper[4698]: I1014 09:58:35.384613 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:58:35 crc kubenswrapper[4698]: I1014 09:58:35.384626 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:58:35Z","lastTransitionTime":"2025-10-14T09:58:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:58:35 crc kubenswrapper[4698]: I1014 09:58:35.487943 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:58:35 crc kubenswrapper[4698]: I1014 09:58:35.488021 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:58:35 crc kubenswrapper[4698]: I1014 09:58:35.488039 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:58:35 crc kubenswrapper[4698]: I1014 09:58:35.488063 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:58:35 crc kubenswrapper[4698]: I1014 09:58:35.488083 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:58:35Z","lastTransitionTime":"2025-10-14T09:58:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:58:35 crc kubenswrapper[4698]: I1014 09:58:35.591702 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:58:35 crc kubenswrapper[4698]: I1014 09:58:35.591759 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:58:35 crc kubenswrapper[4698]: I1014 09:58:35.591805 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:58:35 crc kubenswrapper[4698]: I1014 09:58:35.591834 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:58:35 crc kubenswrapper[4698]: I1014 09:58:35.591852 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:58:35Z","lastTransitionTime":"2025-10-14T09:58:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:58:35 crc kubenswrapper[4698]: I1014 09:58:35.694299 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:58:35 crc kubenswrapper[4698]: I1014 09:58:35.694361 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:58:35 crc kubenswrapper[4698]: I1014 09:58:35.694376 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:58:35 crc kubenswrapper[4698]: I1014 09:58:35.694393 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:58:35 crc kubenswrapper[4698]: I1014 09:58:35.694406 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:58:35Z","lastTransitionTime":"2025-10-14T09:58:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:58:35 crc kubenswrapper[4698]: I1014 09:58:35.797445 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:58:35 crc kubenswrapper[4698]: I1014 09:58:35.797526 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:58:35 crc kubenswrapper[4698]: I1014 09:58:35.797550 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:58:35 crc kubenswrapper[4698]: I1014 09:58:35.797584 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:58:35 crc kubenswrapper[4698]: I1014 09:58:35.797608 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:58:35Z","lastTransitionTime":"2025-10-14T09:58:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:58:35 crc kubenswrapper[4698]: I1014 09:58:35.900093 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:58:35 crc kubenswrapper[4698]: I1014 09:58:35.900131 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:58:35 crc kubenswrapper[4698]: I1014 09:58:35.900140 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:58:35 crc kubenswrapper[4698]: I1014 09:58:35.900153 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:58:35 crc kubenswrapper[4698]: I1014 09:58:35.900162 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:58:35Z","lastTransitionTime":"2025-10-14T09:58:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} 
Oct 14 09:58:36 crc kubenswrapper[4698]: I1014 09:58:36.002873 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" 
Oct 14 09:58:36 crc kubenswrapper[4698]: I1014 09:58:36.002926 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" 
Oct 14 09:58:36 crc kubenswrapper[4698]: I1014 09:58:36.002942 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" 
Oct 14 09:58:36 crc kubenswrapper[4698]: I1014 09:58:36.002963 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" 
Oct 14 09:58:36 crc kubenswrapper[4698]: I1014 09:58:36.002980 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:58:36Z","lastTransitionTime":"2025-10-14T09:58:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} 
Oct 14 09:58:36 crc kubenswrapper[4698]: I1014 09:58:36.016492 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jbpnj" 
Oct 14 09:58:36 crc kubenswrapper[4698]: E1014 09:58:36.016707 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jbpnj" podUID="41f5ac86-35f8-416c-bbfe-1e182975ec5c" 
Oct 14 09:58:36 crc kubenswrapper[4698]: I1014 09:58:36.036129 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/41f5ac86-35f8-416c-bbfe-1e182975ec5c-metrics-certs\") pod \"network-metrics-daemon-jbpnj\" (UID: \"41f5ac86-35f8-416c-bbfe-1e182975ec5c\") " pod="openshift-multus/network-metrics-daemon-jbpnj" 
Oct 14 09:58:36 crc kubenswrapper[4698]: E1014 09:58:36.036347 4698 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered 
Oct 14 09:58:36 crc kubenswrapper[4698]: E1014 09:58:36.036436 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/41f5ac86-35f8-416c-bbfe-1e182975ec5c-metrics-certs podName:41f5ac86-35f8-416c-bbfe-1e182975ec5c nodeName:}" failed. No retries permitted until 2025-10-14 09:59:40.03641147 +0000 UTC m=+161.733710926 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/41f5ac86-35f8-416c-bbfe-1e182975ec5c-metrics-certs") pod "network-metrics-daemon-jbpnj" (UID: "41f5ac86-35f8-416c-bbfe-1e182975ec5c") : object "openshift-multus"/"metrics-daemon-secret" not registered 
Oct 14 09:58:36 crc kubenswrapper[4698]: I1014 09:58:36.106213 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" 
Oct 14 09:58:36 crc kubenswrapper[4698]: I1014 09:58:36.106264 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" 
Oct 14 09:58:36 crc kubenswrapper[4698]: I1014 09:58:36.106280 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" 
Oct 14 09:58:36 crc kubenswrapper[4698]: I1014 09:58:36.106303 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" 
Oct 14 09:58:36 crc kubenswrapper[4698]: I1014 09:58:36.106321 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:58:36Z","lastTransitionTime":"2025-10-14T09:58:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:58:36 crc kubenswrapper[4698]: I1014 09:58:36.209947 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:58:36 crc kubenswrapper[4698]: I1014 09:58:36.210142 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:58:36 crc kubenswrapper[4698]: I1014 09:58:36.210181 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:58:36 crc kubenswrapper[4698]: I1014 09:58:36.210211 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:58:36 crc kubenswrapper[4698]: I1014 09:58:36.210231 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:58:36Z","lastTransitionTime":"2025-10-14T09:58:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:58:36 crc kubenswrapper[4698]: I1014 09:58:36.314294 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:58:36 crc kubenswrapper[4698]: I1014 09:58:36.314357 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:58:36 crc kubenswrapper[4698]: I1014 09:58:36.314373 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:58:36 crc kubenswrapper[4698]: I1014 09:58:36.314396 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:58:36 crc kubenswrapper[4698]: I1014 09:58:36.314414 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:58:36Z","lastTransitionTime":"2025-10-14T09:58:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:58:36 crc kubenswrapper[4698]: I1014 09:58:36.418433 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:58:36 crc kubenswrapper[4698]: I1014 09:58:36.418498 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:58:36 crc kubenswrapper[4698]: I1014 09:58:36.418678 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:58:36 crc kubenswrapper[4698]: I1014 09:58:36.418727 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:58:36 crc kubenswrapper[4698]: I1014 09:58:36.418746 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:58:36Z","lastTransitionTime":"2025-10-14T09:58:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:58:36 crc kubenswrapper[4698]: I1014 09:58:36.522284 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:58:36 crc kubenswrapper[4698]: I1014 09:58:36.522362 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:58:36 crc kubenswrapper[4698]: I1014 09:58:36.522380 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:58:36 crc kubenswrapper[4698]: I1014 09:58:36.522409 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:58:36 crc kubenswrapper[4698]: I1014 09:58:36.522431 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:58:36Z","lastTransitionTime":"2025-10-14T09:58:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:58:36 crc kubenswrapper[4698]: I1014 09:58:36.625153 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:58:36 crc kubenswrapper[4698]: I1014 09:58:36.625216 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:58:36 crc kubenswrapper[4698]: I1014 09:58:36.625233 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:58:36 crc kubenswrapper[4698]: I1014 09:58:36.625261 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:58:36 crc kubenswrapper[4698]: I1014 09:58:36.625279 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:58:36Z","lastTransitionTime":"2025-10-14T09:58:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:58:36 crc kubenswrapper[4698]: I1014 09:58:36.728632 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:58:36 crc kubenswrapper[4698]: I1014 09:58:36.728807 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:58:36 crc kubenswrapper[4698]: I1014 09:58:36.728846 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:58:36 crc kubenswrapper[4698]: I1014 09:58:36.728879 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:58:36 crc kubenswrapper[4698]: I1014 09:58:36.728908 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:58:36Z","lastTransitionTime":"2025-10-14T09:58:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:58:36 crc kubenswrapper[4698]: I1014 09:58:36.832285 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:58:36 crc kubenswrapper[4698]: I1014 09:58:36.832353 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:58:36 crc kubenswrapper[4698]: I1014 09:58:36.832366 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:58:36 crc kubenswrapper[4698]: I1014 09:58:36.832385 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:58:36 crc kubenswrapper[4698]: I1014 09:58:36.832398 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:58:36Z","lastTransitionTime":"2025-10-14T09:58:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} 
Oct 14 09:58:36 crc kubenswrapper[4698]: I1014 09:58:36.935316 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" 
Oct 14 09:58:36 crc kubenswrapper[4698]: I1014 09:58:36.935370 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" 
Oct 14 09:58:36 crc kubenswrapper[4698]: I1014 09:58:36.935381 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" 
Oct 14 09:58:36 crc kubenswrapper[4698]: I1014 09:58:36.935396 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" 
Oct 14 09:58:36 crc kubenswrapper[4698]: I1014 09:58:36.935405 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:58:36Z","lastTransitionTime":"2025-10-14T09:58:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} 
Oct 14 09:58:37 crc kubenswrapper[4698]: I1014 09:58:37.016076 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" 
Oct 14 09:58:37 crc kubenswrapper[4698]: I1014 09:58:37.016138 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" 
Oct 14 09:58:37 crc kubenswrapper[4698]: I1014 09:58:37.016268 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" 
Oct 14 09:58:37 crc kubenswrapper[4698]: E1014 09:58:37.016270 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" 
Oct 14 09:58:37 crc kubenswrapper[4698]: E1014 09:58:37.016409 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" 
Oct 14 09:58:37 crc kubenswrapper[4698]: E1014 09:58:37.016473 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" 
Oct 14 09:58:37 crc kubenswrapper[4698]: I1014 09:58:37.037996 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" 
Oct 14 09:58:37 crc kubenswrapper[4698]: I1014 09:58:37.038037 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" 
Oct 14 09:58:37 crc kubenswrapper[4698]: I1014 09:58:37.038046 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" 
Oct 14 09:58:37 crc kubenswrapper[4698]: I1014 09:58:37.038059 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" 
Oct 14 09:58:37 crc kubenswrapper[4698]: I1014 09:58:37.038069 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:58:37Z","lastTransitionTime":"2025-10-14T09:58:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:58:37 crc kubenswrapper[4698]: I1014 09:58:37.140589 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:58:37 crc kubenswrapper[4698]: I1014 09:58:37.140657 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:58:37 crc kubenswrapper[4698]: I1014 09:58:37.140678 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:58:37 crc kubenswrapper[4698]: I1014 09:58:37.140706 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:58:37 crc kubenswrapper[4698]: I1014 09:58:37.140726 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:58:37Z","lastTransitionTime":"2025-10-14T09:58:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:58:37 crc kubenswrapper[4698]: I1014 09:58:37.243207 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:58:37 crc kubenswrapper[4698]: I1014 09:58:37.243276 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:58:37 crc kubenswrapper[4698]: I1014 09:58:37.243295 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:58:37 crc kubenswrapper[4698]: I1014 09:58:37.243323 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:58:37 crc kubenswrapper[4698]: I1014 09:58:37.243341 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:58:37Z","lastTransitionTime":"2025-10-14T09:58:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:58:37 crc kubenswrapper[4698]: I1014 09:58:37.346629 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:58:37 crc kubenswrapper[4698]: I1014 09:58:37.346706 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:58:37 crc kubenswrapper[4698]: I1014 09:58:37.346722 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:58:37 crc kubenswrapper[4698]: I1014 09:58:37.346746 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:58:37 crc kubenswrapper[4698]: I1014 09:58:37.346761 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:58:37Z","lastTransitionTime":"2025-10-14T09:58:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:58:37 crc kubenswrapper[4698]: I1014 09:58:37.449640 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:58:37 crc kubenswrapper[4698]: I1014 09:58:37.449699 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:58:37 crc kubenswrapper[4698]: I1014 09:58:37.449716 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:58:37 crc kubenswrapper[4698]: I1014 09:58:37.449739 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:58:37 crc kubenswrapper[4698]: I1014 09:58:37.449789 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:58:37Z","lastTransitionTime":"2025-10-14T09:58:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:58:37 crc kubenswrapper[4698]: I1014 09:58:37.552305 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:58:37 crc kubenswrapper[4698]: I1014 09:58:37.552351 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:58:37 crc kubenswrapper[4698]: I1014 09:58:37.552361 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:58:37 crc kubenswrapper[4698]: I1014 09:58:37.552378 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:58:37 crc kubenswrapper[4698]: I1014 09:58:37.552390 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:58:37Z","lastTransitionTime":"2025-10-14T09:58:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:58:37 crc kubenswrapper[4698]: I1014 09:58:37.655543 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:58:37 crc kubenswrapper[4698]: I1014 09:58:37.655606 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:58:37 crc kubenswrapper[4698]: I1014 09:58:37.655623 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:58:37 crc kubenswrapper[4698]: I1014 09:58:37.655650 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:58:37 crc kubenswrapper[4698]: I1014 09:58:37.655667 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:58:37Z","lastTransitionTime":"2025-10-14T09:58:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:58:37 crc kubenswrapper[4698]: I1014 09:58:37.758426 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:58:37 crc kubenswrapper[4698]: I1014 09:58:37.758481 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:58:37 crc kubenswrapper[4698]: I1014 09:58:37.758498 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:58:37 crc kubenswrapper[4698]: I1014 09:58:37.758521 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:58:37 crc kubenswrapper[4698]: I1014 09:58:37.758537 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:58:37Z","lastTransitionTime":"2025-10-14T09:58:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:58:37 crc kubenswrapper[4698]: I1014 09:58:37.860715 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:58:37 crc kubenswrapper[4698]: I1014 09:58:37.860790 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:58:37 crc kubenswrapper[4698]: I1014 09:58:37.860802 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:58:37 crc kubenswrapper[4698]: I1014 09:58:37.860820 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:58:37 crc kubenswrapper[4698]: I1014 09:58:37.860834 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:58:37Z","lastTransitionTime":"2025-10-14T09:58:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} 
Oct 14 09:58:37 crc kubenswrapper[4698]: I1014 09:58:37.963609 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" 
Oct 14 09:58:37 crc kubenswrapper[4698]: I1014 09:58:37.963686 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" 
Oct 14 09:58:37 crc kubenswrapper[4698]: I1014 09:58:37.963713 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" 
Oct 14 09:58:37 crc kubenswrapper[4698]: I1014 09:58:37.963743 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" 
Oct 14 09:58:37 crc kubenswrapper[4698]: I1014 09:58:37.963799 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:58:37Z","lastTransitionTime":"2025-10-14T09:58:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} 
Oct 14 09:58:38 crc kubenswrapper[4698]: I1014 09:58:38.016420 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jbpnj" 
Oct 14 09:58:38 crc kubenswrapper[4698]: E1014 09:58:38.016626 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jbpnj" podUID="41f5ac86-35f8-416c-bbfe-1e182975ec5c" 
Oct 14 09:58:38 crc kubenswrapper[4698]: I1014 09:58:38.067677 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" 
Oct 14 09:58:38 crc kubenswrapper[4698]: I1014 09:58:38.067747 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" 
Oct 14 09:58:38 crc kubenswrapper[4698]: I1014 09:58:38.067760 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" 
Oct 14 09:58:38 crc kubenswrapper[4698]: I1014 09:58:38.067835 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" 
Oct 14 09:58:38 crc kubenswrapper[4698]: I1014 09:58:38.067856 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:58:38Z","lastTransitionTime":"2025-10-14T09:58:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:58:38 crc kubenswrapper[4698]: I1014 09:58:38.170995 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:58:38 crc kubenswrapper[4698]: I1014 09:58:38.171090 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:58:38 crc kubenswrapper[4698]: I1014 09:58:38.171111 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:58:38 crc kubenswrapper[4698]: I1014 09:58:38.171134 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:58:38 crc kubenswrapper[4698]: I1014 09:58:38.171152 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:58:38Z","lastTransitionTime":"2025-10-14T09:58:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:58:38 crc kubenswrapper[4698]: I1014 09:58:38.274667 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:58:38 crc kubenswrapper[4698]: I1014 09:58:38.274726 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:58:38 crc kubenswrapper[4698]: I1014 09:58:38.274737 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:58:38 crc kubenswrapper[4698]: I1014 09:58:38.274784 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:58:38 crc kubenswrapper[4698]: I1014 09:58:38.274802 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:58:38Z","lastTransitionTime":"2025-10-14T09:58:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:58:38 crc kubenswrapper[4698]: I1014 09:58:38.377555 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:58:38 crc kubenswrapper[4698]: I1014 09:58:38.377638 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:58:38 crc kubenswrapper[4698]: I1014 09:58:38.377652 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:58:38 crc kubenswrapper[4698]: I1014 09:58:38.377680 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:58:38 crc kubenswrapper[4698]: I1014 09:58:38.377697 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:58:38Z","lastTransitionTime":"2025-10-14T09:58:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:58:38 crc kubenswrapper[4698]: I1014 09:58:38.480526 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:58:38 crc kubenswrapper[4698]: I1014 09:58:38.480575 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:58:38 crc kubenswrapper[4698]: I1014 09:58:38.480585 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:58:38 crc kubenswrapper[4698]: I1014 09:58:38.480601 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:58:38 crc kubenswrapper[4698]: I1014 09:58:38.480611 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:58:38Z","lastTransitionTime":"2025-10-14T09:58:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:58:38 crc kubenswrapper[4698]: I1014 09:58:38.583020 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:58:38 crc kubenswrapper[4698]: I1014 09:58:38.583079 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:58:38 crc kubenswrapper[4698]: I1014 09:58:38.583096 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:58:38 crc kubenswrapper[4698]: I1014 09:58:38.583120 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:58:38 crc kubenswrapper[4698]: I1014 09:58:38.583138 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:58:38Z","lastTransitionTime":"2025-10-14T09:58:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:58:38 crc kubenswrapper[4698]: I1014 09:58:38.685267 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:58:38 crc kubenswrapper[4698]: I1014 09:58:38.685358 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:58:38 crc kubenswrapper[4698]: I1014 09:58:38.685398 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:58:38 crc kubenswrapper[4698]: I1014 09:58:38.685435 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:58:38 crc kubenswrapper[4698]: I1014 09:58:38.685460 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:58:38Z","lastTransitionTime":"2025-10-14T09:58:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:58:38 crc kubenswrapper[4698]: I1014 09:58:38.788261 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:58:38 crc kubenswrapper[4698]: I1014 09:58:38.788351 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:58:38 crc kubenswrapper[4698]: I1014 09:58:38.788375 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:58:38 crc kubenswrapper[4698]: I1014 09:58:38.788401 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:58:38 crc kubenswrapper[4698]: I1014 09:58:38.788420 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:58:38Z","lastTransitionTime":"2025-10-14T09:58:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:58:38 crc kubenswrapper[4698]: I1014 09:58:38.890954 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:58:38 crc kubenswrapper[4698]: I1014 09:58:38.891023 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:58:38 crc kubenswrapper[4698]: I1014 09:58:38.891038 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:58:38 crc kubenswrapper[4698]: I1014 09:58:38.891059 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:58:38 crc kubenswrapper[4698]: I1014 09:58:38.891069 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:58:38Z","lastTransitionTime":"2025-10-14T09:58:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:58:38 crc kubenswrapper[4698]: I1014 09:58:38.993525 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:58:38 crc kubenswrapper[4698]: I1014 09:58:38.993620 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:58:38 crc kubenswrapper[4698]: I1014 09:58:38.993646 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:58:38 crc kubenswrapper[4698]: I1014 09:58:38.993681 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:58:38 crc kubenswrapper[4698]: I1014 09:58:38.993703 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:58:38Z","lastTransitionTime":"2025-10-14T09:58:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 09:58:39 crc kubenswrapper[4698]: I1014 09:58:39.016377 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 14 09:58:39 crc kubenswrapper[4698]: I1014 09:58:39.016473 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 14 09:58:39 crc kubenswrapper[4698]: I1014 09:58:39.016490 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 14 09:58:39 crc kubenswrapper[4698]: E1014 09:58:39.016576 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 14 09:58:39 crc kubenswrapper[4698]: E1014 09:58:39.016984 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 14 09:58:39 crc kubenswrapper[4698]: E1014 09:58:39.018179 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 14 09:58:39 crc kubenswrapper[4698]: I1014 09:58:39.050731 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podStartSLOduration=51.050708927 podStartE2EDuration="51.050708927s" podCreationTimestamp="2025-10-14 09:57:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 09:58:39.050474881 +0000 UTC m=+100.747774327" watchObservedRunningTime="2025-10-14 09:58:39.050708927 +0000 UTC m=+100.748008363" Oct 14 09:58:39 crc kubenswrapper[4698]: I1014 09:58:39.067719 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-pfxrp" podStartSLOduration=82.067687805 podStartE2EDuration="1m22.067687805s" podCreationTimestamp="2025-10-14 09:57:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 09:58:39.06751034 +0000 UTC m=+100.764809766" watchObservedRunningTime="2025-10-14 09:58:39.067687805 +0000 UTC m=+100.764987261" Oct 14 09:58:39 crc kubenswrapper[4698]: I1014 09:58:39.094975 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-5twvn" podStartSLOduration=81.094954379 podStartE2EDuration="1m21.094954379s" podCreationTimestamp="2025-10-14 09:57:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 09:58:39.09427667 +0000 UTC m=+100.791576146" watchObservedRunningTime="2025-10-14 09:58:39.094954379 +0000 UTC m=+100.792253835" Oct 14 09:58:39 crc kubenswrapper[4698]: I1014 09:58:39.097580 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Oct 14 09:58:39 crc kubenswrapper[4698]: I1014 09:58:39.097636 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:58:39 crc kubenswrapper[4698]: I1014 09:58:39.097653 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:58:39 crc kubenswrapper[4698]: I1014 09:58:39.097677 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:58:39 crc kubenswrapper[4698]: I1014 09:58:39.097693 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:58:39Z","lastTransitionTime":"2025-10-14T09:58:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:58:39 crc kubenswrapper[4698]: I1014 09:58:39.148264 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ndfs7" podStartSLOduration=80.148246242 podStartE2EDuration="1m20.148246242s" podCreationTimestamp="2025-10-14 09:57:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 09:58:39.148160829 +0000 UTC m=+100.845460295" watchObservedRunningTime="2025-10-14 09:58:39.148246242 +0000 UTC m=+100.845545658" Oct 14 09:58:39 crc kubenswrapper[4698]: I1014 09:58:39.176346 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-daemon-lp4sk" podStartSLOduration=82.176327919 podStartE2EDuration="1m22.176327919s" podCreationTimestamp="2025-10-14 09:57:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 09:58:39.162953265 +0000 UTC m=+100.860252721" watchObservedRunningTime="2025-10-14 09:58:39.176327919 +0000 UTC m=+100.873627335" Oct 14 09:58:39 crc kubenswrapper[4698]: I1014 09:58:39.201692 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:58:39 crc kubenswrapper[4698]: I1014 09:58:39.201742 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:58:39 crc kubenswrapper[4698]: I1014 09:58:39.201759 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:58:39 crc kubenswrapper[4698]: I1014 09:58:39.201807 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:58:39 crc kubenswrapper[4698]: I1014 09:58:39.201827 4698 setters.go:603] "Node 
became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:58:39Z","lastTransitionTime":"2025-10-14T09:58:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 09:58:39 crc kubenswrapper[4698]: I1014 09:58:39.273705 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-b7cbk" podStartSLOduration=81.273683448 podStartE2EDuration="1m21.273683448s" podCreationTimestamp="2025-10-14 09:57:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 09:58:39.273287557 +0000 UTC m=+100.970587033" watchObservedRunningTime="2025-10-14 09:58:39.273683448 +0000 UTC m=+100.970982874" Oct 14 09:58:39 crc kubenswrapper[4698]: I1014 09:58:39.287652 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" podStartSLOduration=15.287634329 podStartE2EDuration="15.287634329s" podCreationTimestamp="2025-10-14 09:58:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 09:58:39.287584808 +0000 UTC m=+100.984884244" watchObservedRunningTime="2025-10-14 09:58:39.287634329 +0000 UTC m=+100.984933745" Oct 14 09:58:39 crc kubenswrapper[4698]: I1014 09:58:39.305041 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:58:39 crc kubenswrapper[4698]: I1014 09:58:39.305098 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:58:39 crc kubenswrapper[4698]: I1014 09:58:39.305111 4698 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:58:39 crc kubenswrapper[4698]: I1014 09:58:39.305128 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:58:39 crc kubenswrapper[4698]: I1014 09:58:39.305141 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:58:39Z","lastTransitionTime":"2025-10-14T09:58:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 09:58:39 crc kubenswrapper[4698]: I1014 09:58:39.307674 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=80.307657065 podStartE2EDuration="1m20.307657065s" podCreationTimestamp="2025-10-14 09:57:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 09:58:39.306269395 +0000 UTC m=+101.003568851" watchObservedRunningTime="2025-10-14 09:58:39.307657065 +0000 UTC m=+101.004956491" Oct 14 09:58:39 crc kubenswrapper[4698]: I1014 09:58:39.335466 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podStartSLOduration=79.335444774 podStartE2EDuration="1m19.335444774s" podCreationTimestamp="2025-10-14 09:57:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 09:58:39.321941026 +0000 UTC m=+101.019240442" watchObservedRunningTime="2025-10-14 09:58:39.335444774 +0000 UTC m=+101.032744200" Oct 14 09:58:39 crc kubenswrapper[4698]: I1014 
09:58:39.367489 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-8rj7q" podStartSLOduration=82.367469415 podStartE2EDuration="1m22.367469415s" podCreationTimestamp="2025-10-14 09:57:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 09:58:39.367061133 +0000 UTC m=+101.064360589" watchObservedRunningTime="2025-10-14 09:58:39.367469415 +0000 UTC m=+101.064768841" Oct 14 09:58:39 crc kubenswrapper[4698]: I1014 09:58:39.408242 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:58:39 crc kubenswrapper[4698]: I1014 09:58:39.408301 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:58:39 crc kubenswrapper[4698]: I1014 09:58:39.408313 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:58:39 crc kubenswrapper[4698]: I1014 09:58:39.408333 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:58:39 crc kubenswrapper[4698]: I1014 09:58:39.408348 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:58:39Z","lastTransitionTime":"2025-10-14T09:58:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:58:39 crc kubenswrapper[4698]: I1014 09:58:39.511996 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:58:39 crc kubenswrapper[4698]: I1014 09:58:39.512043 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:58:39 crc kubenswrapper[4698]: I1014 09:58:39.512059 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:58:39 crc kubenswrapper[4698]: I1014 09:58:39.512081 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:58:39 crc kubenswrapper[4698]: I1014 09:58:39.512098 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:58:39Z","lastTransitionTime":"2025-10-14T09:58:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:58:39 crc kubenswrapper[4698]: I1014 09:58:39.615504 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:58:39 crc kubenswrapper[4698]: I1014 09:58:39.615563 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:58:39 crc kubenswrapper[4698]: I1014 09:58:39.615583 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:58:39 crc kubenswrapper[4698]: I1014 09:58:39.615607 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:58:39 crc kubenswrapper[4698]: I1014 09:58:39.615625 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:58:39Z","lastTransitionTime":"2025-10-14T09:58:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:58:39 crc kubenswrapper[4698]: I1014 09:58:39.719924 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:58:39 crc kubenswrapper[4698]: I1014 09:58:39.719994 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:58:39 crc kubenswrapper[4698]: I1014 09:58:39.720013 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:58:39 crc kubenswrapper[4698]: I1014 09:58:39.720034 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:58:39 crc kubenswrapper[4698]: I1014 09:58:39.720049 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:58:39Z","lastTransitionTime":"2025-10-14T09:58:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:58:39 crc kubenswrapper[4698]: I1014 09:58:39.824136 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:58:39 crc kubenswrapper[4698]: I1014 09:58:39.824182 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:58:39 crc kubenswrapper[4698]: I1014 09:58:39.824194 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:58:39 crc kubenswrapper[4698]: I1014 09:58:39.824212 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:58:39 crc kubenswrapper[4698]: I1014 09:58:39.824225 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:58:39Z","lastTransitionTime":"2025-10-14T09:58:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:58:39 crc kubenswrapper[4698]: I1014 09:58:39.929647 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:58:39 crc kubenswrapper[4698]: I1014 09:58:39.929735 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:58:39 crc kubenswrapper[4698]: I1014 09:58:39.929815 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:58:39 crc kubenswrapper[4698]: I1014 09:58:39.929843 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:58:39 crc kubenswrapper[4698]: I1014 09:58:39.929859 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:58:39Z","lastTransitionTime":"2025-10-14T09:58:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 09:58:40 crc kubenswrapper[4698]: I1014 09:58:40.017311 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jbpnj" Oct 14 09:58:40 crc kubenswrapper[4698]: E1014 09:58:40.017571 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-jbpnj" podUID="41f5ac86-35f8-416c-bbfe-1e182975ec5c" Oct 14 09:58:40 crc kubenswrapper[4698]: I1014 09:58:40.033190 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:58:40 crc kubenswrapper[4698]: I1014 09:58:40.033253 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:58:40 crc kubenswrapper[4698]: I1014 09:58:40.033271 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:58:40 crc kubenswrapper[4698]: I1014 09:58:40.033294 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:58:40 crc kubenswrapper[4698]: I1014 09:58:40.033311 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:58:40Z","lastTransitionTime":"2025-10-14T09:58:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:58:40 crc kubenswrapper[4698]: I1014 09:58:40.040845 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-crc"] Oct 14 09:58:40 crc kubenswrapper[4698]: I1014 09:58:40.137160 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:58:40 crc kubenswrapper[4698]: I1014 09:58:40.137221 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:58:40 crc kubenswrapper[4698]: I1014 09:58:40.137238 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:58:40 crc kubenswrapper[4698]: I1014 09:58:40.137275 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:58:40 crc kubenswrapper[4698]: I1014 09:58:40.137292 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:58:40Z","lastTransitionTime":"2025-10-14T09:58:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:58:40 crc kubenswrapper[4698]: I1014 09:58:40.240809 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:58:40 crc kubenswrapper[4698]: I1014 09:58:40.240867 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:58:40 crc kubenswrapper[4698]: I1014 09:58:40.240892 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:58:40 crc kubenswrapper[4698]: I1014 09:58:40.240932 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:58:40 crc kubenswrapper[4698]: I1014 09:58:40.240954 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:58:40Z","lastTransitionTime":"2025-10-14T09:58:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:58:40 crc kubenswrapper[4698]: I1014 09:58:40.344341 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:58:40 crc kubenswrapper[4698]: I1014 09:58:40.344430 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:58:40 crc kubenswrapper[4698]: I1014 09:58:40.344453 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:58:40 crc kubenswrapper[4698]: I1014 09:58:40.344483 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:58:40 crc kubenswrapper[4698]: I1014 09:58:40.344502 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:58:40Z","lastTransitionTime":"2025-10-14T09:58:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:58:40 crc kubenswrapper[4698]: I1014 09:58:40.447395 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:58:40 crc kubenswrapper[4698]: I1014 09:58:40.447482 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:58:40 crc kubenswrapper[4698]: I1014 09:58:40.447505 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:58:40 crc kubenswrapper[4698]: I1014 09:58:40.447536 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:58:40 crc kubenswrapper[4698]: I1014 09:58:40.447559 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:58:40Z","lastTransitionTime":"2025-10-14T09:58:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:58:40 crc kubenswrapper[4698]: I1014 09:58:40.550907 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:58:40 crc kubenswrapper[4698]: I1014 09:58:40.550968 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:58:40 crc kubenswrapper[4698]: I1014 09:58:40.550981 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:58:40 crc kubenswrapper[4698]: I1014 09:58:40.551003 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:58:40 crc kubenswrapper[4698]: I1014 09:58:40.551024 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:58:40Z","lastTransitionTime":"2025-10-14T09:58:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:58:40 crc kubenswrapper[4698]: I1014 09:58:40.653999 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:58:40 crc kubenswrapper[4698]: I1014 09:58:40.654104 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:58:40 crc kubenswrapper[4698]: I1014 09:58:40.654130 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:58:40 crc kubenswrapper[4698]: I1014 09:58:40.654158 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:58:40 crc kubenswrapper[4698]: I1014 09:58:40.654181 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:58:40Z","lastTransitionTime":"2025-10-14T09:58:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:58:40 crc kubenswrapper[4698]: I1014 09:58:40.757361 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:58:40 crc kubenswrapper[4698]: I1014 09:58:40.757421 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:58:40 crc kubenswrapper[4698]: I1014 09:58:40.757433 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:58:40 crc kubenswrapper[4698]: I1014 09:58:40.757451 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:58:40 crc kubenswrapper[4698]: I1014 09:58:40.757461 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:58:40Z","lastTransitionTime":"2025-10-14T09:58:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:58:40 crc kubenswrapper[4698]: I1014 09:58:40.860345 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:58:40 crc kubenswrapper[4698]: I1014 09:58:40.860401 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:58:40 crc kubenswrapper[4698]: I1014 09:58:40.860413 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:58:40 crc kubenswrapper[4698]: I1014 09:58:40.860432 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:58:40 crc kubenswrapper[4698]: I1014 09:58:40.860442 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:58:40Z","lastTransitionTime":"2025-10-14T09:58:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:58:40 crc kubenswrapper[4698]: I1014 09:58:40.962948 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:58:40 crc kubenswrapper[4698]: I1014 09:58:40.963005 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:58:40 crc kubenswrapper[4698]: I1014 09:58:40.963017 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:58:40 crc kubenswrapper[4698]: I1014 09:58:40.963038 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:58:40 crc kubenswrapper[4698]: I1014 09:58:40.963049 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:58:40Z","lastTransitionTime":"2025-10-14T09:58:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 09:58:41 crc kubenswrapper[4698]: I1014 09:58:41.016305 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 14 09:58:41 crc kubenswrapper[4698]: E1014 09:58:41.016467 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 14 09:58:41 crc kubenswrapper[4698]: I1014 09:58:41.016315 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 14 09:58:41 crc kubenswrapper[4698]: I1014 09:58:41.016601 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 14 09:58:41 crc kubenswrapper[4698]: E1014 09:58:41.016659 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 14 09:58:41 crc kubenswrapper[4698]: E1014 09:58:41.016827 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 14 09:58:41 crc kubenswrapper[4698]: I1014 09:58:41.066536 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:58:41 crc kubenswrapper[4698]: I1014 09:58:41.066607 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:58:41 crc kubenswrapper[4698]: I1014 09:58:41.066650 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:58:41 crc kubenswrapper[4698]: I1014 09:58:41.066675 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:58:41 crc kubenswrapper[4698]: I1014 09:58:41.066694 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:58:41Z","lastTransitionTime":"2025-10-14T09:58:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:58:41 crc kubenswrapper[4698]: I1014 09:58:41.169219 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:58:41 crc kubenswrapper[4698]: I1014 09:58:41.169268 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:58:41 crc kubenswrapper[4698]: I1014 09:58:41.169280 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:58:41 crc kubenswrapper[4698]: I1014 09:58:41.169301 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:58:41 crc kubenswrapper[4698]: I1014 09:58:41.169313 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:58:41Z","lastTransitionTime":"2025-10-14T09:58:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:58:41 crc kubenswrapper[4698]: I1014 09:58:41.272087 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:58:41 crc kubenswrapper[4698]: I1014 09:58:41.272153 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:58:41 crc kubenswrapper[4698]: I1014 09:58:41.272166 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:58:41 crc kubenswrapper[4698]: I1014 09:58:41.272188 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:58:41 crc kubenswrapper[4698]: I1014 09:58:41.272209 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:58:41Z","lastTransitionTime":"2025-10-14T09:58:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:58:41 crc kubenswrapper[4698]: I1014 09:58:41.374927 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:58:41 crc kubenswrapper[4698]: I1014 09:58:41.374995 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:58:41 crc kubenswrapper[4698]: I1014 09:58:41.375012 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:58:41 crc kubenswrapper[4698]: I1014 09:58:41.375041 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:58:41 crc kubenswrapper[4698]: I1014 09:58:41.375061 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:58:41Z","lastTransitionTime":"2025-10-14T09:58:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:58:41 crc kubenswrapper[4698]: I1014 09:58:41.478299 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:58:41 crc kubenswrapper[4698]: I1014 09:58:41.478342 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:58:41 crc kubenswrapper[4698]: I1014 09:58:41.478352 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:58:41 crc kubenswrapper[4698]: I1014 09:58:41.478369 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:58:41 crc kubenswrapper[4698]: I1014 09:58:41.478381 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:58:41Z","lastTransitionTime":"2025-10-14T09:58:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:58:41 crc kubenswrapper[4698]: I1014 09:58:41.581353 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:58:41 crc kubenswrapper[4698]: I1014 09:58:41.581409 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:58:41 crc kubenswrapper[4698]: I1014 09:58:41.581421 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:58:41 crc kubenswrapper[4698]: I1014 09:58:41.581442 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:58:41 crc kubenswrapper[4698]: I1014 09:58:41.581456 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:58:41Z","lastTransitionTime":"2025-10-14T09:58:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:58:41 crc kubenswrapper[4698]: I1014 09:58:41.684506 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:58:41 crc kubenswrapper[4698]: I1014 09:58:41.684559 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:58:41 crc kubenswrapper[4698]: I1014 09:58:41.684569 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:58:41 crc kubenswrapper[4698]: I1014 09:58:41.684592 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:58:41 crc kubenswrapper[4698]: I1014 09:58:41.684605 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:58:41Z","lastTransitionTime":"2025-10-14T09:58:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:58:41 crc kubenswrapper[4698]: I1014 09:58:41.788032 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:58:41 crc kubenswrapper[4698]: I1014 09:58:41.788133 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:58:41 crc kubenswrapper[4698]: I1014 09:58:41.788163 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:58:41 crc kubenswrapper[4698]: I1014 09:58:41.788202 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:58:41 crc kubenswrapper[4698]: I1014 09:58:41.788227 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:58:41Z","lastTransitionTime":"2025-10-14T09:58:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:58:41 crc kubenswrapper[4698]: I1014 09:58:41.890811 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:58:41 crc kubenswrapper[4698]: I1014 09:58:41.890880 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:58:41 crc kubenswrapper[4698]: I1014 09:58:41.890899 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:58:41 crc kubenswrapper[4698]: I1014 09:58:41.890926 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:58:41 crc kubenswrapper[4698]: I1014 09:58:41.890950 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:58:41Z","lastTransitionTime":"2025-10-14T09:58:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:58:41 crc kubenswrapper[4698]: I1014 09:58:41.993078 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:58:41 crc kubenswrapper[4698]: I1014 09:58:41.993122 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:58:41 crc kubenswrapper[4698]: I1014 09:58:41.993134 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:58:41 crc kubenswrapper[4698]: I1014 09:58:41.993150 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:58:41 crc kubenswrapper[4698]: I1014 09:58:41.993162 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:58:41Z","lastTransitionTime":"2025-10-14T09:58:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 09:58:42 crc kubenswrapper[4698]: I1014 09:58:42.016611 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jbpnj" Oct 14 09:58:42 crc kubenswrapper[4698]: E1014 09:58:42.016730 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-jbpnj" podUID="41f5ac86-35f8-416c-bbfe-1e182975ec5c" Oct 14 09:58:42 crc kubenswrapper[4698]: I1014 09:58:42.096820 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:58:42 crc kubenswrapper[4698]: I1014 09:58:42.096887 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:58:42 crc kubenswrapper[4698]: I1014 09:58:42.096899 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:58:42 crc kubenswrapper[4698]: I1014 09:58:42.096915 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:58:42 crc kubenswrapper[4698]: I1014 09:58:42.096926 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:58:42Z","lastTransitionTime":"2025-10-14T09:58:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:58:42 crc kubenswrapper[4698]: I1014 09:58:42.199606 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:58:42 crc kubenswrapper[4698]: I1014 09:58:42.199649 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:58:42 crc kubenswrapper[4698]: I1014 09:58:42.199659 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:58:42 crc kubenswrapper[4698]: I1014 09:58:42.199674 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:58:42 crc kubenswrapper[4698]: I1014 09:58:42.199684 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:58:42Z","lastTransitionTime":"2025-10-14T09:58:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:58:42 crc kubenswrapper[4698]: I1014 09:58:42.304263 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:58:42 crc kubenswrapper[4698]: I1014 09:58:42.304324 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:58:42 crc kubenswrapper[4698]: I1014 09:58:42.304338 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:58:42 crc kubenswrapper[4698]: I1014 09:58:42.304359 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:58:42 crc kubenswrapper[4698]: I1014 09:58:42.304463 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:58:42Z","lastTransitionTime":"2025-10-14T09:58:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:58:42 crc kubenswrapper[4698]: I1014 09:58:42.407868 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:58:42 crc kubenswrapper[4698]: I1014 09:58:42.407946 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:58:42 crc kubenswrapper[4698]: I1014 09:58:42.407969 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:58:42 crc kubenswrapper[4698]: I1014 09:58:42.407996 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:58:42 crc kubenswrapper[4698]: I1014 09:58:42.408014 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:58:42Z","lastTransitionTime":"2025-10-14T09:58:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:58:42 crc kubenswrapper[4698]: I1014 09:58:42.511571 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:58:42 crc kubenswrapper[4698]: I1014 09:58:42.511640 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:58:42 crc kubenswrapper[4698]: I1014 09:58:42.511651 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:58:42 crc kubenswrapper[4698]: I1014 09:58:42.511671 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:58:42 crc kubenswrapper[4698]: I1014 09:58:42.511683 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:58:42Z","lastTransitionTime":"2025-10-14T09:58:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:58:42 crc kubenswrapper[4698]: I1014 09:58:42.615930 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:58:42 crc kubenswrapper[4698]: I1014 09:58:42.615977 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:58:42 crc kubenswrapper[4698]: I1014 09:58:42.615989 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:58:42 crc kubenswrapper[4698]: I1014 09:58:42.616009 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:58:42 crc kubenswrapper[4698]: I1014 09:58:42.616022 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:58:42Z","lastTransitionTime":"2025-10-14T09:58:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:58:42 crc kubenswrapper[4698]: I1014 09:58:42.719739 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:58:42 crc kubenswrapper[4698]: I1014 09:58:42.719875 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:58:42 crc kubenswrapper[4698]: I1014 09:58:42.719902 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:58:42 crc kubenswrapper[4698]: I1014 09:58:42.719936 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:58:42 crc kubenswrapper[4698]: I1014 09:58:42.719958 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:58:42Z","lastTransitionTime":"2025-10-14T09:58:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:58:42 crc kubenswrapper[4698]: I1014 09:58:42.823433 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:58:42 crc kubenswrapper[4698]: I1014 09:58:42.823504 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:58:42 crc kubenswrapper[4698]: I1014 09:58:42.823520 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:58:42 crc kubenswrapper[4698]: I1014 09:58:42.823548 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:58:42 crc kubenswrapper[4698]: I1014 09:58:42.823567 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:58:42Z","lastTransitionTime":"2025-10-14T09:58:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:58:42 crc kubenswrapper[4698]: I1014 09:58:42.926117 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:58:42 crc kubenswrapper[4698]: I1014 09:58:42.926179 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:58:42 crc kubenswrapper[4698]: I1014 09:58:42.926193 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:58:42 crc kubenswrapper[4698]: I1014 09:58:42.926213 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:58:42 crc kubenswrapper[4698]: I1014 09:58:42.926225 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:58:42Z","lastTransitionTime":"2025-10-14T09:58:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 09:58:43 crc kubenswrapper[4698]: I1014 09:58:43.016241 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 14 09:58:43 crc kubenswrapper[4698]: I1014 09:58:43.016359 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 14 09:58:43 crc kubenswrapper[4698]: E1014 09:58:43.016487 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 14 09:58:43 crc kubenswrapper[4698]: I1014 09:58:43.016573 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 14 09:58:43 crc kubenswrapper[4698]: E1014 09:58:43.016657 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 14 09:58:43 crc kubenswrapper[4698]: E1014 09:58:43.017020 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 14 09:58:43 crc kubenswrapper[4698]: I1014 09:58:43.029304 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:58:43 crc kubenswrapper[4698]: I1014 09:58:43.029378 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:58:43 crc kubenswrapper[4698]: I1014 09:58:43.029401 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:58:43 crc kubenswrapper[4698]: I1014 09:58:43.029435 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:58:43 crc kubenswrapper[4698]: I1014 09:58:43.029458 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:58:43Z","lastTransitionTime":"2025-10-14T09:58:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:58:43 crc kubenswrapper[4698]: I1014 09:58:43.133079 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:58:43 crc kubenswrapper[4698]: I1014 09:58:43.133150 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:58:43 crc kubenswrapper[4698]: I1014 09:58:43.133166 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:58:43 crc kubenswrapper[4698]: I1014 09:58:43.133188 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:58:43 crc kubenswrapper[4698]: I1014 09:58:43.133201 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:58:43Z","lastTransitionTime":"2025-10-14T09:58:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:58:43 crc kubenswrapper[4698]: I1014 09:58:43.237216 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:58:43 crc kubenswrapper[4698]: I1014 09:58:43.237265 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:58:43 crc kubenswrapper[4698]: I1014 09:58:43.237274 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:58:43 crc kubenswrapper[4698]: I1014 09:58:43.237289 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:58:43 crc kubenswrapper[4698]: I1014 09:58:43.237300 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:58:43Z","lastTransitionTime":"2025-10-14T09:58:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:58:43 crc kubenswrapper[4698]: I1014 09:58:43.340025 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:58:43 crc kubenswrapper[4698]: I1014 09:58:43.340083 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:58:43 crc kubenswrapper[4698]: I1014 09:58:43.340098 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:58:43 crc kubenswrapper[4698]: I1014 09:58:43.340120 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:58:43 crc kubenswrapper[4698]: I1014 09:58:43.340138 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:58:43Z","lastTransitionTime":"2025-10-14T09:58:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:58:43 crc kubenswrapper[4698]: I1014 09:58:43.443446 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:58:43 crc kubenswrapper[4698]: I1014 09:58:43.443502 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:58:43 crc kubenswrapper[4698]: I1014 09:58:43.443520 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:58:43 crc kubenswrapper[4698]: I1014 09:58:43.443544 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:58:43 crc kubenswrapper[4698]: I1014 09:58:43.443561 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:58:43Z","lastTransitionTime":"2025-10-14T09:58:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:58:43 crc kubenswrapper[4698]: I1014 09:58:43.546438 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:58:43 crc kubenswrapper[4698]: I1014 09:58:43.546492 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:58:43 crc kubenswrapper[4698]: I1014 09:58:43.546501 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:58:43 crc kubenswrapper[4698]: I1014 09:58:43.546518 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:58:43 crc kubenswrapper[4698]: I1014 09:58:43.546529 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:58:43Z","lastTransitionTime":"2025-10-14T09:58:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:58:43 crc kubenswrapper[4698]: I1014 09:58:43.649559 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:58:43 crc kubenswrapper[4698]: I1014 09:58:43.649607 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:58:43 crc kubenswrapper[4698]: I1014 09:58:43.649618 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:58:43 crc kubenswrapper[4698]: I1014 09:58:43.649634 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:58:43 crc kubenswrapper[4698]: I1014 09:58:43.649646 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:58:43Z","lastTransitionTime":"2025-10-14T09:58:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 09:58:43 crc kubenswrapper[4698]: I1014 09:58:43.699947 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 09:58:43 crc kubenswrapper[4698]: I1014 09:58:43.700004 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 09:58:43 crc kubenswrapper[4698]: I1014 09:58:43.700024 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 09:58:43 crc kubenswrapper[4698]: I1014 09:58:43.700050 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 09:58:43 crc kubenswrapper[4698]: I1014 09:58:43.700085 4698 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T09:58:43Z","lastTransitionTime":"2025-10-14T09:58:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 09:58:43 crc kubenswrapper[4698]: I1014 09:58:43.759627 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-5c965bbfc6-srhgd"] Oct 14 09:58:43 crc kubenswrapper[4698]: I1014 09:58:43.760075 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-srhgd" Oct 14 09:58:43 crc kubenswrapper[4698]: I1014 09:58:43.763163 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Oct 14 09:58:43 crc kubenswrapper[4698]: I1014 09:58:43.763395 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Oct 14 09:58:43 crc kubenswrapper[4698]: I1014 09:58:43.763617 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Oct 14 09:58:43 crc kubenswrapper[4698]: I1014 09:58:43.763819 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Oct 14 09:58:43 crc kubenswrapper[4698]: I1014 09:58:43.821510 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-crc" podStartSLOduration=3.821483577 podStartE2EDuration="3.821483577s" podCreationTimestamp="2025-10-14 09:58:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 09:58:43.81948844 +0000 UTC m=+105.516787866" watchObservedRunningTime="2025-10-14 09:58:43.821483577 +0000 UTC m=+105.518783033" Oct 14 09:58:43 crc kubenswrapper[4698]: I1014 09:58:43.825175 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/c53495df-fe70-4c9c-b071-dbdf13f9f3ef-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-srhgd\" (UID: \"c53495df-fe70-4c9c-b071-dbdf13f9f3ef\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-srhgd" Oct 14 09:58:43 crc kubenswrapper[4698]: I1014 09:58:43.825246 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/c53495df-fe70-4c9c-b071-dbdf13f9f3ef-service-ca\") pod \"cluster-version-operator-5c965bbfc6-srhgd\" (UID: \"c53495df-fe70-4c9c-b071-dbdf13f9f3ef\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-srhgd" Oct 14 09:58:43 crc kubenswrapper[4698]: I1014 09:58:43.825309 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c53495df-fe70-4c9c-b071-dbdf13f9f3ef-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-srhgd\" (UID: \"c53495df-fe70-4c9c-b071-dbdf13f9f3ef\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-srhgd" Oct 14 09:58:43 crc kubenswrapper[4698]: I1014 09:58:43.825387 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c53495df-fe70-4c9c-b071-dbdf13f9f3ef-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-srhgd\" (UID: \"c53495df-fe70-4c9c-b071-dbdf13f9f3ef\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-srhgd" Oct 14 09:58:43 crc kubenswrapper[4698]: I1014 09:58:43.825526 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/c53495df-fe70-4c9c-b071-dbdf13f9f3ef-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-srhgd\" (UID: \"c53495df-fe70-4c9c-b071-dbdf13f9f3ef\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-srhgd" Oct 14 09:58:43 crc kubenswrapper[4698]: I1014 09:58:43.926865 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c53495df-fe70-4c9c-b071-dbdf13f9f3ef-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-srhgd\" (UID: 
\"c53495df-fe70-4c9c-b071-dbdf13f9f3ef\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-srhgd" Oct 14 09:58:43 crc kubenswrapper[4698]: I1014 09:58:43.926947 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/c53495df-fe70-4c9c-b071-dbdf13f9f3ef-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-srhgd\" (UID: \"c53495df-fe70-4c9c-b071-dbdf13f9f3ef\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-srhgd" Oct 14 09:58:43 crc kubenswrapper[4698]: I1014 09:58:43.927005 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/c53495df-fe70-4c9c-b071-dbdf13f9f3ef-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-srhgd\" (UID: \"c53495df-fe70-4c9c-b071-dbdf13f9f3ef\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-srhgd" Oct 14 09:58:43 crc kubenswrapper[4698]: I1014 09:58:43.927058 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/c53495df-fe70-4c9c-b071-dbdf13f9f3ef-service-ca\") pod \"cluster-version-operator-5c965bbfc6-srhgd\" (UID: \"c53495df-fe70-4c9c-b071-dbdf13f9f3ef\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-srhgd" Oct 14 09:58:43 crc kubenswrapper[4698]: I1014 09:58:43.927101 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c53495df-fe70-4c9c-b071-dbdf13f9f3ef-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-srhgd\" (UID: \"c53495df-fe70-4c9c-b071-dbdf13f9f3ef\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-srhgd" Oct 14 09:58:43 crc kubenswrapper[4698]: I1014 09:58:43.927459 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/c53495df-fe70-4c9c-b071-dbdf13f9f3ef-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-srhgd\" (UID: \"c53495df-fe70-4c9c-b071-dbdf13f9f3ef\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-srhgd" Oct 14 09:58:43 crc kubenswrapper[4698]: I1014 09:58:43.927473 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/c53495df-fe70-4c9c-b071-dbdf13f9f3ef-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-srhgd\" (UID: \"c53495df-fe70-4c9c-b071-dbdf13f9f3ef\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-srhgd" Oct 14 09:58:43 crc kubenswrapper[4698]: I1014 09:58:43.929161 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/c53495df-fe70-4c9c-b071-dbdf13f9f3ef-service-ca\") pod \"cluster-version-operator-5c965bbfc6-srhgd\" (UID: \"c53495df-fe70-4c9c-b071-dbdf13f9f3ef\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-srhgd" Oct 14 09:58:43 crc kubenswrapper[4698]: I1014 09:58:43.935178 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c53495df-fe70-4c9c-b071-dbdf13f9f3ef-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-srhgd\" (UID: \"c53495df-fe70-4c9c-b071-dbdf13f9f3ef\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-srhgd" Oct 14 09:58:43 crc kubenswrapper[4698]: I1014 09:58:43.950842 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c53495df-fe70-4c9c-b071-dbdf13f9f3ef-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-srhgd\" (UID: \"c53495df-fe70-4c9c-b071-dbdf13f9f3ef\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-srhgd" Oct 14 09:58:44 crc 
kubenswrapper[4698]: I1014 09:58:44.016339 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jbpnj" Oct 14 09:58:44 crc kubenswrapper[4698]: E1014 09:58:44.016529 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jbpnj" podUID="41f5ac86-35f8-416c-bbfe-1e182975ec5c" Oct 14 09:58:44 crc kubenswrapper[4698]: I1014 09:58:44.018058 4698 scope.go:117] "RemoveContainer" containerID="6c7231fbb7fd473f2878319c720ec92ea8eade012d5912e8ad80355542ffa3d2" Oct 14 09:58:44 crc kubenswrapper[4698]: E1014 09:58:44.018449 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-hspfz_openshift-ovn-kubernetes(d02f5359-81fc-4261-b995-e58c78bcec0e)\"" pod="openshift-ovn-kubernetes/ovnkube-node-hspfz" podUID="d02f5359-81fc-4261-b995-e58c78bcec0e" Oct 14 09:58:44 crc kubenswrapper[4698]: I1014 09:58:44.086873 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-srhgd" Oct 14 09:58:44 crc kubenswrapper[4698]: I1014 09:58:44.648316 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-srhgd" event={"ID":"c53495df-fe70-4c9c-b071-dbdf13f9f3ef","Type":"ContainerStarted","Data":"f210790d82564029a6364d0c38c3dd9fac1947acacaf49d46f1e71787c35a7d7"} Oct 14 09:58:44 crc kubenswrapper[4698]: I1014 09:58:44.649023 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-srhgd" event={"ID":"c53495df-fe70-4c9c-b071-dbdf13f9f3ef","Type":"ContainerStarted","Data":"588e81190972569deb34ad51c70226f83a3421978af1b72847248f76ccaa3354"} Oct 14 09:58:45 crc kubenswrapper[4698]: I1014 09:58:45.016901 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 14 09:58:45 crc kubenswrapper[4698]: I1014 09:58:45.016989 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 14 09:58:45 crc kubenswrapper[4698]: E1014 09:58:45.017093 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 14 09:58:45 crc kubenswrapper[4698]: E1014 09:58:45.017232 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 14 09:58:45 crc kubenswrapper[4698]: I1014 09:58:45.017418 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 14 09:58:45 crc kubenswrapper[4698]: E1014 09:58:45.017532 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 14 09:58:46 crc kubenswrapper[4698]: I1014 09:58:46.016721 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jbpnj" Oct 14 09:58:46 crc kubenswrapper[4698]: E1014 09:58:46.016925 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-jbpnj" podUID="41f5ac86-35f8-416c-bbfe-1e182975ec5c" Oct 14 09:58:47 crc kubenswrapper[4698]: I1014 09:58:47.016140 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 14 09:58:47 crc kubenswrapper[4698]: I1014 09:58:47.016181 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 14 09:58:47 crc kubenswrapper[4698]: I1014 09:58:47.016215 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 14 09:58:47 crc kubenswrapper[4698]: E1014 09:58:47.016344 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 14 09:58:47 crc kubenswrapper[4698]: E1014 09:58:47.016407 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 14 09:58:47 crc kubenswrapper[4698]: E1014 09:58:47.016467 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 14 09:58:48 crc kubenswrapper[4698]: I1014 09:58:48.016429 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jbpnj" Oct 14 09:58:48 crc kubenswrapper[4698]: E1014 09:58:48.016630 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jbpnj" podUID="41f5ac86-35f8-416c-bbfe-1e182975ec5c" Oct 14 09:58:49 crc kubenswrapper[4698]: I1014 09:58:49.016241 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 14 09:58:49 crc kubenswrapper[4698]: I1014 09:58:49.016327 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 14 09:58:49 crc kubenswrapper[4698]: I1014 09:58:49.016369 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 14 09:58:49 crc kubenswrapper[4698]: E1014 09:58:49.019456 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 14 09:58:49 crc kubenswrapper[4698]: E1014 09:58:49.019719 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 14 09:58:49 crc kubenswrapper[4698]: E1014 09:58:49.019906 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 14 09:58:50 crc kubenswrapper[4698]: I1014 09:58:50.016982 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-jbpnj" Oct 14 09:58:50 crc kubenswrapper[4698]: E1014 09:58:50.017206 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jbpnj" podUID="41f5ac86-35f8-416c-bbfe-1e182975ec5c" Oct 14 09:58:51 crc kubenswrapper[4698]: I1014 09:58:51.016712 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 14 09:58:51 crc kubenswrapper[4698]: E1014 09:58:51.016898 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 14 09:58:51 crc kubenswrapper[4698]: I1014 09:58:51.017346 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 14 09:58:51 crc kubenswrapper[4698]: E1014 09:58:51.018381 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 14 09:58:51 crc kubenswrapper[4698]: I1014 09:58:51.019014 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 14 09:58:51 crc kubenswrapper[4698]: E1014 09:58:51.019166 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 14 09:58:52 crc kubenswrapper[4698]: I1014 09:58:52.016650 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jbpnj" Oct 14 09:58:52 crc kubenswrapper[4698]: E1014 09:58:52.017173 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-jbpnj" podUID="41f5ac86-35f8-416c-bbfe-1e182975ec5c" Oct 14 09:58:52 crc kubenswrapper[4698]: I1014 09:58:52.680175 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-b7cbk_fbf10bbc-318d-4f46-83a0-fdbad9888201/kube-multus/1.log" Oct 14 09:58:52 crc kubenswrapper[4698]: I1014 09:58:52.680754 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-b7cbk_fbf10bbc-318d-4f46-83a0-fdbad9888201/kube-multus/0.log" Oct 14 09:58:52 crc kubenswrapper[4698]: I1014 09:58:52.680830 4698 generic.go:334] "Generic (PLEG): container finished" podID="fbf10bbc-318d-4f46-83a0-fdbad9888201" containerID="52c9a8ccad3eed5af66c0178544ca46fabcfaab76d88471b2a62606f2f860522" exitCode=1 Oct 14 09:58:52 crc kubenswrapper[4698]: I1014 09:58:52.680866 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-b7cbk" event={"ID":"fbf10bbc-318d-4f46-83a0-fdbad9888201","Type":"ContainerDied","Data":"52c9a8ccad3eed5af66c0178544ca46fabcfaab76d88471b2a62606f2f860522"} Oct 14 09:58:52 crc kubenswrapper[4698]: I1014 09:58:52.680909 4698 scope.go:117] "RemoveContainer" containerID="4a0b9fe188f41927a342dc4928eaa5f27d151d9901af15f33ce252bdbfbb3be6" Oct 14 09:58:52 crc kubenswrapper[4698]: I1014 09:58:52.681964 4698 scope.go:117] "RemoveContainer" containerID="52c9a8ccad3eed5af66c0178544ca46fabcfaab76d88471b2a62606f2f860522" Oct 14 09:58:52 crc kubenswrapper[4698]: E1014 09:58:52.682270 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-multus pod=multus-b7cbk_openshift-multus(fbf10bbc-318d-4f46-83a0-fdbad9888201)\"" pod="openshift-multus/multus-b7cbk" podUID="fbf10bbc-318d-4f46-83a0-fdbad9888201" Oct 14 09:58:52 crc kubenswrapper[4698]: I1014 09:58:52.712599 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-srhgd" podStartSLOduration=95.712569434 podStartE2EDuration="1m35.712569434s" podCreationTimestamp="2025-10-14 09:57:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 09:58:44.675280726 +0000 UTC m=+106.372580162" watchObservedRunningTime="2025-10-14 09:58:52.712569434 +0000 UTC m=+114.409868880" Oct 14 09:58:53 crc kubenswrapper[4698]: I1014 09:58:53.016713 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 14 09:58:53 crc kubenswrapper[4698]: I1014 09:58:53.016880 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 14 09:58:53 crc kubenswrapper[4698]: I1014 09:58:53.016902 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 14 09:58:53 crc kubenswrapper[4698]: E1014 09:58:53.017054 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 14 09:58:53 crc kubenswrapper[4698]: E1014 09:58:53.017179 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 14 09:58:53 crc kubenswrapper[4698]: E1014 09:58:53.017307 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 14 09:58:53 crc kubenswrapper[4698]: I1014 09:58:53.688120 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-b7cbk_fbf10bbc-318d-4f46-83a0-fdbad9888201/kube-multus/1.log" Oct 14 09:58:54 crc kubenswrapper[4698]: I1014 09:58:54.017089 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jbpnj" Oct 14 09:58:54 crc kubenswrapper[4698]: E1014 09:58:54.017350 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jbpnj" podUID="41f5ac86-35f8-416c-bbfe-1e182975ec5c" Oct 14 09:58:55 crc kubenswrapper[4698]: I1014 09:58:55.015881 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 14 09:58:55 crc kubenswrapper[4698]: I1014 09:58:55.015933 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 14 09:58:55 crc kubenswrapper[4698]: I1014 09:58:55.015891 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 14 09:58:55 crc kubenswrapper[4698]: E1014 09:58:55.016039 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 14 09:58:55 crc kubenswrapper[4698]: E1014 09:58:55.016133 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 14 09:58:55 crc kubenswrapper[4698]: E1014 09:58:55.016609 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 14 09:58:55 crc kubenswrapper[4698]: I1014 09:58:55.017076 4698 scope.go:117] "RemoveContainer" containerID="6c7231fbb7fd473f2878319c720ec92ea8eade012d5912e8ad80355542ffa3d2" Oct 14 09:58:55 crc kubenswrapper[4698]: E1014 09:58:55.017299 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-hspfz_openshift-ovn-kubernetes(d02f5359-81fc-4261-b995-e58c78bcec0e)\"" pod="openshift-ovn-kubernetes/ovnkube-node-hspfz" podUID="d02f5359-81fc-4261-b995-e58c78bcec0e" Oct 14 09:58:56 crc kubenswrapper[4698]: I1014 09:58:56.016429 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jbpnj" Oct 14 09:58:56 crc kubenswrapper[4698]: E1014 09:58:56.016565 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jbpnj" podUID="41f5ac86-35f8-416c-bbfe-1e182975ec5c" Oct 14 09:58:57 crc kubenswrapper[4698]: I1014 09:58:57.016715 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 14 09:58:57 crc kubenswrapper[4698]: E1014 09:58:57.016891 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 14 09:58:57 crc kubenswrapper[4698]: I1014 09:58:57.016731 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 14 09:58:57 crc kubenswrapper[4698]: I1014 09:58:57.016722 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 14 09:58:57 crc kubenswrapper[4698]: E1014 09:58:57.016966 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 14 09:58:57 crc kubenswrapper[4698]: E1014 09:58:57.017123 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 14 09:58:58 crc kubenswrapper[4698]: I1014 09:58:58.016892 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-jbpnj" Oct 14 09:58:58 crc kubenswrapper[4698]: E1014 09:58:58.017085 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jbpnj" podUID="41f5ac86-35f8-416c-bbfe-1e182975ec5c" Oct 14 09:58:59 crc kubenswrapper[4698]: E1014 09:58:59.014391 4698 kubelet_node_status.go:497] "Node not becoming ready in time after startup" Oct 14 09:58:59 crc kubenswrapper[4698]: I1014 09:58:59.016615 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 14 09:58:59 crc kubenswrapper[4698]: I1014 09:58:59.016724 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 14 09:58:59 crc kubenswrapper[4698]: E1014 09:58:59.018076 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 14 09:58:59 crc kubenswrapper[4698]: I1014 09:58:59.018108 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 14 09:58:59 crc kubenswrapper[4698]: E1014 09:58:59.018320 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 14 09:58:59 crc kubenswrapper[4698]: E1014 09:58:59.018361 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 14 09:58:59 crc kubenswrapper[4698]: E1014 09:58:59.107392 4698 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Oct 14 09:59:00 crc kubenswrapper[4698]: I1014 09:59:00.016446 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jbpnj" Oct 14 09:59:00 crc kubenswrapper[4698]: E1014 09:59:00.016641 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-jbpnj" podUID="41f5ac86-35f8-416c-bbfe-1e182975ec5c" Oct 14 09:59:01 crc kubenswrapper[4698]: I1014 09:59:01.015989 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 14 09:59:01 crc kubenswrapper[4698]: I1014 09:59:01.016105 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 14 09:59:01 crc kubenswrapper[4698]: E1014 09:59:01.016153 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 14 09:59:01 crc kubenswrapper[4698]: I1014 09:59:01.016293 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 14 09:59:01 crc kubenswrapper[4698]: E1014 09:59:01.016465 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 14 09:59:01 crc kubenswrapper[4698]: E1014 09:59:01.016733 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 14 09:59:02 crc kubenswrapper[4698]: I1014 09:59:02.016892 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jbpnj" Oct 14 09:59:02 crc kubenswrapper[4698]: E1014 09:59:02.017274 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jbpnj" podUID="41f5ac86-35f8-416c-bbfe-1e182975ec5c" Oct 14 09:59:03 crc kubenswrapper[4698]: I1014 09:59:03.016431 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 14 09:59:03 crc kubenswrapper[4698]: E1014 09:59:03.016565 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 14 09:59:03 crc kubenswrapper[4698]: I1014 09:59:03.016434 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 14 09:59:03 crc kubenswrapper[4698]: E1014 09:59:03.016642 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 14 09:59:03 crc kubenswrapper[4698]: I1014 09:59:03.016426 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 14 09:59:03 crc kubenswrapper[4698]: E1014 09:59:03.016704 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 14 09:59:04 crc kubenswrapper[4698]: I1014 09:59:04.016265 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-jbpnj" Oct 14 09:59:04 crc kubenswrapper[4698]: E1014 09:59:04.016419 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jbpnj" podUID="41f5ac86-35f8-416c-bbfe-1e182975ec5c" Oct 14 09:59:04 crc kubenswrapper[4698]: E1014 09:59:04.108562 4698 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Oct 14 09:59:05 crc kubenswrapper[4698]: I1014 09:59:05.016050 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 14 09:59:05 crc kubenswrapper[4698]: E1014 09:59:05.016418 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 14 09:59:05 crc kubenswrapper[4698]: I1014 09:59:05.016574 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 14 09:59:05 crc kubenswrapper[4698]: E1014 09:59:05.016975 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 14 09:59:05 crc kubenswrapper[4698]: I1014 09:59:05.016598 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 14 09:59:05 crc kubenswrapper[4698]: E1014 09:59:05.017158 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 14 09:59:06 crc kubenswrapper[4698]: I1014 09:59:06.016959 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-jbpnj" Oct 14 09:59:06 crc kubenswrapper[4698]: I1014 09:59:06.017699 4698 scope.go:117] "RemoveContainer" containerID="6c7231fbb7fd473f2878319c720ec92ea8eade012d5912e8ad80355542ffa3d2" Oct 14 09:59:06 crc kubenswrapper[4698]: I1014 09:59:06.017863 4698 scope.go:117] "RemoveContainer" containerID="52c9a8ccad3eed5af66c0178544ca46fabcfaab76d88471b2a62606f2f860522" Oct 14 09:59:06 crc kubenswrapper[4698]: E1014 09:59:06.017980 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jbpnj" podUID="41f5ac86-35f8-416c-bbfe-1e182975ec5c" Oct 14 09:59:06 crc kubenswrapper[4698]: I1014 09:59:06.739252 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-hspfz_d02f5359-81fc-4261-b995-e58c78bcec0e/ovnkube-controller/3.log" Oct 14 09:59:06 crc kubenswrapper[4698]: I1014 09:59:06.742320 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hspfz" event={"ID":"d02f5359-81fc-4261-b995-e58c78bcec0e","Type":"ContainerStarted","Data":"69b90c9c0473f0a1add0c188fd07ba784e387a9fa7de413d857cde1946f25dc6"} Oct 14 09:59:06 crc kubenswrapper[4698]: I1014 09:59:06.742755 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-hspfz" Oct 14 09:59:06 crc kubenswrapper[4698]: I1014 09:59:06.744253 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-b7cbk_fbf10bbc-318d-4f46-83a0-fdbad9888201/kube-multus/1.log" Oct 14 09:59:06 crc kubenswrapper[4698]: I1014 09:59:06.744292 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-b7cbk" 
event={"ID":"fbf10bbc-318d-4f46-83a0-fdbad9888201","Type":"ContainerStarted","Data":"229bd4cd219e41d476b3856b757a9ed7e76bd1f073deb35fb68c0de19dbc7bfe"} Oct 14 09:59:06 crc kubenswrapper[4698]: I1014 09:59:06.786536 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-hspfz" podStartSLOduration=108.786517528 podStartE2EDuration="1m48.786517528s" podCreationTimestamp="2025-10-14 09:57:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 09:59:06.785679214 +0000 UTC m=+128.482978670" watchObservedRunningTime="2025-10-14 09:59:06.786517528 +0000 UTC m=+128.483816954" Oct 14 09:59:06 crc kubenswrapper[4698]: I1014 09:59:06.925671 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-jbpnj"] Oct 14 09:59:06 crc kubenswrapper[4698]: I1014 09:59:06.925803 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jbpnj" Oct 14 09:59:06 crc kubenswrapper[4698]: E1014 09:59:06.925885 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jbpnj" podUID="41f5ac86-35f8-416c-bbfe-1e182975ec5c" Oct 14 09:59:07 crc kubenswrapper[4698]: I1014 09:59:07.016504 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 14 09:59:07 crc kubenswrapper[4698]: I1014 09:59:07.016515 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 14 09:59:07 crc kubenswrapper[4698]: E1014 09:59:07.016677 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 14 09:59:07 crc kubenswrapper[4698]: I1014 09:59:07.016543 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 14 09:59:07 crc kubenswrapper[4698]: E1014 09:59:07.016754 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 14 09:59:07 crc kubenswrapper[4698]: E1014 09:59:07.016868 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 14 09:59:09 crc kubenswrapper[4698]: I1014 09:59:09.016208 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-jbpnj" Oct 14 09:59:09 crc kubenswrapper[4698]: I1014 09:59:09.019374 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 14 09:59:09 crc kubenswrapper[4698]: I1014 09:59:09.019418 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 14 09:59:09 crc kubenswrapper[4698]: I1014 09:59:09.019410 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 14 09:59:09 crc kubenswrapper[4698]: E1014 09:59:09.019599 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 14 09:59:09 crc kubenswrapper[4698]: E1014 09:59:09.019799 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 14 09:59:09 crc kubenswrapper[4698]: E1014 09:59:09.019538 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jbpnj" podUID="41f5ac86-35f8-416c-bbfe-1e182975ec5c" Oct 14 09:59:09 crc kubenswrapper[4698]: E1014 09:59:09.019925 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 14 09:59:11 crc kubenswrapper[4698]: I1014 09:59:11.017071 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jbpnj" Oct 14 09:59:11 crc kubenswrapper[4698]: I1014 09:59:11.017172 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 14 09:59:11 crc kubenswrapper[4698]: I1014 09:59:11.017192 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 14 09:59:11 crc kubenswrapper[4698]: I1014 09:59:11.017299 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 14 09:59:11 crc kubenswrapper[4698]: I1014 09:59:11.021058 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Oct 14 09:59:11 crc kubenswrapper[4698]: I1014 09:59:11.021109 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Oct 14 09:59:11 crc kubenswrapper[4698]: I1014 09:59:11.021233 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Oct 14 09:59:11 crc kubenswrapper[4698]: I1014 09:59:11.021273 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Oct 14 09:59:11 crc kubenswrapper[4698]: I1014 09:59:11.021393 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Oct 14 09:59:11 crc kubenswrapper[4698]: I1014 09:59:11.021880 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Oct 14 09:59:14 crc kubenswrapper[4698]: I1014 09:59:14.849018 4698 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeReady" Oct 14 09:59:14 crc kubenswrapper[4698]: I1014 09:59:14.924518 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-jg26j"] Oct 14 09:59:14 crc kubenswrapper[4698]: I1014 09:59:14.925190 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-jg26j" Oct 14 09:59:14 crc kubenswrapper[4698]: I1014 09:59:14.928290 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Oct 14 09:59:14 crc kubenswrapper[4698]: I1014 09:59:14.929357 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Oct 14 09:59:14 crc kubenswrapper[4698]: I1014 09:59:14.931745 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-p266j"] Oct 14 09:59:14 crc kubenswrapper[4698]: I1014 09:59:14.932136 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Oct 14 09:59:14 crc kubenswrapper[4698]: I1014 09:59:14.932245 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Oct 14 09:59:14 crc kubenswrapper[4698]: I1014 09:59:14.933177 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-p266j" Oct 14 09:59:14 crc kubenswrapper[4698]: I1014 09:59:14.937667 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Oct 14 09:59:14 crc kubenswrapper[4698]: I1014 09:59:14.937685 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Oct 14 09:59:14 crc kubenswrapper[4698]: I1014 09:59:14.937852 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Oct 14 09:59:14 crc kubenswrapper[4698]: I1014 09:59:14.938337 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Oct 14 09:59:14 crc kubenswrapper[4698]: I1014 09:59:14.943269 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console-operator/console-operator-58897d9998-58d6k"] Oct 14 09:59:14 crc kubenswrapper[4698]: I1014 09:59:14.943837 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-58d6k" Oct 14 09:59:14 crc kubenswrapper[4698]: I1014 09:59:14.944919 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-wmxzf"] Oct 14 09:59:14 crc kubenswrapper[4698]: I1014 09:59:14.945456 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-wmxzf" Oct 14 09:59:14 crc kubenswrapper[4698]: I1014 09:59:14.945905 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Oct 14 09:59:14 crc kubenswrapper[4698]: I1014 09:59:14.946496 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Oct 14 09:59:14 crc kubenswrapper[4698]: I1014 09:59:14.946577 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-f9d7485db-f47kf"] Oct 14 09:59:14 crc kubenswrapper[4698]: I1014 09:59:14.947155 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-f47kf" Oct 14 09:59:14 crc kubenswrapper[4698]: I1014 09:59:14.948208 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-ttftp"] Oct 14 09:59:14 crc kubenswrapper[4698]: I1014 09:59:14.948803 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-ttftp" Oct 14 09:59:14 crc kubenswrapper[4698]: I1014 09:59:14.949923 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-5x56z"] Oct 14 09:59:14 crc kubenswrapper[4698]: I1014 09:59:14.950859 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-5x56z" Oct 14 09:59:14 crc kubenswrapper[4698]: I1014 09:59:14.952019 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-c6gg9"] Oct 14 09:59:14 crc kubenswrapper[4698]: I1014 09:59:14.952493 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-c6gg9" Oct 14 09:59:14 crc kubenswrapper[4698]: I1014 09:59:14.953430 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-nthfk"] Oct 14 09:59:14 crc kubenswrapper[4698]: I1014 09:59:14.953895 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-nthfk" Oct 14 09:59:14 crc kubenswrapper[4698]: I1014 09:59:14.957811 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-hshkc"] Oct 14 09:59:14 crc kubenswrapper[4698]: I1014 09:59:14.958213 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-56656f9798-79s46"] Oct 14 09:59:14 crc kubenswrapper[4698]: I1014 09:59:14.958509 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-24kzx"] Oct 14 09:59:14 crc kubenswrapper[4698]: I1014 09:59:14.958791 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-24kzx" Oct 14 09:59:14 crc kubenswrapper[4698]: I1014 09:59:14.959144 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-hshkc" Oct 14 09:59:14 crc kubenswrapper[4698]: I1014 09:59:14.959683 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-79s46" Oct 14 09:59:14 crc kubenswrapper[4698]: I1014 09:59:14.964464 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-mlnpq"] Oct 14 09:59:14 crc kubenswrapper[4698]: I1014 09:59:14.965415 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-mlnpq" Oct 14 09:59:14 crc kubenswrapper[4698]: I1014 09:59:14.967729 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-kqg88"] Oct 14 09:59:14 crc kubenswrapper[4698]: I1014 09:59:14.968372 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-svzm6"] Oct 14 09:59:14 crc kubenswrapper[4698]: I1014 09:59:14.969144 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-svzm6" Oct 14 09:59:14 crc kubenswrapper[4698]: I1014 09:59:14.969682 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-kqg88" Oct 14 09:59:14 crc kubenswrapper[4698]: I1014 09:59:14.973215 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Oct 14 09:59:14 crc kubenswrapper[4698]: I1014 09:59:14.973323 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Oct 14 09:59:14 crc kubenswrapper[4698]: I1014 09:59:14.974095 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Oct 14 09:59:14 crc kubenswrapper[4698]: I1014 09:59:14.974500 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Oct 14 09:59:14 crc kubenswrapper[4698]: I1014 09:59:14.974599 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Oct 14 09:59:14 crc kubenswrapper[4698]: I1014 09:59:14.974691 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Oct 14 09:59:14 crc kubenswrapper[4698]: I1014 09:59:14.974859 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Oct 14 09:59:14 crc kubenswrapper[4698]: I1014 09:59:14.975637 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Oct 14 09:59:14 crc kubenswrapper[4698]: I1014 09:59:14.975731 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Oct 14 09:59:14 crc kubenswrapper[4698]: I1014 09:59:14.975832 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Oct 14 09:59:14 crc kubenswrapper[4698]: I1014 09:59:14.975921 4698 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-console-operator"/"serving-cert" Oct 14 09:59:14 crc kubenswrapper[4698]: I1014 09:59:14.976002 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Oct 14 09:59:14 crc kubenswrapper[4698]: I1014 09:59:14.976123 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Oct 14 09:59:14 crc kubenswrapper[4698]: I1014 09:59:14.976399 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Oct 14 09:59:14 crc kubenswrapper[4698]: I1014 09:59:14.976485 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Oct 14 09:59:14 crc kubenswrapper[4698]: I1014 09:59:14.976610 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Oct 14 09:59:14 crc kubenswrapper[4698]: I1014 09:59:14.976788 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Oct 14 09:59:14 crc kubenswrapper[4698]: I1014 09:59:14.976886 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Oct 14 09:59:14 crc kubenswrapper[4698]: I1014 09:59:14.977009 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Oct 14 09:59:14 crc kubenswrapper[4698]: I1014 09:59:14.977605 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-jg26j"] Oct 14 09:59:14 crc kubenswrapper[4698]: I1014 09:59:14.977640 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-lbk6k"] Oct 14 09:59:14 crc kubenswrapper[4698]: I1014 09:59:14.978219 4698 util.go:30] "No sandbox for 
pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-lbk6k" Oct 14 09:59:14 crc kubenswrapper[4698]: I1014 09:59:14.987948 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Oct 14 09:59:14 crc kubenswrapper[4698]: I1014 09:59:14.988219 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Oct 14 09:59:14 crc kubenswrapper[4698]: I1014 09:59:14.988828 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Oct 14 09:59:14 crc kubenswrapper[4698]: I1014 09:59:14.989763 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Oct 14 09:59:15 crc kubenswrapper[4698]: I1014 09:59:14.999976 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-7mmrz"] Oct 14 09:59:15 crc kubenswrapper[4698]: I1014 09:59:15.003534 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Oct 14 09:59:15 crc kubenswrapper[4698]: I1014 09:59:15.010617 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Oct 14 09:59:15 crc kubenswrapper[4698]: I1014 09:59:15.012656 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Oct 14 09:59:15 crc kubenswrapper[4698]: I1014 09:59:15.012750 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-7mmrz" Oct 14 09:59:15 crc kubenswrapper[4698]: I1014 09:59:15.012926 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Oct 14 09:59:15 crc kubenswrapper[4698]: I1014 09:59:15.013131 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Oct 14 09:59:15 crc kubenswrapper[4698]: I1014 09:59:15.013186 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Oct 14 09:59:15 crc kubenswrapper[4698]: I1014 09:59:15.013204 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Oct 14 09:59:15 crc kubenswrapper[4698]: I1014 09:59:15.013477 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Oct 14 09:59:15 crc kubenswrapper[4698]: I1014 09:59:15.013571 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Oct 14 09:59:15 crc kubenswrapper[4698]: I1014 09:59:15.013712 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Oct 14 09:59:15 crc kubenswrapper[4698]: I1014 09:59:15.013849 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Oct 14 09:59:15 crc kubenswrapper[4698]: I1014 09:59:15.013897 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/10f74f9f-3784-4bb5-8fcf-acc6d625f363-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-p266j\" (UID: \"10f74f9f-3784-4bb5-8fcf-acc6d625f363\") " 
pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-p266j" Oct 14 09:59:15 crc kubenswrapper[4698]: I1014 09:59:15.013632 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Oct 14 09:59:15 crc kubenswrapper[4698]: I1014 09:59:15.013481 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Oct 14 09:59:15 crc kubenswrapper[4698]: I1014 09:59:15.014052 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Oct 14 09:59:15 crc kubenswrapper[4698]: I1014 09:59:15.013920 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gzph5\" (UniqueName: \"kubernetes.io/projected/10f74f9f-3784-4bb5-8fcf-acc6d625f363-kube-api-access-gzph5\") pod \"cluster-samples-operator-665b6dd947-p266j\" (UID: \"10f74f9f-3784-4bb5-8fcf-acc6d625f363\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-p266j" Oct 14 09:59:15 crc kubenswrapper[4698]: I1014 09:59:15.013668 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Oct 14 09:59:15 crc kubenswrapper[4698]: I1014 09:59:15.014123 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Oct 14 09:59:15 crc kubenswrapper[4698]: I1014 09:59:15.013936 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Oct 14 09:59:15 crc kubenswrapper[4698]: I1014 09:59:15.014005 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Oct 14 09:59:15 crc kubenswrapper[4698]: I1014 09:59:15.014218 4698 reflector.go:368] Caches populated for *v1.ConfigMap 
from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Oct 14 09:59:15 crc kubenswrapper[4698]: I1014 09:59:15.013488 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Oct 14 09:59:15 crc kubenswrapper[4698]: I1014 09:59:15.014403 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-cf62t"] Oct 14 09:59:15 crc kubenswrapper[4698]: I1014 09:59:15.014494 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Oct 14 09:59:15 crc kubenswrapper[4698]: I1014 09:59:15.014597 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Oct 14 09:59:15 crc kubenswrapper[4698]: I1014 09:59:15.014683 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Oct 14 09:59:15 crc kubenswrapper[4698]: I1014 09:59:15.014704 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Oct 14 09:59:15 crc kubenswrapper[4698]: I1014 09:59:15.014813 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Oct 14 09:59:15 crc kubenswrapper[4698]: I1014 09:59:15.014826 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-cf62t" Oct 14 09:59:15 crc kubenswrapper[4698]: I1014 09:59:15.014901 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Oct 14 09:59:15 crc kubenswrapper[4698]: I1014 09:59:15.015166 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-xnfjx"] Oct 14 09:59:15 crc kubenswrapper[4698]: I1014 09:59:15.015176 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Oct 14 09:59:15 crc kubenswrapper[4698]: I1014 09:59:15.015221 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Oct 14 09:59:15 crc kubenswrapper[4698]: I1014 09:59:15.015266 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Oct 14 09:59:15 crc kubenswrapper[4698]: I1014 09:59:15.015550 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Oct 14 09:59:15 crc kubenswrapper[4698]: I1014 09:59:15.015590 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Oct 14 09:59:15 crc kubenswrapper[4698]: I1014 09:59:15.015635 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Oct 14 09:59:15 crc kubenswrapper[4698]: I1014 09:59:15.015735 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Oct 14 09:59:15 crc kubenswrapper[4698]: I1014 09:59:15.015752 4698 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-ingress-operator"/"metrics-tls" Oct 14 09:59:15 crc kubenswrapper[4698]: I1014 09:59:15.015867 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Oct 14 09:59:15 crc kubenswrapper[4698]: I1014 09:59:15.015962 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Oct 14 09:59:15 crc kubenswrapper[4698]: I1014 09:59:15.015970 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Oct 14 09:59:15 crc kubenswrapper[4698]: I1014 09:59:15.016107 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Oct 14 09:59:15 crc kubenswrapper[4698]: I1014 09:59:15.016157 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Oct 14 09:59:15 crc kubenswrapper[4698]: I1014 09:59:15.016605 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Oct 14 09:59:15 crc kubenswrapper[4698]: I1014 09:59:15.019855 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Oct 14 09:59:15 crc kubenswrapper[4698]: I1014 09:59:15.020114 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Oct 14 09:59:15 crc kubenswrapper[4698]: I1014 09:59:15.020340 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Oct 14 09:59:15 crc kubenswrapper[4698]: I1014 09:59:15.020520 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Oct 14 09:59:15 crc kubenswrapper[4698]: 
I1014 09:59:15.020788 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Oct 14 09:59:15 crc kubenswrapper[4698]: I1014 09:59:15.021037 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Oct 14 09:59:15 crc kubenswrapper[4698]: I1014 09:59:15.021073 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Oct 14 09:59:15 crc kubenswrapper[4698]: I1014 09:59:15.021240 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Oct 14 09:59:15 crc kubenswrapper[4698]: I1014 09:59:15.021328 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Oct 14 09:59:15 crc kubenswrapper[4698]: I1014 09:59:15.021866 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Oct 14 09:59:15 crc kubenswrapper[4698]: I1014 09:59:15.023227 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Oct 14 09:59:15 crc kubenswrapper[4698]: I1014 09:59:15.023381 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Oct 14 09:59:15 crc kubenswrapper[4698]: I1014 09:59:15.023735 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Oct 14 09:59:15 crc kubenswrapper[4698]: I1014 09:59:15.023991 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Oct 14 09:59:15 crc kubenswrapper[4698]: I1014 09:59:15.024168 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-xnfjx" Oct 14 09:59:15 crc kubenswrapper[4698]: I1014 09:59:15.025282 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Oct 14 09:59:15 crc kubenswrapper[4698]: I1014 09:59:15.026418 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Oct 14 09:59:15 crc kubenswrapper[4698]: I1014 09:59:15.026754 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Oct 14 09:59:15 crc kubenswrapper[4698]: I1014 09:59:15.029191 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Oct 14 09:59:15 crc kubenswrapper[4698]: I1014 09:59:15.029572 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Oct 14 09:59:15 crc kubenswrapper[4698]: I1014 09:59:15.030828 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Oct 14 09:59:15 crc kubenswrapper[4698]: I1014 09:59:15.032527 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Oct 14 09:59:15 crc kubenswrapper[4698]: I1014 09:59:15.034166 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Oct 14 09:59:15 crc kubenswrapper[4698]: I1014 09:59:15.036489 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Oct 14 09:59:15 crc kubenswrapper[4698]: I1014 09:59:15.041825 4698 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-controller-manager"/"openshift-global-ca" Oct 14 09:59:15 crc kubenswrapper[4698]: I1014 09:59:15.058507 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Oct 14 09:59:15 crc kubenswrapper[4698]: I1014 09:59:15.094302 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-hspfz" Oct 14 09:59:15 crc kubenswrapper[4698]: I1014 09:59:15.094344 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-vdj26"] Oct 14 09:59:15 crc kubenswrapper[4698]: I1014 09:59:15.094935 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-7p6xz"] Oct 14 09:59:15 crc kubenswrapper[4698]: I1014 09:59:15.095187 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-sgf88"] Oct 14 09:59:15 crc kubenswrapper[4698]: I1014 09:59:15.095578 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-5k2p9"] Oct 14 09:59:15 crc kubenswrapper[4698]: I1014 09:59:15.095865 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-m9mft"] Oct 14 09:59:15 crc kubenswrapper[4698]: I1014 09:59:15.096114 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-ptt5h"] Oct 14 09:59:15 crc kubenswrapper[4698]: I1014 09:59:15.096326 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-7p6xz" Oct 14 09:59:15 crc kubenswrapper[4698]: I1014 09:59:15.096363 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-vqx66"] Oct 14 09:59:15 crc kubenswrapper[4698]: I1014 09:59:15.096603 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-mddmt"] Oct 14 09:59:15 crc kubenswrapper[4698]: I1014 09:59:15.096748 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-vdj26" Oct 14 09:59:15 crc kubenswrapper[4698]: I1014 09:59:15.097001 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-sgf88" Oct 14 09:59:15 crc kubenswrapper[4698]: I1014 09:59:15.097205 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-ptt5h" Oct 14 09:59:15 crc kubenswrapper[4698]: I1014 09:59:15.097236 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-5k2p9" Oct 14 09:59:15 crc kubenswrapper[4698]: I1014 09:59:15.097243 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-vqx66" Oct 14 09:59:15 crc kubenswrapper[4698]: I1014 09:59:15.097258 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-m9mft" Oct 14 09:59:15 crc kubenswrapper[4698]: I1014 09:59:15.098161 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-rtnz2"] Oct 14 09:59:15 crc kubenswrapper[4698]: I1014 09:59:15.098421 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-kftst"] Oct 14 09:59:15 crc kubenswrapper[4698]: I1014 09:59:15.098626 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-mddmt" Oct 14 09:59:15 crc kubenswrapper[4698]: I1014 09:59:15.098688 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-7954f5f757-7fm6f"] Oct 14 09:59:15 crc kubenswrapper[4698]: I1014 09:59:15.098748 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-rtnz2" Oct 14 09:59:15 crc kubenswrapper[4698]: I1014 09:59:15.098841 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-kftst" Oct 14 09:59:15 crc kubenswrapper[4698]: I1014 09:59:15.099326 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-c8pq6"] Oct 14 09:59:15 crc kubenswrapper[4698]: I1014 09:59:15.099386 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-7fm6f" Oct 14 09:59:15 crc kubenswrapper[4698]: I1014 09:59:15.099647 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-5444994796-qprfx"] Oct 14 09:59:15 crc kubenswrapper[4698]: I1014 09:59:15.099711 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-c8pq6" Oct 14 09:59:15 crc kubenswrapper[4698]: I1014 09:59:15.099979 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-qprfx" Oct 14 09:59:15 crc kubenswrapper[4698]: I1014 09:59:15.102611 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Oct 14 09:59:15 crc kubenswrapper[4698]: I1014 09:59:15.103244 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-8g7h6"] Oct 14 09:59:15 crc kubenswrapper[4698]: I1014 09:59:15.104412 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-8g7h6" Oct 14 09:59:15 crc kubenswrapper[4698]: I1014 09:59:15.104968 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-s65qn"] Oct 14 09:59:15 crc kubenswrapper[4698]: I1014 09:59:15.105528 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-s65qn" Oct 14 09:59:15 crc kubenswrapper[4698]: I1014 09:59:15.105548 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-h8xtm"] Oct 14 09:59:15 crc kubenswrapper[4698]: I1014 09:59:15.105752 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Oct 14 09:59:15 crc kubenswrapper[4698]: I1014 09:59:15.106112 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-h8xtm" Oct 14 09:59:15 crc kubenswrapper[4698]: I1014 09:59:15.110146 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-t5x7p"] Oct 14 09:59:15 crc kubenswrapper[4698]: I1014 09:59:15.110401 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Oct 14 09:59:15 crc kubenswrapper[4698]: I1014 09:59:15.112933 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Oct 14 09:59:15 crc kubenswrapper[4698]: I1014 09:59:15.113633 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29340585-4sc7c"] Oct 14 09:59:15 crc kubenswrapper[4698]: I1014 09:59:15.114042 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-t5x7p" Oct 14 09:59:15 crc kubenswrapper[4698]: I1014 09:59:15.114048 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-sjgjr"] Oct 14 09:59:15 crc kubenswrapper[4698]: I1014 09:59:15.114551 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Oct 14 09:59:15 crc kubenswrapper[4698]: I1014 09:59:15.114629 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-sjgjr" Oct 14 09:59:15 crc kubenswrapper[4698]: I1014 09:59:15.114662 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29340585-4sc7c" Oct 14 09:59:15 crc kubenswrapper[4698]: I1014 09:59:15.114796 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Oct 14 09:59:15 crc kubenswrapper[4698]: I1014 09:59:15.114982 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/10f74f9f-3784-4bb5-8fcf-acc6d625f363-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-p266j\" (UID: \"10f74f9f-3784-4bb5-8fcf-acc6d625f363\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-p266j" Oct 14 09:59:15 crc kubenswrapper[4698]: I1014 09:59:15.115056 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gzph5\" (UniqueName: \"kubernetes.io/projected/10f74f9f-3784-4bb5-8fcf-acc6d625f363-kube-api-access-gzph5\") pod \"cluster-samples-operator-665b6dd947-p266j\" (UID: \"10f74f9f-3784-4bb5-8fcf-acc6d625f363\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-p266j" Oct 14 09:59:15 crc kubenswrapper[4698]: I1014 09:59:15.117554 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-p266j"] Oct 14 09:59:15 crc kubenswrapper[4698]: I1014 09:59:15.117706 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-server-c9jqn"] Oct 14 09:59:15 crc kubenswrapper[4698]: I1014 09:59:15.117567 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Oct 14 09:59:15 crc kubenswrapper[4698]: I1014 09:59:15.124023 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Oct 14 09:59:15 crc 
kubenswrapper[4698]: I1014 09:59:15.124592 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/10f74f9f-3784-4bb5-8fcf-acc6d625f363-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-p266j\" (UID: \"10f74f9f-3784-4bb5-8fcf-acc6d625f363\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-p266j" Oct 14 09:59:15 crc kubenswrapper[4698]: I1014 09:59:15.127184 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Oct 14 09:59:15 crc kubenswrapper[4698]: I1014 09:59:15.127403 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-w8pmp"] Oct 14 09:59:15 crc kubenswrapper[4698]: I1014 09:59:15.128042 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-5x56z"] Oct 14 09:59:15 crc kubenswrapper[4698]: I1014 09:59:15.128128 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-w8pmp" Oct 14 09:59:15 crc kubenswrapper[4698]: I1014 09:59:15.128589 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-c9jqn" Oct 14 09:59:15 crc kubenswrapper[4698]: I1014 09:59:15.138295 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-58d6k"] Oct 14 09:59:15 crc kubenswrapper[4698]: I1014 09:59:15.139722 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-f47kf"] Oct 14 09:59:15 crc kubenswrapper[4698]: I1014 09:59:15.140545 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-wmxzf"] Oct 14 09:59:15 crc kubenswrapper[4698]: I1014 09:59:15.141818 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-7mmrz"] Oct 14 09:59:15 crc kubenswrapper[4698]: I1014 09:59:15.143428 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-vqx66"] Oct 14 09:59:15 crc kubenswrapper[4698]: I1014 09:59:15.151063 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-cf62t"] Oct 14 09:59:15 crc kubenswrapper[4698]: I1014 09:59:15.151125 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-ttftp"] Oct 14 09:59:15 crc kubenswrapper[4698]: I1014 09:59:15.156984 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Oct 14 09:59:15 crc kubenswrapper[4698]: I1014 09:59:15.158921 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-7p6xz"] Oct 14 09:59:15 crc kubenswrapper[4698]: I1014 09:59:15.166288 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-nthfk"] Oct 14 09:59:15 
crc kubenswrapper[4698]: I1014 09:59:15.166715 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Oct 14 09:59:15 crc kubenswrapper[4698]: I1014 09:59:15.167744 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-xnfjx"] Oct 14 09:59:15 crc kubenswrapper[4698]: I1014 09:59:15.169153 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-c6gg9"] Oct 14 09:59:15 crc kubenswrapper[4698]: I1014 09:59:15.170970 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-rtnz2"] Oct 14 09:59:15 crc kubenswrapper[4698]: I1014 09:59:15.172698 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-24kzx"] Oct 14 09:59:15 crc kubenswrapper[4698]: I1014 09:59:15.174486 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-kqg88"] Oct 14 09:59:15 crc kubenswrapper[4698]: I1014 09:59:15.175460 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-mlnpq"] Oct 14 09:59:15 crc kubenswrapper[4698]: I1014 09:59:15.179867 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-hshkc"] Oct 14 09:59:15 crc kubenswrapper[4698]: I1014 09:59:15.179955 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-lbk6k"] Oct 14 09:59:15 crc kubenswrapper[4698]: I1014 09:59:15.181950 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-svzm6"] Oct 14 09:59:15 crc kubenswrapper[4698]: 
I1014 09:59:15.183182 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-7fm6f"] Oct 14 09:59:15 crc kubenswrapper[4698]: I1014 09:59:15.184565 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-c8pq6"] Oct 14 09:59:15 crc kubenswrapper[4698]: I1014 09:59:15.185731 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-ptt5h"] Oct 14 09:59:15 crc kubenswrapper[4698]: I1014 09:59:15.187209 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Oct 14 09:59:15 crc kubenswrapper[4698]: I1014 09:59:15.187271 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-nt998"] Oct 14 09:59:15 crc kubenswrapper[4698]: I1014 09:59:15.188505 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-nt998" Oct 14 09:59:15 crc kubenswrapper[4698]: I1014 09:59:15.188942 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-vdj26"] Oct 14 09:59:15 crc kubenswrapper[4698]: I1014 09:59:15.190663 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-kvctw"] Oct 14 09:59:15 crc kubenswrapper[4698]: I1014 09:59:15.191914 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-m9mft"] Oct 14 09:59:15 crc kubenswrapper[4698]: I1014 09:59:15.192034 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-canary/ingress-canary-kvctw" Oct 14 09:59:15 crc kubenswrapper[4698]: I1014 09:59:15.193085 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-kftst"] Oct 14 09:59:15 crc kubenswrapper[4698]: I1014 09:59:15.194688 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-mddmt"] Oct 14 09:59:15 crc kubenswrapper[4698]: I1014 09:59:15.196089 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-sgf88"] Oct 14 09:59:15 crc kubenswrapper[4698]: I1014 09:59:15.197301 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-h8xtm"] Oct 14 09:59:15 crc kubenswrapper[4698]: I1014 09:59:15.198974 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-5k2p9"] Oct 14 09:59:15 crc kubenswrapper[4698]: I1014 09:59:15.200210 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-kvctw"] Oct 14 09:59:15 crc kubenswrapper[4698]: I1014 09:59:15.201378 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-sjgjr"] Oct 14 09:59:15 crc kubenswrapper[4698]: I1014 09:59:15.203063 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-t5x7p"] Oct 14 09:59:15 crc kubenswrapper[4698]: I1014 09:59:15.204614 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-nt998"] Oct 14 09:59:15 crc kubenswrapper[4698]: I1014 09:59:15.205888 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29340585-4sc7c"] Oct 14 09:59:15 crc kubenswrapper[4698]: I1014 09:59:15.207360 4698 
kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-s65qn"] Oct 14 09:59:15 crc kubenswrapper[4698]: I1014 09:59:15.207863 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Oct 14 09:59:15 crc kubenswrapper[4698]: I1014 09:59:15.208328 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-8g7h6"] Oct 14 09:59:15 crc kubenswrapper[4698]: I1014 09:59:15.209577 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-w8pmp"] Oct 14 09:59:15 crc kubenswrapper[4698]: I1014 09:59:15.227218 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Oct 14 09:59:15 crc kubenswrapper[4698]: I1014 09:59:15.266915 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Oct 14 09:59:15 crc kubenswrapper[4698]: I1014 09:59:15.286713 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Oct 14 09:59:15 crc kubenswrapper[4698]: I1014 09:59:15.307117 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Oct 14 09:59:15 crc kubenswrapper[4698]: I1014 09:59:15.327469 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Oct 14 09:59:15 crc kubenswrapper[4698]: I1014 09:59:15.348232 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Oct 14 09:59:15 crc kubenswrapper[4698]: I1014 
09:59:15.367586 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Oct 14 09:59:15 crc kubenswrapper[4698]: I1014 09:59:15.388430 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Oct 14 09:59:15 crc kubenswrapper[4698]: I1014 09:59:15.408193 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Oct 14 09:59:15 crc kubenswrapper[4698]: I1014 09:59:15.428365 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Oct 14 09:59:15 crc kubenswrapper[4698]: I1014 09:59:15.448492 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Oct 14 09:59:15 crc kubenswrapper[4698]: I1014 09:59:15.468421 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Oct 14 09:59:15 crc kubenswrapper[4698]: I1014 09:59:15.488067 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Oct 14 09:59:15 crc kubenswrapper[4698]: I1014 09:59:15.508303 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Oct 14 09:59:15 crc kubenswrapper[4698]: I1014 09:59:15.528468 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Oct 14 09:59:15 crc kubenswrapper[4698]: I1014 09:59:15.549396 4698 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Oct 14 09:59:15 crc kubenswrapper[4698]: I1014 09:59:15.569456 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Oct 14 09:59:15 crc kubenswrapper[4698]: I1014 09:59:15.588019 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Oct 14 09:59:15 crc kubenswrapper[4698]: I1014 09:59:15.608039 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Oct 14 09:59:15 crc kubenswrapper[4698]: I1014 09:59:15.628021 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Oct 14 09:59:15 crc kubenswrapper[4698]: I1014 09:59:15.648655 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Oct 14 09:59:15 crc kubenswrapper[4698]: I1014 09:59:15.667376 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Oct 14 09:59:15 crc kubenswrapper[4698]: I1014 09:59:15.688125 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Oct 14 09:59:15 crc kubenswrapper[4698]: I1014 09:59:15.709210 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Oct 14 09:59:15 crc kubenswrapper[4698]: I1014 09:59:15.728608 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Oct 14 09:59:15 crc kubenswrapper[4698]: I1014 09:59:15.747932 4698 reflector.go:368] Caches populated for 
*v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Oct 14 09:59:15 crc kubenswrapper[4698]: I1014 09:59:15.768962 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Oct 14 09:59:15 crc kubenswrapper[4698]: I1014 09:59:15.788920 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Oct 14 09:59:15 crc kubenswrapper[4698]: I1014 09:59:15.807300 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Oct 14 09:59:15 crc kubenswrapper[4698]: I1014 09:59:15.828558 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Oct 14 09:59:15 crc kubenswrapper[4698]: I1014 09:59:15.848361 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Oct 14 09:59:15 crc kubenswrapper[4698]: I1014 09:59:15.868975 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Oct 14 09:59:15 crc kubenswrapper[4698]: I1014 09:59:15.888369 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Oct 14 09:59:15 crc kubenswrapper[4698]: I1014 09:59:15.908219 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Oct 14 09:59:15 crc kubenswrapper[4698]: I1014 09:59:15.928192 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Oct 14 09:59:15 crc kubenswrapper[4698]: I1014 09:59:15.947269 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Oct 14 09:59:15 crc kubenswrapper[4698]: 
I1014 09:59:15.978576 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Oct 14 09:59:15 crc kubenswrapper[4698]: I1014 09:59:15.988421 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Oct 14 09:59:16 crc kubenswrapper[4698]: I1014 09:59:16.007920 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Oct 14 09:59:16 crc kubenswrapper[4698]: I1014 09:59:16.027741 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Oct 14 09:59:16 crc kubenswrapper[4698]: I1014 09:59:16.048049 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Oct 14 09:59:16 crc kubenswrapper[4698]: I1014 09:59:16.068108 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Oct 14 09:59:16 crc kubenswrapper[4698]: I1014 09:59:16.088236 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Oct 14 09:59:16 crc kubenswrapper[4698]: I1014 09:59:16.106458 4698 request.go:700] Waited for 1.006133122s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ingress/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&limit=500&resourceVersion=0 Oct 14 09:59:16 crc kubenswrapper[4698]: I1014 09:59:16.108665 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Oct 14 09:59:16 crc kubenswrapper[4698]: I1014 09:59:16.127857 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Oct 14 09:59:16 crc kubenswrapper[4698]: I1014 09:59:16.148706 4698 reflector.go:368] Caches 
populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Oct 14 09:59:16 crc kubenswrapper[4698]: I1014 09:59:16.168725 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Oct 14 09:59:16 crc kubenswrapper[4698]: I1014 09:59:16.207551 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Oct 14 09:59:16 crc kubenswrapper[4698]: I1014 09:59:16.226702 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/5eed092c-2837-48df-8eb4-8759235349b6-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-kqg88\" (UID: \"5eed092c-2837-48df-8eb4-8759235349b6\") " pod="openshift-authentication/oauth-openshift-558db77b4-kqg88" Oct 14 09:59:16 crc kubenswrapper[4698]: I1014 09:59:16.226866 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/5eed092c-2837-48df-8eb4-8759235349b6-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-kqg88\" (UID: \"5eed092c-2837-48df-8eb4-8759235349b6\") " pod="openshift-authentication/oauth-openshift-558db77b4-kqg88" Oct 14 09:59:16 crc kubenswrapper[4698]: I1014 09:59:16.226961 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/c6555f7d-6e37-41d2-8f98-b02aba5270ab-registry-certificates\") pod \"image-registry-697d97f7c8-nthfk\" (UID: \"c6555f7d-6e37-41d2-8f98-b02aba5270ab\") " pod="openshift-image-registry/image-registry-697d97f7c8-nthfk" Oct 14 09:59:16 crc kubenswrapper[4698]: I1014 09:59:16.227053 4698 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/67e52335-6348-488a-a36a-8971b953737b-config\") pod \"controller-manager-879f6c89f-c6gg9\" (UID: \"67e52335-6348-488a-a36a-8971b953737b\") " pod="openshift-controller-manager/controller-manager-879f6c89f-c6gg9" Oct 14 09:59:16 crc kubenswrapper[4698]: I1014 09:59:16.227096 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/2bc82783-cf82-45df-94d5-60de2f1a0bdf-encryption-config\") pod \"apiserver-76f77b778f-5x56z\" (UID: \"2bc82783-cf82-45df-94d5-60de2f1a0bdf\") " pod="openshift-apiserver/apiserver-76f77b778f-5x56z" Oct 14 09:59:16 crc kubenswrapper[4698]: I1014 09:59:16.227149 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/67e52335-6348-488a-a36a-8971b953737b-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-c6gg9\" (UID: \"67e52335-6348-488a-a36a-8971b953737b\") " pod="openshift-controller-manager/controller-manager-879f6c89f-c6gg9" Oct 14 09:59:16 crc kubenswrapper[4698]: I1014 09:59:16.227178 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/33a325fe-116c-49f3-bbfc-0ea7c688e3df-bound-sa-token\") pod \"ingress-operator-5b745b69d9-svzm6\" (UID: \"33a325fe-116c-49f3-bbfc-0ea7c688e3df\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-svzm6" Oct 14 09:59:16 crc kubenswrapper[4698]: I1014 09:59:16.227207 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t266x\" (UniqueName: \"kubernetes.io/projected/0704231d-de7e-4317-80bd-9edbb5a0de5f-kube-api-access-t266x\") pod \"console-operator-58897d9998-58d6k\" (UID: 
\"0704231d-de7e-4317-80bd-9edbb5a0de5f\") " pod="openshift-console-operator/console-operator-58897d9998-58d6k" Oct 14 09:59:16 crc kubenswrapper[4698]: I1014 09:59:16.227250 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/69fe44d1-6015-4929-b197-0ea5f0167131-machine-approver-tls\") pod \"machine-approver-56656f9798-79s46\" (UID: \"69fe44d1-6015-4929-b197-0ea5f0167131\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-79s46" Oct 14 09:59:16 crc kubenswrapper[4698]: I1014 09:59:16.227280 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d746febc-7247-498c-86b1-8cb4640cbccc-serving-cert\") pod \"route-controller-manager-6576b87f9c-lbk6k\" (UID: \"d746febc-7247-498c-86b1-8cb4640cbccc\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-lbk6k" Oct 14 09:59:16 crc kubenswrapper[4698]: I1014 09:59:16.227314 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qrk26\" (UniqueName: \"kubernetes.io/projected/33a325fe-116c-49f3-bbfc-0ea7c688e3df-kube-api-access-qrk26\") pod \"ingress-operator-5b745b69d9-svzm6\" (UID: \"33a325fe-116c-49f3-bbfc-0ea7c688e3df\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-svzm6" Oct 14 09:59:16 crc kubenswrapper[4698]: I1014 09:59:16.227375 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/14729fe2-c0c5-49b8-9766-b35a97d66e8d-metrics-tls\") pod \"dns-operator-744455d44c-mlnpq\" (UID: \"14729fe2-c0c5-49b8-9766-b35a97d66e8d\") " pod="openshift-dns-operator/dns-operator-744455d44c-mlnpq" Oct 14 09:59:16 crc kubenswrapper[4698]: I1014 09:59:16.227434 4698 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/67e52335-6348-488a-a36a-8971b953737b-client-ca\") pod \"controller-manager-879f6c89f-c6gg9\" (UID: \"67e52335-6348-488a-a36a-8971b953737b\") " pod="openshift-controller-manager/controller-manager-879f6c89f-c6gg9" Oct 14 09:59:16 crc kubenswrapper[4698]: I1014 09:59:16.227464 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/33a325fe-116c-49f3-bbfc-0ea7c688e3df-trusted-ca\") pod \"ingress-operator-5b745b69d9-svzm6\" (UID: \"33a325fe-116c-49f3-bbfc-0ea7c688e3df\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-svzm6" Oct 14 09:59:16 crc kubenswrapper[4698]: I1014 09:59:16.227493 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/5eed092c-2837-48df-8eb4-8759235349b6-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-kqg88\" (UID: \"5eed092c-2837-48df-8eb4-8759235349b6\") " pod="openshift-authentication/oauth-openshift-558db77b4-kqg88" Oct 14 09:59:16 crc kubenswrapper[4698]: I1014 09:59:16.227524 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/c6555f7d-6e37-41d2-8f98-b02aba5270ab-registry-tls\") pod \"image-registry-697d97f7c8-nthfk\" (UID: \"c6555f7d-6e37-41d2-8f98-b02aba5270ab\") " pod="openshift-image-registry/image-registry-697d97f7c8-nthfk" Oct 14 09:59:16 crc kubenswrapper[4698]: I1014 09:59:16.227554 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/1d5afbbe-a0bd-492f-8b7d-691208ef27db-available-featuregates\") pod 
\"openshift-config-operator-7777fb866f-jg26j\" (UID: \"1d5afbbe-a0bd-492f-8b7d-691208ef27db\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-jg26j" Oct 14 09:59:16 crc kubenswrapper[4698]: I1014 09:59:16.227583 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7r7rb\" (UniqueName: \"kubernetes.io/projected/abe6a35d-8cd2-4749-b9cf-8d11f6169470-kube-api-access-7r7rb\") pod \"console-f9d7485db-f47kf\" (UID: \"abe6a35d-8cd2-4749-b9cf-8d11f6169470\") " pod="openshift-console/console-f9d7485db-f47kf" Oct 14 09:59:16 crc kubenswrapper[4698]: I1014 09:59:16.227610 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/5eed092c-2837-48df-8eb4-8759235349b6-audit-dir\") pod \"oauth-openshift-558db77b4-kqg88\" (UID: \"5eed092c-2837-48df-8eb4-8759235349b6\") " pod="openshift-authentication/oauth-openshift-558db77b4-kqg88" Oct 14 09:59:16 crc kubenswrapper[4698]: I1014 09:59:16.227644 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/5eed092c-2837-48df-8eb4-8759235349b6-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-kqg88\" (UID: \"5eed092c-2837-48df-8eb4-8759235349b6\") " pod="openshift-authentication/oauth-openshift-558db77b4-kqg88" Oct 14 09:59:16 crc kubenswrapper[4698]: I1014 09:59:16.227715 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/2bc82783-cf82-45df-94d5-60de2f1a0bdf-image-import-ca\") pod \"apiserver-76f77b778f-5x56z\" (UID: \"2bc82783-cf82-45df-94d5-60de2f1a0bdf\") " pod="openshift-apiserver/apiserver-76f77b778f-5x56z" Oct 14 09:59:16 crc kubenswrapper[4698]: I1014 09:59:16.227755 4698 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jw22h\" (UniqueName: \"kubernetes.io/projected/c6555f7d-6e37-41d2-8f98-b02aba5270ab-kube-api-access-jw22h\") pod \"image-registry-697d97f7c8-nthfk\" (UID: \"c6555f7d-6e37-41d2-8f98-b02aba5270ab\") " pod="openshift-image-registry/image-registry-697d97f7c8-nthfk" Oct 14 09:59:16 crc kubenswrapper[4698]: I1014 09:59:16.227821 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/68bf4f74-6117-4975-8c5f-b5b35b97c787-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-ttftp\" (UID: \"68bf4f74-6117-4975-8c5f-b5b35b97c787\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-ttftp" Oct 14 09:59:16 crc kubenswrapper[4698]: I1014 09:59:16.227835 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Oct 14 09:59:16 crc kubenswrapper[4698]: I1014 09:59:16.227853 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/5eed092c-2837-48df-8eb4-8759235349b6-audit-policies\") pod \"oauth-openshift-558db77b4-kqg88\" (UID: \"5eed092c-2837-48df-8eb4-8759235349b6\") " pod="openshift-authentication/oauth-openshift-558db77b4-kqg88" Oct 14 09:59:16 crc kubenswrapper[4698]: I1014 09:59:16.227882 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/2bc82783-cf82-45df-94d5-60de2f1a0bdf-audit-dir\") pod \"apiserver-76f77b778f-5x56z\" (UID: \"2bc82783-cf82-45df-94d5-60de2f1a0bdf\") " pod="openshift-apiserver/apiserver-76f77b778f-5x56z" Oct 14 09:59:16 crc kubenswrapper[4698]: I1014 09:59:16.227922 4698 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g55l4\" (UniqueName: \"kubernetes.io/projected/b23ade35-ff68-4366-8b6a-9e24fcd4e0eb-kube-api-access-g55l4\") pod \"multus-admission-controller-857f4d67dd-7mmrz\" (UID: \"b23ade35-ff68-4366-8b6a-9e24fcd4e0eb\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-7mmrz" Oct 14 09:59:16 crc kubenswrapper[4698]: I1014 09:59:16.227968 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/68bf4f74-6117-4975-8c5f-b5b35b97c787-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-ttftp\" (UID: \"68bf4f74-6117-4975-8c5f-b5b35b97c787\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-ttftp" Oct 14 09:59:16 crc kubenswrapper[4698]: I1014 09:59:16.228055 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1b225a95-8db7-45c2-ad7c-b4bc80b6e875-service-ca-bundle\") pod \"authentication-operator-69f744f599-hshkc\" (UID: \"1b225a95-8db7-45c2-ad7c-b4bc80b6e875\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-hshkc" Oct 14 09:59:16 crc kubenswrapper[4698]: I1014 09:59:16.228101 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/33a325fe-116c-49f3-bbfc-0ea7c688e3df-metrics-tls\") pod \"ingress-operator-5b745b69d9-svzm6\" (UID: \"33a325fe-116c-49f3-bbfc-0ea7c688e3df\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-svzm6" Oct 14 09:59:16 crc kubenswrapper[4698]: I1014 09:59:16.228245 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fzdgf\" (UniqueName: 
\"kubernetes.io/projected/69fe44d1-6015-4929-b197-0ea5f0167131-kube-api-access-fzdgf\") pod \"machine-approver-56656f9798-79s46\" (UID: \"69fe44d1-6015-4929-b197-0ea5f0167131\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-79s46" Oct 14 09:59:16 crc kubenswrapper[4698]: I1014 09:59:16.228429 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d746febc-7247-498c-86b1-8cb4640cbccc-client-ca\") pod \"route-controller-manager-6576b87f9c-lbk6k\" (UID: \"d746febc-7247-498c-86b1-8cb4640cbccc\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-lbk6k" Oct 14 09:59:16 crc kubenswrapper[4698]: I1014 09:59:16.228590 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/abe6a35d-8cd2-4749-b9cf-8d11f6169470-console-config\") pod \"console-f9d7485db-f47kf\" (UID: \"abe6a35d-8cd2-4749-b9cf-8d11f6169470\") " pod="openshift-console/console-f9d7485db-f47kf" Oct 14 09:59:16 crc kubenswrapper[4698]: I1014 09:59:16.228681 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rrmtx\" (UniqueName: \"kubernetes.io/projected/d746febc-7247-498c-86b1-8cb4640cbccc-kube-api-access-rrmtx\") pod \"route-controller-manager-6576b87f9c-lbk6k\" (UID: \"d746febc-7247-498c-86b1-8cb4640cbccc\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-lbk6k" Oct 14 09:59:16 crc kubenswrapper[4698]: I1014 09:59:16.228735 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1b225a95-8db7-45c2-ad7c-b4bc80b6e875-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-hshkc\" (UID: \"1b225a95-8db7-45c2-ad7c-b4bc80b6e875\") " 
pod="openshift-authentication-operator/authentication-operator-69f744f599-hshkc" Oct 14 09:59:16 crc kubenswrapper[4698]: I1014 09:59:16.228814 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d746febc-7247-498c-86b1-8cb4640cbccc-config\") pod \"route-controller-manager-6576b87f9c-lbk6k\" (UID: \"d746febc-7247-498c-86b1-8cb4640cbccc\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-lbk6k" Oct 14 09:59:16 crc kubenswrapper[4698]: I1014 09:59:16.228891 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/5eed092c-2837-48df-8eb4-8759235349b6-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-kqg88\" (UID: \"5eed092c-2837-48df-8eb4-8759235349b6\") " pod="openshift-authentication/oauth-openshift-558db77b4-kqg88" Oct 14 09:59:16 crc kubenswrapper[4698]: I1014 09:59:16.228959 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/abe6a35d-8cd2-4749-b9cf-8d11f6169470-console-oauth-config\") pod \"console-f9d7485db-f47kf\" (UID: \"abe6a35d-8cd2-4749-b9cf-8d11f6169470\") " pod="openshift-console/console-f9d7485db-f47kf" Oct 14 09:59:16 crc kubenswrapper[4698]: I1014 09:59:16.229054 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1b225a95-8db7-45c2-ad7c-b4bc80b6e875-serving-cert\") pod \"authentication-operator-69f744f599-hshkc\" (UID: \"1b225a95-8db7-45c2-ad7c-b4bc80b6e875\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-hshkc" Oct 14 09:59:16 crc kubenswrapper[4698]: I1014 09:59:16.229132 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/5eed092c-2837-48df-8eb4-8759235349b6-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-kqg88\" (UID: \"5eed092c-2837-48df-8eb4-8759235349b6\") " pod="openshift-authentication/oauth-openshift-558db77b4-kqg88" Oct 14 09:59:16 crc kubenswrapper[4698]: I1014 09:59:16.229178 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1b225a95-8db7-45c2-ad7c-b4bc80b6e875-config\") pod \"authentication-operator-69f744f599-hshkc\" (UID: \"1b225a95-8db7-45c2-ad7c-b4bc80b6e875\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-hshkc" Oct 14 09:59:16 crc kubenswrapper[4698]: I1014 09:59:16.229223 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/2bc82783-cf82-45df-94d5-60de2f1a0bdf-node-pullsecrets\") pod \"apiserver-76f77b778f-5x56z\" (UID: \"2bc82783-cf82-45df-94d5-60de2f1a0bdf\") " pod="openshift-apiserver/apiserver-76f77b778f-5x56z" Oct 14 09:59:16 crc kubenswrapper[4698]: I1014 09:59:16.229288 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nthfk\" (UID: \"c6555f7d-6e37-41d2-8f98-b02aba5270ab\") " pod="openshift-image-registry/image-registry-697d97f7c8-nthfk" Oct 14 09:59:16 crc kubenswrapper[4698]: I1014 09:59:16.229338 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/77041b5d-f53d-425c-b824-a61833af677c-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-wmxzf\" (UID: 
\"77041b5d-f53d-425c-b824-a61833af677c\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-wmxzf" Oct 14 09:59:16 crc kubenswrapper[4698]: I1014 09:59:16.229411 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/2bc82783-cf82-45df-94d5-60de2f1a0bdf-etcd-serving-ca\") pod \"apiserver-76f77b778f-5x56z\" (UID: \"2bc82783-cf82-45df-94d5-60de2f1a0bdf\") " pod="openshift-apiserver/apiserver-76f77b778f-5x56z" Oct 14 09:59:16 crc kubenswrapper[4698]: I1014 09:59:16.229457 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/5eed092c-2837-48df-8eb4-8759235349b6-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-kqg88\" (UID: \"5eed092c-2837-48df-8eb4-8759235349b6\") " pod="openshift-authentication/oauth-openshift-558db77b4-kqg88" Oct 14 09:59:16 crc kubenswrapper[4698]: I1014 09:59:16.229502 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tthwp\" (UniqueName: \"kubernetes.io/projected/5eed092c-2837-48df-8eb4-8759235349b6-kube-api-access-tthwp\") pod \"oauth-openshift-558db77b4-kqg88\" (UID: \"5eed092c-2837-48df-8eb4-8759235349b6\") " pod="openshift-authentication/oauth-openshift-558db77b4-kqg88" Oct 14 09:59:16 crc kubenswrapper[4698]: I1014 09:59:16.229549 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/b23ade35-ff68-4366-8b6a-9e24fcd4e0eb-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-7mmrz\" (UID: \"b23ade35-ff68-4366-8b6a-9e24fcd4e0eb\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-7mmrz" Oct 14 09:59:16 crc kubenswrapper[4698]: I1014 09:59:16.229592 4698 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4tvpt\" (UniqueName: \"kubernetes.io/projected/1d5afbbe-a0bd-492f-8b7d-691208ef27db-kube-api-access-4tvpt\") pod \"openshift-config-operator-7777fb866f-jg26j\" (UID: \"1d5afbbe-a0bd-492f-8b7d-691208ef27db\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-jg26j" Oct 14 09:59:16 crc kubenswrapper[4698]: I1014 09:59:16.229636 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/69fe44d1-6015-4929-b197-0ea5f0167131-config\") pod \"machine-approver-56656f9798-79s46\" (UID: \"69fe44d1-6015-4929-b197-0ea5f0167131\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-79s46" Oct 14 09:59:16 crc kubenswrapper[4698]: I1014 09:59:16.229871 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/5eed092c-2837-48df-8eb4-8759235349b6-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-kqg88\" (UID: \"5eed092c-2837-48df-8eb4-8759235349b6\") " pod="openshift-authentication/oauth-openshift-558db77b4-kqg88" Oct 14 09:59:16 crc kubenswrapper[4698]: E1014 09:59:16.229967 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-14 09:59:16.729938478 +0000 UTC m=+138.427237944 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nthfk" (UID: "c6555f7d-6e37-41d2-8f98-b02aba5270ab") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 14 09:59:16 crc kubenswrapper[4698]: I1014 09:59:16.230023 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2jnv2\" (UniqueName: \"kubernetes.io/projected/14729fe2-c0c5-49b8-9766-b35a97d66e8d-kube-api-access-2jnv2\") pod \"dns-operator-744455d44c-mlnpq\" (UID: \"14729fe2-c0c5-49b8-9766-b35a97d66e8d\") " pod="openshift-dns-operator/dns-operator-744455d44c-mlnpq" Oct 14 09:59:16 crc kubenswrapper[4698]: I1014 09:59:16.230128 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zhzrh\" (UniqueName: \"kubernetes.io/projected/2bc82783-cf82-45df-94d5-60de2f1a0bdf-kube-api-access-zhzrh\") pod \"apiserver-76f77b778f-5x56z\" (UID: \"2bc82783-cf82-45df-94d5-60de2f1a0bdf\") " pod="openshift-apiserver/apiserver-76f77b778f-5x56z" Oct 14 09:59:16 crc kubenswrapper[4698]: I1014 09:59:16.230247 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c6555f7d-6e37-41d2-8f98-b02aba5270ab-trusted-ca\") pod \"image-registry-697d97f7c8-nthfk\" (UID: \"c6555f7d-6e37-41d2-8f98-b02aba5270ab\") " pod="openshift-image-registry/image-registry-697d97f7c8-nthfk" Oct 14 09:59:16 crc kubenswrapper[4698]: I1014 09:59:16.230292 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-btfk4\" (UniqueName: 
\"kubernetes.io/projected/1b225a95-8db7-45c2-ad7c-b4bc80b6e875-kube-api-access-btfk4\") pod \"authentication-operator-69f744f599-hshkc\" (UID: \"1b225a95-8db7-45c2-ad7c-b4bc80b6e875\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-hshkc" Oct 14 09:59:16 crc kubenswrapper[4698]: I1014 09:59:16.230434 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/5eed092c-2837-48df-8eb4-8759235349b6-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-kqg88\" (UID: \"5eed092c-2837-48df-8eb4-8759235349b6\") " pod="openshift-authentication/oauth-openshift-558db77b4-kqg88" Oct 14 09:59:16 crc kubenswrapper[4698]: I1014 09:59:16.230621 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0704231d-de7e-4317-80bd-9edbb5a0de5f-config\") pod \"console-operator-58897d9998-58d6k\" (UID: \"0704231d-de7e-4317-80bd-9edbb5a0de5f\") " pod="openshift-console-operator/console-operator-58897d9998-58d6k" Oct 14 09:59:16 crc kubenswrapper[4698]: I1014 09:59:16.230749 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/abe6a35d-8cd2-4749-b9cf-8d11f6169470-console-serving-cert\") pod \"console-f9d7485db-f47kf\" (UID: \"abe6a35d-8cd2-4749-b9cf-8d11f6169470\") " pod="openshift-console/console-f9d7485db-f47kf" Oct 14 09:59:16 crc kubenswrapper[4698]: I1014 09:59:16.230939 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/abe6a35d-8cd2-4749-b9cf-8d11f6169470-trusted-ca-bundle\") pod \"console-f9d7485db-f47kf\" (UID: \"abe6a35d-8cd2-4749-b9cf-8d11f6169470\") " pod="openshift-console/console-f9d7485db-f47kf" Oct 14 09:59:16 crc 
kubenswrapper[4698]: I1014 09:59:16.231036 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/67e52335-6348-488a-a36a-8971b953737b-serving-cert\") pod \"controller-manager-879f6c89f-c6gg9\" (UID: \"67e52335-6348-488a-a36a-8971b953737b\") " pod="openshift-controller-manager/controller-manager-879f6c89f-c6gg9" Oct 14 09:59:16 crc kubenswrapper[4698]: I1014 09:59:16.231072 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lqxvq\" (UniqueName: \"kubernetes.io/projected/67e52335-6348-488a-a36a-8971b953737b-kube-api-access-lqxvq\") pod \"controller-manager-879f6c89f-c6gg9\" (UID: \"67e52335-6348-488a-a36a-8971b953737b\") " pod="openshift-controller-manager/controller-manager-879f6c89f-c6gg9" Oct 14 09:59:16 crc kubenswrapper[4698]: I1014 09:59:16.231157 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/5eed092c-2837-48df-8eb4-8759235349b6-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-kqg88\" (UID: \"5eed092c-2837-48df-8eb4-8759235349b6\") " pod="openshift-authentication/oauth-openshift-558db77b4-kqg88" Oct 14 09:59:16 crc kubenswrapper[4698]: I1014 09:59:16.231237 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/69fe44d1-6015-4929-b197-0ea5f0167131-auth-proxy-config\") pod \"machine-approver-56656f9798-79s46\" (UID: \"69fe44d1-6015-4929-b197-0ea5f0167131\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-79s46" Oct 14 09:59:16 crc kubenswrapper[4698]: I1014 09:59:16.231267 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: 
\"kubernetes.io/configmap/0704231d-de7e-4317-80bd-9edbb5a0de5f-trusted-ca\") pod \"console-operator-58897d9998-58d6k\" (UID: \"0704231d-de7e-4317-80bd-9edbb5a0de5f\") " pod="openshift-console-operator/console-operator-58897d9998-58d6k" Oct 14 09:59:16 crc kubenswrapper[4698]: I1014 09:59:16.231368 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d671e120-cf7e-4363-b023-dc46b51ea073-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-24kzx\" (UID: \"d671e120-cf7e-4363-b023-dc46b51ea073\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-24kzx" Oct 14 09:59:16 crc kubenswrapper[4698]: I1014 09:59:16.231437 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/2bc82783-cf82-45df-94d5-60de2f1a0bdf-etcd-client\") pod \"apiserver-76f77b778f-5x56z\" (UID: \"2bc82783-cf82-45df-94d5-60de2f1a0bdf\") " pod="openshift-apiserver/apiserver-76f77b778f-5x56z" Oct 14 09:59:16 crc kubenswrapper[4698]: I1014 09:59:16.231505 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-njkr7\" (UniqueName: \"kubernetes.io/projected/77041b5d-f53d-425c-b824-a61833af677c-kube-api-access-njkr7\") pod \"machine-api-operator-5694c8668f-wmxzf\" (UID: \"77041b5d-f53d-425c-b824-a61833af677c\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-wmxzf" Oct 14 09:59:16 crc kubenswrapper[4698]: I1014 09:59:16.231541 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/abe6a35d-8cd2-4749-b9cf-8d11f6169470-service-ca\") pod \"console-f9d7485db-f47kf\" (UID: \"abe6a35d-8cd2-4749-b9cf-8d11f6169470\") " pod="openshift-console/console-f9d7485db-f47kf" Oct 14 09:59:16 crc 
kubenswrapper[4698]: I1014 09:59:16.231608 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/abe6a35d-8cd2-4749-b9cf-8d11f6169470-oauth-serving-cert\") pod \"console-f9d7485db-f47kf\" (UID: \"abe6a35d-8cd2-4749-b9cf-8d11f6169470\") " pod="openshift-console/console-f9d7485db-f47kf" Oct 14 09:59:16 crc kubenswrapper[4698]: I1014 09:59:16.231640 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2bc82783-cf82-45df-94d5-60de2f1a0bdf-config\") pod \"apiserver-76f77b778f-5x56z\" (UID: \"2bc82783-cf82-45df-94d5-60de2f1a0bdf\") " pod="openshift-apiserver/apiserver-76f77b778f-5x56z" Oct 14 09:59:16 crc kubenswrapper[4698]: I1014 09:59:16.231711 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/2bc82783-cf82-45df-94d5-60de2f1a0bdf-audit\") pod \"apiserver-76f77b778f-5x56z\" (UID: \"2bc82783-cf82-45df-94d5-60de2f1a0bdf\") " pod="openshift-apiserver/apiserver-76f77b778f-5x56z" Oct 14 09:59:16 crc kubenswrapper[4698]: I1014 09:59:16.231756 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/77041b5d-f53d-425c-b824-a61833af677c-images\") pod \"machine-api-operator-5694c8668f-wmxzf\" (UID: \"77041b5d-f53d-425c-b824-a61833af677c\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-wmxzf" Oct 14 09:59:16 crc kubenswrapper[4698]: I1014 09:59:16.231837 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/c6555f7d-6e37-41d2-8f98-b02aba5270ab-bound-sa-token\") pod \"image-registry-697d97f7c8-nthfk\" (UID: \"c6555f7d-6e37-41d2-8f98-b02aba5270ab\") " 
pod="openshift-image-registry/image-registry-697d97f7c8-nthfk" Oct 14 09:59:16 crc kubenswrapper[4698]: I1014 09:59:16.231878 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/77041b5d-f53d-425c-b824-a61833af677c-config\") pod \"machine-api-operator-5694c8668f-wmxzf\" (UID: \"77041b5d-f53d-425c-b824-a61833af677c\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-wmxzf" Oct 14 09:59:16 crc kubenswrapper[4698]: I1014 09:59:16.231910 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/c6555f7d-6e37-41d2-8f98-b02aba5270ab-installation-pull-secrets\") pod \"image-registry-697d97f7c8-nthfk\" (UID: \"c6555f7d-6e37-41d2-8f98-b02aba5270ab\") " pod="openshift-image-registry/image-registry-697d97f7c8-nthfk" Oct 14 09:59:16 crc kubenswrapper[4698]: I1014 09:59:16.231991 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5eed092c-2837-48df-8eb4-8759235349b6-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-kqg88\" (UID: \"5eed092c-2837-48df-8eb4-8759235349b6\") " pod="openshift-authentication/oauth-openshift-558db77b4-kqg88" Oct 14 09:59:16 crc kubenswrapper[4698]: I1014 09:59:16.232131 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2bc82783-cf82-45df-94d5-60de2f1a0bdf-trusted-ca-bundle\") pod \"apiserver-76f77b778f-5x56z\" (UID: \"2bc82783-cf82-45df-94d5-60de2f1a0bdf\") " pod="openshift-apiserver/apiserver-76f77b778f-5x56z" Oct 14 09:59:16 crc kubenswrapper[4698]: I1014 09:59:16.232174 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/c6555f7d-6e37-41d2-8f98-b02aba5270ab-ca-trust-extracted\") pod \"image-registry-697d97f7c8-nthfk\" (UID: \"c6555f7d-6e37-41d2-8f98-b02aba5270ab\") " pod="openshift-image-registry/image-registry-697d97f7c8-nthfk" Oct 14 09:59:16 crc kubenswrapper[4698]: I1014 09:59:16.232221 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1d5afbbe-a0bd-492f-8b7d-691208ef27db-serving-cert\") pod \"openshift-config-operator-7777fb866f-jg26j\" (UID: \"1d5afbbe-a0bd-492f-8b7d-691208ef27db\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-jg26j" Oct 14 09:59:16 crc kubenswrapper[4698]: I1014 09:59:16.232255 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wdgxj\" (UniqueName: \"kubernetes.io/projected/68bf4f74-6117-4975-8c5f-b5b35b97c787-kube-api-access-wdgxj\") pod \"cluster-image-registry-operator-dc59b4c8b-ttftp\" (UID: \"68bf4f74-6117-4975-8c5f-b5b35b97c787\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-ttftp" Oct 14 09:59:16 crc kubenswrapper[4698]: I1014 09:59:16.232297 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2bc82783-cf82-45df-94d5-60de2f1a0bdf-serving-cert\") pod \"apiserver-76f77b778f-5x56z\" (UID: \"2bc82783-cf82-45df-94d5-60de2f1a0bdf\") " pod="openshift-apiserver/apiserver-76f77b778f-5x56z" Oct 14 09:59:16 crc kubenswrapper[4698]: I1014 09:59:16.232334 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d671e120-cf7e-4363-b023-dc46b51ea073-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-24kzx\" (UID: \"d671e120-cf7e-4363-b023-dc46b51ea073\") " 
pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-24kzx" Oct 14 09:59:16 crc kubenswrapper[4698]: I1014 09:59:16.232368 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d7wmh\" (UniqueName: \"kubernetes.io/projected/d671e120-cf7e-4363-b023-dc46b51ea073-kube-api-access-d7wmh\") pod \"kube-storage-version-migrator-operator-b67b599dd-24kzx\" (UID: \"d671e120-cf7e-4363-b023-dc46b51ea073\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-24kzx" Oct 14 09:59:16 crc kubenswrapper[4698]: I1014 09:59:16.232449 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/68bf4f74-6117-4975-8c5f-b5b35b97c787-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-ttftp\" (UID: \"68bf4f74-6117-4975-8c5f-b5b35b97c787\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-ttftp" Oct 14 09:59:16 crc kubenswrapper[4698]: I1014 09:59:16.232490 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0704231d-de7e-4317-80bd-9edbb5a0de5f-serving-cert\") pod \"console-operator-58897d9998-58d6k\" (UID: \"0704231d-de7e-4317-80bd-9edbb5a0de5f\") " pod="openshift-console-operator/console-operator-58897d9998-58d6k" Oct 14 09:59:16 crc kubenswrapper[4698]: I1014 09:59:16.247718 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Oct 14 09:59:16 crc kubenswrapper[4698]: I1014 09:59:16.268377 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Oct 14 09:59:16 crc kubenswrapper[4698]: I1014 09:59:16.288434 4698 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-service-ca"/"signing-key" Oct 14 09:59:16 crc kubenswrapper[4698]: I1014 09:59:16.307928 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Oct 14 09:59:16 crc kubenswrapper[4698]: I1014 09:59:16.327076 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Oct 14 09:59:16 crc kubenswrapper[4698]: I1014 09:59:16.333904 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 14 09:59:16 crc kubenswrapper[4698]: E1014 09:59:16.334057 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-14 09:59:16.834024757 +0000 UTC m=+138.531324203 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 14 09:59:16 crc kubenswrapper[4698]: I1014 09:59:16.334115 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/5eed092c-2837-48df-8eb4-8759235349b6-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-kqg88\" (UID: \"5eed092c-2837-48df-8eb4-8759235349b6\") " pod="openshift-authentication/oauth-openshift-558db77b4-kqg88" Oct 14 09:59:16 crc kubenswrapper[4698]: I1014 09:59:16.334167 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3f74db80-958b-4799-864f-792892d9903e-config\") pod \"kube-controller-manager-operator-78b949d7b-xnfjx\" (UID: \"3f74db80-958b-4799-864f-792892d9903e\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-xnfjx" Oct 14 09:59:16 crc kubenswrapper[4698]: I1014 09:59:16.334201 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1b225a95-8db7-45c2-ad7c-b4bc80b6e875-serving-cert\") pod \"authentication-operator-69f744f599-hshkc\" (UID: \"1b225a95-8db7-45c2-ad7c-b4bc80b6e875\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-hshkc" Oct 14 09:59:16 crc kubenswrapper[4698]: I1014 09:59:16.334248 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/9b71de96-0379-46a7-af50-9b831b50268b-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-ptt5h\" (UID: \"9b71de96-0379-46a7-af50-9b831b50268b\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-ptt5h" Oct 14 09:59:16 crc kubenswrapper[4698]: I1014 09:59:16.334307 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nthfk\" (UID: \"c6555f7d-6e37-41d2-8f98-b02aba5270ab\") " pod="openshift-image-registry/image-registry-697d97f7c8-nthfk" Oct 14 09:59:16 crc kubenswrapper[4698]: I1014 09:59:16.334360 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1b225a95-8db7-45c2-ad7c-b4bc80b6e875-config\") pod \"authentication-operator-69f744f599-hshkc\" (UID: \"1b225a95-8db7-45c2-ad7c-b4bc80b6e875\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-hshkc" Oct 14 09:59:16 crc kubenswrapper[4698]: I1014 09:59:16.334414 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/2bc82783-cf82-45df-94d5-60de2f1a0bdf-node-pullsecrets\") pod \"apiserver-76f77b778f-5x56z\" (UID: \"2bc82783-cf82-45df-94d5-60de2f1a0bdf\") " pod="openshift-apiserver/apiserver-76f77b778f-5x56z" Oct 14 09:59:16 crc kubenswrapper[4698]: I1014 09:59:16.334461 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/193ddda3-4410-402c-a198-33ff7ea3a740-service-ca-bundle\") pod \"router-default-5444994796-qprfx\" (UID: \"193ddda3-4410-402c-a198-33ff7ea3a740\") " pod="openshift-ingress/router-default-5444994796-qprfx" Oct 14 
09:59:16 crc kubenswrapper[4698]: I1014 09:59:16.334504 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/4886c701-aad2-4ae4-bb99-0221728df342-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-t5x7p\" (UID: \"4886c701-aad2-4ae4-bb99-0221728df342\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-t5x7p" Oct 14 09:59:16 crc kubenswrapper[4698]: I1014 09:59:16.334562 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/b23ade35-ff68-4366-8b6a-9e24fcd4e0eb-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-7mmrz\" (UID: \"b23ade35-ff68-4366-8b6a-9e24fcd4e0eb\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-7mmrz" Oct 14 09:59:16 crc kubenswrapper[4698]: I1014 09:59:16.334610 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/5eed092c-2837-48df-8eb4-8759235349b6-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-kqg88\" (UID: \"5eed092c-2837-48df-8eb4-8759235349b6\") " pod="openshift-authentication/oauth-openshift-558db77b4-kqg88" Oct 14 09:59:16 crc kubenswrapper[4698]: I1014 09:59:16.334619 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/2bc82783-cf82-45df-94d5-60de2f1a0bdf-node-pullsecrets\") pod \"apiserver-76f77b778f-5x56z\" (UID: \"2bc82783-cf82-45df-94d5-60de2f1a0bdf\") " pod="openshift-apiserver/apiserver-76f77b778f-5x56z" Oct 14 09:59:16 crc kubenswrapper[4698]: I1014 09:59:16.334660 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/9a5b4d2f-ca5c-4178-9e26-2cd2dccaa0b2-mcc-auth-proxy-config\") pod 
\"machine-config-controller-84d6567774-sgf88\" (UID: \"9a5b4d2f-ca5c-4178-9e26-2cd2dccaa0b2\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-sgf88" Oct 14 09:59:16 crc kubenswrapper[4698]: I1014 09:59:16.334744 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x9t95\" (UniqueName: \"kubernetes.io/projected/9a5b4d2f-ca5c-4178-9e26-2cd2dccaa0b2-kube-api-access-x9t95\") pod \"machine-config-controller-84d6567774-sgf88\" (UID: \"9a5b4d2f-ca5c-4178-9e26-2cd2dccaa0b2\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-sgf88" Oct 14 09:59:16 crc kubenswrapper[4698]: I1014 09:59:16.334829 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/4886c701-aad2-4ae4-bb99-0221728df342-etcd-client\") pod \"apiserver-7bbb656c7d-t5x7p\" (UID: \"4886c701-aad2-4ae4-bb99-0221728df342\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-t5x7p" Oct 14 09:59:16 crc kubenswrapper[4698]: E1014 09:59:16.335000 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-14 09:59:16.834832261 +0000 UTC m=+138.532131777 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nthfk" (UID: "c6555f7d-6e37-41d2-8f98-b02aba5270ab") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 14 09:59:16 crc kubenswrapper[4698]: I1014 09:59:16.335124 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2jnv2\" (UniqueName: \"kubernetes.io/projected/14729fe2-c0c5-49b8-9766-b35a97d66e8d-kube-api-access-2jnv2\") pod \"dns-operator-744455d44c-mlnpq\" (UID: \"14729fe2-c0c5-49b8-9766-b35a97d66e8d\") " pod="openshift-dns-operator/dns-operator-744455d44c-mlnpq" Oct 14 09:59:16 crc kubenswrapper[4698]: I1014 09:59:16.335437 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4886c701-aad2-4ae4-bb99-0221728df342-serving-cert\") pod \"apiserver-7bbb656c7d-t5x7p\" (UID: \"4886c701-aad2-4ae4-bb99-0221728df342\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-t5x7p" Oct 14 09:59:16 crc kubenswrapper[4698]: I1014 09:59:16.335568 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/41d42df0-f10f-4f75-8481-adb4d51c341f-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-7p6xz\" (UID: \"41d42df0-f10f-4f75-8481-adb4d51c341f\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-7p6xz" Oct 14 09:59:16 crc kubenswrapper[4698]: I1014 09:59:16.335667 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: 
\"kubernetes.io/secret/5eed092c-2837-48df-8eb4-8759235349b6-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-kqg88\" (UID: \"5eed092c-2837-48df-8eb4-8759235349b6\") " pod="openshift-authentication/oauth-openshift-558db77b4-kqg88" Oct 14 09:59:16 crc kubenswrapper[4698]: I1014 09:59:16.335712 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/818edbba-2627-4978-8a34-005689059b24-images\") pod \"machine-config-operator-74547568cd-vdj26\" (UID: \"818edbba-2627-4978-8a34-005689059b24\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-vdj26" Oct 14 09:59:16 crc kubenswrapper[4698]: I1014 09:59:16.335744 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d7fv2\" (UniqueName: \"kubernetes.io/projected/d11979c8-404a-4ab4-9d27-814013edd944-kube-api-access-d7fv2\") pod \"ingress-canary-kvctw\" (UID: \"d11979c8-404a-4ab4-9d27-814013edd944\") " pod="openshift-ingress-canary/ingress-canary-kvctw" Oct 14 09:59:16 crc kubenswrapper[4698]: I1014 09:59:16.335810 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/abe6a35d-8cd2-4749-b9cf-8d11f6169470-trusted-ca-bundle\") pod \"console-f9d7485db-f47kf\" (UID: \"abe6a35d-8cd2-4749-b9cf-8d11f6169470\") " pod="openshift-console/console-f9d7485db-f47kf" Oct 14 09:59:16 crc kubenswrapper[4698]: I1014 09:59:16.335851 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/67e52335-6348-488a-a36a-8971b953737b-serving-cert\") pod \"controller-manager-879f6c89f-c6gg9\" (UID: \"67e52335-6348-488a-a36a-8971b953737b\") " pod="openshift-controller-manager/controller-manager-879f6c89f-c6gg9" Oct 14 09:59:16 crc kubenswrapper[4698]: I1014 09:59:16.335884 4698 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lqxvq\" (UniqueName: \"kubernetes.io/projected/67e52335-6348-488a-a36a-8971b953737b-kube-api-access-lqxvq\") pod \"controller-manager-879f6c89f-c6gg9\" (UID: \"67e52335-6348-488a-a36a-8971b953737b\") " pod="openshift-controller-manager/controller-manager-879f6c89f-c6gg9" Oct 14 09:59:16 crc kubenswrapper[4698]: I1014 09:59:16.335917 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/5eed092c-2837-48df-8eb4-8759235349b6-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-kqg88\" (UID: \"5eed092c-2837-48df-8eb4-8759235349b6\") " pod="openshift-authentication/oauth-openshift-558db77b4-kqg88" Oct 14 09:59:16 crc kubenswrapper[4698]: I1014 09:59:16.335949 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2bc82783-cf82-45df-94d5-60de2f1a0bdf-config\") pod \"apiserver-76f77b778f-5x56z\" (UID: \"2bc82783-cf82-45df-94d5-60de2f1a0bdf\") " pod="openshift-apiserver/apiserver-76f77b778f-5x56z" Oct 14 09:59:16 crc kubenswrapper[4698]: I1014 09:59:16.335980 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/2bc82783-cf82-45df-94d5-60de2f1a0bdf-etcd-client\") pod \"apiserver-76f77b778f-5x56z\" (UID: \"2bc82783-cf82-45df-94d5-60de2f1a0bdf\") " pod="openshift-apiserver/apiserver-76f77b778f-5x56z" Oct 14 09:59:16 crc kubenswrapper[4698]: I1014 09:59:16.336012 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6tr29\" (UniqueName: \"kubernetes.io/projected/c5df8277-9e0d-4d53-ad20-20a07ceb9515-kube-api-access-6tr29\") pod \"olm-operator-6b444d44fb-vqx66\" (UID: \"c5df8277-9e0d-4d53-ad20-20a07ceb9515\") " 
pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-vqx66" Oct 14 09:59:16 crc kubenswrapper[4698]: I1014 09:59:16.336043 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/818edbba-2627-4978-8a34-005689059b24-auth-proxy-config\") pod \"machine-config-operator-74547568cd-vdj26\" (UID: \"818edbba-2627-4978-8a34-005689059b24\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-vdj26" Oct 14 09:59:16 crc kubenswrapper[4698]: I1014 09:59:16.336077 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-njkr7\" (UniqueName: \"kubernetes.io/projected/77041b5d-f53d-425c-b824-a61833af677c-kube-api-access-njkr7\") pod \"machine-api-operator-5694c8668f-wmxzf\" (UID: \"77041b5d-f53d-425c-b824-a61833af677c\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-wmxzf" Oct 14 09:59:16 crc kubenswrapper[4698]: I1014 09:59:16.336108 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/abe6a35d-8cd2-4749-b9cf-8d11f6169470-service-ca\") pod \"console-f9d7485db-f47kf\" (UID: \"abe6a35d-8cd2-4749-b9cf-8d11f6169470\") " pod="openshift-console/console-f9d7485db-f47kf" Oct 14 09:59:16 crc kubenswrapper[4698]: I1014 09:59:16.336137 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/abe6a35d-8cd2-4749-b9cf-8d11f6169470-oauth-serving-cert\") pod \"console-f9d7485db-f47kf\" (UID: \"abe6a35d-8cd2-4749-b9cf-8d11f6169470\") " pod="openshift-console/console-f9d7485db-f47kf" Oct 14 09:59:16 crc kubenswrapper[4698]: I1014 09:59:16.336168 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-service-ca\" (UniqueName: 
\"kubernetes.io/configmap/9103615a-2665-4033-8114-259b0e56879f-etcd-service-ca\") pod \"etcd-operator-b45778765-sjgjr\" (UID: \"9103615a-2665-4033-8114-259b0e56879f\") " pod="openshift-etcd-operator/etcd-operator-b45778765-sjgjr" Oct 14 09:59:16 crc kubenswrapper[4698]: I1014 09:59:16.336205 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/4886c701-aad2-4ae4-bb99-0221728df342-encryption-config\") pod \"apiserver-7bbb656c7d-t5x7p\" (UID: \"4886c701-aad2-4ae4-bb99-0221728df342\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-t5x7p" Oct 14 09:59:16 crc kubenswrapper[4698]: I1014 09:59:16.336248 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/87bb9fc3-f75a-48d6-a8e6-8f8d1aa7f084-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-m9mft\" (UID: \"87bb9fc3-f75a-48d6-a8e6-8f8d1aa7f084\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-m9mft" Oct 14 09:59:16 crc kubenswrapper[4698]: I1014 09:59:16.336326 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cpskz\" (UniqueName: \"kubernetes.io/projected/012a71ed-3195-49f8-bde1-f5455806e0f0-kube-api-access-cpskz\") pod \"service-ca-9c57cc56f-h8xtm\" (UID: \"012a71ed-3195-49f8-bde1-f5455806e0f0\") " pod="openshift-service-ca/service-ca-9c57cc56f-h8xtm" Oct 14 09:59:16 crc kubenswrapper[4698]: I1014 09:59:16.336361 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/c6555f7d-6e37-41d2-8f98-b02aba5270ab-installation-pull-secrets\") pod \"image-registry-697d97f7c8-nthfk\" (UID: \"c6555f7d-6e37-41d2-8f98-b02aba5270ab\") " pod="openshift-image-registry/image-registry-697d97f7c8-nthfk" Oct 14 09:59:16 crc 
kubenswrapper[4698]: I1014 09:59:16.336470 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5eed092c-2837-48df-8eb4-8759235349b6-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-kqg88\" (UID: \"5eed092c-2837-48df-8eb4-8759235349b6\") " pod="openshift-authentication/oauth-openshift-558db77b4-kqg88" Oct 14 09:59:16 crc kubenswrapper[4698]: I1014 09:59:16.336438 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1b225a95-8db7-45c2-ad7c-b4bc80b6e875-config\") pod \"authentication-operator-69f744f599-hshkc\" (UID: \"1b225a95-8db7-45c2-ad7c-b4bc80b6e875\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-hshkc" Oct 14 09:59:16 crc kubenswrapper[4698]: I1014 09:59:16.336559 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4h7jk\" (UniqueName: \"kubernetes.io/projected/818edbba-2627-4978-8a34-005689059b24-kube-api-access-4h7jk\") pod \"machine-config-operator-74547568cd-vdj26\" (UID: \"818edbba-2627-4978-8a34-005689059b24\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-vdj26" Oct 14 09:59:16 crc kubenswrapper[4698]: I1014 09:59:16.338912 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8g579\" (UniqueName: \"kubernetes.io/projected/41f1faca-9336-4fb7-85a7-14541f2cf578-kube-api-access-8g579\") pod \"package-server-manager-789f6589d5-8g7h6\" (UID: \"41f1faca-9336-4fb7-85a7-14541f2cf578\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-8g7h6" Oct 14 09:59:16 crc kubenswrapper[4698]: I1014 09:59:16.339000 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8tbpw\" (UniqueName: 
\"kubernetes.io/projected/9b71de96-0379-46a7-af50-9b831b50268b-kube-api-access-8tbpw\") pod \"openshift-controller-manager-operator-756b6f6bc6-ptt5h\" (UID: \"9b71de96-0379-46a7-af50-9b831b50268b\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-ptt5h" Oct 14 09:59:16 crc kubenswrapper[4698]: I1014 09:59:16.339105 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1d5afbbe-a0bd-492f-8b7d-691208ef27db-serving-cert\") pod \"openshift-config-operator-7777fb866f-jg26j\" (UID: \"1d5afbbe-a0bd-492f-8b7d-691208ef27db\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-jg26j" Oct 14 09:59:16 crc kubenswrapper[4698]: I1014 09:59:16.339171 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wdgxj\" (UniqueName: \"kubernetes.io/projected/68bf4f74-6117-4975-8c5f-b5b35b97c787-kube-api-access-wdgxj\") pod \"cluster-image-registry-operator-dc59b4c8b-ttftp\" (UID: \"68bf4f74-6117-4975-8c5f-b5b35b97c787\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-ttftp" Oct 14 09:59:16 crc kubenswrapper[4698]: I1014 09:59:16.339229 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/4886c701-aad2-4ae4-bb99-0221728df342-audit-dir\") pod \"apiserver-7bbb656c7d-t5x7p\" (UID: \"4886c701-aad2-4ae4-bb99-0221728df342\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-t5x7p" Oct 14 09:59:16 crc kubenswrapper[4698]: I1014 09:59:16.339497 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2bc82783-cf82-45df-94d5-60de2f1a0bdf-config\") pod \"apiserver-76f77b778f-5x56z\" (UID: \"2bc82783-cf82-45df-94d5-60de2f1a0bdf\") " pod="openshift-apiserver/apiserver-76f77b778f-5x56z" Oct 14 09:59:16 crc 
kubenswrapper[4698]: I1014 09:59:16.341001 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/012a71ed-3195-49f8-bde1-f5455806e0f0-signing-key\") pod \"service-ca-9c57cc56f-h8xtm\" (UID: \"012a71ed-3195-49f8-bde1-f5455806e0f0\") " pod="openshift-service-ca/service-ca-9c57cc56f-h8xtm" Oct 14 09:59:16 crc kubenswrapper[4698]: I1014 09:59:16.341045 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/abe6a35d-8cd2-4749-b9cf-8d11f6169470-service-ca\") pod \"console-f9d7485db-f47kf\" (UID: \"abe6a35d-8cd2-4749-b9cf-8d11f6169470\") " pod="openshift-console/console-f9d7485db-f47kf" Oct 14 09:59:16 crc kubenswrapper[4698]: I1014 09:59:16.341081 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d671e120-cf7e-4363-b023-dc46b51ea073-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-24kzx\" (UID: \"d671e120-cf7e-4363-b023-dc46b51ea073\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-24kzx" Oct 14 09:59:16 crc kubenswrapper[4698]: I1014 09:59:16.341139 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d7wmh\" (UniqueName: \"kubernetes.io/projected/d671e120-cf7e-4363-b023-dc46b51ea073-kube-api-access-d7wmh\") pod \"kube-storage-version-migrator-operator-b67b599dd-24kzx\" (UID: \"d671e120-cf7e-4363-b023-dc46b51ea073\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-24kzx" Oct 14 09:59:16 crc kubenswrapper[4698]: I1014 09:59:16.341191 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/9103615a-2665-4033-8114-259b0e56879f-etcd-client\") pod 
\"etcd-operator-b45778765-sjgjr\" (UID: \"9103615a-2665-4033-8114-259b0e56879f\") " pod="openshift-etcd-operator/etcd-operator-b45778765-sjgjr" Oct 14 09:59:16 crc kubenswrapper[4698]: I1014 09:59:16.341231 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc1091b3-dceb-44a6-95b0-8048efec8032-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-cf62t\" (UID: \"bc1091b3-dceb-44a6-95b0-8048efec8032\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-cf62t" Oct 14 09:59:16 crc kubenswrapper[4698]: I1014 09:59:16.341264 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/012a71ed-3195-49f8-bde1-f5455806e0f0-signing-cabundle\") pod \"service-ca-9c57cc56f-h8xtm\" (UID: \"012a71ed-3195-49f8-bde1-f5455806e0f0\") " pod="openshift-service-ca/service-ca-9c57cc56f-h8xtm" Oct 14 09:59:16 crc kubenswrapper[4698]: I1014 09:59:16.342621 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/5eed092c-2837-48df-8eb4-8759235349b6-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-kqg88\" (UID: \"5eed092c-2837-48df-8eb4-8759235349b6\") " pod="openshift-authentication/oauth-openshift-558db77b4-kqg88" Oct 14 09:59:16 crc kubenswrapper[4698]: I1014 09:59:16.343119 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/5eed092c-2837-48df-8eb4-8759235349b6-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-kqg88\" (UID: \"5eed092c-2837-48df-8eb4-8759235349b6\") " pod="openshift-authentication/oauth-openshift-558db77b4-kqg88" Oct 14 09:59:16 crc kubenswrapper[4698]: I1014 09:59:16.343359 4698 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/68bf4f74-6117-4975-8c5f-b5b35b97c787-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-ttftp\" (UID: \"68bf4f74-6117-4975-8c5f-b5b35b97c787\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-ttftp" Oct 14 09:59:16 crc kubenswrapper[4698]: I1014 09:59:16.343418 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/6b1bbc6d-62ec-47f7-b3d0-027b4dc2e31f-node-bootstrap-token\") pod \"machine-config-server-c9jqn\" (UID: \"6b1bbc6d-62ec-47f7-b3d0-027b4dc2e31f\") " pod="openshift-machine-config-operator/machine-config-server-c9jqn" Oct 14 09:59:16 crc kubenswrapper[4698]: I1014 09:59:16.343511 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/5eed092c-2837-48df-8eb4-8759235349b6-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-kqg88\" (UID: \"5eed092c-2837-48df-8eb4-8759235349b6\") " pod="openshift-authentication/oauth-openshift-558db77b4-kqg88" Oct 14 09:59:16 crc kubenswrapper[4698]: I1014 09:59:16.343567 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b8186130-0e09-455d-92c7-05e4c0af37de-srv-cert\") pod \"catalog-operator-68c6474976-kftst\" (UID: \"b8186130-0e09-455d-92c7-05e4c0af37de\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-kftst" Oct 14 09:59:16 crc kubenswrapper[4698]: I1014 09:59:16.343608 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-94g47\" (UniqueName: \"kubernetes.io/projected/aa6a1a6a-f7f3-402b-9568-89c9415eaaa4-kube-api-access-94g47\") pod 
\"collect-profiles-29340585-4sc7c\" (UID: \"aa6a1a6a-f7f3-402b-9568-89c9415eaaa4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29340585-4sc7c" Oct 14 09:59:16 crc kubenswrapper[4698]: I1014 09:59:16.343672 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/5eed092c-2837-48df-8eb4-8759235349b6-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-kqg88\" (UID: \"5eed092c-2837-48df-8eb4-8759235349b6\") " pod="openshift-authentication/oauth-openshift-558db77b4-kqg88" Oct 14 09:59:16 crc kubenswrapper[4698]: I1014 09:59:16.343934 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/b23ade35-ff68-4366-8b6a-9e24fcd4e0eb-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-7mmrz\" (UID: \"b23ade35-ff68-4366-8b6a-9e24fcd4e0eb\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-7mmrz" Oct 14 09:59:16 crc kubenswrapper[4698]: I1014 09:59:16.344172 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/5eed092c-2837-48df-8eb4-8759235349b6-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-kqg88\" (UID: \"5eed092c-2837-48df-8eb4-8759235349b6\") " pod="openshift-authentication/oauth-openshift-558db77b4-kqg88" Oct 14 09:59:16 crc kubenswrapper[4698]: I1014 09:59:16.344198 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/abe6a35d-8cd2-4749-b9cf-8d11f6169470-oauth-serving-cert\") pod \"console-f9d7485db-f47kf\" (UID: \"abe6a35d-8cd2-4749-b9cf-8d11f6169470\") " pod="openshift-console/console-f9d7485db-f47kf" Oct 14 09:59:16 crc kubenswrapper[4698]: I1014 09:59:16.344260 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" 
(UniqueName: \"kubernetes.io/configmap/abe6a35d-8cd2-4749-b9cf-8d11f6169470-trusted-ca-bundle\") pod \"console-f9d7485db-f47kf\" (UID: \"abe6a35d-8cd2-4749-b9cf-8d11f6169470\") " pod="openshift-console/console-f9d7485db-f47kf" Oct 14 09:59:16 crc kubenswrapper[4698]: I1014 09:59:16.344347 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/2bc82783-cf82-45df-94d5-60de2f1a0bdf-etcd-client\") pod \"apiserver-76f77b778f-5x56z\" (UID: \"2bc82783-cf82-45df-94d5-60de2f1a0bdf\") " pod="openshift-apiserver/apiserver-76f77b778f-5x56z" Oct 14 09:59:16 crc kubenswrapper[4698]: I1014 09:59:16.344513 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/2bc82783-cf82-45df-94d5-60de2f1a0bdf-encryption-config\") pod \"apiserver-76f77b778f-5x56z\" (UID: \"2bc82783-cf82-45df-94d5-60de2f1a0bdf\") " pod="openshift-apiserver/apiserver-76f77b778f-5x56z" Oct 14 09:59:16 crc kubenswrapper[4698]: I1014 09:59:16.344826 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/33a325fe-116c-49f3-bbfc-0ea7c688e3df-bound-sa-token\") pod \"ingress-operator-5b745b69d9-svzm6\" (UID: \"33a325fe-116c-49f3-bbfc-0ea7c688e3df\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-svzm6" Oct 14 09:59:16 crc kubenswrapper[4698]: I1014 09:59:16.344835 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/67e52335-6348-488a-a36a-8971b953737b-serving-cert\") pod \"controller-manager-879f6c89f-c6gg9\" (UID: \"67e52335-6348-488a-a36a-8971b953737b\") " pod="openshift-controller-manager/controller-manager-879f6c89f-c6gg9" Oct 14 09:59:16 crc kubenswrapper[4698]: I1014 09:59:16.344995 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-mwlfw\" (UniqueName: \"kubernetes.io/projected/0bf22386-43f0-4d64-abb0-cdec28434502-kube-api-access-mwlfw\") pod \"downloads-7954f5f757-7fm6f\" (UID: \"0bf22386-43f0-4d64-abb0-cdec28434502\") " pod="openshift-console/downloads-7954f5f757-7fm6f" Oct 14 09:59:16 crc kubenswrapper[4698]: I1014 09:59:16.345161 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/67e52335-6348-488a-a36a-8971b953737b-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-c6gg9\" (UID: \"67e52335-6348-488a-a36a-8971b953737b\") " pod="openshift-controller-manager/controller-manager-879f6c89f-c6gg9" Oct 14 09:59:16 crc kubenswrapper[4698]: I1014 09:59:16.345238 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/193ddda3-4410-402c-a198-33ff7ea3a740-default-certificate\") pod \"router-default-5444994796-qprfx\" (UID: \"193ddda3-4410-402c-a198-33ff7ea3a740\") " pod="openshift-ingress/router-default-5444994796-qprfx" Oct 14 09:59:16 crc kubenswrapper[4698]: I1014 09:59:16.345373 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nc4f9\" (UniqueName: \"kubernetes.io/projected/bc1091b3-dceb-44a6-95b0-8048efec8032-kube-api-access-nc4f9\") pod \"openshift-apiserver-operator-796bbdcf4f-cf62t\" (UID: \"bc1091b3-dceb-44a6-95b0-8048efec8032\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-cf62t" Oct 14 09:59:16 crc kubenswrapper[4698]: I1014 09:59:16.345461 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/14729fe2-c0c5-49b8-9766-b35a97d66e8d-metrics-tls\") pod \"dns-operator-744455d44c-mlnpq\" (UID: \"14729fe2-c0c5-49b8-9766-b35a97d66e8d\") " pod="openshift-dns-operator/dns-operator-744455d44c-mlnpq" Oct 14 
09:59:16 crc kubenswrapper[4698]: I1014 09:59:16.345516 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9103615a-2665-4033-8114-259b0e56879f-serving-cert\") pod \"etcd-operator-b45778765-sjgjr\" (UID: \"9103615a-2665-4033-8114-259b0e56879f\") " pod="openshift-etcd-operator/etcd-operator-b45778765-sjgjr" Oct 14 09:59:16 crc kubenswrapper[4698]: I1014 09:59:16.345550 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/41f1faca-9336-4fb7-85a7-14541f2cf578-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-8g7h6\" (UID: \"41f1faca-9336-4fb7-85a7-14541f2cf578\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-8g7h6" Oct 14 09:59:16 crc kubenswrapper[4698]: I1014 09:59:16.345603 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bc1091b3-dceb-44a6-95b0-8048efec8032-config\") pod \"openshift-apiserver-operator-796bbdcf4f-cf62t\" (UID: \"bc1091b3-dceb-44a6-95b0-8048efec8032\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-cf62t" Oct 14 09:59:16 crc kubenswrapper[4698]: I1014 09:59:16.345635 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/22026c45-f849-4069-b5ad-4bc34d0ea6eb-metrics-tls\") pod \"dns-default-nt998\" (UID: \"22026c45-f849-4069-b5ad-4bc34d0ea6eb\") " pod="openshift-dns/dns-default-nt998" Oct 14 09:59:16 crc kubenswrapper[4698]: I1014 09:59:16.346280 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/5eed092c-2837-48df-8eb4-8759235349b6-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-kqg88\" (UID: \"5eed092c-2837-48df-8eb4-8759235349b6\") " pod="openshift-authentication/oauth-openshift-558db77b4-kqg88" Oct 14 09:59:16 crc kubenswrapper[4698]: I1014 09:59:16.346325 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/68bf4f74-6117-4975-8c5f-b5b35b97c787-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-ttftp\" (UID: \"68bf4f74-6117-4975-8c5f-b5b35b97c787\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-ttftp" Oct 14 09:59:16 crc kubenswrapper[4698]: I1014 09:59:16.346439 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/33a325fe-116c-49f3-bbfc-0ea7c688e3df-trusted-ca\") pod \"ingress-operator-5b745b69d9-svzm6\" (UID: \"33a325fe-116c-49f3-bbfc-0ea7c688e3df\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-svzm6" Oct 14 09:59:16 crc kubenswrapper[4698]: I1014 09:59:16.346507 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2bn9f\" (UniqueName: \"kubernetes.io/projected/c1ec2959-9fc5-4b98-8f9c-c21fc57e14d7-kube-api-access-2bn9f\") pod \"control-plane-machine-set-operator-78cbb6b69f-5k2p9\" (UID: \"c1ec2959-9fc5-4b98-8f9c-c21fc57e14d7\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-5k2p9" Oct 14 09:59:16 crc kubenswrapper[4698]: I1014 09:59:16.346729 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/5eed092c-2837-48df-8eb4-8759235349b6-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-kqg88\" (UID: \"5eed092c-2837-48df-8eb4-8759235349b6\") " 
pod="openshift-authentication/oauth-openshift-558db77b4-kqg88" Oct 14 09:59:16 crc kubenswrapper[4698]: I1014 09:59:16.347037 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/487b5c84-fe72-4b1c-8afa-15681f3d2c34-socket-dir\") pod \"csi-hostpathplugin-w8pmp\" (UID: \"487b5c84-fe72-4b1c-8afa-15681f3d2c34\") " pod="hostpath-provisioner/csi-hostpathplugin-w8pmp" Oct 14 09:59:16 crc kubenswrapper[4698]: I1014 09:59:16.347205 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x86cx\" (UniqueName: \"kubernetes.io/projected/4bc0ae50-422f-4bd4-abba-008f5ca0467f-kube-api-access-x86cx\") pod \"packageserver-d55dfcdfc-s65qn\" (UID: \"4bc0ae50-422f-4bd4-abba-008f5ca0467f\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-s65qn" Oct 14 09:59:16 crc kubenswrapper[4698]: I1014 09:59:16.347307 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/67e52335-6348-488a-a36a-8971b953737b-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-c6gg9\" (UID: \"67e52335-6348-488a-a36a-8971b953737b\") " pod="openshift-controller-manager/controller-manager-879f6c89f-c6gg9" Oct 14 09:59:16 crc kubenswrapper[4698]: I1014 09:59:16.347412 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/5eed092c-2837-48df-8eb4-8759235349b6-audit-dir\") pod \"oauth-openshift-558db77b4-kqg88\" (UID: \"5eed092c-2837-48df-8eb4-8759235349b6\") " pod="openshift-authentication/oauth-openshift-558db77b4-kqg88" Oct 14 09:59:16 crc kubenswrapper[4698]: I1014 09:59:16.347618 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/6b1bbc6d-62ec-47f7-b3d0-027b4dc2e31f-certs\") pod 
\"machine-config-server-c9jqn\" (UID: \"6b1bbc6d-62ec-47f7-b3d0-027b4dc2e31f\") " pod="openshift-machine-config-operator/machine-config-server-c9jqn" Oct 14 09:59:16 crc kubenswrapper[4698]: I1014 09:59:16.347478 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/5eed092c-2837-48df-8eb4-8759235349b6-audit-dir\") pod \"oauth-openshift-558db77b4-kqg88\" (UID: \"5eed092c-2837-48df-8eb4-8759235349b6\") " pod="openshift-authentication/oauth-openshift-558db77b4-kqg88" Oct 14 09:59:16 crc kubenswrapper[4698]: I1014 09:59:16.347993 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/68bf4f74-6117-4975-8c5f-b5b35b97c787-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-ttftp\" (UID: \"68bf4f74-6117-4975-8c5f-b5b35b97c787\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-ttftp" Oct 14 09:59:16 crc kubenswrapper[4698]: I1014 09:59:16.348041 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/5eed092c-2837-48df-8eb4-8759235349b6-audit-policies\") pod \"oauth-openshift-558db77b4-kqg88\" (UID: \"5eed092c-2837-48df-8eb4-8759235349b6\") " pod="openshift-authentication/oauth-openshift-558db77b4-kqg88" Oct 14 09:59:16 crc kubenswrapper[4698]: I1014 09:59:16.348069 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c6xk5\" (UniqueName: \"kubernetes.io/projected/a3f47cc4-719c-4e84-871b-ed52e9660cdb-kube-api-access-c6xk5\") pod \"migrator-59844c95c7-mddmt\" (UID: \"a3f47cc4-719c-4e84-871b-ed52e9660cdb\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-mddmt" Oct 14 09:59:16 crc kubenswrapper[4698]: I1014 09:59:16.348048 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/5eed092c-2837-48df-8eb4-8759235349b6-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-kqg88\" (UID: \"5eed092c-2837-48df-8eb4-8759235349b6\") " pod="openshift-authentication/oauth-openshift-558db77b4-kqg88" Oct 14 09:59:16 crc kubenswrapper[4698]: I1014 09:59:16.348212 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g55l4\" (UniqueName: \"kubernetes.io/projected/b23ade35-ff68-4366-8b6a-9e24fcd4e0eb-kube-api-access-g55l4\") pod \"multus-admission-controller-857f4d67dd-7mmrz\" (UID: \"b23ade35-ff68-4366-8b6a-9e24fcd4e0eb\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-7mmrz" Oct 14 09:59:16 crc kubenswrapper[4698]: I1014 09:59:16.348320 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1b225a95-8db7-45c2-ad7c-b4bc80b6e875-service-ca-bundle\") pod \"authentication-operator-69f744f599-hshkc\" (UID: \"1b225a95-8db7-45c2-ad7c-b4bc80b6e875\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-hshkc" Oct 14 09:59:16 crc kubenswrapper[4698]: I1014 09:59:16.348402 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/2bc82783-cf82-45df-94d5-60de2f1a0bdf-audit-dir\") pod \"apiserver-76f77b778f-5x56z\" (UID: \"2bc82783-cf82-45df-94d5-60de2f1a0bdf\") " pod="openshift-apiserver/apiserver-76f77b778f-5x56z" Oct 14 09:59:16 crc kubenswrapper[4698]: I1014 09:59:16.348486 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z4r2m\" (UniqueName: \"kubernetes.io/projected/193ddda3-4410-402c-a198-33ff7ea3a740-kube-api-access-z4r2m\") pod \"router-default-5444994796-qprfx\" (UID: \"193ddda3-4410-402c-a198-33ff7ea3a740\") " 
pod="openshift-ingress/router-default-5444994796-qprfx" Oct 14 09:59:16 crc kubenswrapper[4698]: I1014 09:59:16.348563 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3f74db80-958b-4799-864f-792892d9903e-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-xnfjx\" (UID: \"3f74db80-958b-4799-864f-792892d9903e\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-xnfjx" Oct 14 09:59:16 crc kubenswrapper[4698]: I1014 09:59:16.348608 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d746febc-7247-498c-86b1-8cb4640cbccc-client-ca\") pod \"route-controller-manager-6576b87f9c-lbk6k\" (UID: \"d746febc-7247-498c-86b1-8cb4640cbccc\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-lbk6k" Oct 14 09:59:16 crc kubenswrapper[4698]: I1014 09:59:16.348688 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/abe6a35d-8cd2-4749-b9cf-8d11f6169470-console-config\") pod \"console-f9d7485db-f47kf\" (UID: \"abe6a35d-8cd2-4749-b9cf-8d11f6169470\") " pod="openshift-console/console-f9d7485db-f47kf" Oct 14 09:59:16 crc kubenswrapper[4698]: I1014 09:59:16.348758 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/33a325fe-116c-49f3-bbfc-0ea7c688e3df-metrics-tls\") pod \"ingress-operator-5b745b69d9-svzm6\" (UID: \"33a325fe-116c-49f3-bbfc-0ea7c688e3df\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-svzm6" Oct 14 09:59:16 crc kubenswrapper[4698]: I1014 09:59:16.348846 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/aa6a1a6a-f7f3-402b-9568-89c9415eaaa4-secret-volume\") pod \"collect-profiles-29340585-4sc7c\" (UID: \"aa6a1a6a-f7f3-402b-9568-89c9415eaaa4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29340585-4sc7c" Oct 14 09:59:16 crc kubenswrapper[4698]: I1014 09:59:16.349057 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rrmtx\" (UniqueName: \"kubernetes.io/projected/d746febc-7247-498c-86b1-8cb4640cbccc-kube-api-access-rrmtx\") pod \"route-controller-manager-6576b87f9c-lbk6k\" (UID: \"d746febc-7247-498c-86b1-8cb4640cbccc\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-lbk6k" Oct 14 09:59:16 crc kubenswrapper[4698]: I1014 09:59:16.349094 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1b225a95-8db7-45c2-ad7c-b4bc80b6e875-service-ca-bundle\") pod \"authentication-operator-69f744f599-hshkc\" (UID: \"1b225a95-8db7-45c2-ad7c-b4bc80b6e875\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-hshkc" Oct 14 09:59:16 crc kubenswrapper[4698]: I1014 09:59:16.349139 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/08880ea8-e0f2-4963-826f-9bee32ca8a64-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-c8pq6\" (UID: \"08880ea8-e0f2-4963-826f-9bee32ca8a64\") " pod="openshift-marketplace/marketplace-operator-79b997595-c8pq6" Oct 14 09:59:16 crc kubenswrapper[4698]: I1014 09:59:16.349226 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gz9m8\" (UniqueName: \"kubernetes.io/projected/08880ea8-e0f2-4963-826f-9bee32ca8a64-kube-api-access-gz9m8\") pod \"marketplace-operator-79b997595-c8pq6\" (UID: \"08880ea8-e0f2-4963-826f-9bee32ca8a64\") " 
pod="openshift-marketplace/marketplace-operator-79b997595-c8pq6" Oct 14 09:59:16 crc kubenswrapper[4698]: I1014 09:59:16.349272 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/5eed092c-2837-48df-8eb4-8759235349b6-audit-policies\") pod \"oauth-openshift-558db77b4-kqg88\" (UID: \"5eed092c-2837-48df-8eb4-8759235349b6\") " pod="openshift-authentication/oauth-openshift-558db77b4-kqg88" Oct 14 09:59:16 crc kubenswrapper[4698]: I1014 09:59:16.349269 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1b225a95-8db7-45c2-ad7c-b4bc80b6e875-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-hshkc\" (UID: \"1b225a95-8db7-45c2-ad7c-b4bc80b6e875\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-hshkc" Oct 14 09:59:16 crc kubenswrapper[4698]: I1014 09:59:16.349346 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d746febc-7247-498c-86b1-8cb4640cbccc-config\") pod \"route-controller-manager-6576b87f9c-lbk6k\" (UID: \"d746febc-7247-498c-86b1-8cb4640cbccc\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-lbk6k" Oct 14 09:59:16 crc kubenswrapper[4698]: I1014 09:59:16.349426 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/abe6a35d-8cd2-4749-b9cf-8d11f6169470-console-oauth-config\") pod \"console-f9d7485db-f47kf\" (UID: \"abe6a35d-8cd2-4749-b9cf-8d11f6169470\") " pod="openshift-console/console-f9d7485db-f47kf" Oct 14 09:59:16 crc kubenswrapper[4698]: I1014 09:59:16.349495 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/2bc82783-cf82-45df-94d5-60de2f1a0bdf-audit-dir\") pod 
\"apiserver-76f77b778f-5x56z\" (UID: \"2bc82783-cf82-45df-94d5-60de2f1a0bdf\") " pod="openshift-apiserver/apiserver-76f77b778f-5x56z" Oct 14 09:59:16 crc kubenswrapper[4698]: I1014 09:59:16.349529 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/5eed092c-2837-48df-8eb4-8759235349b6-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-kqg88\" (UID: \"5eed092c-2837-48df-8eb4-8759235349b6\") " pod="openshift-authentication/oauth-openshift-558db77b4-kqg88" Oct 14 09:59:16 crc kubenswrapper[4698]: I1014 09:59:16.349608 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/87bb9fc3-f75a-48d6-a8e6-8f8d1aa7f084-config\") pod \"kube-apiserver-operator-766d6c64bb-m9mft\" (UID: \"87bb9fc3-f75a-48d6-a8e6-8f8d1aa7f084\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-m9mft" Oct 14 09:59:16 crc kubenswrapper[4698]: I1014 09:59:16.349690 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/77041b5d-f53d-425c-b824-a61833af677c-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-wmxzf\" (UID: \"77041b5d-f53d-425c-b824-a61833af677c\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-wmxzf" Oct 14 09:59:16 crc kubenswrapper[4698]: I1014 09:59:16.349730 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/22026c45-f849-4069-b5ad-4bc34d0ea6eb-config-volume\") pod \"dns-default-nt998\" (UID: \"22026c45-f849-4069-b5ad-4bc34d0ea6eb\") " pod="openshift-dns/dns-default-nt998" Oct 14 09:59:16 crc kubenswrapper[4698]: I1014 09:59:16.349790 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/2bc82783-cf82-45df-94d5-60de2f1a0bdf-etcd-serving-ca\") pod \"apiserver-76f77b778f-5x56z\" (UID: \"2bc82783-cf82-45df-94d5-60de2f1a0bdf\") " pod="openshift-apiserver/apiserver-76f77b778f-5x56z" Oct 14 09:59:16 crc kubenswrapper[4698]: I1014 09:59:16.349843 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tthwp\" (UniqueName: \"kubernetes.io/projected/5eed092c-2837-48df-8eb4-8759235349b6-kube-api-access-tthwp\") pod \"oauth-openshift-558db77b4-kqg88\" (UID: \"5eed092c-2837-48df-8eb4-8759235349b6\") " pod="openshift-authentication/oauth-openshift-558db77b4-kqg88" Oct 14 09:59:16 crc kubenswrapper[4698]: I1014 09:59:16.349878 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/69fe44d1-6015-4929-b197-0ea5f0167131-config\") pod \"machine-approver-56656f9798-79s46\" (UID: \"69fe44d1-6015-4929-b197-0ea5f0167131\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-79s46" Oct 14 09:59:16 crc kubenswrapper[4698]: I1014 09:59:16.349913 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/5eed092c-2837-48df-8eb4-8759235349b6-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-kqg88\" (UID: \"5eed092c-2837-48df-8eb4-8759235349b6\") " pod="openshift-authentication/oauth-openshift-558db77b4-kqg88" Oct 14 09:59:16 crc kubenswrapper[4698]: I1014 09:59:16.349949 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/c5df8277-9e0d-4d53-ad20-20a07ceb9515-srv-cert\") pod \"olm-operator-6b444d44fb-vqx66\" (UID: \"c5df8277-9e0d-4d53-ad20-20a07ceb9515\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-vqx66" Oct 14 09:59:16 crc kubenswrapper[4698]: I1014 
09:59:16.349981 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/c5df8277-9e0d-4d53-ad20-20a07ceb9515-profile-collector-cert\") pod \"olm-operator-6b444d44fb-vqx66\" (UID: \"c5df8277-9e0d-4d53-ad20-20a07ceb9515\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-vqx66" Oct 14 09:59:16 crc kubenswrapper[4698]: I1014 09:59:16.350021 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4tvpt\" (UniqueName: \"kubernetes.io/projected/1d5afbbe-a0bd-492f-8b7d-691208ef27db-kube-api-access-4tvpt\") pod \"openshift-config-operator-7777fb866f-jg26j\" (UID: \"1d5afbbe-a0bd-492f-8b7d-691208ef27db\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-jg26j" Oct 14 09:59:16 crc kubenswrapper[4698]: I1014 09:59:16.350017 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/2bc82783-cf82-45df-94d5-60de2f1a0bdf-encryption-config\") pod \"apiserver-76f77b778f-5x56z\" (UID: \"2bc82783-cf82-45df-94d5-60de2f1a0bdf\") " pod="openshift-apiserver/apiserver-76f77b778f-5x56z" Oct 14 09:59:16 crc kubenswrapper[4698]: I1014 09:59:16.350055 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/193ddda3-4410-402c-a198-33ff7ea3a740-stats-auth\") pod \"router-default-5444994796-qprfx\" (UID: \"193ddda3-4410-402c-a198-33ff7ea3a740\") " pod="openshift-ingress/router-default-5444994796-qprfx" Oct 14 09:59:16 crc kubenswrapper[4698]: I1014 09:59:16.350098 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zhzrh\" (UniqueName: \"kubernetes.io/projected/2bc82783-cf82-45df-94d5-60de2f1a0bdf-kube-api-access-zhzrh\") pod \"apiserver-76f77b778f-5x56z\" (UID: 
\"2bc82783-cf82-45df-94d5-60de2f1a0bdf\") " pod="openshift-apiserver/apiserver-76f77b778f-5x56z" Oct 14 09:59:16 crc kubenswrapper[4698]: I1014 09:59:16.349434 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Oct 14 09:59:16 crc kubenswrapper[4698]: I1014 09:59:16.350446 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/abe6a35d-8cd2-4749-b9cf-8d11f6169470-console-config\") pod \"console-f9d7485db-f47kf\" (UID: \"abe6a35d-8cd2-4749-b9cf-8d11f6169470\") " pod="openshift-console/console-f9d7485db-f47kf" Oct 14 09:59:16 crc kubenswrapper[4698]: I1014 09:59:16.350610 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d746febc-7247-498c-86b1-8cb4640cbccc-client-ca\") pod \"route-controller-manager-6576b87f9c-lbk6k\" (UID: \"d746febc-7247-498c-86b1-8cb4640cbccc\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-lbk6k" Oct 14 09:59:16 crc kubenswrapper[4698]: I1014 09:59:16.351119 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/5eed092c-2837-48df-8eb4-8759235349b6-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-kqg88\" (UID: \"5eed092c-2837-48df-8eb4-8759235349b6\") " pod="openshift-authentication/oauth-openshift-558db77b4-kqg88" Oct 14 09:59:16 crc kubenswrapper[4698]: I1014 09:59:16.351255 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1b225a95-8db7-45c2-ad7c-b4bc80b6e875-serving-cert\") pod \"authentication-operator-69f744f599-hshkc\" (UID: \"1b225a95-8db7-45c2-ad7c-b4bc80b6e875\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-hshkc" Oct 14 
09:59:16 crc kubenswrapper[4698]: I1014 09:59:16.352455 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1d5afbbe-a0bd-492f-8b7d-691208ef27db-serving-cert\") pod \"openshift-config-operator-7777fb866f-jg26j\" (UID: \"1d5afbbe-a0bd-492f-8b7d-691208ef27db\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-jg26j" Oct 14 09:59:16 crc kubenswrapper[4698]: I1014 09:59:16.352846 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/69fe44d1-6015-4929-b197-0ea5f0167131-config\") pod \"machine-approver-56656f9798-79s46\" (UID: \"69fe44d1-6015-4929-b197-0ea5f0167131\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-79s46" Oct 14 09:59:16 crc kubenswrapper[4698]: I1014 09:59:16.353095 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/68bf4f74-6117-4975-8c5f-b5b35b97c787-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-ttftp\" (UID: \"68bf4f74-6117-4975-8c5f-b5b35b97c787\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-ttftp" Oct 14 09:59:16 crc kubenswrapper[4698]: I1014 09:59:16.353115 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/33a325fe-116c-49f3-bbfc-0ea7c688e3df-trusted-ca\") pod \"ingress-operator-5b745b69d9-svzm6\" (UID: \"33a325fe-116c-49f3-bbfc-0ea7c688e3df\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-svzm6" Oct 14 09:59:16 crc kubenswrapper[4698]: I1014 09:59:16.353733 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d746febc-7247-498c-86b1-8cb4640cbccc-config\") pod \"route-controller-manager-6576b87f9c-lbk6k\" (UID: \"d746febc-7247-498c-86b1-8cb4640cbccc\") 
" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-lbk6k" Oct 14 09:59:16 crc kubenswrapper[4698]: I1014 09:59:16.353818 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/5eed092c-2837-48df-8eb4-8759235349b6-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-kqg88\" (UID: \"5eed092c-2837-48df-8eb4-8759235349b6\") " pod="openshift-authentication/oauth-openshift-558db77b4-kqg88" Oct 14 09:59:16 crc kubenswrapper[4698]: I1014 09:59:16.353982 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c6555f7d-6e37-41d2-8f98-b02aba5270ab-trusted-ca\") pod \"image-registry-697d97f7c8-nthfk\" (UID: \"c6555f7d-6e37-41d2-8f98-b02aba5270ab\") " pod="openshift-image-registry/image-registry-697d97f7c8-nthfk" Oct 14 09:59:16 crc kubenswrapper[4698]: I1014 09:59:16.354033 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-btfk4\" (UniqueName: \"kubernetes.io/projected/1b225a95-8db7-45c2-ad7c-b4bc80b6e875-kube-api-access-btfk4\") pod \"authentication-operator-69f744f599-hshkc\" (UID: \"1b225a95-8db7-45c2-ad7c-b4bc80b6e875\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-hshkc" Oct 14 09:59:16 crc kubenswrapper[4698]: I1014 09:59:16.354074 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/487b5c84-fe72-4b1c-8afa-15681f3d2c34-registration-dir\") pod \"csi-hostpathplugin-w8pmp\" (UID: \"487b5c84-fe72-4b1c-8afa-15681f3d2c34\") " pod="hostpath-provisioner/csi-hostpathplugin-w8pmp" Oct 14 09:59:16 crc kubenswrapper[4698]: I1014 09:59:16.354108 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: 
\"kubernetes.io/secret/9a5b4d2f-ca5c-4178-9e26-2cd2dccaa0b2-proxy-tls\") pod \"machine-config-controller-84d6567774-sgf88\" (UID: \"9a5b4d2f-ca5c-4178-9e26-2cd2dccaa0b2\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-sgf88" Oct 14 09:59:16 crc kubenswrapper[4698]: I1014 09:59:16.354143 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/487b5c84-fe72-4b1c-8afa-15681f3d2c34-plugins-dir\") pod \"csi-hostpathplugin-w8pmp\" (UID: \"487b5c84-fe72-4b1c-8afa-15681f3d2c34\") " pod="hostpath-provisioner/csi-hostpathplugin-w8pmp" Oct 14 09:59:16 crc kubenswrapper[4698]: I1014 09:59:16.354174 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/4bc0ae50-422f-4bd4-abba-008f5ca0467f-tmpfs\") pod \"packageserver-d55dfcdfc-s65qn\" (UID: \"4bc0ae50-422f-4bd4-abba-008f5ca0467f\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-s65qn" Oct 14 09:59:16 crc kubenswrapper[4698]: I1014 09:59:16.354189 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d671e120-cf7e-4363-b023-dc46b51ea073-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-24kzx\" (UID: \"d671e120-cf7e-4363-b023-dc46b51ea073\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-24kzx" Oct 14 09:59:16 crc kubenswrapper[4698]: I1014 09:59:16.354188 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/2bc82783-cf82-45df-94d5-60de2f1a0bdf-etcd-serving-ca\") pod \"apiserver-76f77b778f-5x56z\" (UID: \"2bc82783-cf82-45df-94d5-60de2f1a0bdf\") " pod="openshift-apiserver/apiserver-76f77b778f-5x56z" Oct 14 09:59:16 crc kubenswrapper[4698]: I1014 09:59:16.354210 
4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/abe6a35d-8cd2-4749-b9cf-8d11f6169470-console-serving-cert\") pod \"console-f9d7485db-f47kf\" (UID: \"abe6a35d-8cd2-4749-b9cf-8d11f6169470\") " pod="openshift-console/console-f9d7485db-f47kf" Oct 14 09:59:16 crc kubenswrapper[4698]: I1014 09:59:16.354246 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0704231d-de7e-4317-80bd-9edbb5a0de5f-config\") pod \"console-operator-58897d9998-58d6k\" (UID: \"0704231d-de7e-4317-80bd-9edbb5a0de5f\") " pod="openshift-console-operator/console-operator-58897d9998-58d6k" Oct 14 09:59:16 crc kubenswrapper[4698]: I1014 09:59:16.354700 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/14729fe2-c0c5-49b8-9766-b35a97d66e8d-metrics-tls\") pod \"dns-operator-744455d44c-mlnpq\" (UID: \"14729fe2-c0c5-49b8-9766-b35a97d66e8d\") " pod="openshift-dns-operator/dns-operator-744455d44c-mlnpq" Oct 14 09:59:16 crc kubenswrapper[4698]: I1014 09:59:16.354892 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4886c701-aad2-4ae4-bb99-0221728df342-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-t5x7p\" (UID: \"4886c701-aad2-4ae4-bb99-0221728df342\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-t5x7p" Oct 14 09:59:16 crc kubenswrapper[4698]: I1014 09:59:16.355168 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nwf98\" (UniqueName: \"kubernetes.io/projected/4886c701-aad2-4ae4-bb99-0221728df342-kube-api-access-nwf98\") pod \"apiserver-7bbb656c7d-t5x7p\" (UID: \"4886c701-aad2-4ae4-bb99-0221728df342\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-t5x7p" Oct 14 09:59:16 crc 
kubenswrapper[4698]: I1014 09:59:16.355347 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/69fe44d1-6015-4929-b197-0ea5f0167131-auth-proxy-config\") pod \"machine-approver-56656f9798-79s46\" (UID: \"69fe44d1-6015-4929-b197-0ea5f0167131\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-79s46" Oct 14 09:59:16 crc kubenswrapper[4698]: I1014 09:59:16.355416 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/0704231d-de7e-4317-80bd-9edbb5a0de5f-trusted-ca\") pod \"console-operator-58897d9998-58d6k\" (UID: \"0704231d-de7e-4317-80bd-9edbb5a0de5f\") " pod="openshift-console-operator/console-operator-58897d9998-58d6k" Oct 14 09:59:16 crc kubenswrapper[4698]: I1014 09:59:16.355472 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6cfr5\" (UniqueName: \"kubernetes.io/projected/9103615a-2665-4033-8114-259b0e56879f-kube-api-access-6cfr5\") pod \"etcd-operator-b45778765-sjgjr\" (UID: \"9103615a-2665-4033-8114-259b0e56879f\") " pod="openshift-etcd-operator/etcd-operator-b45778765-sjgjr" Oct 14 09:59:16 crc kubenswrapper[4698]: I1014 09:59:16.355692 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0704231d-de7e-4317-80bd-9edbb5a0de5f-config\") pod \"console-operator-58897d9998-58d6k\" (UID: \"0704231d-de7e-4317-80bd-9edbb5a0de5f\") " pod="openshift-console-operator/console-operator-58897d9998-58d6k" Oct 14 09:59:16 crc kubenswrapper[4698]: I1014 09:59:16.355829 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/72c6c34c-666c-41e3-8c36-5ac3578ff330-config\") pod \"service-ca-operator-777779d784-rtnz2\" (UID: 
\"72c6c34c-666c-41e3-8c36-5ac3578ff330\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-rtnz2" Oct 14 09:59:16 crc kubenswrapper[4698]: I1014 09:59:16.355928 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d671e120-cf7e-4363-b023-dc46b51ea073-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-24kzx\" (UID: \"d671e120-cf7e-4363-b023-dc46b51ea073\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-24kzx" Oct 14 09:59:16 crc kubenswrapper[4698]: I1014 09:59:16.355991 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sswhc\" (UniqueName: \"kubernetes.io/projected/b8186130-0e09-455d-92c7-05e4c0af37de-kube-api-access-sswhc\") pod \"catalog-operator-68c6474976-kftst\" (UID: \"b8186130-0e09-455d-92c7-05e4c0af37de\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-kftst" Oct 14 09:59:16 crc kubenswrapper[4698]: I1014 09:59:16.356024 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/aa6a1a6a-f7f3-402b-9568-89c9415eaaa4-config-volume\") pod \"collect-profiles-29340585-4sc7c\" (UID: \"aa6a1a6a-f7f3-402b-9568-89c9415eaaa4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29340585-4sc7c" Oct 14 09:59:16 crc kubenswrapper[4698]: I1014 09:59:16.356055 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c6555f7d-6e37-41d2-8f98-b02aba5270ab-trusted-ca\") pod \"image-registry-697d97f7c8-nthfk\" (UID: \"c6555f7d-6e37-41d2-8f98-b02aba5270ab\") " pod="openshift-image-registry/image-registry-697d97f7c8-nthfk" Oct 14 09:59:16 crc kubenswrapper[4698]: I1014 09:59:16.356142 4698 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/2bc82783-cf82-45df-94d5-60de2f1a0bdf-audit\") pod \"apiserver-76f77b778f-5x56z\" (UID: \"2bc82783-cf82-45df-94d5-60de2f1a0bdf\") " pod="openshift-apiserver/apiserver-76f77b778f-5x56z" Oct 14 09:59:16 crc kubenswrapper[4698]: I1014 09:59:16.356208 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/c1ec2959-9fc5-4b98-8f9c-c21fc57e14d7-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-5k2p9\" (UID: \"c1ec2959-9fc5-4b98-8f9c-c21fc57e14d7\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-5k2p9" Oct 14 09:59:16 crc kubenswrapper[4698]: I1014 09:59:16.356244 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/77041b5d-f53d-425c-b824-a61833af677c-images\") pod \"machine-api-operator-5694c8668f-wmxzf\" (UID: \"77041b5d-f53d-425c-b824-a61833af677c\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-wmxzf" Oct 14 09:59:16 crc kubenswrapper[4698]: I1014 09:59:16.356274 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/9103615a-2665-4033-8114-259b0e56879f-etcd-ca\") pod \"etcd-operator-b45778765-sjgjr\" (UID: \"9103615a-2665-4033-8114-259b0e56879f\") " pod="openshift-etcd-operator/etcd-operator-b45778765-sjgjr" Oct 14 09:59:16 crc kubenswrapper[4698]: I1014 09:59:16.356365 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/c6555f7d-6e37-41d2-8f98-b02aba5270ab-bound-sa-token\") pod \"image-registry-697d97f7c8-nthfk\" (UID: \"c6555f7d-6e37-41d2-8f98-b02aba5270ab\") " pod="openshift-image-registry/image-registry-697d97f7c8-nthfk" Oct 14 
09:59:16 crc kubenswrapper[4698]: I1014 09:59:16.356400 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/77041b5d-f53d-425c-b824-a61833af677c-config\") pod \"machine-api-operator-5694c8668f-wmxzf\" (UID: \"77041b5d-f53d-425c-b824-a61833af677c\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-wmxzf" Oct 14 09:59:16 crc kubenswrapper[4698]: I1014 09:59:16.356430 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/4886c701-aad2-4ae4-bb99-0221728df342-audit-policies\") pod \"apiserver-7bbb656c7d-t5x7p\" (UID: \"4886c701-aad2-4ae4-bb99-0221728df342\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-t5x7p" Oct 14 09:59:16 crc kubenswrapper[4698]: I1014 09:59:16.357554 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1b225a95-8db7-45c2-ad7c-b4bc80b6e875-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-hshkc\" (UID: \"1b225a95-8db7-45c2-ad7c-b4bc80b6e875\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-hshkc" Oct 14 09:59:16 crc kubenswrapper[4698]: I1014 09:59:16.357730 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/2bc82783-cf82-45df-94d5-60de2f1a0bdf-audit\") pod \"apiserver-76f77b778f-5x56z\" (UID: \"2bc82783-cf82-45df-94d5-60de2f1a0bdf\") " pod="openshift-apiserver/apiserver-76f77b778f-5x56z" Oct 14 09:59:16 crc kubenswrapper[4698]: I1014 09:59:16.357812 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/0704231d-de7e-4317-80bd-9edbb5a0de5f-trusted-ca\") pod \"console-operator-58897d9998-58d6k\" (UID: \"0704231d-de7e-4317-80bd-9edbb5a0de5f\") " 
pod="openshift-console-operator/console-operator-58897d9998-58d6k" Oct 14 09:59:16 crc kubenswrapper[4698]: I1014 09:59:16.357880 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/69fe44d1-6015-4929-b197-0ea5f0167131-auth-proxy-config\") pod \"machine-approver-56656f9798-79s46\" (UID: \"69fe44d1-6015-4929-b197-0ea5f0167131\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-79s46" Oct 14 09:59:16 crc kubenswrapper[4698]: I1014 09:59:16.357921 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2bc82783-cf82-45df-94d5-60de2f1a0bdf-trusted-ca-bundle\") pod \"apiserver-76f77b778f-5x56z\" (UID: \"2bc82783-cf82-45df-94d5-60de2f1a0bdf\") " pod="openshift-apiserver/apiserver-76f77b778f-5x56z" Oct 14 09:59:16 crc kubenswrapper[4698]: I1014 09:59:16.357981 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/33a325fe-116c-49f3-bbfc-0ea7c688e3df-metrics-tls\") pod \"ingress-operator-5b745b69d9-svzm6\" (UID: \"33a325fe-116c-49f3-bbfc-0ea7c688e3df\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-svzm6" Oct 14 09:59:16 crc kubenswrapper[4698]: I1014 09:59:16.357990 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/41d42df0-f10f-4f75-8481-adb4d51c341f-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-7p6xz\" (UID: \"41d42df0-f10f-4f75-8481-adb4d51c341f\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-7p6xz" Oct 14 09:59:16 crc kubenswrapper[4698]: I1014 09:59:16.358096 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/77041b5d-f53d-425c-b824-a61833af677c-images\") 
pod \"machine-api-operator-5694c8668f-wmxzf\" (UID: \"77041b5d-f53d-425c-b824-a61833af677c\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-wmxzf" Oct 14 09:59:16 crc kubenswrapper[4698]: I1014 09:59:16.358147 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/4bc0ae50-422f-4bd4-abba-008f5ca0467f-webhook-cert\") pod \"packageserver-d55dfcdfc-s65qn\" (UID: \"4bc0ae50-422f-4bd4-abba-008f5ca0467f\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-s65qn" Oct 14 09:59:16 crc kubenswrapper[4698]: I1014 09:59:16.358197 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2bc82783-cf82-45df-94d5-60de2f1a0bdf-serving-cert\") pod \"apiserver-76f77b778f-5x56z\" (UID: \"2bc82783-cf82-45df-94d5-60de2f1a0bdf\") " pod="openshift-apiserver/apiserver-76f77b778f-5x56z" Oct 14 09:59:16 crc kubenswrapper[4698]: I1014 09:59:16.358243 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b8186130-0e09-455d-92c7-05e4c0af37de-profile-collector-cert\") pod \"catalog-operator-68c6474976-kftst\" (UID: \"b8186130-0e09-455d-92c7-05e4c0af37de\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-kftst" Oct 14 09:59:16 crc kubenswrapper[4698]: I1014 09:59:16.358414 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/41d42df0-f10f-4f75-8481-adb4d51c341f-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-7p6xz\" (UID: \"41d42df0-f10f-4f75-8481-adb4d51c341f\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-7p6xz" Oct 14 09:59:16 crc kubenswrapper[4698]: I1014 09:59:16.358493 4698 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/c6555f7d-6e37-41d2-8f98-b02aba5270ab-ca-trust-extracted\") pod \"image-registry-697d97f7c8-nthfk\" (UID: \"c6555f7d-6e37-41d2-8f98-b02aba5270ab\") " pod="openshift-image-registry/image-registry-697d97f7c8-nthfk" Oct 14 09:59:16 crc kubenswrapper[4698]: I1014 09:59:16.358654 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/4bc0ae50-422f-4bd4-abba-008f5ca0467f-apiservice-cert\") pod \"packageserver-d55dfcdfc-s65qn\" (UID: \"4bc0ae50-422f-4bd4-abba-008f5ca0467f\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-s65qn" Oct 14 09:59:16 crc kubenswrapper[4698]: I1014 09:59:16.358705 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/77041b5d-f53d-425c-b824-a61833af677c-config\") pod \"machine-api-operator-5694c8668f-wmxzf\" (UID: \"77041b5d-f53d-425c-b824-a61833af677c\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-wmxzf" Oct 14 09:59:16 crc kubenswrapper[4698]: I1014 09:59:16.358964 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/5eed092c-2837-48df-8eb4-8759235349b6-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-kqg88\" (UID: \"5eed092c-2837-48df-8eb4-8759235349b6\") " pod="openshift-authentication/oauth-openshift-558db77b4-kqg88" Oct 14 09:59:16 crc kubenswrapper[4698]: I1014 09:59:16.358927 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0704231d-de7e-4317-80bd-9edbb5a0de5f-serving-cert\") pod \"console-operator-58897d9998-58d6k\" (UID: \"0704231d-de7e-4317-80bd-9edbb5a0de5f\") " pod="openshift-console-operator/console-operator-58897d9998-58d6k" 
Oct 14 09:59:16 crc kubenswrapper[4698]: I1014 09:59:16.359050 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/193ddda3-4410-402c-a198-33ff7ea3a740-metrics-certs\") pod \"router-default-5444994796-qprfx\" (UID: \"193ddda3-4410-402c-a198-33ff7ea3a740\") " pod="openshift-ingress/router-default-5444994796-qprfx" Oct 14 09:59:16 crc kubenswrapper[4698]: I1014 09:59:16.358716 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d671e120-cf7e-4363-b023-dc46b51ea073-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-24kzx\" (UID: \"d671e120-cf7e-4363-b023-dc46b51ea073\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-24kzx" Oct 14 09:59:16 crc kubenswrapper[4698]: I1014 09:59:16.359090 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zvm6r\" (UniqueName: \"kubernetes.io/projected/22026c45-f849-4069-b5ad-4bc34d0ea6eb-kube-api-access-zvm6r\") pod \"dns-default-nt998\" (UID: \"22026c45-f849-4069-b5ad-4bc34d0ea6eb\") " pod="openshift-dns/dns-default-nt998" Oct 14 09:59:16 crc kubenswrapper[4698]: I1014 09:59:16.359159 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/c6555f7d-6e37-41d2-8f98-b02aba5270ab-registry-certificates\") pod \"image-registry-697d97f7c8-nthfk\" (UID: \"c6555f7d-6e37-41d2-8f98-b02aba5270ab\") " pod="openshift-image-registry/image-registry-697d97f7c8-nthfk" Oct 14 09:59:16 crc kubenswrapper[4698]: I1014 09:59:16.359201 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9b71de96-0379-46a7-af50-9b831b50268b-config\") pod 
\"openshift-controller-manager-operator-756b6f6bc6-ptt5h\" (UID: \"9b71de96-0379-46a7-af50-9b831b50268b\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-ptt5h" Oct 14 09:59:16 crc kubenswrapper[4698]: I1014 09:59:16.359200 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/c6555f7d-6e37-41d2-8f98-b02aba5270ab-ca-trust-extracted\") pod \"image-registry-697d97f7c8-nthfk\" (UID: \"c6555f7d-6e37-41d2-8f98-b02aba5270ab\") " pod="openshift-image-registry/image-registry-697d97f7c8-nthfk" Oct 14 09:59:16 crc kubenswrapper[4698]: I1014 09:59:16.359236 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/67e52335-6348-488a-a36a-8971b953737b-config\") pod \"controller-manager-879f6c89f-c6gg9\" (UID: \"67e52335-6348-488a-a36a-8971b953737b\") " pod="openshift-controller-manager/controller-manager-879f6c89f-c6gg9" Oct 14 09:59:16 crc kubenswrapper[4698]: I1014 09:59:16.359357 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9103615a-2665-4033-8114-259b0e56879f-config\") pod \"etcd-operator-b45778765-sjgjr\" (UID: \"9103615a-2665-4033-8114-259b0e56879f\") " pod="openshift-etcd-operator/etcd-operator-b45778765-sjgjr" Oct 14 09:59:16 crc kubenswrapper[4698]: I1014 09:59:16.359392 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/08880ea8-e0f2-4963-826f-9bee32ca8a64-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-c8pq6\" (UID: \"08880ea8-e0f2-4963-826f-9bee32ca8a64\") " pod="openshift-marketplace/marketplace-operator-79b997595-c8pq6" Oct 14 09:59:16 crc kubenswrapper[4698]: I1014 09:59:16.359434 4698 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-qrk26\" (UniqueName: \"kubernetes.io/projected/33a325fe-116c-49f3-bbfc-0ea7c688e3df-kube-api-access-qrk26\") pod \"ingress-operator-5b745b69d9-svzm6\" (UID: \"33a325fe-116c-49f3-bbfc-0ea7c688e3df\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-svzm6" Oct 14 09:59:16 crc kubenswrapper[4698]: I1014 09:59:16.359472 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t266x\" (UniqueName: \"kubernetes.io/projected/0704231d-de7e-4317-80bd-9edbb5a0de5f-kube-api-access-t266x\") pod \"console-operator-58897d9998-58d6k\" (UID: \"0704231d-de7e-4317-80bd-9edbb5a0de5f\") " pod="openshift-console-operator/console-operator-58897d9998-58d6k" Oct 14 09:59:16 crc kubenswrapper[4698]: I1014 09:59:16.359506 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/818edbba-2627-4978-8a34-005689059b24-proxy-tls\") pod \"machine-config-operator-74547568cd-vdj26\" (UID: \"818edbba-2627-4978-8a34-005689059b24\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-vdj26" Oct 14 09:59:16 crc kubenswrapper[4698]: I1014 09:59:16.359538 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/487b5c84-fe72-4b1c-8afa-15681f3d2c34-mountpoint-dir\") pod \"csi-hostpathplugin-w8pmp\" (UID: \"487b5c84-fe72-4b1c-8afa-15681f3d2c34\") " pod="hostpath-provisioner/csi-hostpathplugin-w8pmp" Oct 14 09:59:16 crc kubenswrapper[4698]: I1014 09:59:16.359593 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/69fe44d1-6015-4929-b197-0ea5f0167131-machine-approver-tls\") pod \"machine-approver-56656f9798-79s46\" (UID: \"69fe44d1-6015-4929-b197-0ea5f0167131\") " 
pod="openshift-cluster-machine-approver/machine-approver-56656f9798-79s46" Oct 14 09:59:16 crc kubenswrapper[4698]: I1014 09:59:16.359625 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d746febc-7247-498c-86b1-8cb4640cbccc-serving-cert\") pod \"route-controller-manager-6576b87f9c-lbk6k\" (UID: \"d746febc-7247-498c-86b1-8cb4640cbccc\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-lbk6k" Oct 14 09:59:16 crc kubenswrapper[4698]: I1014 09:59:16.359659 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/d11979c8-404a-4ab4-9d27-814013edd944-cert\") pod \"ingress-canary-kvctw\" (UID: \"d11979c8-404a-4ab4-9d27-814013edd944\") " pod="openshift-ingress-canary/ingress-canary-kvctw" Oct 14 09:59:16 crc kubenswrapper[4698]: I1014 09:59:16.359747 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/87bb9fc3-f75a-48d6-a8e6-8f8d1aa7f084-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-m9mft\" (UID: \"87bb9fc3-f75a-48d6-a8e6-8f8d1aa7f084\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-m9mft" Oct 14 09:59:16 crc kubenswrapper[4698]: I1014 09:59:16.359855 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-26jjq\" (UniqueName: \"kubernetes.io/projected/6b1bbc6d-62ec-47f7-b3d0-027b4dc2e31f-kube-api-access-26jjq\") pod \"machine-config-server-c9jqn\" (UID: \"6b1bbc6d-62ec-47f7-b3d0-027b4dc2e31f\") " pod="openshift-machine-config-operator/machine-config-server-c9jqn" Oct 14 09:59:16 crc kubenswrapper[4698]: I1014 09:59:16.359916 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: 
\"kubernetes.io/configmap/67e52335-6348-488a-a36a-8971b953737b-client-ca\") pod \"controller-manager-879f6c89f-c6gg9\" (UID: \"67e52335-6348-488a-a36a-8971b953737b\") " pod="openshift-controller-manager/controller-manager-879f6c89f-c6gg9" Oct 14 09:59:16 crc kubenswrapper[4698]: I1014 09:59:16.359967 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/487b5c84-fe72-4b1c-8afa-15681f3d2c34-csi-data-dir\") pod \"csi-hostpathplugin-w8pmp\" (UID: \"487b5c84-fe72-4b1c-8afa-15681f3d2c34\") " pod="hostpath-provisioner/csi-hostpathplugin-w8pmp" Oct 14 09:59:16 crc kubenswrapper[4698]: I1014 09:59:16.360028 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7r7rb\" (UniqueName: \"kubernetes.io/projected/abe6a35d-8cd2-4749-b9cf-8d11f6169470-kube-api-access-7r7rb\") pod \"console-f9d7485db-f47kf\" (UID: \"abe6a35d-8cd2-4749-b9cf-8d11f6169470\") " pod="openshift-console/console-f9d7485db-f47kf" Oct 14 09:59:16 crc kubenswrapper[4698]: I1014 09:59:16.360112 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/5eed092c-2837-48df-8eb4-8759235349b6-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-kqg88\" (UID: \"5eed092c-2837-48df-8eb4-8759235349b6\") " pod="openshift-authentication/oauth-openshift-558db77b4-kqg88" Oct 14 09:59:16 crc kubenswrapper[4698]: I1014 09:59:16.360168 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/c6555f7d-6e37-41d2-8f98-b02aba5270ab-registry-tls\") pod \"image-registry-697d97f7c8-nthfk\" (UID: \"c6555f7d-6e37-41d2-8f98-b02aba5270ab\") " pod="openshift-image-registry/image-registry-697d97f7c8-nthfk" Oct 14 09:59:16 crc kubenswrapper[4698]: I1014 09:59:16.360190 4698 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/c6555f7d-6e37-41d2-8f98-b02aba5270ab-installation-pull-secrets\") pod \"image-registry-697d97f7c8-nthfk\" (UID: \"c6555f7d-6e37-41d2-8f98-b02aba5270ab\") " pod="openshift-image-registry/image-registry-697d97f7c8-nthfk" Oct 14 09:59:16 crc kubenswrapper[4698]: I1014 09:59:16.360220 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/1d5afbbe-a0bd-492f-8b7d-691208ef27db-available-featuregates\") pod \"openshift-config-operator-7777fb866f-jg26j\" (UID: \"1d5afbbe-a0bd-492f-8b7d-691208ef27db\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-jg26j" Oct 14 09:59:16 crc kubenswrapper[4698]: I1014 09:59:16.359470 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2bc82783-cf82-45df-94d5-60de2f1a0bdf-trusted-ca-bundle\") pod \"apiserver-76f77b778f-5x56z\" (UID: \"2bc82783-cf82-45df-94d5-60de2f1a0bdf\") " pod="openshift-apiserver/apiserver-76f77b778f-5x56z" Oct 14 09:59:16 crc kubenswrapper[4698]: I1014 09:59:16.360560 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jw22h\" (UniqueName: \"kubernetes.io/projected/c6555f7d-6e37-41d2-8f98-b02aba5270ab-kube-api-access-jw22h\") pod \"image-registry-697d97f7c8-nthfk\" (UID: \"c6555f7d-6e37-41d2-8f98-b02aba5270ab\") " pod="openshift-image-registry/image-registry-697d97f7c8-nthfk" Oct 14 09:59:16 crc kubenswrapper[4698]: I1014 09:59:16.360574 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/77041b5d-f53d-425c-b824-a61833af677c-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-wmxzf\" (UID: \"77041b5d-f53d-425c-b824-a61833af677c\") " 
pod="openshift-machine-api/machine-api-operator-5694c8668f-wmxzf" Oct 14 09:59:16 crc kubenswrapper[4698]: I1014 09:59:16.360626 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/5eed092c-2837-48df-8eb4-8759235349b6-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-kqg88\" (UID: \"5eed092c-2837-48df-8eb4-8759235349b6\") " pod="openshift-authentication/oauth-openshift-558db77b4-kqg88" Oct 14 09:59:16 crc kubenswrapper[4698]: I1014 09:59:16.360678 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/2bc82783-cf82-45df-94d5-60de2f1a0bdf-image-import-ca\") pod \"apiserver-76f77b778f-5x56z\" (UID: \"2bc82783-cf82-45df-94d5-60de2f1a0bdf\") " pod="openshift-apiserver/apiserver-76f77b778f-5x56z" Oct 14 09:59:16 crc kubenswrapper[4698]: I1014 09:59:16.360737 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4pvd2\" (UniqueName: \"kubernetes.io/projected/487b5c84-fe72-4b1c-8afa-15681f3d2c34-kube-api-access-4pvd2\") pod \"csi-hostpathplugin-w8pmp\" (UID: \"487b5c84-fe72-4b1c-8afa-15681f3d2c34\") " pod="hostpath-provisioner/csi-hostpathplugin-w8pmp" Oct 14 09:59:16 crc kubenswrapper[4698]: I1014 09:59:16.360827 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/68bf4f74-6117-4975-8c5f-b5b35b97c787-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-ttftp\" (UID: \"68bf4f74-6117-4975-8c5f-b5b35b97c787\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-ttftp" Oct 14 09:59:16 crc kubenswrapper[4698]: I1014 09:59:16.360883 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-76wkc\" (UniqueName: 
\"kubernetes.io/projected/72c6c34c-666c-41e3-8c36-5ac3578ff330-kube-api-access-76wkc\") pod \"service-ca-operator-777779d784-rtnz2\" (UID: \"72c6c34c-666c-41e3-8c36-5ac3578ff330\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-rtnz2" Oct 14 09:59:16 crc kubenswrapper[4698]: I1014 09:59:16.361018 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fzdgf\" (UniqueName: \"kubernetes.io/projected/69fe44d1-6015-4929-b197-0ea5f0167131-kube-api-access-fzdgf\") pod \"machine-approver-56656f9798-79s46\" (UID: \"69fe44d1-6015-4929-b197-0ea5f0167131\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-79s46" Oct 14 09:59:16 crc kubenswrapper[4698]: I1014 09:59:16.361089 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3f74db80-958b-4799-864f-792892d9903e-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-xnfjx\" (UID: \"3f74db80-958b-4799-864f-792892d9903e\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-xnfjx" Oct 14 09:59:16 crc kubenswrapper[4698]: I1014 09:59:16.361126 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/72c6c34c-666c-41e3-8c36-5ac3578ff330-serving-cert\") pod \"service-ca-operator-777779d784-rtnz2\" (UID: \"72c6c34c-666c-41e3-8c36-5ac3578ff330\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-rtnz2" Oct 14 09:59:16 crc kubenswrapper[4698]: I1014 09:59:16.361162 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/67e52335-6348-488a-a36a-8971b953737b-client-ca\") pod \"controller-manager-879f6c89f-c6gg9\" (UID: \"67e52335-6348-488a-a36a-8971b953737b\") " 
pod="openshift-controller-manager/controller-manager-879f6c89f-c6gg9" Oct 14 09:59:16 crc kubenswrapper[4698]: I1014 09:59:16.361045 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/c6555f7d-6e37-41d2-8f98-b02aba5270ab-registry-certificates\") pod \"image-registry-697d97f7c8-nthfk\" (UID: \"c6555f7d-6e37-41d2-8f98-b02aba5270ab\") " pod="openshift-image-registry/image-registry-697d97f7c8-nthfk" Oct 14 09:59:16 crc kubenswrapper[4698]: I1014 09:59:16.361431 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/1d5afbbe-a0bd-492f-8b7d-691208ef27db-available-featuregates\") pod \"openshift-config-operator-7777fb866f-jg26j\" (UID: \"1d5afbbe-a0bd-492f-8b7d-691208ef27db\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-jg26j" Oct 14 09:59:16 crc kubenswrapper[4698]: I1014 09:59:16.362501 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/67e52335-6348-488a-a36a-8971b953737b-config\") pod \"controller-manager-879f6c89f-c6gg9\" (UID: \"67e52335-6348-488a-a36a-8971b953737b\") " pod="openshift-controller-manager/controller-manager-879f6c89f-c6gg9" Oct 14 09:59:16 crc kubenswrapper[4698]: I1014 09:59:16.363245 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/2bc82783-cf82-45df-94d5-60de2f1a0bdf-image-import-ca\") pod \"apiserver-76f77b778f-5x56z\" (UID: \"2bc82783-cf82-45df-94d5-60de2f1a0bdf\") " pod="openshift-apiserver/apiserver-76f77b778f-5x56z" Oct 14 09:59:16 crc kubenswrapper[4698]: I1014 09:59:16.363579 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2bc82783-cf82-45df-94d5-60de2f1a0bdf-serving-cert\") pod \"apiserver-76f77b778f-5x56z\" (UID: 
\"2bc82783-cf82-45df-94d5-60de2f1a0bdf\") " pod="openshift-apiserver/apiserver-76f77b778f-5x56z" Oct 14 09:59:16 crc kubenswrapper[4698]: I1014 09:59:16.363590 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d746febc-7247-498c-86b1-8cb4640cbccc-serving-cert\") pod \"route-controller-manager-6576b87f9c-lbk6k\" (UID: \"d746febc-7247-498c-86b1-8cb4640cbccc\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-lbk6k" Oct 14 09:59:16 crc kubenswrapper[4698]: I1014 09:59:16.365348 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/c6555f7d-6e37-41d2-8f98-b02aba5270ab-registry-tls\") pod \"image-registry-697d97f7c8-nthfk\" (UID: \"c6555f7d-6e37-41d2-8f98-b02aba5270ab\") " pod="openshift-image-registry/image-registry-697d97f7c8-nthfk" Oct 14 09:59:16 crc kubenswrapper[4698]: I1014 09:59:16.365393 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/abe6a35d-8cd2-4749-b9cf-8d11f6169470-console-serving-cert\") pod \"console-f9d7485db-f47kf\" (UID: \"abe6a35d-8cd2-4749-b9cf-8d11f6169470\") " pod="openshift-console/console-f9d7485db-f47kf" Oct 14 09:59:16 crc kubenswrapper[4698]: I1014 09:59:16.365706 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/69fe44d1-6015-4929-b197-0ea5f0167131-machine-approver-tls\") pod \"machine-approver-56656f9798-79s46\" (UID: \"69fe44d1-6015-4929-b197-0ea5f0167131\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-79s46" Oct 14 09:59:16 crc kubenswrapper[4698]: I1014 09:59:16.366287 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/abe6a35d-8cd2-4749-b9cf-8d11f6169470-console-oauth-config\") pod 
\"console-f9d7485db-f47kf\" (UID: \"abe6a35d-8cd2-4749-b9cf-8d11f6169470\") " pod="openshift-console/console-f9d7485db-f47kf" Oct 14 09:59:16 crc kubenswrapper[4698]: I1014 09:59:16.366795 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/5eed092c-2837-48df-8eb4-8759235349b6-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-kqg88\" (UID: \"5eed092c-2837-48df-8eb4-8759235349b6\") " pod="openshift-authentication/oauth-openshift-558db77b4-kqg88" Oct 14 09:59:16 crc kubenswrapper[4698]: I1014 09:59:16.367894 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Oct 14 09:59:16 crc kubenswrapper[4698]: I1014 09:59:16.368031 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/5eed092c-2837-48df-8eb4-8759235349b6-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-kqg88\" (UID: \"5eed092c-2837-48df-8eb4-8759235349b6\") " pod="openshift-authentication/oauth-openshift-558db77b4-kqg88" Oct 14 09:59:16 crc kubenswrapper[4698]: I1014 09:59:16.368247 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0704231d-de7e-4317-80bd-9edbb5a0de5f-serving-cert\") pod \"console-operator-58897d9998-58d6k\" (UID: \"0704231d-de7e-4317-80bd-9edbb5a0de5f\") " pod="openshift-console-operator/console-operator-58897d9998-58d6k" Oct 14 09:59:16 crc kubenswrapper[4698]: I1014 09:59:16.388452 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Oct 14 09:59:16 crc kubenswrapper[4698]: I1014 09:59:16.407945 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Oct 14 09:59:16 crc 
kubenswrapper[4698]: I1014 09:59:16.435996 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Oct 14 09:59:16 crc kubenswrapper[4698]: I1014 09:59:16.448253 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Oct 14 09:59:16 crc kubenswrapper[4698]: I1014 09:59:16.462472 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 14 09:59:16 crc kubenswrapper[4698]: E1014 09:59:16.462671 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-14 09:59:16.962626477 +0000 UTC m=+138.659925933 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 14 09:59:16 crc kubenswrapper[4698]: I1014 09:59:16.462746 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/08880ea8-e0f2-4963-826f-9bee32ca8a64-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-c8pq6\" (UID: \"08880ea8-e0f2-4963-826f-9bee32ca8a64\") " pod="openshift-marketplace/marketplace-operator-79b997595-c8pq6" Oct 14 09:59:16 crc kubenswrapper[4698]: I1014 09:59:16.462944 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/818edbba-2627-4978-8a34-005689059b24-proxy-tls\") pod \"machine-config-operator-74547568cd-vdj26\" (UID: \"818edbba-2627-4978-8a34-005689059b24\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-vdj26" Oct 14 09:59:16 crc kubenswrapper[4698]: I1014 09:59:16.462992 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/487b5c84-fe72-4b1c-8afa-15681f3d2c34-mountpoint-dir\") pod \"csi-hostpathplugin-w8pmp\" (UID: \"487b5c84-fe72-4b1c-8afa-15681f3d2c34\") " pod="hostpath-provisioner/csi-hostpathplugin-w8pmp" Oct 14 09:59:16 crc kubenswrapper[4698]: I1014 09:59:16.463031 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/d11979c8-404a-4ab4-9d27-814013edd944-cert\") pod \"ingress-canary-kvctw\" (UID: 
\"d11979c8-404a-4ab4-9d27-814013edd944\") " pod="openshift-ingress-canary/ingress-canary-kvctw" Oct 14 09:59:16 crc kubenswrapper[4698]: I1014 09:59:16.463069 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/87bb9fc3-f75a-48d6-a8e6-8f8d1aa7f084-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-m9mft\" (UID: \"87bb9fc3-f75a-48d6-a8e6-8f8d1aa7f084\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-m9mft" Oct 14 09:59:16 crc kubenswrapper[4698]: I1014 09:59:16.463114 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-26jjq\" (UniqueName: \"kubernetes.io/projected/6b1bbc6d-62ec-47f7-b3d0-027b4dc2e31f-kube-api-access-26jjq\") pod \"machine-config-server-c9jqn\" (UID: \"6b1bbc6d-62ec-47f7-b3d0-027b4dc2e31f\") " pod="openshift-machine-config-operator/machine-config-server-c9jqn" Oct 14 09:59:16 crc kubenswrapper[4698]: I1014 09:59:16.463150 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/487b5c84-fe72-4b1c-8afa-15681f3d2c34-csi-data-dir\") pod \"csi-hostpathplugin-w8pmp\" (UID: \"487b5c84-fe72-4b1c-8afa-15681f3d2c34\") " pod="hostpath-provisioner/csi-hostpathplugin-w8pmp" Oct 14 09:59:16 crc kubenswrapper[4698]: I1014 09:59:16.463200 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4pvd2\" (UniqueName: \"kubernetes.io/projected/487b5c84-fe72-4b1c-8afa-15681f3d2c34-kube-api-access-4pvd2\") pod \"csi-hostpathplugin-w8pmp\" (UID: \"487b5c84-fe72-4b1c-8afa-15681f3d2c34\") " pod="hostpath-provisioner/csi-hostpathplugin-w8pmp" Oct 14 09:59:16 crc kubenswrapper[4698]: I1014 09:59:16.463263 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-76wkc\" (UniqueName: 
\"kubernetes.io/projected/72c6c34c-666c-41e3-8c36-5ac3578ff330-kube-api-access-76wkc\") pod \"service-ca-operator-777779d784-rtnz2\" (UID: \"72c6c34c-666c-41e3-8c36-5ac3578ff330\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-rtnz2" Oct 14 09:59:16 crc kubenswrapper[4698]: I1014 09:59:16.463193 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/487b5c84-fe72-4b1c-8afa-15681f3d2c34-mountpoint-dir\") pod \"csi-hostpathplugin-w8pmp\" (UID: \"487b5c84-fe72-4b1c-8afa-15681f3d2c34\") " pod="hostpath-provisioner/csi-hostpathplugin-w8pmp" Oct 14 09:59:16 crc kubenswrapper[4698]: I1014 09:59:16.463315 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3f74db80-958b-4799-864f-792892d9903e-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-xnfjx\" (UID: \"3f74db80-958b-4799-864f-792892d9903e\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-xnfjx" Oct 14 09:59:16 crc kubenswrapper[4698]: I1014 09:59:16.463348 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/72c6c34c-666c-41e3-8c36-5ac3578ff330-serving-cert\") pod \"service-ca-operator-777779d784-rtnz2\" (UID: \"72c6c34c-666c-41e3-8c36-5ac3578ff330\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-rtnz2" Oct 14 09:59:16 crc kubenswrapper[4698]: I1014 09:59:16.463348 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/487b5c84-fe72-4b1c-8afa-15681f3d2c34-csi-data-dir\") pod \"csi-hostpathplugin-w8pmp\" (UID: \"487b5c84-fe72-4b1c-8afa-15681f3d2c34\") " pod="hostpath-provisioner/csi-hostpathplugin-w8pmp" Oct 14 09:59:16 crc kubenswrapper[4698]: I1014 09:59:16.463730 4698 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3f74db80-958b-4799-864f-792892d9903e-config\") pod \"kube-controller-manager-operator-78b949d7b-xnfjx\" (UID: \"3f74db80-958b-4799-864f-792892d9903e\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-xnfjx" Oct 14 09:59:16 crc kubenswrapper[4698]: I1014 09:59:16.463808 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nthfk\" (UID: \"c6555f7d-6e37-41d2-8f98-b02aba5270ab\") " pod="openshift-image-registry/image-registry-697d97f7c8-nthfk" Oct 14 09:59:16 crc kubenswrapper[4698]: I1014 09:59:16.463858 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9b71de96-0379-46a7-af50-9b831b50268b-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-ptt5h\" (UID: \"9b71de96-0379-46a7-af50-9b831b50268b\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-ptt5h" Oct 14 09:59:16 crc kubenswrapper[4698]: I1014 09:59:16.463892 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/193ddda3-4410-402c-a198-33ff7ea3a740-service-ca-bundle\") pod \"router-default-5444994796-qprfx\" (UID: \"193ddda3-4410-402c-a198-33ff7ea3a740\") " pod="openshift-ingress/router-default-5444994796-qprfx" Oct 14 09:59:16 crc kubenswrapper[4698]: I1014 09:59:16.463926 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/4886c701-aad2-4ae4-bb99-0221728df342-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-t5x7p\" (UID: 
\"4886c701-aad2-4ae4-bb99-0221728df342\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-t5x7p" Oct 14 09:59:16 crc kubenswrapper[4698]: I1014 09:59:16.463951 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/9a5b4d2f-ca5c-4178-9e26-2cd2dccaa0b2-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-sgf88\" (UID: \"9a5b4d2f-ca5c-4178-9e26-2cd2dccaa0b2\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-sgf88" Oct 14 09:59:16 crc kubenswrapper[4698]: I1014 09:59:16.463973 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x9t95\" (UniqueName: \"kubernetes.io/projected/9a5b4d2f-ca5c-4178-9e26-2cd2dccaa0b2-kube-api-access-x9t95\") pod \"machine-config-controller-84d6567774-sgf88\" (UID: \"9a5b4d2f-ca5c-4178-9e26-2cd2dccaa0b2\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-sgf88" Oct 14 09:59:16 crc kubenswrapper[4698]: I1014 09:59:16.463992 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/4886c701-aad2-4ae4-bb99-0221728df342-etcd-client\") pod \"apiserver-7bbb656c7d-t5x7p\" (UID: \"4886c701-aad2-4ae4-bb99-0221728df342\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-t5x7p" Oct 14 09:59:16 crc kubenswrapper[4698]: I1014 09:59:16.464024 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/41d42df0-f10f-4f75-8481-adb4d51c341f-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-7p6xz\" (UID: \"41d42df0-f10f-4f75-8481-adb4d51c341f\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-7p6xz" Oct 14 09:59:16 crc kubenswrapper[4698]: I1014 09:59:16.464044 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"serving-cert\" (UniqueName: \"kubernetes.io/secret/4886c701-aad2-4ae4-bb99-0221728df342-serving-cert\") pod \"apiserver-7bbb656c7d-t5x7p\" (UID: \"4886c701-aad2-4ae4-bb99-0221728df342\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-t5x7p" Oct 14 09:59:16 crc kubenswrapper[4698]: I1014 09:59:16.464077 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/818edbba-2627-4978-8a34-005689059b24-images\") pod \"machine-config-operator-74547568cd-vdj26\" (UID: \"818edbba-2627-4978-8a34-005689059b24\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-vdj26" Oct 14 09:59:16 crc kubenswrapper[4698]: I1014 09:59:16.464097 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d7fv2\" (UniqueName: \"kubernetes.io/projected/d11979c8-404a-4ab4-9d27-814013edd944-kube-api-access-d7fv2\") pod \"ingress-canary-kvctw\" (UID: \"d11979c8-404a-4ab4-9d27-814013edd944\") " pod="openshift-ingress-canary/ingress-canary-kvctw" Oct 14 09:59:16 crc kubenswrapper[4698]: I1014 09:59:16.464129 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6tr29\" (UniqueName: \"kubernetes.io/projected/c5df8277-9e0d-4d53-ad20-20a07ceb9515-kube-api-access-6tr29\") pod \"olm-operator-6b444d44fb-vqx66\" (UID: \"c5df8277-9e0d-4d53-ad20-20a07ceb9515\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-vqx66" Oct 14 09:59:16 crc kubenswrapper[4698]: I1014 09:59:16.464148 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/818edbba-2627-4978-8a34-005689059b24-auth-proxy-config\") pod \"machine-config-operator-74547568cd-vdj26\" (UID: \"818edbba-2627-4978-8a34-005689059b24\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-vdj26" Oct 14 09:59:16 crc kubenswrapper[4698]: I1014 
09:59:16.464168 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/4886c701-aad2-4ae4-bb99-0221728df342-encryption-config\") pod \"apiserver-7bbb656c7d-t5x7p\" (UID: \"4886c701-aad2-4ae4-bb99-0221728df342\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-t5x7p" Oct 14 09:59:16 crc kubenswrapper[4698]: I1014 09:59:16.464190 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/9103615a-2665-4033-8114-259b0e56879f-etcd-service-ca\") pod \"etcd-operator-b45778765-sjgjr\" (UID: \"9103615a-2665-4033-8114-259b0e56879f\") " pod="openshift-etcd-operator/etcd-operator-b45778765-sjgjr" Oct 14 09:59:16 crc kubenswrapper[4698]: I1014 09:59:16.464209 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/87bb9fc3-f75a-48d6-a8e6-8f8d1aa7f084-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-m9mft\" (UID: \"87bb9fc3-f75a-48d6-a8e6-8f8d1aa7f084\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-m9mft" Oct 14 09:59:16 crc kubenswrapper[4698]: I1014 09:59:16.464231 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cpskz\" (UniqueName: \"kubernetes.io/projected/012a71ed-3195-49f8-bde1-f5455806e0f0-kube-api-access-cpskz\") pod \"service-ca-9c57cc56f-h8xtm\" (UID: \"012a71ed-3195-49f8-bde1-f5455806e0f0\") " pod="openshift-service-ca/service-ca-9c57cc56f-h8xtm" Oct 14 09:59:16 crc kubenswrapper[4698]: I1014 09:59:16.464266 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4h7jk\" (UniqueName: \"kubernetes.io/projected/818edbba-2627-4978-8a34-005689059b24-kube-api-access-4h7jk\") pod \"machine-config-operator-74547568cd-vdj26\" (UID: \"818edbba-2627-4978-8a34-005689059b24\") " 
pod="openshift-machine-config-operator/machine-config-operator-74547568cd-vdj26" Oct 14 09:59:16 crc kubenswrapper[4698]: I1014 09:59:16.464300 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8g579\" (UniqueName: \"kubernetes.io/projected/41f1faca-9336-4fb7-85a7-14541f2cf578-kube-api-access-8g579\") pod \"package-server-manager-789f6589d5-8g7h6\" (UID: \"41f1faca-9336-4fb7-85a7-14541f2cf578\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-8g7h6" Oct 14 09:59:16 crc kubenswrapper[4698]: I1014 09:59:16.464345 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8tbpw\" (UniqueName: \"kubernetes.io/projected/9b71de96-0379-46a7-af50-9b831b50268b-kube-api-access-8tbpw\") pod \"openshift-controller-manager-operator-756b6f6bc6-ptt5h\" (UID: \"9b71de96-0379-46a7-af50-9b831b50268b\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-ptt5h" Oct 14 09:59:16 crc kubenswrapper[4698]: I1014 09:59:16.464384 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/9103615a-2665-4033-8114-259b0e56879f-etcd-client\") pod \"etcd-operator-b45778765-sjgjr\" (UID: \"9103615a-2665-4033-8114-259b0e56879f\") " pod="openshift-etcd-operator/etcd-operator-b45778765-sjgjr" Oct 14 09:59:16 crc kubenswrapper[4698]: I1014 09:59:16.464415 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc1091b3-dceb-44a6-95b0-8048efec8032-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-cf62t\" (UID: \"bc1091b3-dceb-44a6-95b0-8048efec8032\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-cf62t" Oct 14 09:59:16 crc kubenswrapper[4698]: I1014 09:59:16.464443 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"audit-dir\" (UniqueName: \"kubernetes.io/host-path/4886c701-aad2-4ae4-bb99-0221728df342-audit-dir\") pod \"apiserver-7bbb656c7d-t5x7p\" (UID: \"4886c701-aad2-4ae4-bb99-0221728df342\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-t5x7p" Oct 14 09:59:16 crc kubenswrapper[4698]: I1014 09:59:16.464468 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/012a71ed-3195-49f8-bde1-f5455806e0f0-signing-key\") pod \"service-ca-9c57cc56f-h8xtm\" (UID: \"012a71ed-3195-49f8-bde1-f5455806e0f0\") " pod="openshift-service-ca/service-ca-9c57cc56f-h8xtm" Oct 14 09:59:16 crc kubenswrapper[4698]: I1014 09:59:16.464499 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/6b1bbc6d-62ec-47f7-b3d0-027b4dc2e31f-node-bootstrap-token\") pod \"machine-config-server-c9jqn\" (UID: \"6b1bbc6d-62ec-47f7-b3d0-027b4dc2e31f\") " pod="openshift-machine-config-operator/machine-config-server-c9jqn" Oct 14 09:59:16 crc kubenswrapper[4698]: I1014 09:59:16.464532 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/012a71ed-3195-49f8-bde1-f5455806e0f0-signing-cabundle\") pod \"service-ca-9c57cc56f-h8xtm\" (UID: \"012a71ed-3195-49f8-bde1-f5455806e0f0\") " pod="openshift-service-ca/service-ca-9c57cc56f-h8xtm" Oct 14 09:59:16 crc kubenswrapper[4698]: I1014 09:59:16.464593 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b8186130-0e09-455d-92c7-05e4c0af37de-srv-cert\") pod \"catalog-operator-68c6474976-kftst\" (UID: \"b8186130-0e09-455d-92c7-05e4c0af37de\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-kftst" Oct 14 09:59:16 crc kubenswrapper[4698]: I1014 09:59:16.464625 4698 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"kube-api-access-94g47\" (UniqueName: \"kubernetes.io/projected/aa6a1a6a-f7f3-402b-9568-89c9415eaaa4-kube-api-access-94g47\") pod \"collect-profiles-29340585-4sc7c\" (UID: \"aa6a1a6a-f7f3-402b-9568-89c9415eaaa4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29340585-4sc7c" Oct 14 09:59:16 crc kubenswrapper[4698]: I1014 09:59:16.464686 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mwlfw\" (UniqueName: \"kubernetes.io/projected/0bf22386-43f0-4d64-abb0-cdec28434502-kube-api-access-mwlfw\") pod \"downloads-7954f5f757-7fm6f\" (UID: \"0bf22386-43f0-4d64-abb0-cdec28434502\") " pod="openshift-console/downloads-7954f5f757-7fm6f" Oct 14 09:59:16 crc kubenswrapper[4698]: I1014 09:59:16.464715 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/193ddda3-4410-402c-a198-33ff7ea3a740-default-certificate\") pod \"router-default-5444994796-qprfx\" (UID: \"193ddda3-4410-402c-a198-33ff7ea3a740\") " pod="openshift-ingress/router-default-5444994796-qprfx" Oct 14 09:59:16 crc kubenswrapper[4698]: I1014 09:59:16.464760 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9103615a-2665-4033-8114-259b0e56879f-serving-cert\") pod \"etcd-operator-b45778765-sjgjr\" (UID: \"9103615a-2665-4033-8114-259b0e56879f\") " pod="openshift-etcd-operator/etcd-operator-b45778765-sjgjr" Oct 14 09:59:16 crc kubenswrapper[4698]: I1014 09:59:16.464816 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nc4f9\" (UniqueName: \"kubernetes.io/projected/bc1091b3-dceb-44a6-95b0-8048efec8032-kube-api-access-nc4f9\") pod \"openshift-apiserver-operator-796bbdcf4f-cf62t\" (UID: \"bc1091b3-dceb-44a6-95b0-8048efec8032\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-cf62t" Oct 14 09:59:16 crc 
kubenswrapper[4698]: I1014 09:59:16.464848 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/41f1faca-9336-4fb7-85a7-14541f2cf578-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-8g7h6\" (UID: \"41f1faca-9336-4fb7-85a7-14541f2cf578\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-8g7h6" Oct 14 09:59:16 crc kubenswrapper[4698]: I1014 09:59:16.464882 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bc1091b3-dceb-44a6-95b0-8048efec8032-config\") pod \"openshift-apiserver-operator-796bbdcf4f-cf62t\" (UID: \"bc1091b3-dceb-44a6-95b0-8048efec8032\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-cf62t" Oct 14 09:59:16 crc kubenswrapper[4698]: I1014 09:59:16.464902 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/22026c45-f849-4069-b5ad-4bc34d0ea6eb-metrics-tls\") pod \"dns-default-nt998\" (UID: \"22026c45-f849-4069-b5ad-4bc34d0ea6eb\") " pod="openshift-dns/dns-default-nt998" Oct 14 09:59:16 crc kubenswrapper[4698]: I1014 09:59:16.464921 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/487b5c84-fe72-4b1c-8afa-15681f3d2c34-socket-dir\") pod \"csi-hostpathplugin-w8pmp\" (UID: \"487b5c84-fe72-4b1c-8afa-15681f3d2c34\") " pod="hostpath-provisioner/csi-hostpathplugin-w8pmp" Oct 14 09:59:16 crc kubenswrapper[4698]: I1014 09:59:16.464942 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x86cx\" (UniqueName: \"kubernetes.io/projected/4bc0ae50-422f-4bd4-abba-008f5ca0467f-kube-api-access-x86cx\") pod \"packageserver-d55dfcdfc-s65qn\" (UID: \"4bc0ae50-422f-4bd4-abba-008f5ca0467f\") " 
pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-s65qn" Oct 14 09:59:16 crc kubenswrapper[4698]: I1014 09:59:16.464966 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2bn9f\" (UniqueName: \"kubernetes.io/projected/c1ec2959-9fc5-4b98-8f9c-c21fc57e14d7-kube-api-access-2bn9f\") pod \"control-plane-machine-set-operator-78cbb6b69f-5k2p9\" (UID: \"c1ec2959-9fc5-4b98-8f9c-c21fc57e14d7\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-5k2p9" Oct 14 09:59:16 crc kubenswrapper[4698]: I1014 09:59:16.464994 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/6b1bbc6d-62ec-47f7-b3d0-027b4dc2e31f-certs\") pod \"machine-config-server-c9jqn\" (UID: \"6b1bbc6d-62ec-47f7-b3d0-027b4dc2e31f\") " pod="openshift-machine-config-operator/machine-config-server-c9jqn" Oct 14 09:59:16 crc kubenswrapper[4698]: I1014 09:59:16.465030 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c6xk5\" (UniqueName: \"kubernetes.io/projected/a3f47cc4-719c-4e84-871b-ed52e9660cdb-kube-api-access-c6xk5\") pod \"migrator-59844c95c7-mddmt\" (UID: \"a3f47cc4-719c-4e84-871b-ed52e9660cdb\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-mddmt" Oct 14 09:59:16 crc kubenswrapper[4698]: I1014 09:59:16.465066 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z4r2m\" (UniqueName: \"kubernetes.io/projected/193ddda3-4410-402c-a198-33ff7ea3a740-kube-api-access-z4r2m\") pod \"router-default-5444994796-qprfx\" (UID: \"193ddda3-4410-402c-a198-33ff7ea3a740\") " pod="openshift-ingress/router-default-5444994796-qprfx" Oct 14 09:59:16 crc kubenswrapper[4698]: I1014 09:59:16.465096 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/3f74db80-958b-4799-864f-792892d9903e-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-xnfjx\" (UID: \"3f74db80-958b-4799-864f-792892d9903e\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-xnfjx" Oct 14 09:59:16 crc kubenswrapper[4698]: I1014 09:59:16.465135 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/08880ea8-e0f2-4963-826f-9bee32ca8a64-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-c8pq6\" (UID: \"08880ea8-e0f2-4963-826f-9bee32ca8a64\") " pod="openshift-marketplace/marketplace-operator-79b997595-c8pq6" Oct 14 09:59:16 crc kubenswrapper[4698]: I1014 09:59:16.465166 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gz9m8\" (UniqueName: \"kubernetes.io/projected/08880ea8-e0f2-4963-826f-9bee32ca8a64-kube-api-access-gz9m8\") pod \"marketplace-operator-79b997595-c8pq6\" (UID: \"08880ea8-e0f2-4963-826f-9bee32ca8a64\") " pod="openshift-marketplace/marketplace-operator-79b997595-c8pq6" Oct 14 09:59:16 crc kubenswrapper[4698]: I1014 09:59:16.465198 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/aa6a1a6a-f7f3-402b-9568-89c9415eaaa4-secret-volume\") pod \"collect-profiles-29340585-4sc7c\" (UID: \"aa6a1a6a-f7f3-402b-9568-89c9415eaaa4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29340585-4sc7c" Oct 14 09:59:16 crc kubenswrapper[4698]: I1014 09:59:16.465247 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/87bb9fc3-f75a-48d6-a8e6-8f8d1aa7f084-config\") pod \"kube-apiserver-operator-766d6c64bb-m9mft\" (UID: \"87bb9fc3-f75a-48d6-a8e6-8f8d1aa7f084\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-m9mft" Oct 14 09:59:16 
crc kubenswrapper[4698]: I1014 09:59:16.465292 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/22026c45-f849-4069-b5ad-4bc34d0ea6eb-config-volume\") pod \"dns-default-nt998\" (UID: \"22026c45-f849-4069-b5ad-4bc34d0ea6eb\") " pod="openshift-dns/dns-default-nt998" Oct 14 09:59:16 crc kubenswrapper[4698]: I1014 09:59:16.465348 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/c5df8277-9e0d-4d53-ad20-20a07ceb9515-srv-cert\") pod \"olm-operator-6b444d44fb-vqx66\" (UID: \"c5df8277-9e0d-4d53-ad20-20a07ceb9515\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-vqx66" Oct 14 09:59:16 crc kubenswrapper[4698]: I1014 09:59:16.465376 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/c5df8277-9e0d-4d53-ad20-20a07ceb9515-profile-collector-cert\") pod \"olm-operator-6b444d44fb-vqx66\" (UID: \"c5df8277-9e0d-4d53-ad20-20a07ceb9515\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-vqx66" Oct 14 09:59:16 crc kubenswrapper[4698]: I1014 09:59:16.465406 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/193ddda3-4410-402c-a198-33ff7ea3a740-stats-auth\") pod \"router-default-5444994796-qprfx\" (UID: \"193ddda3-4410-402c-a198-33ff7ea3a740\") " pod="openshift-ingress/router-default-5444994796-qprfx" Oct 14 09:59:16 crc kubenswrapper[4698]: I1014 09:59:16.465455 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/487b5c84-fe72-4b1c-8afa-15681f3d2c34-registration-dir\") pod \"csi-hostpathplugin-w8pmp\" (UID: \"487b5c84-fe72-4b1c-8afa-15681f3d2c34\") " pod="hostpath-provisioner/csi-hostpathplugin-w8pmp" Oct 14 09:59:16 crc 
kubenswrapper[4698]: I1014 09:59:16.465485 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/487b5c84-fe72-4b1c-8afa-15681f3d2c34-plugins-dir\") pod \"csi-hostpathplugin-w8pmp\" (UID: \"487b5c84-fe72-4b1c-8afa-15681f3d2c34\") " pod="hostpath-provisioner/csi-hostpathplugin-w8pmp" Oct 14 09:59:16 crc kubenswrapper[4698]: I1014 09:59:16.465519 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/4bc0ae50-422f-4bd4-abba-008f5ca0467f-tmpfs\") pod \"packageserver-d55dfcdfc-s65qn\" (UID: \"4bc0ae50-422f-4bd4-abba-008f5ca0467f\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-s65qn" Oct 14 09:59:16 crc kubenswrapper[4698]: I1014 09:59:16.465547 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/9a5b4d2f-ca5c-4178-9e26-2cd2dccaa0b2-proxy-tls\") pod \"machine-config-controller-84d6567774-sgf88\" (UID: \"9a5b4d2f-ca5c-4178-9e26-2cd2dccaa0b2\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-sgf88" Oct 14 09:59:16 crc kubenswrapper[4698]: I1014 09:59:16.465577 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4886c701-aad2-4ae4-bb99-0221728df342-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-t5x7p\" (UID: \"4886c701-aad2-4ae4-bb99-0221728df342\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-t5x7p" Oct 14 09:59:16 crc kubenswrapper[4698]: I1014 09:59:16.465607 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nwf98\" (UniqueName: \"kubernetes.io/projected/4886c701-aad2-4ae4-bb99-0221728df342-kube-api-access-nwf98\") pod \"apiserver-7bbb656c7d-t5x7p\" (UID: \"4886c701-aad2-4ae4-bb99-0221728df342\") " 
pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-t5x7p" Oct 14 09:59:16 crc kubenswrapper[4698]: I1014 09:59:16.465643 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6cfr5\" (UniqueName: \"kubernetes.io/projected/9103615a-2665-4033-8114-259b0e56879f-kube-api-access-6cfr5\") pod \"etcd-operator-b45778765-sjgjr\" (UID: \"9103615a-2665-4033-8114-259b0e56879f\") " pod="openshift-etcd-operator/etcd-operator-b45778765-sjgjr" Oct 14 09:59:16 crc kubenswrapper[4698]: I1014 09:59:16.465672 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/72c6c34c-666c-41e3-8c36-5ac3578ff330-config\") pod \"service-ca-operator-777779d784-rtnz2\" (UID: \"72c6c34c-666c-41e3-8c36-5ac3578ff330\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-rtnz2" Oct 14 09:59:16 crc kubenswrapper[4698]: I1014 09:59:16.465719 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/c1ec2959-9fc5-4b98-8f9c-c21fc57e14d7-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-5k2p9\" (UID: \"c1ec2959-9fc5-4b98-8f9c-c21fc57e14d7\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-5k2p9" Oct 14 09:59:16 crc kubenswrapper[4698]: I1014 09:59:16.465751 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sswhc\" (UniqueName: \"kubernetes.io/projected/b8186130-0e09-455d-92c7-05e4c0af37de-kube-api-access-sswhc\") pod \"catalog-operator-68c6474976-kftst\" (UID: \"b8186130-0e09-455d-92c7-05e4c0af37de\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-kftst" Oct 14 09:59:16 crc kubenswrapper[4698]: I1014 09:59:16.465809 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: 
\"kubernetes.io/configmap/aa6a1a6a-f7f3-402b-9568-89c9415eaaa4-config-volume\") pod \"collect-profiles-29340585-4sc7c\" (UID: \"aa6a1a6a-f7f3-402b-9568-89c9415eaaa4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29340585-4sc7c" Oct 14 09:59:16 crc kubenswrapper[4698]: I1014 09:59:16.465857 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/9103615a-2665-4033-8114-259b0e56879f-etcd-ca\") pod \"etcd-operator-b45778765-sjgjr\" (UID: \"9103615a-2665-4033-8114-259b0e56879f\") " pod="openshift-etcd-operator/etcd-operator-b45778765-sjgjr" Oct 14 09:59:16 crc kubenswrapper[4698]: I1014 09:59:16.465890 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/41d42df0-f10f-4f75-8481-adb4d51c341f-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-7p6xz\" (UID: \"41d42df0-f10f-4f75-8481-adb4d51c341f\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-7p6xz" Oct 14 09:59:16 crc kubenswrapper[4698]: I1014 09:59:16.465918 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/4bc0ae50-422f-4bd4-abba-008f5ca0467f-webhook-cert\") pod \"packageserver-d55dfcdfc-s65qn\" (UID: \"4bc0ae50-422f-4bd4-abba-008f5ca0467f\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-s65qn" Oct 14 09:59:16 crc kubenswrapper[4698]: I1014 09:59:16.465946 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/4886c701-aad2-4ae4-bb99-0221728df342-audit-policies\") pod \"apiserver-7bbb656c7d-t5x7p\" (UID: \"4886c701-aad2-4ae4-bb99-0221728df342\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-t5x7p" Oct 14 09:59:16 crc kubenswrapper[4698]: I1014 09:59:16.465982 4698 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b8186130-0e09-455d-92c7-05e4c0af37de-profile-collector-cert\") pod \"catalog-operator-68c6474976-kftst\" (UID: \"b8186130-0e09-455d-92c7-05e4c0af37de\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-kftst" Oct 14 09:59:16 crc kubenswrapper[4698]: I1014 09:59:16.466013 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/41d42df0-f10f-4f75-8481-adb4d51c341f-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-7p6xz\" (UID: \"41d42df0-f10f-4f75-8481-adb4d51c341f\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-7p6xz" Oct 14 09:59:16 crc kubenswrapper[4698]: I1014 09:59:16.466045 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/4bc0ae50-422f-4bd4-abba-008f5ca0467f-apiservice-cert\") pod \"packageserver-d55dfcdfc-s65qn\" (UID: \"4bc0ae50-422f-4bd4-abba-008f5ca0467f\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-s65qn" Oct 14 09:59:16 crc kubenswrapper[4698]: I1014 09:59:16.466077 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/193ddda3-4410-402c-a198-33ff7ea3a740-metrics-certs\") pod \"router-default-5444994796-qprfx\" (UID: \"193ddda3-4410-402c-a198-33ff7ea3a740\") " pod="openshift-ingress/router-default-5444994796-qprfx" Oct 14 09:59:16 crc kubenswrapper[4698]: I1014 09:59:16.466107 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zvm6r\" (UniqueName: \"kubernetes.io/projected/22026c45-f849-4069-b5ad-4bc34d0ea6eb-kube-api-access-zvm6r\") pod \"dns-default-nt998\" (UID: \"22026c45-f849-4069-b5ad-4bc34d0ea6eb\") " 
pod="openshift-dns/dns-default-nt998" Oct 14 09:59:16 crc kubenswrapper[4698]: I1014 09:59:16.466137 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9b71de96-0379-46a7-af50-9b831b50268b-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-ptt5h\" (UID: \"9b71de96-0379-46a7-af50-9b831b50268b\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-ptt5h" Oct 14 09:59:16 crc kubenswrapper[4698]: I1014 09:59:16.466169 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9103615a-2665-4033-8114-259b0e56879f-config\") pod \"etcd-operator-b45778765-sjgjr\" (UID: \"9103615a-2665-4033-8114-259b0e56879f\") " pod="openshift-etcd-operator/etcd-operator-b45778765-sjgjr" Oct 14 09:59:16 crc kubenswrapper[4698]: I1014 09:59:16.466912 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3f74db80-958b-4799-864f-792892d9903e-config\") pod \"kube-controller-manager-operator-78b949d7b-xnfjx\" (UID: \"3f74db80-958b-4799-864f-792892d9903e\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-xnfjx" Oct 14 09:59:16 crc kubenswrapper[4698]: E1014 09:59:16.467214 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-14 09:59:16.967189571 +0000 UTC m=+138.664489027 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nthfk" (UID: "c6555f7d-6e37-41d2-8f98-b02aba5270ab") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 14 09:59:16 crc kubenswrapper[4698]: I1014 09:59:16.467412 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/818edbba-2627-4978-8a34-005689059b24-images\") pod \"machine-config-operator-74547568cd-vdj26\" (UID: \"818edbba-2627-4978-8a34-005689059b24\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-vdj26" Oct 14 09:59:16 crc kubenswrapper[4698]: I1014 09:59:16.468113 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Oct 14 09:59:16 crc kubenswrapper[4698]: I1014 09:59:16.468138 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/818edbba-2627-4978-8a34-005689059b24-proxy-tls\") pod \"machine-config-operator-74547568cd-vdj26\" (UID: \"818edbba-2627-4978-8a34-005689059b24\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-vdj26" Oct 14 09:59:16 crc kubenswrapper[4698]: I1014 09:59:16.468183 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/72c6c34c-666c-41e3-8c36-5ac3578ff330-serving-cert\") pod \"service-ca-operator-777779d784-rtnz2\" (UID: \"72c6c34c-666c-41e3-8c36-5ac3578ff330\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-rtnz2" Oct 14 09:59:16 crc kubenswrapper[4698]: I1014 09:59:16.468191 4698 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4886c701-aad2-4ae4-bb99-0221728df342-serving-cert\") pod \"apiserver-7bbb656c7d-t5x7p\" (UID: \"4886c701-aad2-4ae4-bb99-0221728df342\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-t5x7p" Oct 14 09:59:16 crc kubenswrapper[4698]: I1014 09:59:16.468361 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/818edbba-2627-4978-8a34-005689059b24-auth-proxy-config\") pod \"machine-config-operator-74547568cd-vdj26\" (UID: \"818edbba-2627-4978-8a34-005689059b24\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-vdj26" Oct 14 09:59:16 crc kubenswrapper[4698]: I1014 09:59:16.468807 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/08880ea8-e0f2-4963-826f-9bee32ca8a64-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-c8pq6\" (UID: \"08880ea8-e0f2-4963-826f-9bee32ca8a64\") " pod="openshift-marketplace/marketplace-operator-79b997595-c8pq6" Oct 14 09:59:16 crc kubenswrapper[4698]: I1014 09:59:16.469102 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/193ddda3-4410-402c-a198-33ff7ea3a740-service-ca-bundle\") pod \"router-default-5444994796-qprfx\" (UID: \"193ddda3-4410-402c-a198-33ff7ea3a740\") " pod="openshift-ingress/router-default-5444994796-qprfx" Oct 14 09:59:16 crc kubenswrapper[4698]: I1014 09:59:16.469155 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/9a5b4d2f-ca5c-4178-9e26-2cd2dccaa0b2-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-sgf88\" (UID: \"9a5b4d2f-ca5c-4178-9e26-2cd2dccaa0b2\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-sgf88" Oct 14 09:59:16 crc 
kubenswrapper[4698]: I1014 09:59:16.469913 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9b71de96-0379-46a7-af50-9b831b50268b-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-ptt5h\" (UID: \"9b71de96-0379-46a7-af50-9b831b50268b\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-ptt5h" Oct 14 09:59:16 crc kubenswrapper[4698]: I1014 09:59:16.470093 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/487b5c84-fe72-4b1c-8afa-15681f3d2c34-plugins-dir\") pod \"csi-hostpathplugin-w8pmp\" (UID: \"487b5c84-fe72-4b1c-8afa-15681f3d2c34\") " pod="hostpath-provisioner/csi-hostpathplugin-w8pmp" Oct 14 09:59:16 crc kubenswrapper[4698]: I1014 09:59:16.470171 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/487b5c84-fe72-4b1c-8afa-15681f3d2c34-registration-dir\") pod \"csi-hostpathplugin-w8pmp\" (UID: \"487b5c84-fe72-4b1c-8afa-15681f3d2c34\") " pod="hostpath-provisioner/csi-hostpathplugin-w8pmp" Oct 14 09:59:16 crc kubenswrapper[4698]: I1014 09:59:16.470483 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/08880ea8-e0f2-4963-826f-9bee32ca8a64-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-c8pq6\" (UID: \"08880ea8-e0f2-4963-826f-9bee32ca8a64\") " pod="openshift-marketplace/marketplace-operator-79b997595-c8pq6" Oct 14 09:59:16 crc kubenswrapper[4698]: I1014 09:59:16.470929 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/87bb9fc3-f75a-48d6-a8e6-8f8d1aa7f084-config\") pod \"kube-apiserver-operator-766d6c64bb-m9mft\" (UID: \"87bb9fc3-f75a-48d6-a8e6-8f8d1aa7f084\") " 
pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-m9mft" Oct 14 09:59:16 crc kubenswrapper[4698]: I1014 09:59:16.472238 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/4886c701-aad2-4ae4-bb99-0221728df342-etcd-client\") pod \"apiserver-7bbb656c7d-t5x7p\" (UID: \"4886c701-aad2-4ae4-bb99-0221728df342\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-t5x7p" Oct 14 09:59:16 crc kubenswrapper[4698]: I1014 09:59:16.472289 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/41d42df0-f10f-4f75-8481-adb4d51c341f-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-7p6xz\" (UID: \"41d42df0-f10f-4f75-8481-adb4d51c341f\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-7p6xz" Oct 14 09:59:16 crc kubenswrapper[4698]: I1014 09:59:16.472394 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/c1ec2959-9fc5-4b98-8f9c-c21fc57e14d7-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-5k2p9\" (UID: \"c1ec2959-9fc5-4b98-8f9c-c21fc57e14d7\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-5k2p9" Oct 14 09:59:16 crc kubenswrapper[4698]: I1014 09:59:16.472473 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9b71de96-0379-46a7-af50-9b831b50268b-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-ptt5h\" (UID: \"9b71de96-0379-46a7-af50-9b831b50268b\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-ptt5h" Oct 14 09:59:16 crc kubenswrapper[4698]: I1014 09:59:16.472919 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/41d42df0-f10f-4f75-8481-adb4d51c341f-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-7p6xz\" (UID: \"41d42df0-f10f-4f75-8481-adb4d51c341f\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-7p6xz" Oct 14 09:59:16 crc kubenswrapper[4698]: I1014 09:59:16.472995 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/4bc0ae50-422f-4bd4-abba-008f5ca0467f-tmpfs\") pod \"packageserver-d55dfcdfc-s65qn\" (UID: \"4bc0ae50-422f-4bd4-abba-008f5ca0467f\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-s65qn" Oct 14 09:59:16 crc kubenswrapper[4698]: I1014 09:59:16.473837 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/72c6c34c-666c-41e3-8c36-5ac3578ff330-config\") pod \"service-ca-operator-777779d784-rtnz2\" (UID: \"72c6c34c-666c-41e3-8c36-5ac3578ff330\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-rtnz2" Oct 14 09:59:16 crc kubenswrapper[4698]: I1014 09:59:16.475176 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/487b5c84-fe72-4b1c-8afa-15681f3d2c34-socket-dir\") pod \"csi-hostpathplugin-w8pmp\" (UID: \"487b5c84-fe72-4b1c-8afa-15681f3d2c34\") " pod="hostpath-provisioner/csi-hostpathplugin-w8pmp" Oct 14 09:59:16 crc kubenswrapper[4698]: I1014 09:59:16.475171 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/aa6a1a6a-f7f3-402b-9568-89c9415eaaa4-secret-volume\") pod \"collect-profiles-29340585-4sc7c\" (UID: \"aa6a1a6a-f7f3-402b-9568-89c9415eaaa4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29340585-4sc7c" Oct 14 09:59:16 crc kubenswrapper[4698]: I1014 09:59:16.475305 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"metrics-certs\" (UniqueName: \"kubernetes.io/secret/193ddda3-4410-402c-a198-33ff7ea3a740-metrics-certs\") pod \"router-default-5444994796-qprfx\" (UID: \"193ddda3-4410-402c-a198-33ff7ea3a740\") " pod="openshift-ingress/router-default-5444994796-qprfx" Oct 14 09:59:16 crc kubenswrapper[4698]: I1014 09:59:16.476445 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bc1091b3-dceb-44a6-95b0-8048efec8032-config\") pod \"openshift-apiserver-operator-796bbdcf4f-cf62t\" (UID: \"bc1091b3-dceb-44a6-95b0-8048efec8032\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-cf62t" Oct 14 09:59:16 crc kubenswrapper[4698]: I1014 09:59:16.477258 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/4886c701-aad2-4ae4-bb99-0221728df342-encryption-config\") pod \"apiserver-7bbb656c7d-t5x7p\" (UID: \"4886c701-aad2-4ae4-bb99-0221728df342\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-t5x7p" Oct 14 09:59:16 crc kubenswrapper[4698]: I1014 09:59:16.477975 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/4886c701-aad2-4ae4-bb99-0221728df342-audit-dir\") pod \"apiserver-7bbb656c7d-t5x7p\" (UID: \"4886c701-aad2-4ae4-bb99-0221728df342\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-t5x7p" Oct 14 09:59:16 crc kubenswrapper[4698]: I1014 09:59:16.478062 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/c5df8277-9e0d-4d53-ad20-20a07ceb9515-srv-cert\") pod \"olm-operator-6b444d44fb-vqx66\" (UID: \"c5df8277-9e0d-4d53-ad20-20a07ceb9515\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-vqx66" Oct 14 09:59:16 crc kubenswrapper[4698]: I1014 09:59:16.478068 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/c5df8277-9e0d-4d53-ad20-20a07ceb9515-profile-collector-cert\") pod \"olm-operator-6b444d44fb-vqx66\" (UID: \"c5df8277-9e0d-4d53-ad20-20a07ceb9515\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-vqx66" Oct 14 09:59:16 crc kubenswrapper[4698]: I1014 09:59:16.479189 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/193ddda3-4410-402c-a198-33ff7ea3a740-default-certificate\") pod \"router-default-5444994796-qprfx\" (UID: \"193ddda3-4410-402c-a198-33ff7ea3a740\") " pod="openshift-ingress/router-default-5444994796-qprfx" Oct 14 09:59:16 crc kubenswrapper[4698]: I1014 09:59:16.480149 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/4bc0ae50-422f-4bd4-abba-008f5ca0467f-webhook-cert\") pod \"packageserver-d55dfcdfc-s65qn\" (UID: \"4bc0ae50-422f-4bd4-abba-008f5ca0467f\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-s65qn" Oct 14 09:59:16 crc kubenswrapper[4698]: I1014 09:59:16.480385 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/193ddda3-4410-402c-a198-33ff7ea3a740-stats-auth\") pod \"router-default-5444994796-qprfx\" (UID: \"193ddda3-4410-402c-a198-33ff7ea3a740\") " pod="openshift-ingress/router-default-5444994796-qprfx" Oct 14 09:59:16 crc kubenswrapper[4698]: I1014 09:59:16.481464 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/012a71ed-3195-49f8-bde1-f5455806e0f0-signing-key\") pod \"service-ca-9c57cc56f-h8xtm\" (UID: \"012a71ed-3195-49f8-bde1-f5455806e0f0\") " pod="openshift-service-ca/service-ca-9c57cc56f-h8xtm" Oct 14 09:59:16 crc kubenswrapper[4698]: I1014 09:59:16.482107 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/41f1faca-9336-4fb7-85a7-14541f2cf578-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-8g7h6\" (UID: \"41f1faca-9336-4fb7-85a7-14541f2cf578\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-8g7h6" Oct 14 09:59:16 crc kubenswrapper[4698]: I1014 09:59:16.482294 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc1091b3-dceb-44a6-95b0-8048efec8032-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-cf62t\" (UID: \"bc1091b3-dceb-44a6-95b0-8048efec8032\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-cf62t" Oct 14 09:59:16 crc kubenswrapper[4698]: I1014 09:59:16.482323 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b8186130-0e09-455d-92c7-05e4c0af37de-profile-collector-cert\") pod \"catalog-operator-68c6474976-kftst\" (UID: \"b8186130-0e09-455d-92c7-05e4c0af37de\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-kftst" Oct 14 09:59:16 crc kubenswrapper[4698]: I1014 09:59:16.482295 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b8186130-0e09-455d-92c7-05e4c0af37de-srv-cert\") pod \"catalog-operator-68c6474976-kftst\" (UID: \"b8186130-0e09-455d-92c7-05e4c0af37de\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-kftst" Oct 14 09:59:16 crc kubenswrapper[4698]: I1014 09:59:16.482978 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/9a5b4d2f-ca5c-4178-9e26-2cd2dccaa0b2-proxy-tls\") pod \"machine-config-controller-84d6567774-sgf88\" (UID: \"9a5b4d2f-ca5c-4178-9e26-2cd2dccaa0b2\") " 
pod="openshift-machine-config-operator/machine-config-controller-84d6567774-sgf88" Oct 14 09:59:16 crc kubenswrapper[4698]: I1014 09:59:16.483205 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/4bc0ae50-422f-4bd4-abba-008f5ca0467f-apiservice-cert\") pod \"packageserver-d55dfcdfc-s65qn\" (UID: \"4bc0ae50-422f-4bd4-abba-008f5ca0467f\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-s65qn" Oct 14 09:59:16 crc kubenswrapper[4698]: I1014 09:59:16.483545 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/012a71ed-3195-49f8-bde1-f5455806e0f0-signing-cabundle\") pod \"service-ca-9c57cc56f-h8xtm\" (UID: \"012a71ed-3195-49f8-bde1-f5455806e0f0\") " pod="openshift-service-ca/service-ca-9c57cc56f-h8xtm" Oct 14 09:59:16 crc kubenswrapper[4698]: I1014 09:59:16.484437 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/87bb9fc3-f75a-48d6-a8e6-8f8d1aa7f084-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-m9mft\" (UID: \"87bb9fc3-f75a-48d6-a8e6-8f8d1aa7f084\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-m9mft" Oct 14 09:59:16 crc kubenswrapper[4698]: I1014 09:59:16.488587 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Oct 14 09:59:16 crc kubenswrapper[4698]: I1014 09:59:16.488720 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3f74db80-958b-4799-864f-792892d9903e-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-xnfjx\" (UID: \"3f74db80-958b-4799-864f-792892d9903e\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-xnfjx" Oct 14 09:59:16 crc kubenswrapper[4698]: I1014 
09:59:16.508531 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Oct 14 09:59:16 crc kubenswrapper[4698]: I1014 09:59:16.519717 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9103615a-2665-4033-8114-259b0e56879f-serving-cert\") pod \"etcd-operator-b45778765-sjgjr\" (UID: \"9103615a-2665-4033-8114-259b0e56879f\") " pod="openshift-etcd-operator/etcd-operator-b45778765-sjgjr" Oct 14 09:59:16 crc kubenswrapper[4698]: I1014 09:59:16.527418 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Oct 14 09:59:16 crc kubenswrapper[4698]: I1014 09:59:16.538131 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/4886c701-aad2-4ae4-bb99-0221728df342-audit-policies\") pod \"apiserver-7bbb656c7d-t5x7p\" (UID: \"4886c701-aad2-4ae4-bb99-0221728df342\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-t5x7p" Oct 14 09:59:16 crc kubenswrapper[4698]: I1014 09:59:16.548097 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Oct 14 09:59:16 crc kubenswrapper[4698]: I1014 09:59:16.557836 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/9103615a-2665-4033-8114-259b0e56879f-etcd-client\") pod \"etcd-operator-b45778765-sjgjr\" (UID: \"9103615a-2665-4033-8114-259b0e56879f\") " pod="openshift-etcd-operator/etcd-operator-b45778765-sjgjr" Oct 14 09:59:16 crc kubenswrapper[4698]: I1014 09:59:16.566890 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" 
(UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 14 09:59:16 crc kubenswrapper[4698]: E1014 09:59:16.567106 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-14 09:59:17.067043976 +0000 UTC m=+138.764343402 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 14 09:59:16 crc kubenswrapper[4698]: I1014 09:59:16.567442 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nthfk\" (UID: \"c6555f7d-6e37-41d2-8f98-b02aba5270ab\") " pod="openshift-image-registry/image-registry-697d97f7c8-nthfk" Oct 14 09:59:16 crc kubenswrapper[4698]: E1014 09:59:16.568007 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-14 09:59:17.067969283 +0000 UTC m=+138.765268719 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nthfk" (UID: "c6555f7d-6e37-41d2-8f98-b02aba5270ab") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 14 09:59:16 crc kubenswrapper[4698]: I1014 09:59:16.568641 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Oct 14 09:59:16 crc kubenswrapper[4698]: I1014 09:59:16.577650 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/4886c701-aad2-4ae4-bb99-0221728df342-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-t5x7p\" (UID: \"4886c701-aad2-4ae4-bb99-0221728df342\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-t5x7p" Oct 14 09:59:16 crc kubenswrapper[4698]: I1014 09:59:16.587577 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Oct 14 09:59:16 crc kubenswrapper[4698]: I1014 09:59:16.593385 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4886c701-aad2-4ae4-bb99-0221728df342-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-t5x7p\" (UID: \"4886c701-aad2-4ae4-bb99-0221728df342\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-t5x7p" Oct 14 09:59:16 crc kubenswrapper[4698]: I1014 09:59:16.607831 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Oct 14 09:59:16 crc kubenswrapper[4698]: I1014 09:59:16.615647 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/9103615a-2665-4033-8114-259b0e56879f-config\") pod \"etcd-operator-b45778765-sjgjr\" (UID: \"9103615a-2665-4033-8114-259b0e56879f\") " pod="openshift-etcd-operator/etcd-operator-b45778765-sjgjr" Oct 14 09:59:16 crc kubenswrapper[4698]: I1014 09:59:16.627660 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Oct 14 09:59:16 crc kubenswrapper[4698]: I1014 09:59:16.648046 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Oct 14 09:59:16 crc kubenswrapper[4698]: I1014 09:59:16.651344 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/9103615a-2665-4033-8114-259b0e56879f-etcd-ca\") pod \"etcd-operator-b45778765-sjgjr\" (UID: \"9103615a-2665-4033-8114-259b0e56879f\") " pod="openshift-etcd-operator/etcd-operator-b45778765-sjgjr" Oct 14 09:59:16 crc kubenswrapper[4698]: I1014 09:59:16.667261 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Oct 14 09:59:16 crc kubenswrapper[4698]: I1014 09:59:16.668563 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 14 09:59:16 crc kubenswrapper[4698]: E1014 09:59:16.668808 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-14 09:59:17.168748136 +0000 UTC m=+138.866047582 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 14 09:59:16 crc kubenswrapper[4698]: I1014 09:59:16.669344 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nthfk\" (UID: \"c6555f7d-6e37-41d2-8f98-b02aba5270ab\") " pod="openshift-image-registry/image-registry-697d97f7c8-nthfk" Oct 14 09:59:16 crc kubenswrapper[4698]: E1014 09:59:16.669731 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-14 09:59:17.169715234 +0000 UTC m=+138.867014690 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nthfk" (UID: "c6555f7d-6e37-41d2-8f98-b02aba5270ab") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 14 09:59:16 crc kubenswrapper[4698]: I1014 09:59:16.675241 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/9103615a-2665-4033-8114-259b0e56879f-etcd-service-ca\") pod \"etcd-operator-b45778765-sjgjr\" (UID: \"9103615a-2665-4033-8114-259b0e56879f\") " pod="openshift-etcd-operator/etcd-operator-b45778765-sjgjr" Oct 14 09:59:16 crc kubenswrapper[4698]: I1014 09:59:16.687344 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Oct 14 09:59:16 crc kubenswrapper[4698]: I1014 09:59:16.708080 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Oct 14 09:59:16 crc kubenswrapper[4698]: I1014 09:59:16.727292 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Oct 14 09:59:16 crc kubenswrapper[4698]: I1014 09:59:16.732222 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/aa6a1a6a-f7f3-402b-9568-89c9415eaaa4-config-volume\") pod \"collect-profiles-29340585-4sc7c\" (UID: \"aa6a1a6a-f7f3-402b-9568-89c9415eaaa4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29340585-4sc7c" Oct 14 09:59:16 crc kubenswrapper[4698]: I1014 09:59:16.770717 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 14 09:59:16 crc kubenswrapper[4698]: E1014 09:59:16.770940 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-14 09:59:17.270894188 +0000 UTC m=+138.968193644 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 14 09:59:16 crc kubenswrapper[4698]: I1014 09:59:16.772544 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nthfk\" (UID: \"c6555f7d-6e37-41d2-8f98-b02aba5270ab\") " pod="openshift-image-registry/image-registry-697d97f7c8-nthfk" Oct 14 09:59:16 crc kubenswrapper[4698]: I1014 09:59:16.773143 4698 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Oct 14 09:59:16 crc kubenswrapper[4698]: E1014 09:59:16.774743 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2025-10-14 09:59:17.27472573 +0000 UTC m=+138.972025156 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nthfk" (UID: "c6555f7d-6e37-41d2-8f98-b02aba5270ab") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 14 09:59:16 crc kubenswrapper[4698]: I1014 09:59:16.778471 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gzph5\" (UniqueName: \"kubernetes.io/projected/10f74f9f-3784-4bb5-8fcf-acc6d625f363-kube-api-access-gzph5\") pod \"cluster-samples-operator-665b6dd947-p266j\" (UID: \"10f74f9f-3784-4bb5-8fcf-acc6d625f363\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-p266j" Oct 14 09:59:16 crc kubenswrapper[4698]: I1014 09:59:16.788290 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Oct 14 09:59:16 crc kubenswrapper[4698]: I1014 09:59:16.808410 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Oct 14 09:59:16 crc kubenswrapper[4698]: I1014 09:59:16.827827 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Oct 14 09:59:16 crc kubenswrapper[4698]: I1014 09:59:16.841981 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/6b1bbc6d-62ec-47f7-b3d0-027b4dc2e31f-certs\") pod \"machine-config-server-c9jqn\" (UID: \"6b1bbc6d-62ec-47f7-b3d0-027b4dc2e31f\") " pod="openshift-machine-config-operator/machine-config-server-c9jqn" Oct 14 09:59:16 crc kubenswrapper[4698]: I1014 
09:59:16.847693 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Oct 14 09:59:16 crc kubenswrapper[4698]: I1014 09:59:16.868757 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Oct 14 09:59:16 crc kubenswrapper[4698]: I1014 09:59:16.875960 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 14 09:59:16 crc kubenswrapper[4698]: E1014 09:59:16.876118 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-14 09:59:17.37608241 +0000 UTC m=+139.073381866 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 14 09:59:16 crc kubenswrapper[4698]: I1014 09:59:16.877289 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nthfk\" (UID: \"c6555f7d-6e37-41d2-8f98-b02aba5270ab\") " pod="openshift-image-registry/image-registry-697d97f7c8-nthfk" Oct 14 09:59:16 crc kubenswrapper[4698]: E1014 09:59:16.877830 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-14 09:59:17.37781008 +0000 UTC m=+139.075109526 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nthfk" (UID: "c6555f7d-6e37-41d2-8f98-b02aba5270ab") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 14 09:59:16 crc kubenswrapper[4698]: I1014 09:59:16.882225 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/6b1bbc6d-62ec-47f7-b3d0-027b4dc2e31f-node-bootstrap-token\") pod \"machine-config-server-c9jqn\" (UID: \"6b1bbc6d-62ec-47f7-b3d0-027b4dc2e31f\") " pod="openshift-machine-config-operator/machine-config-server-c9jqn" Oct 14 09:59:16 crc kubenswrapper[4698]: I1014 09:59:16.888379 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Oct 14 09:59:16 crc kubenswrapper[4698]: I1014 09:59:16.907488 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Oct 14 09:59:16 crc kubenswrapper[4698]: I1014 09:59:16.920223 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/22026c45-f849-4069-b5ad-4bc34d0ea6eb-metrics-tls\") pod \"dns-default-nt998\" (UID: \"22026c45-f849-4069-b5ad-4bc34d0ea6eb\") " pod="openshift-dns/dns-default-nt998" Oct 14 09:59:16 crc kubenswrapper[4698]: I1014 09:59:16.927976 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Oct 14 09:59:16 crc kubenswrapper[4698]: I1014 09:59:16.933590 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/22026c45-f849-4069-b5ad-4bc34d0ea6eb-config-volume\") pod \"dns-default-nt998\" 
(UID: \"22026c45-f849-4069-b5ad-4bc34d0ea6eb\") " pod="openshift-dns/dns-default-nt998" Oct 14 09:59:16 crc kubenswrapper[4698]: I1014 09:59:16.948383 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Oct 14 09:59:16 crc kubenswrapper[4698]: I1014 09:59:16.958406 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/d11979c8-404a-4ab4-9d27-814013edd944-cert\") pod \"ingress-canary-kvctw\" (UID: \"d11979c8-404a-4ab4-9d27-814013edd944\") " pod="openshift-ingress-canary/ingress-canary-kvctw" Oct 14 09:59:16 crc kubenswrapper[4698]: I1014 09:59:16.967977 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Oct 14 09:59:16 crc kubenswrapper[4698]: I1014 09:59:16.978704 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 14 09:59:16 crc kubenswrapper[4698]: E1014 09:59:16.979158 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-14 09:59:17.479117668 +0000 UTC m=+139.176417124 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 14 09:59:16 crc kubenswrapper[4698]: I1014 09:59:16.979819 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nthfk\" (UID: \"c6555f7d-6e37-41d2-8f98-b02aba5270ab\") " pod="openshift-image-registry/image-registry-697d97f7c8-nthfk" Oct 14 09:59:16 crc kubenswrapper[4698]: E1014 09:59:16.980267 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-14 09:59:17.480247361 +0000 UTC m=+139.177546807 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nthfk" (UID: "c6555f7d-6e37-41d2-8f98-b02aba5270ab") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 14 09:59:16 crc kubenswrapper[4698]: I1014 09:59:16.987824 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Oct 14 09:59:17 crc kubenswrapper[4698]: I1014 09:59:17.007555 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Oct 14 09:59:17 crc kubenswrapper[4698]: I1014 09:59:17.071006 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-p266j" Oct 14 09:59:17 crc kubenswrapper[4698]: I1014 09:59:17.075941 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2jnv2\" (UniqueName: \"kubernetes.io/projected/14729fe2-c0c5-49b8-9766-b35a97d66e8d-kube-api-access-2jnv2\") pod \"dns-operator-744455d44c-mlnpq\" (UID: \"14729fe2-c0c5-49b8-9766-b35a97d66e8d\") " pod="openshift-dns-operator/dns-operator-744455d44c-mlnpq" Oct 14 09:59:17 crc kubenswrapper[4698]: I1014 09:59:17.081840 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 14 09:59:17 crc kubenswrapper[4698]: E1014 09:59:17.082169 4698 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-14 09:59:17.582128025 +0000 UTC m=+139.279427471 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 14 09:59:17 crc kubenswrapper[4698]: I1014 09:59:17.082429 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nthfk\" (UID: \"c6555f7d-6e37-41d2-8f98-b02aba5270ab\") " pod="openshift-image-registry/image-registry-697d97f7c8-nthfk" Oct 14 09:59:17 crc kubenswrapper[4698]: E1014 09:59:17.082824 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-14 09:59:17.582808916 +0000 UTC m=+139.280108342 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nthfk" (UID: "c6555f7d-6e37-41d2-8f98-b02aba5270ab") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 14 09:59:17 crc kubenswrapper[4698]: I1014 09:59:17.100336 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-njkr7\" (UniqueName: \"kubernetes.io/projected/77041b5d-f53d-425c-b824-a61833af677c-kube-api-access-njkr7\") pod \"machine-api-operator-5694c8668f-wmxzf\" (UID: \"77041b5d-f53d-425c-b824-a61833af677c\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-wmxzf" Oct 14 09:59:17 crc kubenswrapper[4698]: I1014 09:59:17.109969 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-wmxzf" Oct 14 09:59:17 crc kubenswrapper[4698]: I1014 09:59:17.114808 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lqxvq\" (UniqueName: \"kubernetes.io/projected/67e52335-6348-488a-a36a-8971b953737b-kube-api-access-lqxvq\") pod \"controller-manager-879f6c89f-c6gg9\" (UID: \"67e52335-6348-488a-a36a-8971b953737b\") " pod="openshift-controller-manager/controller-manager-879f6c89f-c6gg9" Oct 14 09:59:17 crc kubenswrapper[4698]: I1014 09:59:17.135181 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wdgxj\" (UniqueName: \"kubernetes.io/projected/68bf4f74-6117-4975-8c5f-b5b35b97c787-kube-api-access-wdgxj\") pod \"cluster-image-registry-operator-dc59b4c8b-ttftp\" (UID: \"68bf4f74-6117-4975-8c5f-b5b35b97c787\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-ttftp" Oct 14 09:59:17 crc 
kubenswrapper[4698]: I1014 09:59:17.154471 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d7wmh\" (UniqueName: \"kubernetes.io/projected/d671e120-cf7e-4363-b023-dc46b51ea073-kube-api-access-d7wmh\") pod \"kube-storage-version-migrator-operator-b67b599dd-24kzx\" (UID: \"d671e120-cf7e-4363-b023-dc46b51ea073\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-24kzx" Oct 14 09:59:17 crc kubenswrapper[4698]: I1014 09:59:17.174530 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/33a325fe-116c-49f3-bbfc-0ea7c688e3df-bound-sa-token\") pod \"ingress-operator-5b745b69d9-svzm6\" (UID: \"33a325fe-116c-49f3-bbfc-0ea7c688e3df\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-svzm6" Oct 14 09:59:17 crc kubenswrapper[4698]: I1014 09:59:17.184618 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 14 09:59:17 crc kubenswrapper[4698]: E1014 09:59:17.184858 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-14 09:59:17.684823264 +0000 UTC m=+139.382122710 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 14 09:59:17 crc kubenswrapper[4698]: I1014 09:59:17.185122 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nthfk\" (UID: \"c6555f7d-6e37-41d2-8f98-b02aba5270ab\") " pod="openshift-image-registry/image-registry-697d97f7c8-nthfk" Oct 14 09:59:17 crc kubenswrapper[4698]: E1014 09:59:17.185549 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-14 09:59:17.685533485 +0000 UTC m=+139.382832931 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nthfk" (UID: "c6555f7d-6e37-41d2-8f98-b02aba5270ab") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 14 09:59:17 crc kubenswrapper[4698]: I1014 09:59:17.196621 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g55l4\" (UniqueName: \"kubernetes.io/projected/b23ade35-ff68-4366-8b6a-9e24fcd4e0eb-kube-api-access-g55l4\") pod \"multus-admission-controller-857f4d67dd-7mmrz\" (UID: \"b23ade35-ff68-4366-8b6a-9e24fcd4e0eb\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-7mmrz" Oct 14 09:59:17 crc kubenswrapper[4698]: I1014 09:59:17.216179 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-c6gg9" Oct 14 09:59:17 crc kubenswrapper[4698]: I1014 09:59:17.219385 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tthwp\" (UniqueName: \"kubernetes.io/projected/5eed092c-2837-48df-8eb4-8759235349b6-kube-api-access-tthwp\") pod \"oauth-openshift-558db77b4-kqg88\" (UID: \"5eed092c-2837-48df-8eb4-8759235349b6\") " pod="openshift-authentication/oauth-openshift-558db77b4-kqg88" Oct 14 09:59:17 crc kubenswrapper[4698]: I1014 09:59:17.247719 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-24kzx" Oct 14 09:59:17 crc kubenswrapper[4698]: I1014 09:59:17.252033 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rrmtx\" (UniqueName: \"kubernetes.io/projected/d746febc-7247-498c-86b1-8cb4640cbccc-kube-api-access-rrmtx\") pod \"route-controller-manager-6576b87f9c-lbk6k\" (UID: \"d746febc-7247-498c-86b1-8cb4640cbccc\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-lbk6k" Oct 14 09:59:17 crc kubenswrapper[4698]: I1014 09:59:17.257665 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4tvpt\" (UniqueName: \"kubernetes.io/projected/1d5afbbe-a0bd-492f-8b7d-691208ef27db-kube-api-access-4tvpt\") pod \"openshift-config-operator-7777fb866f-jg26j\" (UID: \"1d5afbbe-a0bd-492f-8b7d-691208ef27db\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-jg26j" Oct 14 09:59:17 crc kubenswrapper[4698]: I1014 09:59:17.295925 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 14 09:59:17 crc kubenswrapper[4698]: E1014 09:59:17.297135 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-14 09:59:17.797111545 +0000 UTC m=+139.494410971 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 14 09:59:17 crc kubenswrapper[4698]: I1014 09:59:17.297227 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nthfk\" (UID: \"c6555f7d-6e37-41d2-8f98-b02aba5270ab\") " pod="openshift-image-registry/image-registry-697d97f7c8-nthfk" Oct 14 09:59:17 crc kubenswrapper[4698]: I1014 09:59:17.297329 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-mlnpq" Oct 14 09:59:17 crc kubenswrapper[4698]: I1014 09:59:17.297388 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-kqg88" Oct 14 09:59:17 crc kubenswrapper[4698]: E1014 09:59:17.297554 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-14 09:59:17.797544978 +0000 UTC m=+139.494844404 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nthfk" (UID: "c6555f7d-6e37-41d2-8f98-b02aba5270ab") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 14 09:59:17 crc kubenswrapper[4698]: I1014 09:59:17.304881 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-lbk6k" Oct 14 09:59:17 crc kubenswrapper[4698]: I1014 09:59:17.311733 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-7mmrz" Oct 14 09:59:17 crc kubenswrapper[4698]: I1014 09:59:17.315984 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/c6555f7d-6e37-41d2-8f98-b02aba5270ab-bound-sa-token\") pod \"image-registry-697d97f7c8-nthfk\" (UID: \"c6555f7d-6e37-41d2-8f98-b02aba5270ab\") " pod="openshift-image-registry/image-registry-697d97f7c8-nthfk" Oct 14 09:59:17 crc kubenswrapper[4698]: I1014 09:59:17.316427 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zhzrh\" (UniqueName: \"kubernetes.io/projected/2bc82783-cf82-45df-94d5-60de2f1a0bdf-kube-api-access-zhzrh\") pod \"apiserver-76f77b778f-5x56z\" (UID: \"2bc82783-cf82-45df-94d5-60de2f1a0bdf\") " pod="openshift-apiserver/apiserver-76f77b778f-5x56z" Oct 14 09:59:17 crc kubenswrapper[4698]: I1014 09:59:17.317060 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-btfk4\" (UniqueName: \"kubernetes.io/projected/1b225a95-8db7-45c2-ad7c-b4bc80b6e875-kube-api-access-btfk4\") pod \"authentication-operator-69f744f599-hshkc\" 
(UID: \"1b225a95-8db7-45c2-ad7c-b4bc80b6e875\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-hshkc" Oct 14 09:59:17 crc kubenswrapper[4698]: I1014 09:59:17.329603 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qrk26\" (UniqueName: \"kubernetes.io/projected/33a325fe-116c-49f3-bbfc-0ea7c688e3df-kube-api-access-qrk26\") pod \"ingress-operator-5b745b69d9-svzm6\" (UID: \"33a325fe-116c-49f3-bbfc-0ea7c688e3df\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-svzm6" Oct 14 09:59:17 crc kubenswrapper[4698]: I1014 09:59:17.345960 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t266x\" (UniqueName: \"kubernetes.io/projected/0704231d-de7e-4317-80bd-9edbb5a0de5f-kube-api-access-t266x\") pod \"console-operator-58897d9998-58d6k\" (UID: \"0704231d-de7e-4317-80bd-9edbb5a0de5f\") " pod="openshift-console-operator/console-operator-58897d9998-58d6k" Oct 14 09:59:17 crc kubenswrapper[4698]: I1014 09:59:17.363219 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-jg26j" Oct 14 09:59:17 crc kubenswrapper[4698]: I1014 09:59:17.366253 4698 request.go:700] Waited for 1.005143322s due to client-side throttling, not priority and fairness, request: POST:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/serviceaccounts/cluster-image-registry-operator/token Oct 14 09:59:17 crc kubenswrapper[4698]: I1014 09:59:17.366944 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jw22h\" (UniqueName: \"kubernetes.io/projected/c6555f7d-6e37-41d2-8f98-b02aba5270ab-kube-api-access-jw22h\") pod \"image-registry-697d97f7c8-nthfk\" (UID: \"c6555f7d-6e37-41d2-8f98-b02aba5270ab\") " pod="openshift-image-registry/image-registry-697d97f7c8-nthfk" Oct 14 09:59:17 crc kubenswrapper[4698]: I1014 09:59:17.388378 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-58d6k" Oct 14 09:59:17 crc kubenswrapper[4698]: I1014 09:59:17.397874 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 14 09:59:17 crc kubenswrapper[4698]: E1014 09:59:17.398076 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-14 09:59:17.898045871 +0000 UTC m=+139.595345287 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 14 09:59:17 crc kubenswrapper[4698]: I1014 09:59:17.398157 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nthfk\" (UID: \"c6555f7d-6e37-41d2-8f98-b02aba5270ab\") " pod="openshift-image-registry/image-registry-697d97f7c8-nthfk" Oct 14 09:59:17 crc kubenswrapper[4698]: E1014 09:59:17.398595 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-14 09:59:17.898585757 +0000 UTC m=+139.595885233 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nthfk" (UID: "c6555f7d-6e37-41d2-8f98-b02aba5270ab") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 14 09:59:17 crc kubenswrapper[4698]: I1014 09:59:17.409695 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7r7rb\" (UniqueName: \"kubernetes.io/projected/abe6a35d-8cd2-4749-b9cf-8d11f6169470-kube-api-access-7r7rb\") pod \"console-f9d7485db-f47kf\" (UID: \"abe6a35d-8cd2-4749-b9cf-8d11f6169470\") " pod="openshift-console/console-f9d7485db-f47kf" Oct 14 09:59:17 crc kubenswrapper[4698]: I1014 09:59:17.425419 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/68bf4f74-6117-4975-8c5f-b5b35b97c787-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-ttftp\" (UID: \"68bf4f74-6117-4975-8c5f-b5b35b97c787\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-ttftp" Oct 14 09:59:17 crc kubenswrapper[4698]: I1014 09:59:17.434537 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fzdgf\" (UniqueName: \"kubernetes.io/projected/69fe44d1-6015-4929-b197-0ea5f0167131-kube-api-access-fzdgf\") pod \"machine-approver-56656f9798-79s46\" (UID: \"69fe44d1-6015-4929-b197-0ea5f0167131\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-79s46" Oct 14 09:59:17 crc kubenswrapper[4698]: I1014 09:59:17.458918 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-f47kf" Oct 14 09:59:17 crc kubenswrapper[4698]: I1014 09:59:17.460973 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-wmxzf"] Oct 14 09:59:17 crc kubenswrapper[4698]: I1014 09:59:17.461485 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-26jjq\" (UniqueName: \"kubernetes.io/projected/6b1bbc6d-62ec-47f7-b3d0-027b4dc2e31f-kube-api-access-26jjq\") pod \"machine-config-server-c9jqn\" (UID: \"6b1bbc6d-62ec-47f7-b3d0-027b4dc2e31f\") " pod="openshift-machine-config-operator/machine-config-server-c9jqn" Oct 14 09:59:17 crc kubenswrapper[4698]: I1014 09:59:17.462159 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/87bb9fc3-f75a-48d6-a8e6-8f8d1aa7f084-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-m9mft\" (UID: \"87bb9fc3-f75a-48d6-a8e6-8f8d1aa7f084\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-m9mft" Oct 14 09:59:17 crc kubenswrapper[4698]: I1014 09:59:17.463372 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-ttftp" Oct 14 09:59:17 crc kubenswrapper[4698]: I1014 09:59:17.480144 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-p266j"] Oct 14 09:59:17 crc kubenswrapper[4698]: I1014 09:59:17.483488 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4pvd2\" (UniqueName: \"kubernetes.io/projected/487b5c84-fe72-4b1c-8afa-15681f3d2c34-kube-api-access-4pvd2\") pod \"csi-hostpathplugin-w8pmp\" (UID: \"487b5c84-fe72-4b1c-8afa-15681f3d2c34\") " pod="hostpath-provisioner/csi-hostpathplugin-w8pmp" Oct 14 09:59:17 crc kubenswrapper[4698]: I1014 09:59:17.494253 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-w8pmp" Oct 14 09:59:17 crc kubenswrapper[4698]: I1014 09:59:17.499132 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 14 09:59:17 crc kubenswrapper[4698]: E1014 09:59:17.499645 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-14 09:59:17.999612836 +0000 UTC m=+139.696912252 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 14 09:59:17 crc kubenswrapper[4698]: I1014 09:59:17.500971 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-76wkc\" (UniqueName: \"kubernetes.io/projected/72c6c34c-666c-41e3-8c36-5ac3578ff330-kube-api-access-76wkc\") pod \"service-ca-operator-777779d784-rtnz2\" (UID: \"72c6c34c-666c-41e3-8c36-5ac3578ff330\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-rtnz2" Oct 14 09:59:17 crc kubenswrapper[4698]: I1014 09:59:17.502336 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-c9jqn" Oct 14 09:59:17 crc kubenswrapper[4698]: I1014 09:59:17.512204 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-5x56z" Oct 14 09:59:17 crc kubenswrapper[4698]: I1014 09:59:17.521291 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3f74db80-958b-4799-864f-792892d9903e-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-xnfjx\" (UID: \"3f74db80-958b-4799-864f-792892d9903e\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-xnfjx" Oct 14 09:59:17 crc kubenswrapper[4698]: W1014 09:59:17.541785 4698 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod77041b5d_f53d_425c_b824_a61833af677c.slice/crio-cd3ef00a6e587a706ae7ba9f936dd866faab211f09b3e6a19b2f571cf7646535 WatchSource:0}: Error finding container cd3ef00a6e587a706ae7ba9f936dd866faab211f09b3e6a19b2f571cf7646535: Status 404 returned error can't find the container with id cd3ef00a6e587a706ae7ba9f936dd866faab211f09b3e6a19b2f571cf7646535 Oct 14 09:59:17 crc kubenswrapper[4698]: I1014 09:59:17.547218 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d7fv2\" (UniqueName: \"kubernetes.io/projected/d11979c8-404a-4ab4-9d27-814013edd944-kube-api-access-d7fv2\") pod \"ingress-canary-kvctw\" (UID: \"d11979c8-404a-4ab4-9d27-814013edd944\") " pod="openshift-ingress-canary/ingress-canary-kvctw" Oct 14 09:59:17 crc kubenswrapper[4698]: I1014 09:59:17.561465 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-hshkc" Oct 14 09:59:17 crc kubenswrapper[4698]: I1014 09:59:17.567889 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6tr29\" (UniqueName: \"kubernetes.io/projected/c5df8277-9e0d-4d53-ad20-20a07ceb9515-kube-api-access-6tr29\") pod \"olm-operator-6b444d44fb-vqx66\" (UID: \"c5df8277-9e0d-4d53-ad20-20a07ceb9515\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-vqx66" Oct 14 09:59:17 crc kubenswrapper[4698]: I1014 09:59:17.568454 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-79s46" Oct 14 09:59:17 crc kubenswrapper[4698]: I1014 09:59:17.587988 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x9t95\" (UniqueName: \"kubernetes.io/projected/9a5b4d2f-ca5c-4178-9e26-2cd2dccaa0b2-kube-api-access-x9t95\") pod \"machine-config-controller-84d6567774-sgf88\" (UID: \"9a5b4d2f-ca5c-4178-9e26-2cd2dccaa0b2\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-sgf88" Oct 14 09:59:17 crc kubenswrapper[4698]: I1014 09:59:17.590072 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-svzm6" Oct 14 09:59:17 crc kubenswrapper[4698]: I1014 09:59:17.600707 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nthfk\" (UID: \"c6555f7d-6e37-41d2-8f98-b02aba5270ab\") " pod="openshift-image-registry/image-registry-697d97f7c8-nthfk" Oct 14 09:59:17 crc kubenswrapper[4698]: E1014 09:59:17.601053 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-14 09:59:18.101037037 +0000 UTC m=+139.798336463 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nthfk" (UID: "c6555f7d-6e37-41d2-8f98-b02aba5270ab") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 14 09:59:17 crc kubenswrapper[4698]: I1014 09:59:17.609319 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gz9m8\" (UniqueName: \"kubernetes.io/projected/08880ea8-e0f2-4963-826f-9bee32ca8a64-kube-api-access-gz9m8\") pod \"marketplace-operator-79b997595-c8pq6\" (UID: \"08880ea8-e0f2-4963-826f-9bee32ca8a64\") " pod="openshift-marketplace/marketplace-operator-79b997595-c8pq6" Oct 14 09:59:17 crc kubenswrapper[4698]: I1014 09:59:17.628607 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-94g47\" (UniqueName: 
\"kubernetes.io/projected/aa6a1a6a-f7f3-402b-9568-89c9415eaaa4-kube-api-access-94g47\") pod \"collect-profiles-29340585-4sc7c\" (UID: \"aa6a1a6a-f7f3-402b-9568-89c9415eaaa4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29340585-4sc7c" Oct 14 09:59:17 crc kubenswrapper[4698]: I1014 09:59:17.641544 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-xnfjx" Oct 14 09:59:17 crc kubenswrapper[4698]: I1014 09:59:17.644531 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sswhc\" (UniqueName: \"kubernetes.io/projected/b8186130-0e09-455d-92c7-05e4c0af37de-kube-api-access-sswhc\") pod \"catalog-operator-68c6474976-kftst\" (UID: \"b8186130-0e09-455d-92c7-05e4c0af37de\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-kftst" Oct 14 09:59:17 crc kubenswrapper[4698]: I1014 09:59:17.643574 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-sgf88" Oct 14 09:59:17 crc kubenswrapper[4698]: I1014 09:59:17.658183 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-m9mft" Oct 14 09:59:17 crc kubenswrapper[4698]: I1014 09:59:17.667779 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mwlfw\" (UniqueName: \"kubernetes.io/projected/0bf22386-43f0-4d64-abb0-cdec28434502-kube-api-access-mwlfw\") pod \"downloads-7954f5f757-7fm6f\" (UID: \"0bf22386-43f0-4d64-abb0-cdec28434502\") " pod="openshift-console/downloads-7954f5f757-7fm6f" Oct 14 09:59:17 crc kubenswrapper[4698]: I1014 09:59:17.675002 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-vqx66" Oct 14 09:59:17 crc kubenswrapper[4698]: I1014 09:59:17.685237 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-kftst" Oct 14 09:59:17 crc kubenswrapper[4698]: I1014 09:59:17.687363 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nwf98\" (UniqueName: \"kubernetes.io/projected/4886c701-aad2-4ae4-bb99-0221728df342-kube-api-access-nwf98\") pod \"apiserver-7bbb656c7d-t5x7p\" (UID: \"4886c701-aad2-4ae4-bb99-0221728df342\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-t5x7p" Oct 14 09:59:17 crc kubenswrapper[4698]: I1014 09:59:17.694707 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-rtnz2" Oct 14 09:59:17 crc kubenswrapper[4698]: I1014 09:59:17.699754 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-7fm6f" Oct 14 09:59:17 crc kubenswrapper[4698]: I1014 09:59:17.703970 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 14 09:59:17 crc kubenswrapper[4698]: E1014 09:59:17.704643 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-14 09:59:18.204618742 +0000 UTC m=+139.901918158 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 14 09:59:17 crc kubenswrapper[4698]: I1014 09:59:17.705203 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6cfr5\" (UniqueName: \"kubernetes.io/projected/9103615a-2665-4033-8114-259b0e56879f-kube-api-access-6cfr5\") pod \"etcd-operator-b45778765-sjgjr\" (UID: \"9103615a-2665-4033-8114-259b0e56879f\") " pod="openshift-etcd-operator/etcd-operator-b45778765-sjgjr" Oct 14 09:59:17 crc kubenswrapper[4698]: I1014 09:59:17.707076 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-c8pq6" Oct 14 09:59:17 crc kubenswrapper[4698]: I1014 09:59:17.728727 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zvm6r\" (UniqueName: \"kubernetes.io/projected/22026c45-f849-4069-b5ad-4bc34d0ea6eb-kube-api-access-zvm6r\") pod \"dns-default-nt998\" (UID: \"22026c45-f849-4069-b5ad-4bc34d0ea6eb\") " pod="openshift-dns/dns-default-nt998" Oct 14 09:59:17 crc kubenswrapper[4698]: I1014 09:59:17.745831 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4h7jk\" (UniqueName: \"kubernetes.io/projected/818edbba-2627-4978-8a34-005689059b24-kube-api-access-4h7jk\") pod \"machine-config-operator-74547568cd-vdj26\" (UID: \"818edbba-2627-4978-8a34-005689059b24\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-vdj26" Oct 14 09:59:17 crc kubenswrapper[4698]: I1014 09:59:17.749469 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-dns-operator/dns-operator-744455d44c-mlnpq"] Oct 14 09:59:17 crc kubenswrapper[4698]: I1014 09:59:17.757451 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-t5x7p" Oct 14 09:59:17 crc kubenswrapper[4698]: I1014 09:59:17.759647 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-jg26j"] Oct 14 09:59:17 crc kubenswrapper[4698]: I1014 09:59:17.761946 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x86cx\" (UniqueName: \"kubernetes.io/projected/4bc0ae50-422f-4bd4-abba-008f5ca0467f-kube-api-access-x86cx\") pod \"packageserver-d55dfcdfc-s65qn\" (UID: \"4bc0ae50-422f-4bd4-abba-008f5ca0467f\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-s65qn" Oct 14 09:59:17 crc kubenswrapper[4698]: I1014 09:59:17.764320 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-sjgjr" Oct 14 09:59:17 crc kubenswrapper[4698]: I1014 09:59:17.770510 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29340585-4sc7c" Oct 14 09:59:17 crc kubenswrapper[4698]: I1014 09:59:17.785940 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cpskz\" (UniqueName: \"kubernetes.io/projected/012a71ed-3195-49f8-bde1-f5455806e0f0-kube-api-access-cpskz\") pod \"service-ca-9c57cc56f-h8xtm\" (UID: \"012a71ed-3195-49f8-bde1-f5455806e0f0\") " pod="openshift-service-ca/service-ca-9c57cc56f-h8xtm" Oct 14 09:59:17 crc kubenswrapper[4698]: I1014 09:59:17.800421 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-79s46" event={"ID":"69fe44d1-6015-4929-b197-0ea5f0167131","Type":"ContainerStarted","Data":"383890c2ffeed5798ce8570ad5045293b1a79ba34853ae91c5cc2be1c50ff716"} Oct 14 09:59:17 crc kubenswrapper[4698]: I1014 09:59:17.803283 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-p266j" event={"ID":"10f74f9f-3784-4bb5-8fcf-acc6d625f363","Type":"ContainerStarted","Data":"07733131f1760aa4e26c2c040ac866b3cb1af83c107810454c4b806d55fa8719"} Oct 14 09:59:17 crc kubenswrapper[4698]: I1014 09:59:17.804416 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-wmxzf" event={"ID":"77041b5d-f53d-425c-b824-a61833af677c","Type":"ContainerStarted","Data":"e2887a8009e33250731ded58d2cf972429ea4fa0ee79134e9c778f8dbccb8f23"} Oct 14 09:59:17 crc kubenswrapper[4698]: I1014 09:59:17.804452 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-wmxzf" event={"ID":"77041b5d-f53d-425c-b824-a61833af677c","Type":"ContainerStarted","Data":"cd3ef00a6e587a706ae7ba9f936dd866faab211f09b3e6a19b2f571cf7646535"} Oct 14 09:59:17 crc kubenswrapper[4698]: I1014 09:59:17.806256 4698 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nthfk\" (UID: \"c6555f7d-6e37-41d2-8f98-b02aba5270ab\") " pod="openshift-image-registry/image-registry-697d97f7c8-nthfk" Oct 14 09:59:17 crc kubenswrapper[4698]: E1014 09:59:17.806608 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-14 09:59:18.306595039 +0000 UTC m=+140.003894445 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nthfk" (UID: "c6555f7d-6e37-41d2-8f98-b02aba5270ab") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 14 09:59:17 crc kubenswrapper[4698]: I1014 09:59:17.807369 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2bn9f\" (UniqueName: \"kubernetes.io/projected/c1ec2959-9fc5-4b98-8f9c-c21fc57e14d7-kube-api-access-2bn9f\") pod \"control-plane-machine-set-operator-78cbb6b69f-5k2p9\" (UID: \"c1ec2959-9fc5-4b98-8f9c-c21fc57e14d7\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-5k2p9" Oct 14 09:59:17 crc kubenswrapper[4698]: I1014 09:59:17.808470 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/dns-default-nt998" Oct 14 09:59:17 crc kubenswrapper[4698]: W1014 09:59:17.808791 4698 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod14729fe2_c0c5_49b8_9766_b35a97d66e8d.slice/crio-2591872cc57dc8c4364dab38addd48187ff13d6f40f1a85b81a7925b028ea84b WatchSource:0}: Error finding container 2591872cc57dc8c4364dab38addd48187ff13d6f40f1a85b81a7925b028ea84b: Status 404 returned error can't find the container with id 2591872cc57dc8c4364dab38addd48187ff13d6f40f1a85b81a7925b028ea84b Oct 14 09:59:17 crc kubenswrapper[4698]: I1014 09:59:17.809565 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-c9jqn" event={"ID":"6b1bbc6d-62ec-47f7-b3d0-027b4dc2e31f","Type":"ContainerStarted","Data":"9273ee5da21d331300ac586d1db8ae7fe02ecae8703959b68762a668bf9e6ac9"} Oct 14 09:59:17 crc kubenswrapper[4698]: I1014 09:59:17.809600 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-c9jqn" event={"ID":"6b1bbc6d-62ec-47f7-b3d0-027b4dc2e31f","Type":"ContainerStarted","Data":"8f5c2f6de3ee6ee45a303e8feccb9b38fb20e2cacdcb1724f1805e7300641800"} Oct 14 09:59:17 crc kubenswrapper[4698]: I1014 09:59:17.816701 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-canary/ingress-canary-kvctw" Oct 14 09:59:17 crc kubenswrapper[4698]: I1014 09:59:17.827643 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-24kzx"] Oct 14 09:59:17 crc kubenswrapper[4698]: I1014 09:59:17.837630 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-c6gg9"] Oct 14 09:59:17 crc kubenswrapper[4698]: I1014 09:59:17.840833 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8g579\" (UniqueName: \"kubernetes.io/projected/41f1faca-9336-4fb7-85a7-14541f2cf578-kube-api-access-8g579\") pod \"package-server-manager-789f6589d5-8g7h6\" (UID: \"41f1faca-9336-4fb7-85a7-14541f2cf578\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-8g7h6" Oct 14 09:59:17 crc kubenswrapper[4698]: I1014 09:59:17.841361 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-kqg88"] Oct 14 09:59:17 crc kubenswrapper[4698]: I1014 09:59:17.848261 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8tbpw\" (UniqueName: \"kubernetes.io/projected/9b71de96-0379-46a7-af50-9b831b50268b-kube-api-access-8tbpw\") pod \"openshift-controller-manager-operator-756b6f6bc6-ptt5h\" (UID: \"9b71de96-0379-46a7-af50-9b831b50268b\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-ptt5h" Oct 14 09:59:17 crc kubenswrapper[4698]: I1014 09:59:17.852121 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-lbk6k"] Oct 14 09:59:17 crc kubenswrapper[4698]: I1014 09:59:17.862867 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z4r2m\" (UniqueName: 
\"kubernetes.io/projected/193ddda3-4410-402c-a198-33ff7ea3a740-kube-api-access-z4r2m\") pod \"router-default-5444994796-qprfx\" (UID: \"193ddda3-4410-402c-a198-33ff7ea3a740\") " pod="openshift-ingress/router-default-5444994796-qprfx" Oct 14 09:59:17 crc kubenswrapper[4698]: I1014 09:59:17.882466 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c6xk5\" (UniqueName: \"kubernetes.io/projected/a3f47cc4-719c-4e84-871b-ed52e9660cdb-kube-api-access-c6xk5\") pod \"migrator-59844c95c7-mddmt\" (UID: \"a3f47cc4-719c-4e84-871b-ed52e9660cdb\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-mddmt" Oct 14 09:59:17 crc kubenswrapper[4698]: W1014 09:59:17.898650 4698 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod67e52335_6348_488a_a36a_8971b953737b.slice/crio-061b540243c73d30661b5e59bb07819d7ec3f972166e42882fbe99531f2baeaa WatchSource:0}: Error finding container 061b540243c73d30661b5e59bb07819d7ec3f972166e42882fbe99531f2baeaa: Status 404 returned error can't find the container with id 061b540243c73d30661b5e59bb07819d7ec3f972166e42882fbe99531f2baeaa Oct 14 09:59:17 crc kubenswrapper[4698]: I1014 09:59:17.905095 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/41d42df0-f10f-4f75-8481-adb4d51c341f-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-7p6xz\" (UID: \"41d42df0-f10f-4f75-8481-adb4d51c341f\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-7p6xz" Oct 14 09:59:17 crc kubenswrapper[4698]: I1014 09:59:17.906910 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: 
\"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 14 09:59:17 crc kubenswrapper[4698]: E1014 09:59:17.908224 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-14 09:59:18.408205555 +0000 UTC m=+140.105504971 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 14 09:59:17 crc kubenswrapper[4698]: I1014 09:59:17.934054 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-7p6xz" Oct 14 09:59:17 crc kubenswrapper[4698]: I1014 09:59:17.948942 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-ptt5h" Oct 14 09:59:17 crc kubenswrapper[4698]: I1014 09:59:17.963644 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-5k2p9" Oct 14 09:59:17 crc kubenswrapper[4698]: I1014 09:59:17.963722 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nc4f9\" (UniqueName: \"kubernetes.io/projected/bc1091b3-dceb-44a6-95b0-8048efec8032-kube-api-access-nc4f9\") pod \"openshift-apiserver-operator-796bbdcf4f-cf62t\" (UID: \"bc1091b3-dceb-44a6-95b0-8048efec8032\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-cf62t" Oct 14 09:59:17 crc kubenswrapper[4698]: I1014 09:59:17.978042 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-mddmt" Oct 14 09:59:18 crc kubenswrapper[4698]: I1014 09:59:18.009713 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nthfk\" (UID: \"c6555f7d-6e37-41d2-8f98-b02aba5270ab\") " pod="openshift-image-registry/image-registry-697d97f7c8-nthfk" Oct 14 09:59:18 crc kubenswrapper[4698]: E1014 09:59:18.011395 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-14 09:59:18.510070529 +0000 UTC m=+140.207369945 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nthfk" (UID: "c6555f7d-6e37-41d2-8f98-b02aba5270ab") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 14 09:59:18 crc kubenswrapper[4698]: I1014 09:59:18.017510 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-qprfx" Oct 14 09:59:18 crc kubenswrapper[4698]: I1014 09:59:18.021784 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-vdj26" Oct 14 09:59:18 crc kubenswrapper[4698]: I1014 09:59:18.032034 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-8g7h6" Oct 14 09:59:18 crc kubenswrapper[4698]: I1014 09:59:18.041026 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-s65qn" Oct 14 09:59:18 crc kubenswrapper[4698]: I1014 09:59:18.048211 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-h8xtm" Oct 14 09:59:18 crc kubenswrapper[4698]: I1014 09:59:18.111074 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 14 09:59:18 crc kubenswrapper[4698]: E1014 09:59:18.111365 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-14 09:59:18.611346656 +0000 UTC m=+140.308646072 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 14 09:59:18 crc kubenswrapper[4698]: I1014 09:59:18.189011 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-f47kf"] Oct 14 09:59:18 crc kubenswrapper[4698]: I1014 09:59:18.195120 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-7mmrz"] Oct 14 09:59:18 crc kubenswrapper[4698]: I1014 09:59:18.212850 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod 
\"image-registry-697d97f7c8-nthfk\" (UID: \"c6555f7d-6e37-41d2-8f98-b02aba5270ab\") " pod="openshift-image-registry/image-registry-697d97f7c8-nthfk" Oct 14 09:59:18 crc kubenswrapper[4698]: E1014 09:59:18.213146 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-14 09:59:18.713134308 +0000 UTC m=+140.410433724 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nthfk" (UID: "c6555f7d-6e37-41d2-8f98-b02aba5270ab") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 14 09:59:18 crc kubenswrapper[4698]: I1014 09:59:18.219938 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-cf62t" Oct 14 09:59:18 crc kubenswrapper[4698]: I1014 09:59:18.316236 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 14 09:59:18 crc kubenswrapper[4698]: E1014 09:59:18.316440 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-14 09:59:18.816401183 +0000 UTC m=+140.513700629 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 14 09:59:18 crc kubenswrapper[4698]: I1014 09:59:18.316722 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nthfk\" (UID: \"c6555f7d-6e37-41d2-8f98-b02aba5270ab\") " pod="openshift-image-registry/image-registry-697d97f7c8-nthfk" Oct 14 09:59:18 crc kubenswrapper[4698]: E1014 09:59:18.317122 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-14 09:59:18.817110354 +0000 UTC m=+140.514409770 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nthfk" (UID: "c6555f7d-6e37-41d2-8f98-b02aba5270ab") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 14 09:59:18 crc kubenswrapper[4698]: I1014 09:59:18.424043 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 14 09:59:18 crc kubenswrapper[4698]: E1014 09:59:18.424337 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-14 09:59:18.924319164 +0000 UTC m=+140.621618580 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 14 09:59:18 crc kubenswrapper[4698]: W1014 09:59:18.514672 4698 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb23ade35_ff68_4366_8b6a_9e24fcd4e0eb.slice/crio-9ea7b4a24cd831e6534100f1d3261a75d0a2cc85ca00a523b8115b74bbbc90a8 WatchSource:0}: Error finding container 9ea7b4a24cd831e6534100f1d3261a75d0a2cc85ca00a523b8115b74bbbc90a8: Status 404 returned error can't find the container with id 9ea7b4a24cd831e6534100f1d3261a75d0a2cc85ca00a523b8115b74bbbc90a8 Oct 14 09:59:18 crc kubenswrapper[4698]: W1014 09:59:18.516293 4698 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podabe6a35d_8cd2_4749_b9cf_8d11f6169470.slice/crio-e2f8894e500e1913b997dda19bafa5f56d8dba598ca411c8f7235a7930cfee29 WatchSource:0}: Error finding container e2f8894e500e1913b997dda19bafa5f56d8dba598ca411c8f7235a7930cfee29: Status 404 returned error can't find the container with id e2f8894e500e1913b997dda19bafa5f56d8dba598ca411c8f7235a7930cfee29 Oct 14 09:59:18 crc kubenswrapper[4698]: I1014 09:59:18.525557 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nthfk\" (UID: \"c6555f7d-6e37-41d2-8f98-b02aba5270ab\") " pod="openshift-image-registry/image-registry-697d97f7c8-nthfk" Oct 14 09:59:18 crc 
kubenswrapper[4698]: E1014 09:59:18.525897 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-14 09:59:19.025885219 +0000 UTC m=+140.723184635 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nthfk" (UID: "c6555f7d-6e37-41d2-8f98-b02aba5270ab") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 14 09:59:18 crc kubenswrapper[4698]: I1014 09:59:18.629266 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 14 09:59:18 crc kubenswrapper[4698]: E1014 09:59:18.629713 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-14 09:59:19.12969315 +0000 UTC m=+140.826992566 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 14 09:59:18 crc kubenswrapper[4698]: I1014 09:59:18.731560 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nthfk\" (UID: \"c6555f7d-6e37-41d2-8f98-b02aba5270ab\") " pod="openshift-image-registry/image-registry-697d97f7c8-nthfk" Oct 14 09:59:18 crc kubenswrapper[4698]: E1014 09:59:18.731921 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-14 09:59:19.231908745 +0000 UTC m=+140.929208161 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nthfk" (UID: "c6555f7d-6e37-41d2-8f98-b02aba5270ab") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 14 09:59:18 crc kubenswrapper[4698]: I1014 09:59:18.769457 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-ttftp"] Oct 14 09:59:18 crc kubenswrapper[4698]: I1014 09:59:18.825674 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-58d6k"] Oct 14 09:59:18 crc kubenswrapper[4698]: I1014 09:59:18.827654 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-hshkc"] Oct 14 09:59:18 crc kubenswrapper[4698]: I1014 09:59:18.833343 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 14 09:59:18 crc kubenswrapper[4698]: E1014 09:59:18.834033 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-14 09:59:19.334007505 +0000 UTC m=+141.031306921 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 14 09:59:18 crc kubenswrapper[4698]: I1014 09:59:18.835494 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-24kzx" event={"ID":"d671e120-cf7e-4363-b023-dc46b51ea073","Type":"ContainerStarted","Data":"1e370f867d4aad9b8f799de76be4b58ae43b68b36b2c2378d7795036ef5dfe58"} Oct 14 09:59:18 crc kubenswrapper[4698]: I1014 09:59:18.835542 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-24kzx" event={"ID":"d671e120-cf7e-4363-b023-dc46b51ea073","Type":"ContainerStarted","Data":"ec76df0f5ea37795d93c4266ad4c6c30a7343db99ccf6482f470f930b42fa88e"} Oct 14 09:59:18 crc kubenswrapper[4698]: I1014 09:59:18.836425 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-qprfx" event={"ID":"193ddda3-4410-402c-a198-33ff7ea3a740","Type":"ContainerStarted","Data":"0f42bf1622b7e7d312612688647cdc45ae5711e259e9a5ed48d16bf5ec370955"} Oct 14 09:59:18 crc kubenswrapper[4698]: I1014 09:59:18.838600 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-w8pmp"] Oct 14 09:59:18 crc kubenswrapper[4698]: I1014 09:59:18.838835 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-f47kf" 
event={"ID":"abe6a35d-8cd2-4749-b9cf-8d11f6169470","Type":"ContainerStarted","Data":"e2f8894e500e1913b997dda19bafa5f56d8dba598ca411c8f7235a7930cfee29"} Oct 14 09:59:18 crc kubenswrapper[4698]: I1014 09:59:18.843733 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-c6gg9" event={"ID":"67e52335-6348-488a-a36a-8971b953737b","Type":"ContainerStarted","Data":"7bd8994f5dd54ce86aa00291175896739442fe60c5dc447ef4db8b66329803e4"} Oct 14 09:59:18 crc kubenswrapper[4698]: I1014 09:59:18.844102 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-c6gg9" event={"ID":"67e52335-6348-488a-a36a-8971b953737b","Type":"ContainerStarted","Data":"061b540243c73d30661b5e59bb07819d7ec3f972166e42882fbe99531f2baeaa"} Oct 14 09:59:18 crc kubenswrapper[4698]: I1014 09:59:18.844728 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-879f6c89f-c6gg9" Oct 14 09:59:18 crc kubenswrapper[4698]: I1014 09:59:18.847393 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-mlnpq" event={"ID":"14729fe2-c0c5-49b8-9766-b35a97d66e8d","Type":"ContainerStarted","Data":"2591872cc57dc8c4364dab38addd48187ff13d6f40f1a85b81a7925b028ea84b"} Oct 14 09:59:18 crc kubenswrapper[4698]: I1014 09:59:18.848962 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-79s46" event={"ID":"69fe44d1-6015-4929-b197-0ea5f0167131","Type":"ContainerStarted","Data":"f4ec24e31126ebcccefad0ff9de094e9126fe68d7f2657c7c36abc48bb9fe158"} Oct 14 09:59:18 crc kubenswrapper[4698]: I1014 09:59:18.849261 4698 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-c6gg9 container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.11:8443/healthz\": 
dial tcp 10.217.0.11:8443: connect: connection refused" start-of-body= Oct 14 09:59:18 crc kubenswrapper[4698]: I1014 09:59:18.849440 4698 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-c6gg9" podUID="67e52335-6348-488a-a36a-8971b953737b" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.11:8443/healthz\": dial tcp 10.217.0.11:8443: connect: connection refused" Oct 14 09:59:18 crc kubenswrapper[4698]: I1014 09:59:18.855641 4698 generic.go:334] "Generic (PLEG): container finished" podID="1d5afbbe-a0bd-492f-8b7d-691208ef27db" containerID="eb62d7bf5d38e5496cd39b59150f52467050b8e2791c96b5e1ee9d1848b403a2" exitCode=0 Oct 14 09:59:18 crc kubenswrapper[4698]: I1014 09:59:18.855963 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-jg26j" event={"ID":"1d5afbbe-a0bd-492f-8b7d-691208ef27db","Type":"ContainerDied","Data":"eb62d7bf5d38e5496cd39b59150f52467050b8e2791c96b5e1ee9d1848b403a2"} Oct 14 09:59:18 crc kubenswrapper[4698]: I1014 09:59:18.856041 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-jg26j" event={"ID":"1d5afbbe-a0bd-492f-8b7d-691208ef27db","Type":"ContainerStarted","Data":"36b481ec96f3e387949102f77a5b0c984f4ad760c2d694a444adb7d80a910074"} Oct 14 09:59:18 crc kubenswrapper[4698]: I1014 09:59:18.865026 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-wmxzf" event={"ID":"77041b5d-f53d-425c-b824-a61833af677c","Type":"ContainerStarted","Data":"9c77d4b9c6e105970d9fcf5b9ea0370d9ff3f70878592d5fe84249515abbf392"} Oct 14 09:59:18 crc kubenswrapper[4698]: I1014 09:59:18.870416 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-lbk6k" 
event={"ID":"d746febc-7247-498c-86b1-8cb4640cbccc","Type":"ContainerStarted","Data":"0d1d7ce638171e21b5d2145ad95ed9bada1b340e3e1ab98a48601124634734bd"} Oct 14 09:59:18 crc kubenswrapper[4698]: I1014 09:59:18.870786 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-lbk6k" Oct 14 09:59:18 crc kubenswrapper[4698]: I1014 09:59:18.872448 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-p266j" event={"ID":"10f74f9f-3784-4bb5-8fcf-acc6d625f363","Type":"ContainerStarted","Data":"aff27f56be1939a7c51b7fadc7d4d74350416184aac55d88f90fd27bb9c2b85a"} Oct 14 09:59:18 crc kubenswrapper[4698]: I1014 09:59:18.872817 4698 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-lbk6k container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.23:8443/healthz\": dial tcp 10.217.0.23:8443: connect: connection refused" start-of-body= Oct 14 09:59:18 crc kubenswrapper[4698]: I1014 09:59:18.872861 4698 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-lbk6k" podUID="d746febc-7247-498c-86b1-8cb4640cbccc" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.23:8443/healthz\": dial tcp 10.217.0.23:8443: connect: connection refused" Oct 14 09:59:18 crc kubenswrapper[4698]: I1014 09:59:18.875037 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-kqg88" event={"ID":"5eed092c-2837-48df-8eb4-8759235349b6","Type":"ContainerStarted","Data":"cc16641dc5082e4efa0f7365a32a696363b9b97f1a69878d59e1c8c9fc61f74a"} Oct 14 09:59:18 crc kubenswrapper[4698]: I1014 09:59:18.876236 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-multus/multus-admission-controller-857f4d67dd-7mmrz" event={"ID":"b23ade35-ff68-4366-8b6a-9e24fcd4e0eb","Type":"ContainerStarted","Data":"9ea7b4a24cd831e6534100f1d3261a75d0a2cc85ca00a523b8115b74bbbc90a8"} Oct 14 09:59:18 crc kubenswrapper[4698]: I1014 09:59:18.935455 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nthfk\" (UID: \"c6555f7d-6e37-41d2-8f98-b02aba5270ab\") " pod="openshift-image-registry/image-registry-697d97f7c8-nthfk" Oct 14 09:59:18 crc kubenswrapper[4698]: E1014 09:59:18.937527 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-14 09:59:19.437513398 +0000 UTC m=+141.134812804 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nthfk" (UID: "c6555f7d-6e37-41d2-8f98-b02aba5270ab") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 14 09:59:19 crc kubenswrapper[4698]: I1014 09:59:19.042327 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 14 09:59:19 crc kubenswrapper[4698]: E1014 09:59:19.044029 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-14 09:59:19.544007258 +0000 UTC m=+141.241306684 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 14 09:59:19 crc kubenswrapper[4698]: I1014 09:59:19.144321 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nthfk\" (UID: \"c6555f7d-6e37-41d2-8f98-b02aba5270ab\") " pod="openshift-image-registry/image-registry-697d97f7c8-nthfk" Oct 14 09:59:19 crc kubenswrapper[4698]: E1014 09:59:19.144935 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-14 09:59:19.644905213 +0000 UTC m=+141.342204629 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nthfk" (UID: "c6555f7d-6e37-41d2-8f98-b02aba5270ab") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 14 09:59:19 crc kubenswrapper[4698]: I1014 09:59:19.176636 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-sgf88"] Oct 14 09:59:19 crc kubenswrapper[4698]: I1014 09:59:19.183182 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-5x56z"] Oct 14 09:59:19 crc kubenswrapper[4698]: I1014 09:59:19.184142 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-vqx66"] Oct 14 09:59:19 crc kubenswrapper[4698]: I1014 09:59:19.193525 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-m9mft"] Oct 14 09:59:19 crc kubenswrapper[4698]: I1014 09:59:19.201928 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-7fm6f"] Oct 14 09:59:19 crc kubenswrapper[4698]: I1014 09:59:19.205904 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-t5x7p"] Oct 14 09:59:19 crc kubenswrapper[4698]: I1014 09:59:19.210578 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-kftst"] Oct 14 09:59:19 crc kubenswrapper[4698]: I1014 09:59:19.222917 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-c8pq6"] Oct 14 09:59:19 crc kubenswrapper[4698]: I1014 
09:59:19.227126 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-xnfjx"] Oct 14 09:59:19 crc kubenswrapper[4698]: I1014 09:59:19.227167 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-sjgjr"] Oct 14 09:59:19 crc kubenswrapper[4698]: W1014 09:59:19.241495 4698 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0bf22386_43f0_4d64_abb0_cdec28434502.slice/crio-1cf81747020960f69bec412d567b799089b27d43efe3b80f61ec1a4588d035ff WatchSource:0}: Error finding container 1cf81747020960f69bec412d567b799089b27d43efe3b80f61ec1a4588d035ff: Status 404 returned error can't find the container with id 1cf81747020960f69bec412d567b799089b27d43efe3b80f61ec1a4588d035ff Oct 14 09:59:19 crc kubenswrapper[4698]: I1014 09:59:19.246621 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-rtnz2"] Oct 14 09:59:19 crc kubenswrapper[4698]: I1014 09:59:19.249315 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 14 09:59:19 crc kubenswrapper[4698]: I1014 09:59:19.249358 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-svzm6"] Oct 14 09:59:19 crc kubenswrapper[4698]: E1014 09:59:19.249627 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2025-10-14 09:59:19.749607741 +0000 UTC m=+141.446907187 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 14 09:59:19 crc kubenswrapper[4698]: W1014 09:59:19.282986 4698 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3f74db80_958b_4799_864f_792892d9903e.slice/crio-b91f328e8d36be973f8c88cfa34e1cc6301be7aa70290bb374f97a1a6dac997b WatchSource:0}: Error finding container b91f328e8d36be973f8c88cfa34e1cc6301be7aa70290bb374f97a1a6dac997b: Status 404 returned error can't find the container with id b91f328e8d36be973f8c88cfa34e1cc6301be7aa70290bb374f97a1a6dac997b Oct 14 09:59:19 crc kubenswrapper[4698]: I1014 09:59:19.288214 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29340585-4sc7c"] Oct 14 09:59:19 crc kubenswrapper[4698]: I1014 09:59:19.303461 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-cf62t"] Oct 14 09:59:19 crc kubenswrapper[4698]: I1014 09:59:19.307534 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-kvctw"] Oct 14 09:59:19 crc kubenswrapper[4698]: I1014 09:59:19.309565 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-nt998"] Oct 14 09:59:19 crc kubenswrapper[4698]: I1014 09:59:19.320136 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-s65qn"] Oct 14 09:59:19 crc kubenswrapper[4698]: I1014 09:59:19.327375 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-mddmt"] Oct 14 09:59:19 crc kubenswrapper[4698]: I1014 09:59:19.351906 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nthfk\" (UID: \"c6555f7d-6e37-41d2-8f98-b02aba5270ab\") " pod="openshift-image-registry/image-registry-697d97f7c8-nthfk" Oct 14 09:59:19 crc kubenswrapper[4698]: E1014 09:59:19.352227 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-14 09:59:19.852214466 +0000 UTC m=+141.549513882 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nthfk" (UID: "c6555f7d-6e37-41d2-8f98-b02aba5270ab") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 14 09:59:19 crc kubenswrapper[4698]: I1014 09:59:19.354953 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-7p6xz"] Oct 14 09:59:19 crc kubenswrapper[4698]: W1014 09:59:19.375944 4698 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbc1091b3_dceb_44a6_95b0_8048efec8032.slice/crio-bab3004b39b724dffaf1e0f6dc399352856b7504c6bf9d67e6c78168c005f8ae WatchSource:0}: Error finding container bab3004b39b724dffaf1e0f6dc399352856b7504c6bf9d67e6c78168c005f8ae: Status 404 returned error can't find the container with id bab3004b39b724dffaf1e0f6dc399352856b7504c6bf9d67e6c78168c005f8ae Oct 14 09:59:19 crc kubenswrapper[4698]: W1014 09:59:19.393431 4698 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd11979c8_404a_4ab4_9d27_814013edd944.slice/crio-a17d9c5580e4f336ae1fa118ee341e784432703113e70caf09b33d5830dbda46 WatchSource:0}: Error finding container a17d9c5580e4f336ae1fa118ee341e784432703113e70caf09b33d5830dbda46: Status 404 returned error can't find the container with id a17d9c5580e4f336ae1fa118ee341e784432703113e70caf09b33d5830dbda46 Oct 14 09:59:19 crc kubenswrapper[4698]: I1014 09:59:19.443939 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-server-c9jqn" podStartSLOduration=5.443920322 
podStartE2EDuration="5.443920322s" podCreationTimestamp="2025-10-14 09:59:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 09:59:19.43706977 +0000 UTC m=+141.134369196" watchObservedRunningTime="2025-10-14 09:59:19.443920322 +0000 UTC m=+141.141219788" Oct 14 09:59:19 crc kubenswrapper[4698]: I1014 09:59:19.458447 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 14 09:59:19 crc kubenswrapper[4698]: E1014 09:59:19.458745 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-14 09:59:19.958721037 +0000 UTC m=+141.656020453 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 14 09:59:19 crc kubenswrapper[4698]: I1014 09:59:19.459121 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nthfk\" (UID: \"c6555f7d-6e37-41d2-8f98-b02aba5270ab\") " pod="openshift-image-registry/image-registry-697d97f7c8-nthfk" Oct 14 09:59:19 crc kubenswrapper[4698]: E1014 09:59:19.459470 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-14 09:59:19.959462859 +0000 UTC m=+141.656762265 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nthfk" (UID: "c6555f7d-6e37-41d2-8f98-b02aba5270ab") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 14 09:59:19 crc kubenswrapper[4698]: I1014 09:59:19.513650 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-8g7h6"] Oct 14 09:59:19 crc kubenswrapper[4698]: I1014 09:59:19.536324 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-ptt5h"] Oct 14 09:59:19 crc kubenswrapper[4698]: I1014 09:59:19.538169 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-vdj26"] Oct 14 09:59:19 crc kubenswrapper[4698]: I1014 09:59:19.559895 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 14 09:59:19 crc kubenswrapper[4698]: E1014 09:59:19.560182 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-14 09:59:20.060165428 +0000 UTC m=+141.757464844 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 14 09:59:19 crc kubenswrapper[4698]: I1014 09:59:19.574599 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-h8xtm"] Oct 14 09:59:19 crc kubenswrapper[4698]: I1014 09:59:19.613121 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-5k2p9"] Oct 14 09:59:19 crc kubenswrapper[4698]: W1014 09:59:19.647497 4698 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod818edbba_2627_4978_8a34_005689059b24.slice/crio-47cf11ab504a1ce643fa8939f8e6e4aff593a997a7ffabfaaf6fec64eb517105 WatchSource:0}: Error finding container 47cf11ab504a1ce643fa8939f8e6e4aff593a997a7ffabfaaf6fec64eb517105: Status 404 returned error can't find the container with id 47cf11ab504a1ce643fa8939f8e6e4aff593a997a7ffabfaaf6fec64eb517105 Oct 14 09:59:19 crc kubenswrapper[4698]: I1014 09:59:19.662024 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nthfk\" (UID: \"c6555f7d-6e37-41d2-8f98-b02aba5270ab\") " pod="openshift-image-registry/image-registry-697d97f7c8-nthfk" Oct 14 09:59:19 crc kubenswrapper[4698]: E1014 09:59:19.662324 4698 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-14 09:59:20.162313401 +0000 UTC m=+141.859612817 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nthfk" (UID: "c6555f7d-6e37-41d2-8f98-b02aba5270ab") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 14 09:59:19 crc kubenswrapper[4698]: I1014 09:59:19.676878 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-24kzx" podStartSLOduration=121.676861398 podStartE2EDuration="2m1.676861398s" podCreationTimestamp="2025-10-14 09:57:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 09:59:19.675099737 +0000 UTC m=+141.372399153" watchObservedRunningTime="2025-10-14 09:59:19.676861398 +0000 UTC m=+141.374160814" Oct 14 09:59:19 crc kubenswrapper[4698]: W1014 09:59:19.725029 4698 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod012a71ed_3195_49f8_bde1_f5455806e0f0.slice/crio-5f1f39aa643166394eb3f79631960a9d91a5aa159834a44e9b1c42da06fdd8ce WatchSource:0}: Error finding container 5f1f39aa643166394eb3f79631960a9d91a5aa159834a44e9b1c42da06fdd8ce: Status 404 returned error can't find the container with id 5f1f39aa643166394eb3f79631960a9d91a5aa159834a44e9b1c42da06fdd8ce Oct 14 09:59:19 crc kubenswrapper[4698]: W1014 09:59:19.728302 4698 manager.go:1169] Failed to process watch event 
{EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc1ec2959_9fc5_4b98_8f9c_c21fc57e14d7.slice/crio-da9d69907891dc2a92839bd302d4af6f72add0a76bd136fef0abd9003e3ed7d9 WatchSource:0}: Error finding container da9d69907891dc2a92839bd302d4af6f72add0a76bd136fef0abd9003e3ed7d9: Status 404 returned error can't find the container with id da9d69907891dc2a92839bd302d4af6f72add0a76bd136fef0abd9003e3ed7d9 Oct 14 09:59:19 crc kubenswrapper[4698]: I1014 09:59:19.763201 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 14 09:59:19 crc kubenswrapper[4698]: E1014 09:59:19.763507 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-14 09:59:20.263489364 +0000 UTC m=+141.960788780 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 14 09:59:19 crc kubenswrapper[4698]: I1014 09:59:19.818666 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-lbk6k" podStartSLOduration=120.818644296 podStartE2EDuration="2m0.818644296s" podCreationTimestamp="2025-10-14 09:57:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 09:59:19.810861507 +0000 UTC m=+141.508160963" watchObservedRunningTime="2025-10-14 09:59:19.818644296 +0000 UTC m=+141.515943712" Oct 14 09:59:19 crc kubenswrapper[4698]: I1014 09:59:19.843880 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/machine-api-operator-5694c8668f-wmxzf" podStartSLOduration=121.843754054 podStartE2EDuration="2m1.843754054s" podCreationTimestamp="2025-10-14 09:57:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 09:59:19.843090344 +0000 UTC m=+141.540389760" watchObservedRunningTime="2025-10-14 09:59:19.843754054 +0000 UTC m=+141.541053470" Oct 14 09:59:19 crc kubenswrapper[4698]: I1014 09:59:19.865277 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nthfk\" (UID: 
\"c6555f7d-6e37-41d2-8f98-b02aba5270ab\") " pod="openshift-image-registry/image-registry-697d97f7c8-nthfk" Oct 14 09:59:19 crc kubenswrapper[4698]: E1014 09:59:19.865582 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-14 09:59:20.365570565 +0000 UTC m=+142.062869981 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nthfk" (UID: "c6555f7d-6e37-41d2-8f98-b02aba5270ab") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 14 09:59:19 crc kubenswrapper[4698]: I1014 09:59:19.918355 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-vqx66" event={"ID":"c5df8277-9e0d-4d53-ad20-20a07ceb9515","Type":"ContainerStarted","Data":"f5cf2b7efe9c1abef0dc1a9f22ccd83370fdb79d5100124708d0c243447f2815"} Oct 14 09:59:19 crc kubenswrapper[4698]: I1014 09:59:19.918417 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-vqx66" event={"ID":"c5df8277-9e0d-4d53-ad20-20a07ceb9515","Type":"ContainerStarted","Data":"a16804a3c571776daf4bce99a4631e9df55b014b99b989aeb79b20099a84bdbe"} Oct 14 09:59:19 crc kubenswrapper[4698]: I1014 09:59:19.918638 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-vqx66" Oct 14 09:59:19 crc kubenswrapper[4698]: I1014 09:59:19.923963 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-ptt5h" event={"ID":"9b71de96-0379-46a7-af50-9b831b50268b","Type":"ContainerStarted","Data":"41cf90f892ce610cb284a9eb41f4e0c19ee5fb4898282a9368f60c9b7d8b9263"} Oct 14 09:59:19 crc kubenswrapper[4698]: I1014 09:59:19.926078 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-58d6k" event={"ID":"0704231d-de7e-4317-80bd-9edbb5a0de5f","Type":"ContainerStarted","Data":"4c9ece1b267a37c81da34a3af9f38bc5a35a6f181847da45a6ae45065c008399"} Oct 14 09:59:19 crc kubenswrapper[4698]: I1014 09:59:19.926133 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-58d6k" event={"ID":"0704231d-de7e-4317-80bd-9edbb5a0de5f","Type":"ContainerStarted","Data":"39831b7be78390ca55cbbe614d40dc7e525910d782fbf75d2add2af74d586e22"} Oct 14 09:59:19 crc kubenswrapper[4698]: I1014 09:59:19.927022 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-58897d9998-58d6k" Oct 14 09:59:19 crc kubenswrapper[4698]: I1014 09:59:19.928631 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-7p6xz" event={"ID":"41d42df0-f10f-4f75-8481-adb4d51c341f","Type":"ContainerStarted","Data":"01561a8e9abd55e540d8d03a2bf32d8e0595fc64591a9500aa73ef11bf9598b4"} Oct 14 09:59:19 crc kubenswrapper[4698]: I1014 09:59:19.929539 4698 patch_prober.go:28] interesting pod/console-operator-58897d9998-58d6k container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.7:8443/readyz\": dial tcp 10.217.0.7:8443: connect: connection refused" start-of-body= Oct 14 09:59:19 crc kubenswrapper[4698]: I1014 09:59:19.929573 4698 prober.go:107] "Probe failed" probeType="Readiness" 
pod="openshift-console-operator/console-operator-58897d9998-58d6k" podUID="0704231d-de7e-4317-80bd-9edbb5a0de5f" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.7:8443/readyz\": dial tcp 10.217.0.7:8443: connect: connection refused" Oct 14 09:59:19 crc kubenswrapper[4698]: I1014 09:59:19.930380 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-nt998" event={"ID":"22026c45-f849-4069-b5ad-4bc34d0ea6eb","Type":"ContainerStarted","Data":"72c815e0d72317a36cf3c41688d0615785c36b9b491502187eabd8c75de25f5b"} Oct 14 09:59:19 crc kubenswrapper[4698]: I1014 09:59:19.934706 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-c8pq6" event={"ID":"08880ea8-e0f2-4963-826f-9bee32ca8a64","Type":"ContainerStarted","Data":"07a26df22b4777f0659a8d4fce06d34a140c28086f4bc0a59e133f77d2991274"} Oct 14 09:59:19 crc kubenswrapper[4698]: I1014 09:59:19.937032 4698 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-vqx66 container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.21:8443/healthz\": dial tcp 10.217.0.21:8443: connect: connection refused" start-of-body= Oct 14 09:59:19 crc kubenswrapper[4698]: I1014 09:59:19.937072 4698 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-vqx66" podUID="c5df8277-9e0d-4d53-ad20-20a07ceb9515" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.21:8443/healthz\": dial tcp 10.217.0.21:8443: connect: connection refused" Oct 14 09:59:19 crc kubenswrapper[4698]: I1014 09:59:19.941211 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-mlnpq" event={"ID":"14729fe2-c0c5-49b8-9766-b35a97d66e8d","Type":"ContainerStarted","Data":"a53712bf92731d399fe877c7403dead1cf01ae1cf1bc10f8e4b58c9ef93252a6"} Oct 
14 09:59:19 crc kubenswrapper[4698]: I1014 09:59:19.941240 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-mlnpq" event={"ID":"14729fe2-c0c5-49b8-9766-b35a97d66e8d","Type":"ContainerStarted","Data":"df982f134e0f0a934704b6c61c7d0557ea695274bae916f50537f9c0cb2e0147"} Oct 14 09:59:19 crc kubenswrapper[4698]: I1014 09:59:19.943380 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-s65qn" event={"ID":"4bc0ae50-422f-4bd4-abba-008f5ca0467f","Type":"ContainerStarted","Data":"bd401e90a41fbd2125ff09313745673f61b98156373b38a0c6d73ffd6f9a8069"} Oct 14 09:59:19 crc kubenswrapper[4698]: I1014 09:59:19.965661 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-5k2p9" event={"ID":"c1ec2959-9fc5-4b98-8f9c-c21fc57e14d7","Type":"ContainerStarted","Data":"da9d69907891dc2a92839bd302d4af6f72add0a76bd136fef0abd9003e3ed7d9"} Oct 14 09:59:19 crc kubenswrapper[4698]: I1014 09:59:19.966137 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 14 09:59:19 crc kubenswrapper[4698]: E1014 09:59:19.966220 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-14 09:59:20.466202793 +0000 UTC m=+142.163502209 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 14 09:59:19 crc kubenswrapper[4698]: I1014 09:59:19.967078 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nthfk\" (UID: \"c6555f7d-6e37-41d2-8f98-b02aba5270ab\") " pod="openshift-image-registry/image-registry-697d97f7c8-nthfk" Oct 14 09:59:19 crc kubenswrapper[4698]: E1014 09:59:19.967893 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-14 09:59:20.467873052 +0000 UTC m=+142.165172468 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nthfk" (UID: "c6555f7d-6e37-41d2-8f98-b02aba5270ab") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 14 09:59:19 crc kubenswrapper[4698]: I1014 09:59:19.988585 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-kvctw" event={"ID":"d11979c8-404a-4ab4-9d27-814013edd944","Type":"ContainerStarted","Data":"a17d9c5580e4f336ae1fa118ee341e784432703113e70caf09b33d5830dbda46"} Oct 14 09:59:20 crc kubenswrapper[4698]: I1014 09:59:20.003427 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-7fm6f" event={"ID":"0bf22386-43f0-4d64-abb0-cdec28434502","Type":"ContainerStarted","Data":"69d8d401779aeef8ba306541e88ca6c8b561468c3b76db5c77013b564092b1b5"} Oct 14 09:59:20 crc kubenswrapper[4698]: I1014 09:59:20.004234 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-7fm6f" event={"ID":"0bf22386-43f0-4d64-abb0-cdec28434502","Type":"ContainerStarted","Data":"1cf81747020960f69bec412d567b799089b27d43efe3b80f61ec1a4588d035ff"} Oct 14 09:59:20 crc kubenswrapper[4698]: I1014 09:59:20.005431 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-7fm6f" Oct 14 09:59:20 crc kubenswrapper[4698]: I1014 09:59:20.016777 4698 patch_prober.go:28] interesting pod/downloads-7954f5f757-7fm6f container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.27:8080/\": dial tcp 10.217.0.27:8080: connect: connection refused" start-of-body= Oct 14 09:59:20 crc kubenswrapper[4698]: I1014 09:59:20.016825 
4698 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-7fm6f" podUID="0bf22386-43f0-4d64-abb0-cdec28434502" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.27:8080/\": dial tcp 10.217.0.27:8080: connect: connection refused" Oct 14 09:59:20 crc kubenswrapper[4698]: I1014 09:59:20.017634 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-hshkc" event={"ID":"1b225a95-8db7-45c2-ad7c-b4bc80b6e875","Type":"ContainerStarted","Data":"a91d85563c15a8b80a0e98c02acfe6677c0aa6f82fd46f98e80cd1d34f65207a"} Oct 14 09:59:20 crc kubenswrapper[4698]: I1014 09:59:20.017655 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-hshkc" event={"ID":"1b225a95-8db7-45c2-ad7c-b4bc80b6e875","Type":"ContainerStarted","Data":"f03d2236cdc21d35b00c0f987c118475eea57648eaefb5011c16e4f9fbdf1901"} Oct 14 09:59:20 crc kubenswrapper[4698]: I1014 09:59:20.022778 4698 generic.go:334] "Generic (PLEG): container finished" podID="2bc82783-cf82-45df-94d5-60de2f1a0bdf" containerID="0ad67aec7510de5ff4f4ea7853ada31a5f5cd534fbdacc0df74b2a298fc0598e" exitCode=0 Oct 14 09:59:20 crc kubenswrapper[4698]: I1014 09:59:20.022844 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-5x56z" event={"ID":"2bc82783-cf82-45df-94d5-60de2f1a0bdf","Type":"ContainerDied","Data":"0ad67aec7510de5ff4f4ea7853ada31a5f5cd534fbdacc0df74b2a298fc0598e"} Oct 14 09:59:20 crc kubenswrapper[4698]: I1014 09:59:20.022871 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-5x56z" event={"ID":"2bc82783-cf82-45df-94d5-60de2f1a0bdf","Type":"ContainerStarted","Data":"ecda509b17728c66d6dca72ee0c0f17947b8dfc73677403b7ad8c2272b1edc1f"} Oct 14 09:59:20 crc kubenswrapper[4698]: I1014 09:59:20.032802 4698 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-79s46" event={"ID":"69fe44d1-6015-4929-b197-0ea5f0167131","Type":"ContainerStarted","Data":"cc4f427d36a84911678bc5753053151a9eee33acff2752fb2bd9c4aa2470c13d"} Oct 14 09:59:20 crc kubenswrapper[4698]: I1014 09:59:20.068049 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 14 09:59:20 crc kubenswrapper[4698]: E1014 09:59:20.069998 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-14 09:59:20.569980393 +0000 UTC m=+142.267279809 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 14 09:59:20 crc kubenswrapper[4698]: I1014 09:59:20.075007 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-7mmrz" event={"ID":"b23ade35-ff68-4366-8b6a-9e24fcd4e0eb","Type":"ContainerStarted","Data":"f0cb031221620b428cee43703eee64618f09f78c9917ea42e0f5a3175fc9373c"} Oct 14 09:59:20 crc kubenswrapper[4698]: I1014 09:59:20.081097 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-8g7h6" event={"ID":"41f1faca-9336-4fb7-85a7-14541f2cf578","Type":"ContainerStarted","Data":"adbe82e7c4470d0cd8074d1f1771918853cc6fb3ed17d51b9e9acaae632e0005"} Oct 14 09:59:20 crc kubenswrapper[4698]: I1014 09:59:20.106841 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-lbk6k" event={"ID":"d746febc-7247-498c-86b1-8cb4640cbccc","Type":"ContainerStarted","Data":"313f17ca104537269f10049ad72f0650b99b8f57676480653278000c496be49f"} Oct 14 09:59:20 crc kubenswrapper[4698]: I1014 09:59:20.108032 4698 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-lbk6k container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.23:8443/healthz\": dial tcp 10.217.0.23:8443: connect: connection refused" start-of-body= Oct 14 09:59:20 crc kubenswrapper[4698]: I1014 09:59:20.108089 4698 prober.go:107] "Probe failed" probeType="Readiness" 
pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-lbk6k" podUID="d746febc-7247-498c-86b1-8cb4640cbccc" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.23:8443/healthz\": dial tcp 10.217.0.23:8443: connect: connection refused" Oct 14 09:59:20 crc kubenswrapper[4698]: I1014 09:59:20.169204 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nthfk\" (UID: \"c6555f7d-6e37-41d2-8f98-b02aba5270ab\") " pod="openshift-image-registry/image-registry-697d97f7c8-nthfk" Oct 14 09:59:20 crc kubenswrapper[4698]: E1014 09:59:20.170174 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-14 09:59:20.670161657 +0000 UTC m=+142.367461073 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nthfk" (UID: "c6555f7d-6e37-41d2-8f98-b02aba5270ab") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 14 09:59:20 crc kubenswrapper[4698]: I1014 09:59:20.184388 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-kqg88" event={"ID":"5eed092c-2837-48df-8eb4-8759235349b6","Type":"ContainerStarted","Data":"dfd8aa294be8ea39a8e5dab884ececd482de4b00af9774137229de1408a473e4"} Oct 14 09:59:20 crc kubenswrapper[4698]: I1014 09:59:20.185281 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-558db77b4-kqg88" Oct 14 09:59:20 crc kubenswrapper[4698]: I1014 09:59:20.186414 4698 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-kqg88 container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.15:6443/healthz\": dial tcp 10.217.0.15:6443: connect: connection refused" start-of-body= Oct 14 09:59:20 crc kubenswrapper[4698]: I1014 09:59:20.186474 4698 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-kqg88" podUID="5eed092c-2837-48df-8eb4-8759235349b6" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.15:6443/healthz\": dial tcp 10.217.0.15:6443: connect: connection refused" Oct 14 09:59:20 crc kubenswrapper[4698]: I1014 09:59:20.197215 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29340585-4sc7c" 
event={"ID":"aa6a1a6a-f7f3-402b-9568-89c9415eaaa4","Type":"ContainerStarted","Data":"48f088b78239a573bbc42b00299afbfb622447d602d8839fef04c765eaa675a0"} Oct 14 09:59:20 crc kubenswrapper[4698]: I1014 09:59:20.202082 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-w8pmp" event={"ID":"487b5c84-fe72-4b1c-8afa-15681f3d2c34","Type":"ContainerStarted","Data":"8f9cff62467da43de41245b9d1c671708e0a6b45baaff2f6c9b8fc8e808b77d9"} Oct 14 09:59:20 crc kubenswrapper[4698]: I1014 09:59:20.222002 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-mddmt" event={"ID":"a3f47cc4-719c-4e84-871b-ed52e9660cdb","Type":"ContainerStarted","Data":"5d547d153ea14353774df27bb5de8d8175b0ffed9b902c745140b2be275f3d9c"} Oct 14 09:59:20 crc kubenswrapper[4698]: I1014 09:59:20.232370 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-sjgjr" event={"ID":"9103615a-2665-4033-8114-259b0e56879f","Type":"ContainerStarted","Data":"1029ffe540cf2559a6cfedd1275a54e3bcb3bd8ebf99bc32dbe45db88a0c76cc"} Oct 14 09:59:20 crc kubenswrapper[4698]: I1014 09:59:20.240627 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-rtnz2" event={"ID":"72c6c34c-666c-41e3-8c36-5ac3578ff330","Type":"ContainerStarted","Data":"96697f4f87fe80b8d96df4b60822718a05f37dc00dcac794ab86ff17c570343b"} Oct 14 09:59:20 crc kubenswrapper[4698]: I1014 09:59:20.248733 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-cf62t" event={"ID":"bc1091b3-dceb-44a6-95b0-8048efec8032","Type":"ContainerStarted","Data":"bab3004b39b724dffaf1e0f6dc399352856b7504c6bf9d67e6c78168c005f8ae"} Oct 14 09:59:20 crc kubenswrapper[4698]: I1014 09:59:20.256239 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/machine-config-controller-84d6567774-sgf88" event={"ID":"9a5b4d2f-ca5c-4178-9e26-2cd2dccaa0b2","Type":"ContainerStarted","Data":"aef49ec886d2ccf494e6836b8f70eaf63d23c92136ebb0dd72a3edf08547d7c9"} Oct 14 09:59:20 crc kubenswrapper[4698]: I1014 09:59:20.259091 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-jg26j" event={"ID":"1d5afbbe-a0bd-492f-8b7d-691208ef27db","Type":"ContainerStarted","Data":"de578d073275cc8d563f9af5ef7b3a9ea43c10feabe6032fc611e1cf8ed0d517"} Oct 14 09:59:20 crc kubenswrapper[4698]: I1014 09:59:20.259535 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7777fb866f-jg26j" Oct 14 09:59:20 crc kubenswrapper[4698]: I1014 09:59:20.261021 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-qprfx" event={"ID":"193ddda3-4410-402c-a198-33ff7ea3a740","Type":"ContainerStarted","Data":"d71a7430b36041ef482c3a137432d629d5e5fda12aab83bceadd61b56a3668b1"} Oct 14 09:59:20 crc kubenswrapper[4698]: I1014 09:59:20.270405 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 14 09:59:20 crc kubenswrapper[4698]: E1014 09:59:20.271597 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-14 09:59:20.771578208 +0000 UTC m=+142.468877624 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 14 09:59:20 crc kubenswrapper[4698]: I1014 09:59:20.289417 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-m9mft" event={"ID":"87bb9fc3-f75a-48d6-a8e6-8f8d1aa7f084","Type":"ContainerStarted","Data":"f5f93cb2ce07b59a864c6136e8d41bcb951067304dcb63d58f60706f24194982"} Oct 14 09:59:20 crc kubenswrapper[4698]: I1014 09:59:20.289465 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-m9mft" event={"ID":"87bb9fc3-f75a-48d6-a8e6-8f8d1aa7f084","Type":"ContainerStarted","Data":"94b0ed38faa15d9565b8f5ec3604e720ed5486a0a9837c5641f72c7021fd295b"} Oct 14 09:59:20 crc kubenswrapper[4698]: I1014 09:59:20.303717 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-kftst" event={"ID":"b8186130-0e09-455d-92c7-05e4c0af37de","Type":"ContainerStarted","Data":"0c3fbe9bf91d9dfee815cec3736c774f1cb380edb8d380682b625876081e646c"} Oct 14 09:59:20 crc kubenswrapper[4698]: I1014 09:59:20.304523 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-kftst" Oct 14 09:59:20 crc kubenswrapper[4698]: I1014 09:59:20.308057 4698 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-kftst container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get 
\"https://10.217.0.31:8443/healthz\": dial tcp 10.217.0.31:8443: connect: connection refused" start-of-body= Oct 14 09:59:20 crc kubenswrapper[4698]: I1014 09:59:20.308111 4698 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-kftst" podUID="b8186130-0e09-455d-92c7-05e4c0af37de" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.31:8443/healthz\": dial tcp 10.217.0.31:8443: connect: connection refused" Oct 14 09:59:20 crc kubenswrapper[4698]: I1014 09:59:20.312142 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-p266j" event={"ID":"10f74f9f-3784-4bb5-8fcf-acc6d625f363","Type":"ContainerStarted","Data":"f19f752e44aca780d8fdad98b603b6a13a55d4b645a6038879a6aee2f4632c3f"} Oct 14 09:59:20 crc kubenswrapper[4698]: I1014 09:59:20.319566 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-879f6c89f-c6gg9" podStartSLOduration=122.319549258 podStartE2EDuration="2m2.319549258s" podCreationTimestamp="2025-10-14 09:57:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 09:59:20.318675122 +0000 UTC m=+142.015974558" watchObservedRunningTime="2025-10-14 09:59:20.319549258 +0000 UTC m=+142.016848674" Oct 14 09:59:20 crc kubenswrapper[4698]: I1014 09:59:20.326262 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-h8xtm" event={"ID":"012a71ed-3195-49f8-bde1-f5455806e0f0","Type":"ContainerStarted","Data":"5f1f39aa643166394eb3f79631960a9d91a5aa159834a44e9b1c42da06fdd8ce"} Oct 14 09:59:20 crc kubenswrapper[4698]: I1014 09:59:20.328223 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-xnfjx" event={"ID":"3f74db80-958b-4799-864f-792892d9903e","Type":"ContainerStarted","Data":"b91f328e8d36be973f8c88cfa34e1cc6301be7aa70290bb374f97a1a6dac997b"} Oct 14 09:59:20 crc kubenswrapper[4698]: I1014 09:59:20.360200 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-svzm6" event={"ID":"33a325fe-116c-49f3-bbfc-0ea7c688e3df","Type":"ContainerStarted","Data":"ce6815d8ba04f4a9dd48bb7315ca9184f04c3c961f67ac5f92ee6eeccd3a4805"} Oct 14 09:59:20 crc kubenswrapper[4698]: I1014 09:59:20.365822 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-vdj26" event={"ID":"818edbba-2627-4978-8a34-005689059b24","Type":"ContainerStarted","Data":"47cf11ab504a1ce643fa8939f8e6e4aff593a997a7ffabfaaf6fec64eb517105"} Oct 14 09:59:20 crc kubenswrapper[4698]: I1014 09:59:20.368134 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-t5x7p" event={"ID":"4886c701-aad2-4ae4-bb99-0221728df342","Type":"ContainerStarted","Data":"7cdf631fb498d8baa07ac36384ce78d93319700708a8ee09c3c2051b108eb698"} Oct 14 09:59:20 crc kubenswrapper[4698]: I1014 09:59:20.377628 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nthfk\" (UID: \"c6555f7d-6e37-41d2-8f98-b02aba5270ab\") " pod="openshift-image-registry/image-registry-697d97f7c8-nthfk" Oct 14 09:59:20 crc kubenswrapper[4698]: E1014 09:59:20.379692 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2025-10-14 09:59:20.879669995 +0000 UTC m=+142.576969411 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nthfk" (UID: "c6555f7d-6e37-41d2-8f98-b02aba5270ab") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 14 09:59:20 crc kubenswrapper[4698]: I1014 09:59:20.381676 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-ttftp" event={"ID":"68bf4f74-6117-4975-8c5f-b5b35b97c787","Type":"ContainerStarted","Data":"5d9b82c57ea34fba91587ad5c7f87250c06de9e50edeebfcfa2cc86bb882514f"} Oct 14 09:59:20 crc kubenswrapper[4698]: I1014 09:59:20.381726 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-ttftp" event={"ID":"68bf4f74-6117-4975-8c5f-b5b35b97c787","Type":"ContainerStarted","Data":"089d277d730e7d2572376027efa953ee80f256080d2d1332ba78bb72bd4eaa77"} Oct 14 09:59:20 crc kubenswrapper[4698]: I1014 09:59:20.391016 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-f47kf" event={"ID":"abe6a35d-8cd2-4749-b9cf-8d11f6169470","Type":"ContainerStarted","Data":"1d0b4bddb1b33c273883cb204c8a2334846f95248f79d824da37449983e6897e"} Oct 14 09:59:20 crc kubenswrapper[4698]: I1014 09:59:20.420941 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-879f6c89f-c6gg9" Oct 14 09:59:20 crc kubenswrapper[4698]: I1014 09:59:20.482388 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 14 09:59:20 crc kubenswrapper[4698]: E1014 09:59:20.484658 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-14 09:59:20.98463613 +0000 UTC m=+142.681935556 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 14 09:59:20 crc kubenswrapper[4698]: I1014 09:59:20.588623 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nthfk\" (UID: \"c6555f7d-6e37-41d2-8f98-b02aba5270ab\") " pod="openshift-image-registry/image-registry-697d97f7c8-nthfk" Oct 14 09:59:20 crc kubenswrapper[4698]: E1014 09:59:20.595005 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-14 09:59:21.094988034 +0000 UTC m=+142.792287450 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nthfk" (UID: "c6555f7d-6e37-41d2-8f98-b02aba5270ab") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 14 09:59:20 crc kubenswrapper[4698]: I1014 09:59:20.690054 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 14 09:59:20 crc kubenswrapper[4698]: E1014 09:59:20.690421 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-14 09:59:21.190403138 +0000 UTC m=+142.887702554 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 14 09:59:20 crc kubenswrapper[4698]: I1014 09:59:20.791175 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nthfk\" (UID: \"c6555f7d-6e37-41d2-8f98-b02aba5270ab\") " pod="openshift-image-registry/image-registry-697d97f7c8-nthfk" Oct 14 09:59:20 crc kubenswrapper[4698]: E1014 09:59:20.791885 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-14 09:59:21.2918684 +0000 UTC m=+142.989167806 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nthfk" (UID: "c6555f7d-6e37-41d2-8f98-b02aba5270ab") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 14 09:59:20 crc kubenswrapper[4698]: I1014 09:59:20.800203 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console-operator/console-operator-58897d9998-58d6k" podStartSLOduration=122.800184895 podStartE2EDuration="2m2.800184895s" podCreationTimestamp="2025-10-14 09:57:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 09:59:20.799520375 +0000 UTC m=+142.496819791" watchObservedRunningTime="2025-10-14 09:59:20.800184895 +0000 UTC m=+142.497484311" Oct 14 09:59:20 crc kubenswrapper[4698]: I1014 09:59:20.879583 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-ttftp" podStartSLOduration=122.879565098 podStartE2EDuration="2m2.879565098s" podCreationTimestamp="2025-10-14 09:57:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 09:59:20.877166957 +0000 UTC m=+142.574466373" watchObservedRunningTime="2025-10-14 09:59:20.879565098 +0000 UTC m=+142.576864504" Oct 14 09:59:20 crc kubenswrapper[4698]: I1014 09:59:20.894980 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod 
\"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 14 09:59:20 crc kubenswrapper[4698]: E1014 09:59:20.895748 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-14 09:59:21.395728793 +0000 UTC m=+143.093028209 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 14 09:59:20 crc kubenswrapper[4698]: I1014 09:59:20.911758 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-vqx66" podStartSLOduration=121.911744174 podStartE2EDuration="2m1.911744174s" podCreationTimestamp="2025-10-14 09:57:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 09:59:20.910141987 +0000 UTC m=+142.607441423" watchObservedRunningTime="2025-10-14 09:59:20.911744174 +0000 UTC m=+142.609043590" Oct 14 09:59:20 crc kubenswrapper[4698]: I1014 09:59:20.972557 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/downloads-7954f5f757-7fm6f" podStartSLOduration=122.972542821 podStartE2EDuration="2m2.972542821s" podCreationTimestamp="2025-10-14 09:57:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 
09:59:20.943042014 +0000 UTC m=+142.640341440" watchObservedRunningTime="2025-10-14 09:59:20.972542821 +0000 UTC m=+142.669842237" Oct 14 09:59:20 crc kubenswrapper[4698]: I1014 09:59:20.973105 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-5444994796-qprfx" podStartSLOduration=122.973102507 podStartE2EDuration="2m2.973102507s" podCreationTimestamp="2025-10-14 09:57:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 09:59:20.97113903 +0000 UTC m=+142.668438446" watchObservedRunningTime="2025-10-14 09:59:20.973102507 +0000 UTC m=+142.670401923" Oct 14 09:59:21 crc kubenswrapper[4698]: I1014 09:59:21.007151 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nthfk\" (UID: \"c6555f7d-6e37-41d2-8f98-b02aba5270ab\") " pod="openshift-image-registry/image-registry-697d97f7c8-nthfk" Oct 14 09:59:21 crc kubenswrapper[4698]: E1014 09:59:21.007495 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-14 09:59:21.507484348 +0000 UTC m=+143.204783764 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nthfk" (UID: "c6555f7d-6e37-41d2-8f98-b02aba5270ab") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 14 09:59:21 crc kubenswrapper[4698]: I1014 09:59:21.007890 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-kftst" podStartSLOduration=122.007875659 podStartE2EDuration="2m2.007875659s" podCreationTimestamp="2025-10-14 09:57:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 09:59:21.005369026 +0000 UTC m=+142.702668462" watchObservedRunningTime="2025-10-14 09:59:21.007875659 +0000 UTC m=+142.705175065" Oct 14 09:59:21 crc kubenswrapper[4698]: I1014 09:59:21.031024 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-5444994796-qprfx" Oct 14 09:59:21 crc kubenswrapper[4698]: I1014 09:59:21.050490 4698 patch_prober.go:28] interesting pod/router-default-5444994796-qprfx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 14 09:59:21 crc kubenswrapper[4698]: [-]has-synced failed: reason withheld Oct 14 09:59:21 crc kubenswrapper[4698]: [+]process-running ok Oct 14 09:59:21 crc kubenswrapper[4698]: healthz check failed Oct 14 09:59:21 crc kubenswrapper[4698]: I1014 09:59:21.050532 4698 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-qprfx" podUID="193ddda3-4410-402c-a198-33ff7ea3a740" 
containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 14 09:59:21 crc kubenswrapper[4698]: I1014 09:59:21.078248 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-m9mft" podStartSLOduration=123.078230127 podStartE2EDuration="2m3.078230127s" podCreationTimestamp="2025-10-14 09:57:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 09:59:21.033654767 +0000 UTC m=+142.730954183" watchObservedRunningTime="2025-10-14 09:59:21.078230127 +0000 UTC m=+142.775529543" Oct 14 09:59:21 crc kubenswrapper[4698]: I1014 09:59:21.078413 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication-operator/authentication-operator-69f744f599-hshkc" podStartSLOduration=123.078408762 podStartE2EDuration="2m3.078408762s" podCreationTimestamp="2025-10-14 09:57:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 09:59:21.072308293 +0000 UTC m=+142.769607709" watchObservedRunningTime="2025-10-14 09:59:21.078408762 +0000 UTC m=+142.775708178" Oct 14 09:59:21 crc kubenswrapper[4698]: I1014 09:59:21.102619 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns-operator/dns-operator-744455d44c-mlnpq" podStartSLOduration=123.102603093 podStartE2EDuration="2m3.102603093s" podCreationTimestamp="2025-10-14 09:57:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 09:59:21.101511831 +0000 UTC m=+142.798811287" watchObservedRunningTime="2025-10-14 09:59:21.102603093 +0000 UTC m=+142.799902519" Oct 14 09:59:21 crc kubenswrapper[4698]: I1014 09:59:21.110293 4698 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 14 09:59:21 crc kubenswrapper[4698]: E1014 09:59:21.110637 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-14 09:59:21.610620799 +0000 UTC m=+143.307920215 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 14 09:59:21 crc kubenswrapper[4698]: I1014 09:59:21.186066 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-config-operator/openshift-config-operator-7777fb866f-jg26j" podStartSLOduration=123.186051546 podStartE2EDuration="2m3.186051546s" podCreationTimestamp="2025-10-14 09:57:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 09:59:21.183376198 +0000 UTC m=+142.880675624" watchObservedRunningTime="2025-10-14 09:59:21.186051546 +0000 UTC m=+142.883350962" Oct 14 09:59:21 crc kubenswrapper[4698]: I1014 09:59:21.186526 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-558db77b4-kqg88" podStartSLOduration=123.18652157 
podStartE2EDuration="2m3.18652157s" podCreationTimestamp="2025-10-14 09:57:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 09:59:21.160583748 +0000 UTC m=+142.857883164" watchObservedRunningTime="2025-10-14 09:59:21.18652157 +0000 UTC m=+142.883820986" Oct 14 09:59:21 crc kubenswrapper[4698]: I1014 09:59:21.213744 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nthfk\" (UID: \"c6555f7d-6e37-41d2-8f98-b02aba5270ab\") " pod="openshift-image-registry/image-registry-697d97f7c8-nthfk" Oct 14 09:59:21 crc kubenswrapper[4698]: E1014 09:59:21.214117 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-14 09:59:21.714104491 +0000 UTC m=+143.411403907 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nthfk" (UID: "c6555f7d-6e37-41d2-8f98-b02aba5270ab") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 14 09:59:21 crc kubenswrapper[4698]: I1014 09:59:21.271211 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-79s46" podStartSLOduration=124.271187569 podStartE2EDuration="2m4.271187569s" podCreationTimestamp="2025-10-14 09:57:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 09:59:21.269124308 +0000 UTC m=+142.966423734" watchObservedRunningTime="2025-10-14 09:59:21.271187569 +0000 UTC m=+142.968486985" Oct 14 09:59:21 crc kubenswrapper[4698]: I1014 09:59:21.314151 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 14 09:59:21 crc kubenswrapper[4698]: E1014 09:59:21.314400 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-14 09:59:21.814383958 +0000 UTC m=+143.511683374 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 14 09:59:21 crc kubenswrapper[4698]: I1014 09:59:21.369307 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca-operator/service-ca-operator-777779d784-rtnz2" podStartSLOduration=122.369291652 podStartE2EDuration="2m2.369291652s" podCreationTimestamp="2025-10-14 09:57:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 09:59:21.311281397 +0000 UTC m=+143.008580813" watchObservedRunningTime="2025-10-14 09:59:21.369291652 +0000 UTC m=+143.066591068" Oct 14 09:59:21 crc kubenswrapper[4698]: I1014 09:59:21.397603 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-p266j" podStartSLOduration=124.397581793 podStartE2EDuration="2m4.397581793s" podCreationTimestamp="2025-10-14 09:57:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 09:59:21.37091044 +0000 UTC m=+143.068209856" watchObservedRunningTime="2025-10-14 09:59:21.397581793 +0000 UTC m=+143.094881219" Oct 14 09:59:21 crc kubenswrapper[4698]: I1014 09:59:21.399467 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29340585-4sc7c" podStartSLOduration=123.399458589 podStartE2EDuration="2m3.399458589s" podCreationTimestamp="2025-10-14 09:57:18 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 09:59:21.395873943 +0000 UTC m=+143.093173379" watchObservedRunningTime="2025-10-14 09:59:21.399458589 +0000 UTC m=+143.096758005" Oct 14 09:59:21 crc kubenswrapper[4698]: I1014 09:59:21.410174 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-s65qn" event={"ID":"4bc0ae50-422f-4bd4-abba-008f5ca0467f","Type":"ContainerStarted","Data":"0a525cdfa885a04cd4f77d0ce3fb4b10ec6509016c62a1629fc229d6680996b0"} Oct 14 09:59:21 crc kubenswrapper[4698]: I1014 09:59:21.410493 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-s65qn" Oct 14 09:59:21 crc kubenswrapper[4698]: I1014 09:59:21.412785 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-5k2p9" event={"ID":"c1ec2959-9fc5-4b98-8f9c-c21fc57e14d7","Type":"ContainerStarted","Data":"27a03f72d95ce098305607d0be3d51f79d3a1f2f8894a582ab03b8cc01f4abfe"} Oct 14 09:59:21 crc kubenswrapper[4698]: I1014 09:59:21.415670 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nthfk\" (UID: \"c6555f7d-6e37-41d2-8f98-b02aba5270ab\") " pod="openshift-image-registry/image-registry-697d97f7c8-nthfk" Oct 14 09:59:21 crc kubenswrapper[4698]: I1014 09:59:21.415742 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-cf62t" podStartSLOduration=123.415715986 podStartE2EDuration="2m3.415715986s" podCreationTimestamp="2025-10-14 09:57:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 
+0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 09:59:21.413595934 +0000 UTC m=+143.110895350" watchObservedRunningTime="2025-10-14 09:59:21.415715986 +0000 UTC m=+143.113015402" Oct 14 09:59:21 crc kubenswrapper[4698]: E1014 09:59:21.420744 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-14 09:59:21.920722074 +0000 UTC m=+143.618021700 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nthfk" (UID: "c6555f7d-6e37-41d2-8f98-b02aba5270ab") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 14 09:59:21 crc kubenswrapper[4698]: I1014 09:59:21.420806 4698 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-s65qn container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.38:5443/healthz\": dial tcp 10.217.0.38:5443: connect: connection refused" start-of-body= Oct 14 09:59:21 crc kubenswrapper[4698]: I1014 09:59:21.420865 4698 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-s65qn" podUID="4bc0ae50-422f-4bd4-abba-008f5ca0467f" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.38:5443/healthz\": dial tcp 10.217.0.38:5443: connect: connection refused" Oct 14 09:59:21 crc kubenswrapper[4698]: I1014 09:59:21.441182 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-7mmrz" 
event={"ID":"b23ade35-ff68-4366-8b6a-9e24fcd4e0eb","Type":"ContainerStarted","Data":"2f2fb2120c1de0e574ab67780ccd6e15457172d03a51677ffe2d9a7fec204223"} Oct 14 09:59:21 crc kubenswrapper[4698]: I1014 09:59:21.454722 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-f9d7485db-f47kf" podStartSLOduration=123.454705002 podStartE2EDuration="2m3.454705002s" podCreationTimestamp="2025-10-14 09:57:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 09:59:21.453728974 +0000 UTC m=+143.151028400" watchObservedRunningTime="2025-10-14 09:59:21.454705002 +0000 UTC m=+143.152004418" Oct 14 09:59:21 crc kubenswrapper[4698]: I1014 09:59:21.459726 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-kvctw" event={"ID":"d11979c8-404a-4ab4-9d27-814013edd944","Type":"ContainerStarted","Data":"17b5ee89c7d3f722170161b1083c3cbc924ec641d82c950bb0996caff155dc71"} Oct 14 09:59:21 crc kubenswrapper[4698]: I1014 09:59:21.464944 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-svzm6" event={"ID":"33a325fe-116c-49f3-bbfc-0ea7c688e3df","Type":"ContainerStarted","Data":"f6d71e510fb2f3e16f3f94951eb0631017e13b88c496618947e54860cd04e0a2"} Oct 14 09:59:21 crc kubenswrapper[4698]: I1014 09:59:21.465009 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-svzm6" event={"ID":"33a325fe-116c-49f3-bbfc-0ea7c688e3df","Type":"ContainerStarted","Data":"a2431eb9f0ed4dc2db14cdb2d8cf27214ec05460d67a8ea82d3c296f8b778382"} Oct 14 09:59:21 crc kubenswrapper[4698]: I1014 09:59:21.470827 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-c8pq6" 
event={"ID":"08880ea8-e0f2-4963-826f-9bee32ca8a64","Type":"ContainerStarted","Data":"a077c8f9ca77be05190fce67330ef6bed473fbda5d0fe33a3aec498588952dc3"} Oct 14 09:59:21 crc kubenswrapper[4698]: I1014 09:59:21.472255 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-c8pq6" Oct 14 09:59:21 crc kubenswrapper[4698]: I1014 09:59:21.476729 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-857f4d67dd-7mmrz" podStartSLOduration=123.476709599 podStartE2EDuration="2m3.476709599s" podCreationTimestamp="2025-10-14 09:57:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 09:59:21.475283227 +0000 UTC m=+143.172582663" watchObservedRunningTime="2025-10-14 09:59:21.476709599 +0000 UTC m=+143.174009015" Oct 14 09:59:21 crc kubenswrapper[4698]: I1014 09:59:21.486024 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29340585-4sc7c" event={"ID":"aa6a1a6a-f7f3-402b-9568-89c9415eaaa4","Type":"ContainerStarted","Data":"e44571d8c0581614f90fc1c2803d9e04b635a448b30211d6b555fd1ea83951f4"} Oct 14 09:59:21 crc kubenswrapper[4698]: I1014 09:59:21.493950 4698 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-c8pq6 container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.26:8080/healthz\": dial tcp 10.217.0.26:8080: connect: connection refused" start-of-body= Oct 14 09:59:21 crc kubenswrapper[4698]: I1014 09:59:21.494029 4698 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-c8pq6" podUID="08880ea8-e0f2-4963-826f-9bee32ca8a64" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.26:8080/healthz\": dial tcp 
10.217.0.26:8080: connect: connection refused" Oct 14 09:59:21 crc kubenswrapper[4698]: I1014 09:59:21.514814 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-mddmt" event={"ID":"a3f47cc4-719c-4e84-871b-ed52e9660cdb","Type":"ContainerStarted","Data":"10dd4e55283f1679ebf3e7c665f57984aa68fc25ba45bc9f95986bc4ff5283bc"} Oct 14 09:59:21 crc kubenswrapper[4698]: I1014 09:59:21.514879 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-mddmt" event={"ID":"a3f47cc4-719c-4e84-871b-ed52e9660cdb","Type":"ContainerStarted","Data":"5ec3a8d82d43cedc2224ec3b75cf0046c87aff90af093ae6898131434714e0cc"} Oct 14 09:59:21 crc kubenswrapper[4698]: I1014 09:59:21.516826 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 14 09:59:21 crc kubenswrapper[4698]: E1014 09:59:21.517468 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-14 09:59:22.017448627 +0000 UTC m=+143.714748043 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 14 09:59:21 crc kubenswrapper[4698]: I1014 09:59:21.535572 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-5k2p9" podStartSLOduration=123.535520228 podStartE2EDuration="2m3.535520228s" podCreationTimestamp="2025-10-14 09:57:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 09:59:21.532716755 +0000 UTC m=+143.230016171" watchObservedRunningTime="2025-10-14 09:59:21.535520228 +0000 UTC m=+143.232819644" Oct 14 09:59:21 crc kubenswrapper[4698]: I1014 09:59:21.544918 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-8g7h6" event={"ID":"41f1faca-9336-4fb7-85a7-14541f2cf578","Type":"ContainerStarted","Data":"816e8192cf99bd91028a0389838918191a9c3ba087db04e10d6f36f98f20ddf3"} Oct 14 09:59:21 crc kubenswrapper[4698]: I1014 09:59:21.544973 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-8g7h6" event={"ID":"41f1faca-9336-4fb7-85a7-14541f2cf578","Type":"ContainerStarted","Data":"fa5e8128742ffbc54ddf10715ce72c2e54a6e712b9e66edccacb0a5d8832fdcf"} Oct 14 09:59:21 crc kubenswrapper[4698]: I1014 09:59:21.545703 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-8g7h6" Oct 14 09:59:21 crc 
kubenswrapper[4698]: I1014 09:59:21.564977 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-s65qn" podStartSLOduration=122.564961303 podStartE2EDuration="2m2.564961303s" podCreationTimestamp="2025-10-14 09:57:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 09:59:21.563063357 +0000 UTC m=+143.260362773" watchObservedRunningTime="2025-10-14 09:59:21.564961303 +0000 UTC m=+143.262260719" Oct 14 09:59:21 crc kubenswrapper[4698]: I1014 09:59:21.578114 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-cf62t" event={"ID":"bc1091b3-dceb-44a6-95b0-8048efec8032","Type":"ContainerStarted","Data":"e6dea3fc6e35553ebeb06c7264dce281e374d6a35b80048102cdb3e31b5fef8f"} Oct 14 09:59:21 crc kubenswrapper[4698]: I1014 09:59:21.604647 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-sgf88" event={"ID":"9a5b4d2f-ca5c-4178-9e26-2cd2dccaa0b2","Type":"ContainerStarted","Data":"637b444ce783748896d570f3348fa5bbaaf05d595bba81ade7393e31782f9fd2"} Oct 14 09:59:21 crc kubenswrapper[4698]: I1014 09:59:21.604715 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-sgf88" event={"ID":"9a5b4d2f-ca5c-4178-9e26-2cd2dccaa0b2","Type":"ContainerStarted","Data":"4538ea1c9c581d44a4874bd834452795a0988df082a73075c40197f6572ff2cf"} Oct 14 09:59:21 crc kubenswrapper[4698]: I1014 09:59:21.607338 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-kvctw" podStartSLOduration=6.607316458 podStartE2EDuration="6.607316458s" podCreationTimestamp="2025-10-14 09:59:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 09:59:21.605928367 +0000 UTC m=+143.303227793" watchObservedRunningTime="2025-10-14 09:59:21.607316458 +0000 UTC m=+143.304615874" Oct 14 09:59:21 crc kubenswrapper[4698]: I1014 09:59:21.618057 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nthfk\" (UID: \"c6555f7d-6e37-41d2-8f98-b02aba5270ab\") " pod="openshift-image-registry/image-registry-697d97f7c8-nthfk" Oct 14 09:59:21 crc kubenswrapper[4698]: E1014 09:59:21.618622 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-14 09:59:22.11860706 +0000 UTC m=+143.815906476 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nthfk" (UID: "c6555f7d-6e37-41d2-8f98-b02aba5270ab") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 14 09:59:21 crc kubenswrapper[4698]: I1014 09:59:21.622847 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-5x56z" event={"ID":"2bc82783-cf82-45df-94d5-60de2f1a0bdf","Type":"ContainerStarted","Data":"ac47eeb05ea308a36ce4d02a0b09de5dd3a59ecc763571fbef7d50838f69989a"} Oct 14 09:59:21 crc kubenswrapper[4698]: I1014 09:59:21.629100 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-h8xtm" event={"ID":"012a71ed-3195-49f8-bde1-f5455806e0f0","Type":"ContainerStarted","Data":"82c6ed313eb3d7a8d68634d02eff042a522539971a408f1fb77fa5da7f4d9e26"} Oct 14 09:59:21 crc kubenswrapper[4698]: I1014 09:59:21.637622 4698 generic.go:334] "Generic (PLEG): container finished" podID="4886c701-aad2-4ae4-bb99-0221728df342" containerID="4088c5e3baa05c697f848f7ad7c304849ae20402dc7de9376c8e8997fcb49570" exitCode=0 Oct 14 09:59:21 crc kubenswrapper[4698]: I1014 09:59:21.637692 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-t5x7p" event={"ID":"4886c701-aad2-4ae4-bb99-0221728df342","Type":"ContainerDied","Data":"4088c5e3baa05c697f848f7ad7c304849ae20402dc7de9376c8e8997fcb49570"} Oct 14 09:59:21 crc kubenswrapper[4698]: I1014 09:59:21.638263 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-svzm6" podStartSLOduration=123.638237867 podStartE2EDuration="2m3.638237867s" podCreationTimestamp="2025-10-14 
09:57:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 09:59:21.636386942 +0000 UTC m=+143.333686378" watchObservedRunningTime="2025-10-14 09:59:21.638237867 +0000 UTC m=+143.335537283" Oct 14 09:59:21 crc kubenswrapper[4698]: I1014 09:59:21.698596 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-sjgjr" event={"ID":"9103615a-2665-4033-8114-259b0e56879f","Type":"ContainerStarted","Data":"7f435a2a0908d9d0cb73c3aec11701664e7c5114d05ad2a2cf02a24ac56d5f27"} Oct 14 09:59:21 crc kubenswrapper[4698]: I1014 09:59:21.725205 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 14 09:59:21 crc kubenswrapper[4698]: I1014 09:59:21.725971 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-7p6xz" event={"ID":"41d42df0-f10f-4f75-8481-adb4d51c341f","Type":"ContainerStarted","Data":"d495cc52418aae1e6464e5f9c470f39b8d761f52394d5167aa57b9dcaac7a84b"} Oct 14 09:59:21 crc kubenswrapper[4698]: E1014 09:59:21.726814 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-14 09:59:22.22679265 +0000 UTC m=+143.924092066 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 14 09:59:21 crc kubenswrapper[4698]: I1014 09:59:21.728122 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-mddmt" podStartSLOduration=123.728096378 podStartE2EDuration="2m3.728096378s" podCreationTimestamp="2025-10-14 09:57:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 09:59:21.682940521 +0000 UTC m=+143.380239947" watchObservedRunningTime="2025-10-14 09:59:21.728096378 +0000 UTC m=+143.425395794" Oct 14 09:59:21 crc kubenswrapper[4698]: I1014 09:59:21.771047 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-nt998" event={"ID":"22026c45-f849-4069-b5ad-4bc34d0ea6eb","Type":"ContainerStarted","Data":"288c586ef8ea890599c76390653a0b793b59e6fdf8e75da76e384818d349007b"} Oct 14 09:59:21 crc kubenswrapper[4698]: I1014 09:59:21.771098 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-nt998" event={"ID":"22026c45-f849-4069-b5ad-4bc34d0ea6eb","Type":"ContainerStarted","Data":"73f305af29f1a7108e2d3fb0c0f8532e706e6305e5d1316eaac689d03a8165fe"} Oct 14 09:59:21 crc kubenswrapper[4698]: I1014 09:59:21.771673 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-nt998" Oct 14 09:59:21 crc kubenswrapper[4698]: I1014 09:59:21.795567 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/machine-config-operator-74547568cd-vdj26" event={"ID":"818edbba-2627-4978-8a34-005689059b24","Type":"ContainerStarted","Data":"426ccff17833c6eea574527a6c4a1897d172ff8d21bdfc5552e4080150a57e19"} Oct 14 09:59:21 crc kubenswrapper[4698]: I1014 09:59:21.795623 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-vdj26" event={"ID":"818edbba-2627-4978-8a34-005689059b24","Type":"ContainerStarted","Data":"92fd1ca2da35f6b9f195bd07f6026162ac97329333dc90aa597e31c36ed536e7"} Oct 14 09:59:21 crc kubenswrapper[4698]: I1014 09:59:21.827502 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-c8pq6" podStartSLOduration=123.827483619 podStartE2EDuration="2m3.827483619s" podCreationTimestamp="2025-10-14 09:57:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 09:59:21.74007481 +0000 UTC m=+143.437374246" watchObservedRunningTime="2025-10-14 09:59:21.827483619 +0000 UTC m=+143.524783035" Oct 14 09:59:21 crc kubenswrapper[4698]: I1014 09:59:21.838677 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nthfk\" (UID: \"c6555f7d-6e37-41d2-8f98-b02aba5270ab\") " pod="openshift-image-registry/image-registry-697d97f7c8-nthfk" Oct 14 09:59:21 crc kubenswrapper[4698]: E1014 09:59:21.839408 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-14 09:59:22.339387269 +0000 UTC m=+144.036686685 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nthfk" (UID: "c6555f7d-6e37-41d2-8f98-b02aba5270ab") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 14 09:59:21 crc kubenswrapper[4698]: I1014 09:59:21.850758 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-ptt5h" event={"ID":"9b71de96-0379-46a7-af50-9b831b50268b","Type":"ContainerStarted","Data":"82cb13a9172fe10b27fc5cab983531d5023212f03bb83d1ddb3bc4a9ddbca172"} Oct 14 09:59:21 crc kubenswrapper[4698]: I1014 09:59:21.879668 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-nt998" podStartSLOduration=6.879655232 podStartE2EDuration="6.879655232s" podCreationTimestamp="2025-10-14 09:59:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 09:59:21.878399706 +0000 UTC m=+143.575699122" watchObservedRunningTime="2025-10-14 09:59:21.879655232 +0000 UTC m=+143.576954638" Oct 14 09:59:21 crc kubenswrapper[4698]: I1014 09:59:21.881093 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-8g7h6" podStartSLOduration=122.881088025 podStartE2EDuration="2m2.881088025s" podCreationTimestamp="2025-10-14 09:57:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 09:59:21.828619662 +0000 UTC m=+143.525919078" watchObservedRunningTime="2025-10-14 09:59:21.881088025 +0000 UTC m=+143.578387441" Oct 
14 09:59:21 crc kubenswrapper[4698]: I1014 09:59:21.892010 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-rtnz2" event={"ID":"72c6c34c-666c-41e3-8c36-5ac3578ff330","Type":"ContainerStarted","Data":"1b2c1661c795ada4c5e2f5ed502c774a97a718bd20ab6b8c4c29cb3fb6e27da6"} Oct 14 09:59:21 crc kubenswrapper[4698]: I1014 09:59:21.909954 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-xnfjx" event={"ID":"3f74db80-958b-4799-864f-792892d9903e","Type":"ContainerStarted","Data":"526ecafc9b969f779a15221490a4c637d4e1c2fa40cfad85fa5c16af0ef113b2"} Oct 14 09:59:21 crc kubenswrapper[4698]: I1014 09:59:21.921577 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-w8pmp" event={"ID":"487b5c84-fe72-4b1c-8afa-15681f3d2c34","Type":"ContainerStarted","Data":"9c9b25de33ba6d3222c7f59a7d7734217c020fbdbc7e94ea4125aec135112cef"} Oct 14 09:59:21 crc kubenswrapper[4698]: I1014 09:59:21.924395 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-kftst" event={"ID":"b8186130-0e09-455d-92c7-05e4c0af37de","Type":"ContainerStarted","Data":"7d062fbea89203fcad2e63937af43c0ec8b5db10c833977addbd3076459f43c1"} Oct 14 09:59:21 crc kubenswrapper[4698]: I1014 09:59:21.947796 4698 patch_prober.go:28] interesting pod/downloads-7954f5f757-7fm6f container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.27:8080/\": dial tcp 10.217.0.27:8080: connect: connection refused" start-of-body= Oct 14 09:59:21 crc kubenswrapper[4698]: I1014 09:59:21.947868 4698 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-7fm6f" podUID="0bf22386-43f0-4d64-abb0-cdec28434502" containerName="download-server" probeResult="failure" output="Get 
\"http://10.217.0.27:8080/\": dial tcp 10.217.0.27:8080: connect: connection refused" Oct 14 09:59:21 crc kubenswrapper[4698]: I1014 09:59:21.953618 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-vqx66" Oct 14 09:59:21 crc kubenswrapper[4698]: I1014 09:59:21.969779 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-kftst" Oct 14 09:59:21 crc kubenswrapper[4698]: I1014 09:59:21.970511 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 14 09:59:21 crc kubenswrapper[4698]: E1014 09:59:21.998510 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-14 09:59:22.498482344 +0000 UTC m=+144.195781760 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 14 09:59:22 crc kubenswrapper[4698]: I1014 09:59:22.020435 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-lbk6k" Oct 14 09:59:22 crc kubenswrapper[4698]: I1014 09:59:22.026920 4698 patch_prober.go:28] interesting pod/router-default-5444994796-qprfx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 14 09:59:22 crc kubenswrapper[4698]: [-]has-synced failed: reason withheld Oct 14 09:59:22 crc kubenswrapper[4698]: [+]process-running ok Oct 14 09:59:22 crc kubenswrapper[4698]: healthz check failed Oct 14 09:59:22 crc kubenswrapper[4698]: I1014 09:59:22.026962 4698 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-qprfx" podUID="193ddda3-4410-402c-a198-33ff7ea3a740" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 14 09:59:22 crc kubenswrapper[4698]: I1014 09:59:22.045222 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-7p6xz" podStartSLOduration=124.045180337 podStartE2EDuration="2m4.045180337s" podCreationTimestamp="2025-10-14 09:57:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 
09:59:21.976116847 +0000 UTC m=+143.673416283" watchObservedRunningTime="2025-10-14 09:59:22.045180337 +0000 UTC m=+143.742479753" Oct 14 09:59:22 crc kubenswrapper[4698]: I1014 09:59:22.074506 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nthfk\" (UID: \"c6555f7d-6e37-41d2-8f98-b02aba5270ab\") " pod="openshift-image-registry/image-registry-697d97f7c8-nthfk" Oct 14 09:59:22 crc kubenswrapper[4698]: E1014 09:59:22.087568 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-14 09:59:22.587544142 +0000 UTC m=+144.284843558 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nthfk" (UID: "c6555f7d-6e37-41d2-8f98-b02aba5270ab") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 14 09:59:22 crc kubenswrapper[4698]: I1014 09:59:22.134379 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-76f77b778f-5x56z" podStartSLOduration=124.134364808 podStartE2EDuration="2m4.134364808s" podCreationTimestamp="2025-10-14 09:57:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 09:59:22.132952286 +0000 UTC m=+143.830251702" watchObservedRunningTime="2025-10-14 09:59:22.134364808 +0000 UTC m=+143.831664224" Oct 14 09:59:22 crc 
kubenswrapper[4698]: I1014 09:59:22.134798 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-sgf88" podStartSLOduration=124.13479298 podStartE2EDuration="2m4.13479298s" podCreationTimestamp="2025-10-14 09:57:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 09:59:22.076144367 +0000 UTC m=+143.773443793" watchObservedRunningTime="2025-10-14 09:59:22.13479298 +0000 UTC m=+143.832092396" Oct 14 09:59:22 crc kubenswrapper[4698]: I1014 09:59:22.162420 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca/service-ca-9c57cc56f-h8xtm" podStartSLOduration=123.162403592 podStartE2EDuration="2m3.162403592s" podCreationTimestamp="2025-10-14 09:57:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 09:59:22.158867968 +0000 UTC m=+143.856167394" watchObservedRunningTime="2025-10-14 09:59:22.162403592 +0000 UTC m=+143.859703008" Oct 14 09:59:22 crc kubenswrapper[4698]: I1014 09:59:22.176296 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 14 09:59:22 crc kubenswrapper[4698]: E1014 09:59:22.176567 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-14 09:59:22.676512227 +0000 UTC m=+144.373811643 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 14 09:59:22 crc kubenswrapper[4698]: I1014 09:59:22.176661 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nthfk\" (UID: \"c6555f7d-6e37-41d2-8f98-b02aba5270ab\") " pod="openshift-image-registry/image-registry-697d97f7c8-nthfk" Oct 14 09:59:22 crc kubenswrapper[4698]: E1014 09:59:22.177823 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-14 09:59:22.677814815 +0000 UTC m=+144.375114231 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nthfk" (UID: "c6555f7d-6e37-41d2-8f98-b02aba5270ab") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 14 09:59:22 crc kubenswrapper[4698]: I1014 09:59:22.185370 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd-operator/etcd-operator-b45778765-sjgjr" podStartSLOduration=124.185350076 podStartE2EDuration="2m4.185350076s" podCreationTimestamp="2025-10-14 09:57:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 09:59:22.184653776 +0000 UTC m=+143.881953202" watchObservedRunningTime="2025-10-14 09:59:22.185350076 +0000 UTC m=+143.882649492" Oct 14 09:59:22 crc kubenswrapper[4698]: I1014 09:59:22.210209 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-558db77b4-kqg88" Oct 14 09:59:22 crc kubenswrapper[4698]: I1014 09:59:22.212397 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-vdj26" podStartSLOduration=124.212387261 podStartE2EDuration="2m4.212387261s" podCreationTimestamp="2025-10-14 09:57:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 09:59:22.210702072 +0000 UTC m=+143.908001488" watchObservedRunningTime="2025-10-14 09:59:22.212387261 +0000 UTC m=+143.909686667" Oct 14 09:59:22 crc kubenswrapper[4698]: I1014 09:59:22.278666 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 14 09:59:22 crc kubenswrapper[4698]: E1014 09:59:22.279144 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-14 09:59:22.779111622 +0000 UTC m=+144.476411038 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 14 09:59:22 crc kubenswrapper[4698]: I1014 09:59:22.301254 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-ptt5h" podStartSLOduration=124.301217542 podStartE2EDuration="2m4.301217542s" podCreationTimestamp="2025-10-14 09:57:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 09:59:22.298657847 +0000 UTC m=+143.995957263" watchObservedRunningTime="2025-10-14 09:59:22.301217542 +0000 UTC m=+143.998516958" Oct 14 09:59:22 crc kubenswrapper[4698]: I1014 09:59:22.351823 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-xnfjx" podStartSLOduration=124.351798539 
podStartE2EDuration="2m4.351798539s" podCreationTimestamp="2025-10-14 09:57:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 09:59:22.351272453 +0000 UTC m=+144.048571869" watchObservedRunningTime="2025-10-14 09:59:22.351798539 +0000 UTC m=+144.049097955" Oct 14 09:59:22 crc kubenswrapper[4698]: I1014 09:59:22.380425 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nthfk\" (UID: \"c6555f7d-6e37-41d2-8f98-b02aba5270ab\") " pod="openshift-image-registry/image-registry-697d97f7c8-nthfk" Oct 14 09:59:22 crc kubenswrapper[4698]: E1014 09:59:22.380975 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-14 09:59:22.880959646 +0000 UTC m=+144.578259062 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nthfk" (UID: "c6555f7d-6e37-41d2-8f98-b02aba5270ab") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 14 09:59:22 crc kubenswrapper[4698]: I1014 09:59:22.482327 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 14 09:59:22 crc kubenswrapper[4698]: E1014 09:59:22.483101 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-14 09:59:22.983073287 +0000 UTC m=+144.680372703 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 14 09:59:22 crc kubenswrapper[4698]: I1014 09:59:22.512774 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-76f77b778f-5x56z" Oct 14 09:59:22 crc kubenswrapper[4698]: I1014 09:59:22.513052 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-76f77b778f-5x56z" Oct 14 09:59:22 crc kubenswrapper[4698]: I1014 09:59:22.549947 4698 patch_prober.go:28] interesting pod/apiserver-76f77b778f-5x56z container/openshift-apiserver namespace/openshift-apiserver: Startup probe status=failure output="Get \"https://10.217.0.10:8443/livez\": dial tcp 10.217.0.10:8443: connect: connection refused" start-of-body= Oct 14 09:59:22 crc kubenswrapper[4698]: I1014 09:59:22.550076 4698 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-apiserver/apiserver-76f77b778f-5x56z" podUID="2bc82783-cf82-45df-94d5-60de2f1a0bdf" containerName="openshift-apiserver" probeResult="failure" output="Get \"https://10.217.0.10:8443/livez\": dial tcp 10.217.0.10:8443: connect: connection refused" Oct 14 09:59:22 crc kubenswrapper[4698]: I1014 09:59:22.583724 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nthfk\" (UID: \"c6555f7d-6e37-41d2-8f98-b02aba5270ab\") " pod="openshift-image-registry/image-registry-697d97f7c8-nthfk" 
Oct 14 09:59:22 crc kubenswrapper[4698]: E1014 09:59:22.584145 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-14 09:59:23.084115317 +0000 UTC m=+144.781414733 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nthfk" (UID: "c6555f7d-6e37-41d2-8f98-b02aba5270ab") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 14 09:59:22 crc kubenswrapper[4698]: I1014 09:59:22.584631 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-58897d9998-58d6k" Oct 14 09:59:22 crc kubenswrapper[4698]: I1014 09:59:22.687287 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 14 09:59:22 crc kubenswrapper[4698]: E1014 09:59:22.687529 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-14 09:59:23.187489395 +0000 UTC m=+144.884788811 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 14 09:59:22 crc kubenswrapper[4698]: I1014 09:59:22.687630 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nthfk\" (UID: \"c6555f7d-6e37-41d2-8f98-b02aba5270ab\") " pod="openshift-image-registry/image-registry-697d97f7c8-nthfk" Oct 14 09:59:22 crc kubenswrapper[4698]: E1014 09:59:22.688482 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-14 09:59:23.188472874 +0000 UTC m=+144.885772290 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nthfk" (UID: "c6555f7d-6e37-41d2-8f98-b02aba5270ab") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 14 09:59:22 crc kubenswrapper[4698]: I1014 09:59:22.788639 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 14 09:59:22 crc kubenswrapper[4698]: E1014 09:59:22.788891 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-14 09:59:23.288848004 +0000 UTC m=+144.986147410 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 14 09:59:22 crc kubenswrapper[4698]: I1014 09:59:22.789057 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nthfk\" (UID: \"c6555f7d-6e37-41d2-8f98-b02aba5270ab\") " pod="openshift-image-registry/image-registry-697d97f7c8-nthfk" Oct 14 09:59:22 crc kubenswrapper[4698]: E1014 09:59:22.789378 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-14 09:59:23.289363109 +0000 UTC m=+144.986662525 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nthfk" (UID: "c6555f7d-6e37-41d2-8f98-b02aba5270ab") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 14 09:59:22 crc kubenswrapper[4698]: I1014 09:59:22.890889 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 14 09:59:22 crc kubenswrapper[4698]: E1014 09:59:22.891229 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-14 09:59:23.391207543 +0000 UTC m=+145.088506959 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 14 09:59:22 crc kubenswrapper[4698]: I1014 09:59:22.939090 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-5x56z" event={"ID":"2bc82783-cf82-45df-94d5-60de2f1a0bdf","Type":"ContainerStarted","Data":"96f0838f93744046bf0fc522871eaa4b24ced44d949ebca3d46fabcc931ff2cf"} Oct 14 09:59:22 crc kubenswrapper[4698]: I1014 09:59:22.944277 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-t5x7p" event={"ID":"4886c701-aad2-4ae4-bb99-0221728df342","Type":"ContainerStarted","Data":"ec7959e34d30b8e9be5b893eb9f16bc2b4ead80a931e312789698c1db0a1f6ee"} Oct 14 09:59:22 crc kubenswrapper[4698]: I1014 09:59:22.948030 4698 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-c8pq6 container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.26:8080/healthz\": dial tcp 10.217.0.26:8080: connect: connection refused" start-of-body= Oct 14 09:59:22 crc kubenswrapper[4698]: I1014 09:59:22.948092 4698 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-c8pq6" podUID="08880ea8-e0f2-4963-826f-9bee32ca8a64" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.26:8080/healthz\": dial tcp 10.217.0.26:8080: connect: connection refused" Oct 14 09:59:22 crc kubenswrapper[4698]: I1014 09:59:22.992422 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nthfk\" (UID: \"c6555f7d-6e37-41d2-8f98-b02aba5270ab\") " pod="openshift-image-registry/image-registry-697d97f7c8-nthfk" Oct 14 09:59:23 crc kubenswrapper[4698]: E1014 09:59:23.000315 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-14 09:59:23.500290339 +0000 UTC m=+145.197589765 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nthfk" (UID: "c6555f7d-6e37-41d2-8f98-b02aba5270ab") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 14 09:59:23 crc kubenswrapper[4698]: I1014 09:59:23.035013 4698 patch_prober.go:28] interesting pod/router-default-5444994796-qprfx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 14 09:59:23 crc kubenswrapper[4698]: [-]has-synced failed: reason withheld Oct 14 09:59:23 crc kubenswrapper[4698]: [+]process-running ok Oct 14 09:59:23 crc kubenswrapper[4698]: healthz check failed Oct 14 09:59:23 crc kubenswrapper[4698]: I1014 09:59:23.035109 4698 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-qprfx" podUID="193ddda3-4410-402c-a198-33ff7ea3a740" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 14 09:59:23 crc kubenswrapper[4698]: 
I1014 09:59:23.094956 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 14 09:59:23 crc kubenswrapper[4698]: E1014 09:59:23.095486 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-14 09:59:23.59526132 +0000 UTC m=+145.292560736 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 14 09:59:23 crc kubenswrapper[4698]: I1014 09:59:23.095520 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nthfk\" (UID: \"c6555f7d-6e37-41d2-8f98-b02aba5270ab\") " pod="openshift-image-registry/image-registry-697d97f7c8-nthfk" Oct 14 09:59:23 crc kubenswrapper[4698]: E1014 09:59:23.096052 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2025-10-14 09:59:23.596042133 +0000 UTC m=+145.293341549 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nthfk" (UID: "c6555f7d-6e37-41d2-8f98-b02aba5270ab") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 14 09:59:23 crc kubenswrapper[4698]: I1014 09:59:23.111657 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-t5x7p" podStartSLOduration=124.111632592 podStartE2EDuration="2m4.111632592s" podCreationTimestamp="2025-10-14 09:57:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 09:59:23.10918031 +0000 UTC m=+144.806479736" watchObservedRunningTime="2025-10-14 09:59:23.111632592 +0000 UTC m=+144.808932008" Oct 14 09:59:23 crc kubenswrapper[4698]: I1014 09:59:23.196857 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 14 09:59:23 crc kubenswrapper[4698]: E1014 09:59:23.196998 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-14 09:59:23.69697623 +0000 UTC m=+145.394275646 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 14 09:59:23 crc kubenswrapper[4698]: I1014 09:59:23.197326 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nthfk\" (UID: \"c6555f7d-6e37-41d2-8f98-b02aba5270ab\") " pod="openshift-image-registry/image-registry-697d97f7c8-nthfk" Oct 14 09:59:23 crc kubenswrapper[4698]: E1014 09:59:23.197693 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-14 09:59:23.697683311 +0000 UTC m=+145.394982717 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nthfk" (UID: "c6555f7d-6e37-41d2-8f98-b02aba5270ab") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 14 09:59:23 crc kubenswrapper[4698]: I1014 09:59:23.299199 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 14 09:59:23 crc kubenswrapper[4698]: E1014 09:59:23.299591 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-14 09:59:23.799571886 +0000 UTC m=+145.496871302 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 14 09:59:23 crc kubenswrapper[4698]: I1014 09:59:23.372009 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-7777fb866f-jg26j" Oct 14 09:59:23 crc kubenswrapper[4698]: I1014 09:59:23.400694 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nthfk\" (UID: \"c6555f7d-6e37-41d2-8f98-b02aba5270ab\") " pod="openshift-image-registry/image-registry-697d97f7c8-nthfk" Oct 14 09:59:23 crc kubenswrapper[4698]: E1014 09:59:23.401342 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-14 09:59:23.901316746 +0000 UTC m=+145.598616162 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nthfk" (UID: "c6555f7d-6e37-41d2-8f98-b02aba5270ab") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 14 09:59:23 crc kubenswrapper[4698]: I1014 09:59:23.501486 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 14 09:59:23 crc kubenswrapper[4698]: E1014 09:59:23.501785 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-14 09:59:24.001729577 +0000 UTC m=+145.699028993 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 14 09:59:23 crc kubenswrapper[4698]: I1014 09:59:23.502200 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nthfk\" (UID: \"c6555f7d-6e37-41d2-8f98-b02aba5270ab\") " pod="openshift-image-registry/image-registry-697d97f7c8-nthfk" Oct 14 09:59:23 crc kubenswrapper[4698]: E1014 09:59:23.502628 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-14 09:59:24.002598613 +0000 UTC m=+145.699898029 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nthfk" (UID: "c6555f7d-6e37-41d2-8f98-b02aba5270ab") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 14 09:59:23 crc kubenswrapper[4698]: I1014 09:59:23.602999 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 14 09:59:23 crc kubenswrapper[4698]: E1014 09:59:23.603309 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-14 09:59:24.103291812 +0000 UTC m=+145.800591228 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 14 09:59:23 crc kubenswrapper[4698]: I1014 09:59:23.622691 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-s65qn" Oct 14 09:59:23 crc kubenswrapper[4698]: I1014 09:59:23.704079 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nthfk\" (UID: \"c6555f7d-6e37-41d2-8f98-b02aba5270ab\") " pod="openshift-image-registry/image-registry-697d97f7c8-nthfk" Oct 14 09:59:23 crc kubenswrapper[4698]: E1014 09:59:23.704411 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-14 09:59:24.204397514 +0000 UTC m=+145.901696930 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nthfk" (UID: "c6555f7d-6e37-41d2-8f98-b02aba5270ab") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 14 09:59:23 crc kubenswrapper[4698]: I1014 09:59:23.726353 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-86gfn"] Oct 14 09:59:23 crc kubenswrapper[4698]: I1014 09:59:23.727275 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-86gfn" Oct 14 09:59:23 crc kubenswrapper[4698]: I1014 09:59:23.734574 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Oct 14 09:59:23 crc kubenswrapper[4698]: I1014 09:59:23.760213 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-86gfn"] Oct 14 09:59:23 crc kubenswrapper[4698]: I1014 09:59:23.805573 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 14 09:59:23 crc kubenswrapper[4698]: E1014 09:59:23.805787 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-14 09:59:24.305732092 +0000 UTC m=+146.003031508 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 14 09:59:23 crc kubenswrapper[4698]: I1014 09:59:23.805968 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d1e0097e-6470-4dd4-b86c-e5f1cecf6759-utilities\") pod \"certified-operators-86gfn\" (UID: \"d1e0097e-6470-4dd4-b86c-e5f1cecf6759\") " pod="openshift-marketplace/certified-operators-86gfn" Oct 14 09:59:23 crc kubenswrapper[4698]: I1014 09:59:23.806031 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nthfk\" (UID: \"c6555f7d-6e37-41d2-8f98-b02aba5270ab\") " pod="openshift-image-registry/image-registry-697d97f7c8-nthfk" Oct 14 09:59:23 crc kubenswrapper[4698]: I1014 09:59:23.806067 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4vcl8\" (UniqueName: \"kubernetes.io/projected/d1e0097e-6470-4dd4-b86c-e5f1cecf6759-kube-api-access-4vcl8\") pod \"certified-operators-86gfn\" (UID: \"d1e0097e-6470-4dd4-b86c-e5f1cecf6759\") " pod="openshift-marketplace/certified-operators-86gfn" Oct 14 09:59:23 crc kubenswrapper[4698]: I1014 09:59:23.806128 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/d1e0097e-6470-4dd4-b86c-e5f1cecf6759-catalog-content\") pod \"certified-operators-86gfn\" (UID: \"d1e0097e-6470-4dd4-b86c-e5f1cecf6759\") " pod="openshift-marketplace/certified-operators-86gfn" Oct 14 09:59:23 crc kubenswrapper[4698]: E1014 09:59:23.806327 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-14 09:59:24.30631908 +0000 UTC m=+146.003618496 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nthfk" (UID: "c6555f7d-6e37-41d2-8f98-b02aba5270ab") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 14 09:59:23 crc kubenswrapper[4698]: I1014 09:59:23.907119 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 14 09:59:23 crc kubenswrapper[4698]: E1014 09:59:23.907331 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-14 09:59:24.407296588 +0000 UTC m=+146.104596014 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 14 09:59:23 crc kubenswrapper[4698]: I1014 09:59:23.907427 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d1e0097e-6470-4dd4-b86c-e5f1cecf6759-utilities\") pod \"certified-operators-86gfn\" (UID: \"d1e0097e-6470-4dd4-b86c-e5f1cecf6759\") " pod="openshift-marketplace/certified-operators-86gfn" Oct 14 09:59:23 crc kubenswrapper[4698]: I1014 09:59:23.907465 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nthfk\" (UID: \"c6555f7d-6e37-41d2-8f98-b02aba5270ab\") " pod="openshift-image-registry/image-registry-697d97f7c8-nthfk" Oct 14 09:59:23 crc kubenswrapper[4698]: I1014 09:59:23.907487 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4vcl8\" (UniqueName: \"kubernetes.io/projected/d1e0097e-6470-4dd4-b86c-e5f1cecf6759-kube-api-access-4vcl8\") pod \"certified-operators-86gfn\" (UID: \"d1e0097e-6470-4dd4-b86c-e5f1cecf6759\") " pod="openshift-marketplace/certified-operators-86gfn" Oct 14 09:59:23 crc kubenswrapper[4698]: I1014 09:59:23.907523 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d1e0097e-6470-4dd4-b86c-e5f1cecf6759-catalog-content\") pod \"certified-operators-86gfn\" (UID: 
\"d1e0097e-6470-4dd4-b86c-e5f1cecf6759\") " pod="openshift-marketplace/certified-operators-86gfn" Oct 14 09:59:23 crc kubenswrapper[4698]: I1014 09:59:23.907922 4698 patch_prober.go:28] interesting pod/machine-config-daemon-lp4sk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 14 09:59:23 crc kubenswrapper[4698]: I1014 09:59:23.907977 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d1e0097e-6470-4dd4-b86c-e5f1cecf6759-catalog-content\") pod \"certified-operators-86gfn\" (UID: \"d1e0097e-6470-4dd4-b86c-e5f1cecf6759\") " pod="openshift-marketplace/certified-operators-86gfn" Oct 14 09:59:23 crc kubenswrapper[4698]: I1014 09:59:23.908022 4698 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-lp4sk" podUID="c359a8fc-1e2f-49af-8da2-719d52bd969a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 14 09:59:23 crc kubenswrapper[4698]: E1014 09:59:23.908047 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-14 09:59:24.408037479 +0000 UTC m=+146.105336975 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nthfk" (UID: "c6555f7d-6e37-41d2-8f98-b02aba5270ab") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 14 09:59:23 crc kubenswrapper[4698]: I1014 09:59:23.908403 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d1e0097e-6470-4dd4-b86c-e5f1cecf6759-utilities\") pod \"certified-operators-86gfn\" (UID: \"d1e0097e-6470-4dd4-b86c-e5f1cecf6759\") " pod="openshift-marketplace/certified-operators-86gfn" Oct 14 09:59:23 crc kubenswrapper[4698]: I1014 09:59:23.954039 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-w6fjg"] Oct 14 09:59:23 crc kubenswrapper[4698]: I1014 09:59:23.955169 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-w6fjg" Oct 14 09:59:23 crc kubenswrapper[4698]: I1014 09:59:23.959671 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Oct 14 09:59:23 crc kubenswrapper[4698]: I1014 09:59:23.963122 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-w8pmp" event={"ID":"487b5c84-fe72-4b1c-8afa-15681f3d2c34","Type":"ContainerStarted","Data":"e2cbb8f7443bcee1e0a621d949c943b6c2a62f2190f1cbc4394fa93518146850"} Oct 14 09:59:23 crc kubenswrapper[4698]: I1014 09:59:23.963161 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-w8pmp" event={"ID":"487b5c84-fe72-4b1c-8afa-15681f3d2c34","Type":"ContainerStarted","Data":"b301ef602d6b77f34a4f6b64444d6c333409bed791f604ae327ac6b5dfeff3c2"} Oct 14 09:59:23 crc kubenswrapper[4698]: I1014 09:59:23.966058 4698 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-c8pq6 container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.26:8080/healthz\": dial tcp 10.217.0.26:8080: connect: connection refused" start-of-body= Oct 14 09:59:23 crc kubenswrapper[4698]: I1014 09:59:23.966097 4698 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-c8pq6" podUID="08880ea8-e0f2-4963-826f-9bee32ca8a64" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.26:8080/healthz\": dial tcp 10.217.0.26:8080: connect: connection refused" Oct 14 09:59:23 crc kubenswrapper[4698]: I1014 09:59:23.977265 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4vcl8\" (UniqueName: \"kubernetes.io/projected/d1e0097e-6470-4dd4-b86c-e5f1cecf6759-kube-api-access-4vcl8\") pod \"certified-operators-86gfn\" (UID: 
\"d1e0097e-6470-4dd4-b86c-e5f1cecf6759\") " pod="openshift-marketplace/certified-operators-86gfn" Oct 14 09:59:24 crc kubenswrapper[4698]: I1014 09:59:24.008981 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 14 09:59:24 crc kubenswrapper[4698]: E1014 09:59:24.009081 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-14 09:59:24.509063329 +0000 UTC m=+146.206362745 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 14 09:59:24 crc kubenswrapper[4698]: I1014 09:59:24.009278 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/04793f3a-4ff7-4fab-b3cb-756515510f54-utilities\") pod \"community-operators-w6fjg\" (UID: \"04793f3a-4ff7-4fab-b3cb-756515510f54\") " pod="openshift-marketplace/community-operators-w6fjg" Oct 14 09:59:24 crc kubenswrapper[4698]: I1014 09:59:24.009562 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/04793f3a-4ff7-4fab-b3cb-756515510f54-catalog-content\") pod \"community-operators-w6fjg\" (UID: \"04793f3a-4ff7-4fab-b3cb-756515510f54\") " pod="openshift-marketplace/community-operators-w6fjg" Oct 14 09:59:24 crc kubenswrapper[4698]: I1014 09:59:24.009596 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rmrtt\" (UniqueName: \"kubernetes.io/projected/04793f3a-4ff7-4fab-b3cb-756515510f54-kube-api-access-rmrtt\") pod \"community-operators-w6fjg\" (UID: \"04793f3a-4ff7-4fab-b3cb-756515510f54\") " pod="openshift-marketplace/community-operators-w6fjg" Oct 14 09:59:24 crc kubenswrapper[4698]: I1014 09:59:24.009657 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nthfk\" (UID: \"c6555f7d-6e37-41d2-8f98-b02aba5270ab\") " pod="openshift-image-registry/image-registry-697d97f7c8-nthfk" Oct 14 09:59:24 crc kubenswrapper[4698]: E1014 09:59:24.012046 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-14 09:59:24.512029026 +0000 UTC m=+146.209328442 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nthfk" (UID: "c6555f7d-6e37-41d2-8f98-b02aba5270ab") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 14 09:59:24 crc kubenswrapper[4698]: I1014 09:59:24.022474 4698 patch_prober.go:28] interesting pod/router-default-5444994796-qprfx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 14 09:59:24 crc kubenswrapper[4698]: [-]has-synced failed: reason withheld Oct 14 09:59:24 crc kubenswrapper[4698]: [+]process-running ok Oct 14 09:59:24 crc kubenswrapper[4698]: healthz check failed Oct 14 09:59:24 crc kubenswrapper[4698]: I1014 09:59:24.022522 4698 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-qprfx" podUID="193ddda3-4410-402c-a198-33ff7ea3a740" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 14 09:59:24 crc kubenswrapper[4698]: I1014 09:59:24.033304 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-w6fjg"] Oct 14 09:59:24 crc kubenswrapper[4698]: I1014 09:59:24.039865 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-86gfn" Oct 14 09:59:24 crc kubenswrapper[4698]: I1014 09:59:24.079846 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-fmbnz"] Oct 14 09:59:24 crc kubenswrapper[4698]: I1014 09:59:24.080694 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-fmbnz" Oct 14 09:59:24 crc kubenswrapper[4698]: I1014 09:59:24.102822 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-fmbnz"] Oct 14 09:59:24 crc kubenswrapper[4698]: I1014 09:59:24.113336 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 14 09:59:24 crc kubenswrapper[4698]: I1014 09:59:24.113612 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/04793f3a-4ff7-4fab-b3cb-756515510f54-catalog-content\") pod \"community-operators-w6fjg\" (UID: \"04793f3a-4ff7-4fab-b3cb-756515510f54\") " pod="openshift-marketplace/community-operators-w6fjg" Oct 14 09:59:24 crc kubenswrapper[4698]: I1014 09:59:24.113635 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rmrtt\" (UniqueName: \"kubernetes.io/projected/04793f3a-4ff7-4fab-b3cb-756515510f54-kube-api-access-rmrtt\") pod \"community-operators-w6fjg\" (UID: \"04793f3a-4ff7-4fab-b3cb-756515510f54\") " pod="openshift-marketplace/community-operators-w6fjg" Oct 14 09:59:24 crc kubenswrapper[4698]: I1014 09:59:24.113684 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1884ea04-91e0-48d5-aa12-ff1375921a10-utilities\") pod \"certified-operators-fmbnz\" (UID: \"1884ea04-91e0-48d5-aa12-ff1375921a10\") " pod="openshift-marketplace/certified-operators-fmbnz" Oct 14 09:59:24 crc kubenswrapper[4698]: I1014 09:59:24.113713 4698 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1884ea04-91e0-48d5-aa12-ff1375921a10-catalog-content\") pod \"certified-operators-fmbnz\" (UID: \"1884ea04-91e0-48d5-aa12-ff1375921a10\") " pod="openshift-marketplace/certified-operators-fmbnz" Oct 14 09:59:24 crc kubenswrapper[4698]: I1014 09:59:24.113755 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/04793f3a-4ff7-4fab-b3cb-756515510f54-utilities\") pod \"community-operators-w6fjg\" (UID: \"04793f3a-4ff7-4fab-b3cb-756515510f54\") " pod="openshift-marketplace/community-operators-w6fjg" Oct 14 09:59:24 crc kubenswrapper[4698]: I1014 09:59:24.113791 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5j5z8\" (UniqueName: \"kubernetes.io/projected/1884ea04-91e0-48d5-aa12-ff1375921a10-kube-api-access-5j5z8\") pod \"certified-operators-fmbnz\" (UID: \"1884ea04-91e0-48d5-aa12-ff1375921a10\") " pod="openshift-marketplace/certified-operators-fmbnz" Oct 14 09:59:24 crc kubenswrapper[4698]: E1014 09:59:24.113901 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-14 09:59:24.613874759 +0000 UTC m=+146.311174175 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 14 09:59:24 crc kubenswrapper[4698]: I1014 09:59:24.114221 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/04793f3a-4ff7-4fab-b3cb-756515510f54-catalog-content\") pod \"community-operators-w6fjg\" (UID: \"04793f3a-4ff7-4fab-b3cb-756515510f54\") " pod="openshift-marketplace/community-operators-w6fjg" Oct 14 09:59:24 crc kubenswrapper[4698]: I1014 09:59:24.114718 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/04793f3a-4ff7-4fab-b3cb-756515510f54-utilities\") pod \"community-operators-w6fjg\" (UID: \"04793f3a-4ff7-4fab-b3cb-756515510f54\") " pod="openshift-marketplace/community-operators-w6fjg" Oct 14 09:59:24 crc kubenswrapper[4698]: I1014 09:59:24.151892 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rmrtt\" (UniqueName: \"kubernetes.io/projected/04793f3a-4ff7-4fab-b3cb-756515510f54-kube-api-access-rmrtt\") pod \"community-operators-w6fjg\" (UID: \"04793f3a-4ff7-4fab-b3cb-756515510f54\") " pod="openshift-marketplace/community-operators-w6fjg" Oct 14 09:59:24 crc kubenswrapper[4698]: I1014 09:59:24.224553 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1884ea04-91e0-48d5-aa12-ff1375921a10-utilities\") pod \"certified-operators-fmbnz\" (UID: \"1884ea04-91e0-48d5-aa12-ff1375921a10\") " 
pod="openshift-marketplace/certified-operators-fmbnz" Oct 14 09:59:24 crc kubenswrapper[4698]: I1014 09:59:24.224604 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1884ea04-91e0-48d5-aa12-ff1375921a10-catalog-content\") pod \"certified-operators-fmbnz\" (UID: \"1884ea04-91e0-48d5-aa12-ff1375921a10\") " pod="openshift-marketplace/certified-operators-fmbnz" Oct 14 09:59:24 crc kubenswrapper[4698]: I1014 09:59:24.224633 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nthfk\" (UID: \"c6555f7d-6e37-41d2-8f98-b02aba5270ab\") " pod="openshift-image-registry/image-registry-697d97f7c8-nthfk" Oct 14 09:59:24 crc kubenswrapper[4698]: I1014 09:59:24.224670 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5j5z8\" (UniqueName: \"kubernetes.io/projected/1884ea04-91e0-48d5-aa12-ff1375921a10-kube-api-access-5j5z8\") pod \"certified-operators-fmbnz\" (UID: \"1884ea04-91e0-48d5-aa12-ff1375921a10\") " pod="openshift-marketplace/certified-operators-fmbnz" Oct 14 09:59:24 crc kubenswrapper[4698]: I1014 09:59:24.225433 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1884ea04-91e0-48d5-aa12-ff1375921a10-utilities\") pod \"certified-operators-fmbnz\" (UID: \"1884ea04-91e0-48d5-aa12-ff1375921a10\") " pod="openshift-marketplace/certified-operators-fmbnz" Oct 14 09:59:24 crc kubenswrapper[4698]: I1014 09:59:24.225665 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1884ea04-91e0-48d5-aa12-ff1375921a10-catalog-content\") pod \"certified-operators-fmbnz\" (UID: 
\"1884ea04-91e0-48d5-aa12-ff1375921a10\") " pod="openshift-marketplace/certified-operators-fmbnz" Oct 14 09:59:24 crc kubenswrapper[4698]: E1014 09:59:24.233661 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-14 09:59:24.73364109 +0000 UTC m=+146.430940496 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nthfk" (UID: "c6555f7d-6e37-41d2-8f98-b02aba5270ab") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 14 09:59:24 crc kubenswrapper[4698]: I1014 09:59:24.268805 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-w6fjg" Oct 14 09:59:24 crc kubenswrapper[4698]: I1014 09:59:24.274565 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5j5z8\" (UniqueName: \"kubernetes.io/projected/1884ea04-91e0-48d5-aa12-ff1375921a10-kube-api-access-5j5z8\") pod \"certified-operators-fmbnz\" (UID: \"1884ea04-91e0-48d5-aa12-ff1375921a10\") " pod="openshift-marketplace/certified-operators-fmbnz" Oct 14 09:59:24 crc kubenswrapper[4698]: I1014 09:59:24.328699 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 14 09:59:24 crc kubenswrapper[4698]: E1014 09:59:24.329189 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-14 09:59:24.829169057 +0000 UTC m=+146.526468473 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 14 09:59:24 crc kubenswrapper[4698]: I1014 09:59:24.331272 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-gld8c"] Oct 14 09:59:24 crc kubenswrapper[4698]: I1014 09:59:24.349500 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-gld8c" Oct 14 09:59:24 crc kubenswrapper[4698]: I1014 09:59:24.373878 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-gld8c"] Oct 14 09:59:24 crc kubenswrapper[4698]: I1014 09:59:24.431435 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ssfrc\" (UniqueName: \"kubernetes.io/projected/062fcf04-080a-4aaf-9ca0-0bb57f1ff97d-kube-api-access-ssfrc\") pod \"community-operators-gld8c\" (UID: \"062fcf04-080a-4aaf-9ca0-0bb57f1ff97d\") " pod="openshift-marketplace/community-operators-gld8c" Oct 14 09:59:24 crc kubenswrapper[4698]: I1014 09:59:24.431713 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nthfk\" (UID: \"c6555f7d-6e37-41d2-8f98-b02aba5270ab\") " pod="openshift-image-registry/image-registry-697d97f7c8-nthfk" Oct 14 09:59:24 crc kubenswrapper[4698]: I1014 09:59:24.431823 4698 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/062fcf04-080a-4aaf-9ca0-0bb57f1ff97d-catalog-content\") pod \"community-operators-gld8c\" (UID: \"062fcf04-080a-4aaf-9ca0-0bb57f1ff97d\") " pod="openshift-marketplace/community-operators-gld8c" Oct 14 09:59:24 crc kubenswrapper[4698]: I1014 09:59:24.431849 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/062fcf04-080a-4aaf-9ca0-0bb57f1ff97d-utilities\") pod \"community-operators-gld8c\" (UID: \"062fcf04-080a-4aaf-9ca0-0bb57f1ff97d\") " pod="openshift-marketplace/community-operators-gld8c" Oct 14 09:59:24 crc kubenswrapper[4698]: E1014 09:59:24.432241 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-14 09:59:24.932221886 +0000 UTC m=+146.629521342 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nthfk" (UID: "c6555f7d-6e37-41d2-8f98-b02aba5270ab") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 14 09:59:24 crc kubenswrapper[4698]: I1014 09:59:24.481908 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-fmbnz" Oct 14 09:59:24 crc kubenswrapper[4698]: I1014 09:59:24.534341 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 14 09:59:24 crc kubenswrapper[4698]: I1014 09:59:24.535113 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/062fcf04-080a-4aaf-9ca0-0bb57f1ff97d-catalog-content\") pod \"community-operators-gld8c\" (UID: \"062fcf04-080a-4aaf-9ca0-0bb57f1ff97d\") " pod="openshift-marketplace/community-operators-gld8c" Oct 14 09:59:24 crc kubenswrapper[4698]: I1014 09:59:24.535140 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/062fcf04-080a-4aaf-9ca0-0bb57f1ff97d-utilities\") pod \"community-operators-gld8c\" (UID: \"062fcf04-080a-4aaf-9ca0-0bb57f1ff97d\") " pod="openshift-marketplace/community-operators-gld8c" Oct 14 09:59:24 crc kubenswrapper[4698]: I1014 09:59:24.535172 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ssfrc\" (UniqueName: \"kubernetes.io/projected/062fcf04-080a-4aaf-9ca0-0bb57f1ff97d-kube-api-access-ssfrc\") pod \"community-operators-gld8c\" (UID: \"062fcf04-080a-4aaf-9ca0-0bb57f1ff97d\") " pod="openshift-marketplace/community-operators-gld8c" Oct 14 09:59:24 crc kubenswrapper[4698]: E1014 09:59:24.535928 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2025-10-14 09:59:25.035910324 +0000 UTC m=+146.733209730 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 14 09:59:24 crc kubenswrapper[4698]: I1014 09:59:24.536500 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/062fcf04-080a-4aaf-9ca0-0bb57f1ff97d-catalog-content\") pod \"community-operators-gld8c\" (UID: \"062fcf04-080a-4aaf-9ca0-0bb57f1ff97d\") " pod="openshift-marketplace/community-operators-gld8c" Oct 14 09:59:24 crc kubenswrapper[4698]: I1014 09:59:24.536704 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/062fcf04-080a-4aaf-9ca0-0bb57f1ff97d-utilities\") pod \"community-operators-gld8c\" (UID: \"062fcf04-080a-4aaf-9ca0-0bb57f1ff97d\") " pod="openshift-marketplace/community-operators-gld8c" Oct 14 09:59:24 crc kubenswrapper[4698]: I1014 09:59:24.561295 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ssfrc\" (UniqueName: \"kubernetes.io/projected/062fcf04-080a-4aaf-9ca0-0bb57f1ff97d-kube-api-access-ssfrc\") pod \"community-operators-gld8c\" (UID: \"062fcf04-080a-4aaf-9ca0-0bb57f1ff97d\") " pod="openshift-marketplace/community-operators-gld8c" Oct 14 09:59:24 crc kubenswrapper[4698]: I1014 09:59:24.636951 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nthfk\" (UID: \"c6555f7d-6e37-41d2-8f98-b02aba5270ab\") " pod="openshift-image-registry/image-registry-697d97f7c8-nthfk" Oct 14 09:59:24 crc kubenswrapper[4698]: E1014 09:59:24.637408 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-14 09:59:25.137397177 +0000 UTC m=+146.834696593 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nthfk" (UID: "c6555f7d-6e37-41d2-8f98-b02aba5270ab") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 14 09:59:24 crc kubenswrapper[4698]: I1014 09:59:24.676753 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-86gfn"] Oct 14 09:59:24 crc kubenswrapper[4698]: I1014 09:59:24.682101 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-gld8c" Oct 14 09:59:24 crc kubenswrapper[4698]: I1014 09:59:24.687408 4698 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock" Oct 14 09:59:24 crc kubenswrapper[4698]: I1014 09:59:24.738901 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 14 09:59:24 crc kubenswrapper[4698]: E1014 09:59:24.739298 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-14 09:59:25.239280141 +0000 UTC m=+146.936579547 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 14 09:59:24 crc kubenswrapper[4698]: I1014 09:59:24.840853 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nthfk\" (UID: \"c6555f7d-6e37-41d2-8f98-b02aba5270ab\") " pod="openshift-image-registry/image-registry-697d97f7c8-nthfk" Oct 14 09:59:24 crc kubenswrapper[4698]: E1014 09:59:24.841424 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-14 09:59:25.341413603 +0000 UTC m=+147.038713019 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nthfk" (UID: "c6555f7d-6e37-41d2-8f98-b02aba5270ab") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 14 09:59:24 crc kubenswrapper[4698]: I1014 09:59:24.901013 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-w6fjg"] Oct 14 09:59:24 crc kubenswrapper[4698]: I1014 09:59:24.944354 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 14 09:59:24 crc kubenswrapper[4698]: E1014 09:59:24.944746 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-14 09:59:25.44473198 +0000 UTC m=+147.142031396 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 14 09:59:24 crc kubenswrapper[4698]: I1014 09:59:24.983874 4698 generic.go:334] "Generic (PLEG): container finished" podID="aa6a1a6a-f7f3-402b-9568-89c9415eaaa4" containerID="e44571d8c0581614f90fc1c2803d9e04b635a448b30211d6b555fd1ea83951f4" exitCode=0 Oct 14 09:59:24 crc kubenswrapper[4698]: I1014 09:59:24.983936 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29340585-4sc7c" event={"ID":"aa6a1a6a-f7f3-402b-9568-89c9415eaaa4","Type":"ContainerDied","Data":"e44571d8c0581614f90fc1c2803d9e04b635a448b30211d6b555fd1ea83951f4"} Oct 14 09:59:24 crc kubenswrapper[4698]: I1014 09:59:24.987338 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-w6fjg" event={"ID":"04793f3a-4ff7-4fab-b3cb-756515510f54","Type":"ContainerStarted","Data":"1b3e0c0d41be0ecf4d84b3db0dea1468fa34c5fc75d44c76e7dc4d124494e7e4"} Oct 14 09:59:24 crc kubenswrapper[4698]: I1014 09:59:24.992452 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-86gfn" event={"ID":"d1e0097e-6470-4dd4-b86c-e5f1cecf6759","Type":"ContainerStarted","Data":"af2097d7e17f7017bb749b1369c056d05ec5d6b50106b77f8b57f759e594c82b"} Oct 14 09:59:24 crc kubenswrapper[4698]: I1014 09:59:24.994010 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-w8pmp" 
event={"ID":"487b5c84-fe72-4b1c-8afa-15681f3d2c34","Type":"ContainerStarted","Data":"ee4cb071ebcfd9bc0edb9b4c4aa86ac72c3c4ed40bda1a16ad8b2d86d4acdc40"} Oct 14 09:59:25 crc kubenswrapper[4698]: I1014 09:59:25.006310 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-gld8c"] Oct 14 09:59:25 crc kubenswrapper[4698]: I1014 09:59:25.025370 4698 patch_prober.go:28] interesting pod/router-default-5444994796-qprfx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 14 09:59:25 crc kubenswrapper[4698]: [-]has-synced failed: reason withheld Oct 14 09:59:25 crc kubenswrapper[4698]: [+]process-running ok Oct 14 09:59:25 crc kubenswrapper[4698]: healthz check failed Oct 14 09:59:25 crc kubenswrapper[4698]: I1014 09:59:25.025424 4698 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-qprfx" podUID="193ddda3-4410-402c-a198-33ff7ea3a740" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 14 09:59:25 crc kubenswrapper[4698]: I1014 09:59:25.043833 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="hostpath-provisioner/csi-hostpathplugin-w8pmp" podStartSLOduration=11.043785241 podStartE2EDuration="11.043785241s" podCreationTimestamp="2025-10-14 09:59:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 09:59:25.042016539 +0000 UTC m=+146.739315955" watchObservedRunningTime="2025-10-14 09:59:25.043785241 +0000 UTC m=+146.741084657" Oct 14 09:59:25 crc kubenswrapper[4698]: I1014 09:59:25.046245 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nthfk\" (UID: \"c6555f7d-6e37-41d2-8f98-b02aba5270ab\") " pod="openshift-image-registry/image-registry-697d97f7c8-nthfk" Oct 14 09:59:25 crc kubenswrapper[4698]: E1014 09:59:25.046526 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-14 09:59:25.546514872 +0000 UTC m=+147.243814288 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nthfk" (UID: "c6555f7d-6e37-41d2-8f98-b02aba5270ab") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 14 09:59:25 crc kubenswrapper[4698]: I1014 09:59:25.099804 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-fmbnz"] Oct 14 09:59:25 crc kubenswrapper[4698]: I1014 09:59:25.147518 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 14 09:59:25 crc kubenswrapper[4698]: E1014 09:59:25.147820 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2025-10-14 09:59:25.647805869 +0000 UTC m=+147.345105285 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 14 09:59:25 crc kubenswrapper[4698]: I1014 09:59:25.161485 4698 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock","Timestamp":"2025-10-14T09:59:24.687420427Z","Handler":null,"Name":""} Oct 14 09:59:25 crc kubenswrapper[4698]: I1014 09:59:25.164110 4698 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: kubevirt.io.hostpath-provisioner endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0 Oct 14 09:59:25 crc kubenswrapper[4698]: I1014 09:59:25.164152 4698 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: kubevirt.io.hostpath-provisioner at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock Oct 14 09:59:25 crc kubenswrapper[4698]: I1014 09:59:25.249468 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nthfk\" (UID: \"c6555f7d-6e37-41d2-8f98-b02aba5270ab\") " pod="openshift-image-registry/image-registry-697d97f7c8-nthfk" Oct 14 09:59:25 crc kubenswrapper[4698]: I1014 09:59:25.251932 4698 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Oct 14 09:59:25 crc kubenswrapper[4698]: I1014 09:59:25.251958 4698 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nthfk\" (UID: \"c6555f7d-6e37-41d2-8f98-b02aba5270ab\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount\"" pod="openshift-image-registry/image-registry-697d97f7c8-nthfk" Oct 14 09:59:25 crc kubenswrapper[4698]: I1014 09:59:25.295687 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nthfk\" (UID: \"c6555f7d-6e37-41d2-8f98-b02aba5270ab\") " pod="openshift-image-registry/image-registry-697d97f7c8-nthfk" Oct 14 09:59:25 crc kubenswrapper[4698]: I1014 09:59:25.337885 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-nthfk" Oct 14 09:59:25 crc kubenswrapper[4698]: I1014 09:59:25.350348 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 14 09:59:25 crc kubenswrapper[4698]: I1014 09:59:25.382715 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Oct 14 09:59:25 crc kubenswrapper[4698]: I1014 09:59:25.553836 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Oct 14 09:59:25 crc kubenswrapper[4698]: I1014 09:59:25.554665 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Oct 14 09:59:25 crc kubenswrapper[4698]: I1014 09:59:25.558698 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-kjl2n" Oct 14 09:59:25 crc kubenswrapper[4698]: I1014 09:59:25.559246 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt" Oct 14 09:59:25 crc kubenswrapper[4698]: I1014 09:59:25.563862 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-nthfk"] Oct 14 09:59:25 crc kubenswrapper[4698]: I1014 09:59:25.565260 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Oct 14 09:59:25 crc kubenswrapper[4698]: I1014 09:59:25.654778 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/6d303d83-3763-43b6-932e-e467a8ca39a1-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"6d303d83-3763-43b6-932e-e467a8ca39a1\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Oct 14 09:59:25 crc kubenswrapper[4698]: I1014 09:59:25.654896 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/6d303d83-3763-43b6-932e-e467a8ca39a1-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"6d303d83-3763-43b6-932e-e467a8ca39a1\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Oct 14 09:59:25 crc kubenswrapper[4698]: I1014 09:59:25.756154 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/6d303d83-3763-43b6-932e-e467a8ca39a1-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"6d303d83-3763-43b6-932e-e467a8ca39a1\") " 
pod="openshift-kube-controller-manager/revision-pruner-9-crc" Oct 14 09:59:25 crc kubenswrapper[4698]: I1014 09:59:25.756054 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/6d303d83-3763-43b6-932e-e467a8ca39a1-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"6d303d83-3763-43b6-932e-e467a8ca39a1\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Oct 14 09:59:25 crc kubenswrapper[4698]: I1014 09:59:25.756358 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/6d303d83-3763-43b6-932e-e467a8ca39a1-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"6d303d83-3763-43b6-932e-e467a8ca39a1\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Oct 14 09:59:25 crc kubenswrapper[4698]: I1014 09:59:25.781522 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/6d303d83-3763-43b6-932e-e467a8ca39a1-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"6d303d83-3763-43b6-932e-e467a8ca39a1\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Oct 14 09:59:25 crc kubenswrapper[4698]: I1014 09:59:25.880193 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-cl9gm"] Oct 14 09:59:25 crc kubenswrapper[4698]: I1014 09:59:25.881204 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-cl9gm" Oct 14 09:59:25 crc kubenswrapper[4698]: I1014 09:59:25.884136 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Oct 14 09:59:25 crc kubenswrapper[4698]: I1014 09:59:25.944821 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-cl9gm"] Oct 14 09:59:25 crc kubenswrapper[4698]: I1014 09:59:25.960324 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/97d2636c-36ca-4957-9ebe-8cc679ca9e01-catalog-content\") pod \"redhat-marketplace-cl9gm\" (UID: \"97d2636c-36ca-4957-9ebe-8cc679ca9e01\") " pod="openshift-marketplace/redhat-marketplace-cl9gm" Oct 14 09:59:25 crc kubenswrapper[4698]: I1014 09:59:25.960375 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dhhzg\" (UniqueName: \"kubernetes.io/projected/97d2636c-36ca-4957-9ebe-8cc679ca9e01-kube-api-access-dhhzg\") pod \"redhat-marketplace-cl9gm\" (UID: \"97d2636c-36ca-4957-9ebe-8cc679ca9e01\") " pod="openshift-marketplace/redhat-marketplace-cl9gm" Oct 14 09:59:25 crc kubenswrapper[4698]: I1014 09:59:25.960431 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/97d2636c-36ca-4957-9ebe-8cc679ca9e01-utilities\") pod \"redhat-marketplace-cl9gm\" (UID: \"97d2636c-36ca-4957-9ebe-8cc679ca9e01\") " pod="openshift-marketplace/redhat-marketplace-cl9gm" Oct 14 09:59:25 crc kubenswrapper[4698]: I1014 09:59:25.969170 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Oct 14 09:59:26 crc kubenswrapper[4698]: I1014 09:59:26.004905 4698 generic.go:334] "Generic (PLEG): container finished" podID="04793f3a-4ff7-4fab-b3cb-756515510f54" containerID="02e13f5a663d66ace45d6930955348a281c39c54dfffac8727afb15f278e0469" exitCode=0 Oct 14 09:59:26 crc kubenswrapper[4698]: I1014 09:59:26.004974 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-w6fjg" event={"ID":"04793f3a-4ff7-4fab-b3cb-756515510f54","Type":"ContainerDied","Data":"02e13f5a663d66ace45d6930955348a281c39c54dfffac8727afb15f278e0469"} Oct 14 09:59:26 crc kubenswrapper[4698]: I1014 09:59:26.006822 4698 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Oct 14 09:59:26 crc kubenswrapper[4698]: I1014 09:59:26.009117 4698 generic.go:334] "Generic (PLEG): container finished" podID="d1e0097e-6470-4dd4-b86c-e5f1cecf6759" containerID="3b8c28348d84f89d3dfa5f6f94662530c18687bdd5e35fb6615ec1f88f187161" exitCode=0 Oct 14 09:59:26 crc kubenswrapper[4698]: I1014 09:59:26.009478 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-86gfn" event={"ID":"d1e0097e-6470-4dd4-b86c-e5f1cecf6759","Type":"ContainerDied","Data":"3b8c28348d84f89d3dfa5f6f94662530c18687bdd5e35fb6615ec1f88f187161"} Oct 14 09:59:26 crc kubenswrapper[4698]: I1014 09:59:26.015394 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-nthfk" event={"ID":"c6555f7d-6e37-41d2-8f98-b02aba5270ab","Type":"ContainerStarted","Data":"51ac4ad459c5d328d47d9481b390e9fd0f1c2e26a925009a43e02f38e8f05ca5"} Oct 14 09:59:26 crc kubenswrapper[4698]: I1014 09:59:26.015446 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-nthfk" 
event={"ID":"c6555f7d-6e37-41d2-8f98-b02aba5270ab","Type":"ContainerStarted","Data":"1611aa72c8b049bb25aeacbf71e6df36d06fe3345a9f8bbb78c5fbc5a9c8d3fc"} Oct 14 09:59:26 crc kubenswrapper[4698]: I1014 09:59:26.015552 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-697d97f7c8-nthfk" Oct 14 09:59:26 crc kubenswrapper[4698]: I1014 09:59:26.016751 4698 generic.go:334] "Generic (PLEG): container finished" podID="062fcf04-080a-4aaf-9ca0-0bb57f1ff97d" containerID="c80364756403e216d42610557d86837104d9bf1f5b7a9728d61ec9d183318cfd" exitCode=0 Oct 14 09:59:26 crc kubenswrapper[4698]: I1014 09:59:26.016817 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gld8c" event={"ID":"062fcf04-080a-4aaf-9ca0-0bb57f1ff97d","Type":"ContainerDied","Data":"c80364756403e216d42610557d86837104d9bf1f5b7a9728d61ec9d183318cfd"} Oct 14 09:59:26 crc kubenswrapper[4698]: I1014 09:59:26.016833 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gld8c" event={"ID":"062fcf04-080a-4aaf-9ca0-0bb57f1ff97d","Type":"ContainerStarted","Data":"c3407ee27d141caad1004875aa78916f33c1a402f582f7ffbcde4a13c7567eb8"} Oct 14 09:59:26 crc kubenswrapper[4698]: I1014 09:59:26.019948 4698 generic.go:334] "Generic (PLEG): container finished" podID="1884ea04-91e0-48d5-aa12-ff1375921a10" containerID="c5c7c858f121c79444d0f85469fab0ad94212f484a59345f8c3ba6967b5f35d7" exitCode=0 Oct 14 09:59:26 crc kubenswrapper[4698]: I1014 09:59:26.020569 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fmbnz" event={"ID":"1884ea04-91e0-48d5-aa12-ff1375921a10","Type":"ContainerDied","Data":"c5c7c858f121c79444d0f85469fab0ad94212f484a59345f8c3ba6967b5f35d7"} Oct 14 09:59:26 crc kubenswrapper[4698]: I1014 09:59:26.020593 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fmbnz" 
event={"ID":"1884ea04-91e0-48d5-aa12-ff1375921a10","Type":"ContainerStarted","Data":"eda060176265af6d414056ba9d9f1b43be82c0bea2bbc1bcf957c63a4c8ac914"} Oct 14 09:59:26 crc kubenswrapper[4698]: I1014 09:59:26.025962 4698 patch_prober.go:28] interesting pod/router-default-5444994796-qprfx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 14 09:59:26 crc kubenswrapper[4698]: [-]has-synced failed: reason withheld Oct 14 09:59:26 crc kubenswrapper[4698]: [+]process-running ok Oct 14 09:59:26 crc kubenswrapper[4698]: healthz check failed Oct 14 09:59:26 crc kubenswrapper[4698]: I1014 09:59:26.026011 4698 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-qprfx" podUID="193ddda3-4410-402c-a198-33ff7ea3a740" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 14 09:59:26 crc kubenswrapper[4698]: I1014 09:59:26.062086 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/97d2636c-36ca-4957-9ebe-8cc679ca9e01-catalog-content\") pod \"redhat-marketplace-cl9gm\" (UID: \"97d2636c-36ca-4957-9ebe-8cc679ca9e01\") " pod="openshift-marketplace/redhat-marketplace-cl9gm" Oct 14 09:59:26 crc kubenswrapper[4698]: I1014 09:59:26.062348 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dhhzg\" (UniqueName: \"kubernetes.io/projected/97d2636c-36ca-4957-9ebe-8cc679ca9e01-kube-api-access-dhhzg\") pod \"redhat-marketplace-cl9gm\" (UID: \"97d2636c-36ca-4957-9ebe-8cc679ca9e01\") " pod="openshift-marketplace/redhat-marketplace-cl9gm" Oct 14 09:59:26 crc kubenswrapper[4698]: I1014 09:59:26.062395 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/97d2636c-36ca-4957-9ebe-8cc679ca9e01-utilities\") pod \"redhat-marketplace-cl9gm\" (UID: \"97d2636c-36ca-4957-9ebe-8cc679ca9e01\") " pod="openshift-marketplace/redhat-marketplace-cl9gm" Oct 14 09:59:26 crc kubenswrapper[4698]: I1014 09:59:26.063383 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/97d2636c-36ca-4957-9ebe-8cc679ca9e01-catalog-content\") pod \"redhat-marketplace-cl9gm\" (UID: \"97d2636c-36ca-4957-9ebe-8cc679ca9e01\") " pod="openshift-marketplace/redhat-marketplace-cl9gm" Oct 14 09:59:26 crc kubenswrapper[4698]: I1014 09:59:26.066121 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/97d2636c-36ca-4957-9ebe-8cc679ca9e01-utilities\") pod \"redhat-marketplace-cl9gm\" (UID: \"97d2636c-36ca-4957-9ebe-8cc679ca9e01\") " pod="openshift-marketplace/redhat-marketplace-cl9gm" Oct 14 09:59:26 crc kubenswrapper[4698]: I1014 09:59:26.087981 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-697d97f7c8-nthfk" podStartSLOduration=128.087964481 podStartE2EDuration="2m8.087964481s" podCreationTimestamp="2025-10-14 09:57:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 09:59:26.06275009 +0000 UTC m=+147.760049506" watchObservedRunningTime="2025-10-14 09:59:26.087964481 +0000 UTC m=+147.785263897" Oct 14 09:59:26 crc kubenswrapper[4698]: I1014 09:59:26.089405 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dhhzg\" (UniqueName: \"kubernetes.io/projected/97d2636c-36ca-4957-9ebe-8cc679ca9e01-kube-api-access-dhhzg\") pod \"redhat-marketplace-cl9gm\" (UID: \"97d2636c-36ca-4957-9ebe-8cc679ca9e01\") " pod="openshift-marketplace/redhat-marketplace-cl9gm" Oct 14 09:59:26 crc 
kubenswrapper[4698]: I1014 09:59:26.140241 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Oct 14 09:59:26 crc kubenswrapper[4698]: I1014 09:59:26.140875 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Oct 14 09:59:26 crc kubenswrapper[4698]: I1014 09:59:26.144992 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Oct 14 09:59:26 crc kubenswrapper[4698]: I1014 09:59:26.145148 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Oct 14 09:59:26 crc kubenswrapper[4698]: I1014 09:59:26.145207 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Oct 14 09:59:26 crc kubenswrapper[4698]: I1014 09:59:26.200762 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Oct 14 09:59:26 crc kubenswrapper[4698]: I1014 09:59:26.207820 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-cl9gm" Oct 14 09:59:26 crc kubenswrapper[4698]: W1014 09:59:26.235172 4698 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod6d303d83_3763_43b6_932e_e467a8ca39a1.slice/crio-af680c08167617d86e40ac4d3b9ea84e6a1285cc16a57a0f31da79b11683a4dc WatchSource:0}: Error finding container af680c08167617d86e40ac4d3b9ea84e6a1285cc16a57a0f31da79b11683a4dc: Status 404 returned error can't find the container with id af680c08167617d86e40ac4d3b9ea84e6a1285cc16a57a0f31da79b11683a4dc Oct 14 09:59:26 crc kubenswrapper[4698]: I1014 09:59:26.266230 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/bb5e1f30-a5d2-4f1e-a6ff-1975d0558409-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"bb5e1f30-a5d2-4f1e-a6ff-1975d0558409\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Oct 14 09:59:26 crc kubenswrapper[4698]: I1014 09:59:26.266325 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/bb5e1f30-a5d2-4f1e-a6ff-1975d0558409-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"bb5e1f30-a5d2-4f1e-a6ff-1975d0558409\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Oct 14 09:59:26 crc kubenswrapper[4698]: I1014 09:59:26.271613 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-mvpk5"] Oct 14 09:59:26 crc kubenswrapper[4698]: I1014 09:59:26.302252 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-mvpk5"] Oct 14 09:59:26 crc kubenswrapper[4698]: I1014 09:59:26.302375 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-mvpk5" Oct 14 09:59:26 crc kubenswrapper[4698]: I1014 09:59:26.367254 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/bb5e1f30-a5d2-4f1e-a6ff-1975d0558409-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"bb5e1f30-a5d2-4f1e-a6ff-1975d0558409\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Oct 14 09:59:26 crc kubenswrapper[4698]: I1014 09:59:26.367647 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/bb5e1f30-a5d2-4f1e-a6ff-1975d0558409-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"bb5e1f30-a5d2-4f1e-a6ff-1975d0558409\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Oct 14 09:59:26 crc kubenswrapper[4698]: I1014 09:59:26.367435 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/bb5e1f30-a5d2-4f1e-a6ff-1975d0558409-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"bb5e1f30-a5d2-4f1e-a6ff-1975d0558409\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Oct 14 09:59:26 crc kubenswrapper[4698]: I1014 09:59:26.390943 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/bb5e1f30-a5d2-4f1e-a6ff-1975d0558409-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"bb5e1f30-a5d2-4f1e-a6ff-1975d0558409\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Oct 14 09:59:26 crc kubenswrapper[4698]: I1014 09:59:26.399115 4698 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29340585-4sc7c" Oct 14 09:59:26 crc kubenswrapper[4698]: I1014 09:59:26.469311 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/206eaf53-fb36-4881-a42a-0ae968a8f6d5-utilities\") pod \"redhat-marketplace-mvpk5\" (UID: \"206eaf53-fb36-4881-a42a-0ae968a8f6d5\") " pod="openshift-marketplace/redhat-marketplace-mvpk5" Oct 14 09:59:26 crc kubenswrapper[4698]: I1014 09:59:26.469405 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8fcns\" (UniqueName: \"kubernetes.io/projected/206eaf53-fb36-4881-a42a-0ae968a8f6d5-kube-api-access-8fcns\") pod \"redhat-marketplace-mvpk5\" (UID: \"206eaf53-fb36-4881-a42a-0ae968a8f6d5\") " pod="openshift-marketplace/redhat-marketplace-mvpk5" Oct 14 09:59:26 crc kubenswrapper[4698]: I1014 09:59:26.469475 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/206eaf53-fb36-4881-a42a-0ae968a8f6d5-catalog-content\") pod \"redhat-marketplace-mvpk5\" (UID: \"206eaf53-fb36-4881-a42a-0ae968a8f6d5\") " pod="openshift-marketplace/redhat-marketplace-mvpk5" Oct 14 09:59:26 crc kubenswrapper[4698]: I1014 09:59:26.469728 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Oct 14 09:59:26 crc kubenswrapper[4698]: I1014 09:59:26.570371 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-94g47\" (UniqueName: \"kubernetes.io/projected/aa6a1a6a-f7f3-402b-9568-89c9415eaaa4-kube-api-access-94g47\") pod \"aa6a1a6a-f7f3-402b-9568-89c9415eaaa4\" (UID: \"aa6a1a6a-f7f3-402b-9568-89c9415eaaa4\") " Oct 14 09:59:26 crc kubenswrapper[4698]: I1014 09:59:26.570463 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/aa6a1a6a-f7f3-402b-9568-89c9415eaaa4-secret-volume\") pod \"aa6a1a6a-f7f3-402b-9568-89c9415eaaa4\" (UID: \"aa6a1a6a-f7f3-402b-9568-89c9415eaaa4\") " Oct 14 09:59:26 crc kubenswrapper[4698]: I1014 09:59:26.570484 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/aa6a1a6a-f7f3-402b-9568-89c9415eaaa4-config-volume\") pod \"aa6a1a6a-f7f3-402b-9568-89c9415eaaa4\" (UID: \"aa6a1a6a-f7f3-402b-9568-89c9415eaaa4\") " Oct 14 09:59:26 crc kubenswrapper[4698]: I1014 09:59:26.570704 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/206eaf53-fb36-4881-a42a-0ae968a8f6d5-utilities\") pod \"redhat-marketplace-mvpk5\" (UID: \"206eaf53-fb36-4881-a42a-0ae968a8f6d5\") " pod="openshift-marketplace/redhat-marketplace-mvpk5" Oct 14 09:59:26 crc kubenswrapper[4698]: I1014 09:59:26.570730 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8fcns\" (UniqueName: \"kubernetes.io/projected/206eaf53-fb36-4881-a42a-0ae968a8f6d5-kube-api-access-8fcns\") pod \"redhat-marketplace-mvpk5\" (UID: \"206eaf53-fb36-4881-a42a-0ae968a8f6d5\") " pod="openshift-marketplace/redhat-marketplace-mvpk5" Oct 14 09:59:26 crc kubenswrapper[4698]: I1014 09:59:26.570784 
4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/206eaf53-fb36-4881-a42a-0ae968a8f6d5-catalog-content\") pod \"redhat-marketplace-mvpk5\" (UID: \"206eaf53-fb36-4881-a42a-0ae968a8f6d5\") " pod="openshift-marketplace/redhat-marketplace-mvpk5" Oct 14 09:59:26 crc kubenswrapper[4698]: I1014 09:59:26.571227 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/206eaf53-fb36-4881-a42a-0ae968a8f6d5-catalog-content\") pod \"redhat-marketplace-mvpk5\" (UID: \"206eaf53-fb36-4881-a42a-0ae968a8f6d5\") " pod="openshift-marketplace/redhat-marketplace-mvpk5" Oct 14 09:59:26 crc kubenswrapper[4698]: I1014 09:59:26.571505 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/206eaf53-fb36-4881-a42a-0ae968a8f6d5-utilities\") pod \"redhat-marketplace-mvpk5\" (UID: \"206eaf53-fb36-4881-a42a-0ae968a8f6d5\") " pod="openshift-marketplace/redhat-marketplace-mvpk5" Oct 14 09:59:26 crc kubenswrapper[4698]: I1014 09:59:26.571845 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/aa6a1a6a-f7f3-402b-9568-89c9415eaaa4-config-volume" (OuterVolumeSpecName: "config-volume") pod "aa6a1a6a-f7f3-402b-9568-89c9415eaaa4" (UID: "aa6a1a6a-f7f3-402b-9568-89c9415eaaa4"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 09:59:26 crc kubenswrapper[4698]: I1014 09:59:26.575353 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aa6a1a6a-f7f3-402b-9568-89c9415eaaa4-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "aa6a1a6a-f7f3-402b-9568-89c9415eaaa4" (UID: "aa6a1a6a-f7f3-402b-9568-89c9415eaaa4"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 09:59:26 crc kubenswrapper[4698]: I1014 09:59:26.575790 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aa6a1a6a-f7f3-402b-9568-89c9415eaaa4-kube-api-access-94g47" (OuterVolumeSpecName: "kube-api-access-94g47") pod "aa6a1a6a-f7f3-402b-9568-89c9415eaaa4" (UID: "aa6a1a6a-f7f3-402b-9568-89c9415eaaa4"). InnerVolumeSpecName "kube-api-access-94g47". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 09:59:26 crc kubenswrapper[4698]: I1014 09:59:26.588570 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8fcns\" (UniqueName: \"kubernetes.io/projected/206eaf53-fb36-4881-a42a-0ae968a8f6d5-kube-api-access-8fcns\") pod \"redhat-marketplace-mvpk5\" (UID: \"206eaf53-fb36-4881-a42a-0ae968a8f6d5\") " pod="openshift-marketplace/redhat-marketplace-mvpk5" Oct 14 09:59:26 crc kubenswrapper[4698]: I1014 09:59:26.622046 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-mvpk5" Oct 14 09:59:26 crc kubenswrapper[4698]: I1014 09:59:26.665008 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-cl9gm"] Oct 14 09:59:26 crc kubenswrapper[4698]: I1014 09:59:26.672866 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-94g47\" (UniqueName: \"kubernetes.io/projected/aa6a1a6a-f7f3-402b-9568-89c9415eaaa4-kube-api-access-94g47\") on node \"crc\" DevicePath \"\"" Oct 14 09:59:26 crc kubenswrapper[4698]: I1014 09:59:26.672887 4698 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/aa6a1a6a-f7f3-402b-9568-89c9415eaaa4-secret-volume\") on node \"crc\" DevicePath \"\"" Oct 14 09:59:26 crc kubenswrapper[4698]: I1014 09:59:26.672896 4698 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/aa6a1a6a-f7f3-402b-9568-89c9415eaaa4-config-volume\") on node \"crc\" DevicePath \"\"" Oct 14 09:59:26 crc kubenswrapper[4698]: I1014 09:59:26.679560 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Oct 14 09:59:26 crc kubenswrapper[4698]: W1014 09:59:26.687938 4698 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod97d2636c_36ca_4957_9ebe_8cc679ca9e01.slice/crio-ccc33948dd7cd862c0a2f5d1491b927db94cf81d3d26dcc84f1c6461e2af7cc5 WatchSource:0}: Error finding container ccc33948dd7cd862c0a2f5d1491b927db94cf81d3d26dcc84f1c6461e2af7cc5: Status 404 returned error can't find the container with id ccc33948dd7cd862c0a2f5d1491b927db94cf81d3d26dcc84f1c6461e2af7cc5 Oct 14 09:59:26 crc kubenswrapper[4698]: W1014 09:59:26.690389 4698 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-podbb5e1f30_a5d2_4f1e_a6ff_1975d0558409.slice/crio-092117a1226c8595d4b6d0f70e10bde6e9bfde1e27aeb778bb940c2d34239140 WatchSource:0}: Error finding container 092117a1226c8595d4b6d0f70e10bde6e9bfde1e27aeb778bb940c2d34239140: Status 404 returned error can't find the container with id 092117a1226c8595d4b6d0f70e10bde6e9bfde1e27aeb778bb940c2d34239140 Oct 14 09:59:26 crc kubenswrapper[4698]: I1014 09:59:26.821408 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-mvpk5"] Oct 14 09:59:26 crc kubenswrapper[4698]: W1014 09:59:26.832485 4698 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod206eaf53_fb36_4881_a42a_0ae968a8f6d5.slice/crio-4348cd396762790080949ea61c5a889e0eede9335f1c5a24d709f21a863ec6b9 WatchSource:0}: Error finding container 4348cd396762790080949ea61c5a889e0eede9335f1c5a24d709f21a863ec6b9: Status 404 returned error can't find the container with id 4348cd396762790080949ea61c5a889e0eede9335f1c5a24d709f21a863ec6b9 Oct 14 09:59:26 crc kubenswrapper[4698]: I1014 09:59:26.873115 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-2b5sl"] Oct 14 09:59:26 crc kubenswrapper[4698]: E1014 09:59:26.873948 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aa6a1a6a-f7f3-402b-9568-89c9415eaaa4" containerName="collect-profiles" Oct 14 09:59:26 crc kubenswrapper[4698]: I1014 09:59:26.873969 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="aa6a1a6a-f7f3-402b-9568-89c9415eaaa4" containerName="collect-profiles" Oct 14 09:59:26 crc kubenswrapper[4698]: I1014 09:59:26.874129 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="aa6a1a6a-f7f3-402b-9568-89c9415eaaa4" containerName="collect-profiles" Oct 14 09:59:26 crc kubenswrapper[4698]: I1014 09:59:26.875267 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-2b5sl" Oct 14 09:59:26 crc kubenswrapper[4698]: I1014 09:59:26.879871 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Oct 14 09:59:26 crc kubenswrapper[4698]: I1014 09:59:26.883446 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-2b5sl"] Oct 14 09:59:26 crc kubenswrapper[4698]: I1014 09:59:26.976902 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 14 09:59:26 crc kubenswrapper[4698]: I1014 09:59:26.976945 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-66lgt\" (UniqueName: \"kubernetes.io/projected/36580b93-747e-4338-9c0c-dab49837aa61-kube-api-access-66lgt\") pod \"redhat-operators-2b5sl\" (UID: \"36580b93-747e-4338-9c0c-dab49837aa61\") " pod="openshift-marketplace/redhat-operators-2b5sl" Oct 14 09:59:26 crc kubenswrapper[4698]: I1014 09:59:26.976981 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/36580b93-747e-4338-9c0c-dab49837aa61-catalog-content\") pod \"redhat-operators-2b5sl\" (UID: \"36580b93-747e-4338-9c0c-dab49837aa61\") " pod="openshift-marketplace/redhat-operators-2b5sl" Oct 14 09:59:26 crc kubenswrapper[4698]: I1014 09:59:26.977020 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/36580b93-747e-4338-9c0c-dab49837aa61-utilities\") pod \"redhat-operators-2b5sl\" (UID: 
\"36580b93-747e-4338-9c0c-dab49837aa61\") " pod="openshift-marketplace/redhat-operators-2b5sl" Oct 14 09:59:26 crc kubenswrapper[4698]: I1014 09:59:26.977051 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 14 09:59:26 crc kubenswrapper[4698]: I1014 09:59:26.978207 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 14 09:59:26 crc kubenswrapper[4698]: I1014 09:59:26.989023 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 14 09:59:27 crc kubenswrapper[4698]: I1014 09:59:27.022912 4698 patch_prober.go:28] interesting pod/router-default-5444994796-qprfx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 14 09:59:27 crc kubenswrapper[4698]: [-]has-synced failed: reason withheld Oct 14 09:59:27 crc kubenswrapper[4698]: [+]process-running ok Oct 14 09:59:27 crc kubenswrapper[4698]: healthz check failed Oct 14 09:59:27 crc kubenswrapper[4698]: I1014 
09:59:27.022980 4698 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-qprfx" podUID="193ddda3-4410-402c-a198-33ff7ea3a740" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 14 09:59:27 crc kubenswrapper[4698]: I1014 09:59:27.054231 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f668bae-612b-4b75-9490-919e737c6a3b" path="/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes" Oct 14 09:59:27 crc kubenswrapper[4698]: I1014 09:59:27.058326 4698 generic.go:334] "Generic (PLEG): container finished" podID="97d2636c-36ca-4957-9ebe-8cc679ca9e01" containerID="f01a4b7b39f6eb1e7d98cc1be9828061864aa5eae7e7b35fc374ea5ecab4895c" exitCode=0 Oct 14 09:59:27 crc kubenswrapper[4698]: I1014 09:59:27.058424 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-cl9gm" event={"ID":"97d2636c-36ca-4957-9ebe-8cc679ca9e01","Type":"ContainerDied","Data":"f01a4b7b39f6eb1e7d98cc1be9828061864aa5eae7e7b35fc374ea5ecab4895c"} Oct 14 09:59:27 crc kubenswrapper[4698]: I1014 09:59:27.058473 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-cl9gm" event={"ID":"97d2636c-36ca-4957-9ebe-8cc679ca9e01","Type":"ContainerStarted","Data":"ccc33948dd7cd862c0a2f5d1491b927db94cf81d3d26dcc84f1c6461e2af7cc5"} Oct 14 09:59:27 crc kubenswrapper[4698]: I1014 09:59:27.071885 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"bb5e1f30-a5d2-4f1e-a6ff-1975d0558409","Type":"ContainerStarted","Data":"092117a1226c8595d4b6d0f70e10bde6e9bfde1e27aeb778bb940c2d34239140"} Oct 14 09:59:27 crc kubenswrapper[4698]: I1014 09:59:27.078727 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mvpk5" 
event={"ID":"206eaf53-fb36-4881-a42a-0ae968a8f6d5","Type":"ContainerStarted","Data":"4348cd396762790080949ea61c5a889e0eede9335f1c5a24d709f21a863ec6b9"} Oct 14 09:59:27 crc kubenswrapper[4698]: I1014 09:59:27.080148 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-66lgt\" (UniqueName: \"kubernetes.io/projected/36580b93-747e-4338-9c0c-dab49837aa61-kube-api-access-66lgt\") pod \"redhat-operators-2b5sl\" (UID: \"36580b93-747e-4338-9c0c-dab49837aa61\") " pod="openshift-marketplace/redhat-operators-2b5sl" Oct 14 09:59:27 crc kubenswrapper[4698]: I1014 09:59:27.080196 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/36580b93-747e-4338-9c0c-dab49837aa61-catalog-content\") pod \"redhat-operators-2b5sl\" (UID: \"36580b93-747e-4338-9c0c-dab49837aa61\") " pod="openshift-marketplace/redhat-operators-2b5sl" Oct 14 09:59:27 crc kubenswrapper[4698]: I1014 09:59:27.080221 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 14 09:59:27 crc kubenswrapper[4698]: I1014 09:59:27.080275 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 14 09:59:27 crc kubenswrapper[4698]: I1014 09:59:27.080385 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/36580b93-747e-4338-9c0c-dab49837aa61-utilities\") pod \"redhat-operators-2b5sl\" (UID: \"36580b93-747e-4338-9c0c-dab49837aa61\") " pod="openshift-marketplace/redhat-operators-2b5sl" Oct 14 09:59:27 crc kubenswrapper[4698]: I1014 09:59:27.081933 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/36580b93-747e-4338-9c0c-dab49837aa61-utilities\") pod \"redhat-operators-2b5sl\" (UID: \"36580b93-747e-4338-9c0c-dab49837aa61\") " pod="openshift-marketplace/redhat-operators-2b5sl" Oct 14 09:59:27 crc kubenswrapper[4698]: I1014 09:59:27.082321 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/36580b93-747e-4338-9c0c-dab49837aa61-catalog-content\") pod \"redhat-operators-2b5sl\" (UID: \"36580b93-747e-4338-9c0c-dab49837aa61\") " pod="openshift-marketplace/redhat-operators-2b5sl" Oct 14 09:59:27 crc kubenswrapper[4698]: I1014 09:59:27.084061 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 14 09:59:27 crc kubenswrapper[4698]: I1014 09:59:27.084790 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"6d303d83-3763-43b6-932e-e467a8ca39a1","Type":"ContainerStarted","Data":"892e8f09dcbb2394761d72fc089e6f113022d758d155e9dfe6baf676154f3aa9"} Oct 14 09:59:27 crc kubenswrapper[4698]: I1014 09:59:27.084841 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" 
event={"ID":"6d303d83-3763-43b6-932e-e467a8ca39a1","Type":"ContainerStarted","Data":"af680c08167617d86e40ac4d3b9ea84e6a1285cc16a57a0f31da79b11683a4dc"} Oct 14 09:59:27 crc kubenswrapper[4698]: I1014 09:59:27.089102 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 14 09:59:27 crc kubenswrapper[4698]: I1014 09:59:27.090128 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29340585-4sc7c" Oct 14 09:59:27 crc kubenswrapper[4698]: I1014 09:59:27.090130 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29340585-4sc7c" event={"ID":"aa6a1a6a-f7f3-402b-9568-89c9415eaaa4","Type":"ContainerDied","Data":"48f088b78239a573bbc42b00299afbfb622447d602d8839fef04c765eaa675a0"} Oct 14 09:59:27 crc kubenswrapper[4698]: I1014 09:59:27.090266 4698 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="48f088b78239a573bbc42b00299afbfb622447d602d8839fef04c765eaa675a0" Oct 14 09:59:27 crc kubenswrapper[4698]: I1014 09:59:27.098167 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/revision-pruner-9-crc" podStartSLOduration=2.098133251 podStartE2EDuration="2.098133251s" podCreationTimestamp="2025-10-14 09:59:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 09:59:27.096853294 +0000 UTC m=+148.794152710" watchObservedRunningTime="2025-10-14 09:59:27.098133251 +0000 UTC m=+148.795432667" Oct 14 09:59:27 crc kubenswrapper[4698]: I1014 09:59:27.100375 4698 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-66lgt\" (UniqueName: \"kubernetes.io/projected/36580b93-747e-4338-9c0c-dab49837aa61-kube-api-access-66lgt\") pod \"redhat-operators-2b5sl\" (UID: \"36580b93-747e-4338-9c0c-dab49837aa61\") " pod="openshift-marketplace/redhat-operators-2b5sl" Oct 14 09:59:27 crc kubenswrapper[4698]: I1014 09:59:27.261509 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 14 09:59:27 crc kubenswrapper[4698]: I1014 09:59:27.271518 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 14 09:59:27 crc kubenswrapper[4698]: I1014 09:59:27.271941 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-scb7r"] Oct 14 09:59:27 crc kubenswrapper[4698]: I1014 09:59:27.273131 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-scb7r" Oct 14 09:59:27 crc kubenswrapper[4698]: I1014 09:59:27.273234 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-2b5sl" Oct 14 09:59:27 crc kubenswrapper[4698]: I1014 09:59:27.282819 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-scb7r"] Oct 14 09:59:27 crc kubenswrapper[4698]: I1014 09:59:27.283713 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 14 09:59:27 crc kubenswrapper[4698]: I1014 09:59:27.407256 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qn5bk\" (UniqueName: \"kubernetes.io/projected/60bf1aee-2ab1-4378-af5d-362bbe403adf-kube-api-access-qn5bk\") pod \"redhat-operators-scb7r\" (UID: \"60bf1aee-2ab1-4378-af5d-362bbe403adf\") " pod="openshift-marketplace/redhat-operators-scb7r" Oct 14 09:59:27 crc kubenswrapper[4698]: I1014 09:59:27.407399 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/60bf1aee-2ab1-4378-af5d-362bbe403adf-catalog-content\") pod \"redhat-operators-scb7r\" (UID: \"60bf1aee-2ab1-4378-af5d-362bbe403adf\") " pod="openshift-marketplace/redhat-operators-scb7r" Oct 14 09:59:27 crc kubenswrapper[4698]: I1014 09:59:27.408123 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/60bf1aee-2ab1-4378-af5d-362bbe403adf-utilities\") pod \"redhat-operators-scb7r\" (UID: \"60bf1aee-2ab1-4378-af5d-362bbe403adf\") " pod="openshift-marketplace/redhat-operators-scb7r" Oct 14 09:59:27 crc kubenswrapper[4698]: I1014 09:59:27.460025 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-f9d7485db-f47kf" Oct 14 09:59:27 crc kubenswrapper[4698]: I1014 09:59:27.460058 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-f9d7485db-f47kf" Oct 14 09:59:27 crc kubenswrapper[4698]: I1014 09:59:27.462551 4698 patch_prober.go:28] interesting pod/console-f9d7485db-f47kf container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.9:8443/health\": dial tcp 10.217.0.9:8443: connect: connection 
refused" start-of-body= Oct 14 09:59:27 crc kubenswrapper[4698]: I1014 09:59:27.462606 4698 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-f47kf" podUID="abe6a35d-8cd2-4749-b9cf-8d11f6169470" containerName="console" probeResult="failure" output="Get \"https://10.217.0.9:8443/health\": dial tcp 10.217.0.9:8443: connect: connection refused" Oct 14 09:59:27 crc kubenswrapper[4698]: I1014 09:59:27.509848 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/60bf1aee-2ab1-4378-af5d-362bbe403adf-catalog-content\") pod \"redhat-operators-scb7r\" (UID: \"60bf1aee-2ab1-4378-af5d-362bbe403adf\") " pod="openshift-marketplace/redhat-operators-scb7r" Oct 14 09:59:27 crc kubenswrapper[4698]: I1014 09:59:27.510298 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/60bf1aee-2ab1-4378-af5d-362bbe403adf-utilities\") pod \"redhat-operators-scb7r\" (UID: \"60bf1aee-2ab1-4378-af5d-362bbe403adf\") " pod="openshift-marketplace/redhat-operators-scb7r" Oct 14 09:59:27 crc kubenswrapper[4698]: I1014 09:59:27.510363 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qn5bk\" (UniqueName: \"kubernetes.io/projected/60bf1aee-2ab1-4378-af5d-362bbe403adf-kube-api-access-qn5bk\") pod \"redhat-operators-scb7r\" (UID: \"60bf1aee-2ab1-4378-af5d-362bbe403adf\") " pod="openshift-marketplace/redhat-operators-scb7r" Oct 14 09:59:27 crc kubenswrapper[4698]: I1014 09:59:27.511137 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/60bf1aee-2ab1-4378-af5d-362bbe403adf-catalog-content\") pod \"redhat-operators-scb7r\" (UID: \"60bf1aee-2ab1-4378-af5d-362bbe403adf\") " pod="openshift-marketplace/redhat-operators-scb7r" Oct 14 09:59:27 crc kubenswrapper[4698]: I1014 
09:59:27.511392 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/60bf1aee-2ab1-4378-af5d-362bbe403adf-utilities\") pod \"redhat-operators-scb7r\" (UID: \"60bf1aee-2ab1-4378-af5d-362bbe403adf\") " pod="openshift-marketplace/redhat-operators-scb7r" Oct 14 09:59:27 crc kubenswrapper[4698]: I1014 09:59:27.529335 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-76f77b778f-5x56z" Oct 14 09:59:27 crc kubenswrapper[4698]: I1014 09:59:27.538800 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-76f77b778f-5x56z" Oct 14 09:59:27 crc kubenswrapper[4698]: I1014 09:59:27.553548 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qn5bk\" (UniqueName: \"kubernetes.io/projected/60bf1aee-2ab1-4378-af5d-362bbe403adf-kube-api-access-qn5bk\") pod \"redhat-operators-scb7r\" (UID: \"60bf1aee-2ab1-4378-af5d-362bbe403adf\") " pod="openshift-marketplace/redhat-operators-scb7r" Oct 14 09:59:27 crc kubenswrapper[4698]: I1014 09:59:27.659527 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-scb7r" Oct 14 09:59:27 crc kubenswrapper[4698]: I1014 09:59:27.705440 4698 patch_prober.go:28] interesting pod/downloads-7954f5f757-7fm6f container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.27:8080/\": dial tcp 10.217.0.27:8080: connect: connection refused" start-of-body= Oct 14 09:59:27 crc kubenswrapper[4698]: I1014 09:59:27.705466 4698 patch_prober.go:28] interesting pod/downloads-7954f5f757-7fm6f container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.27:8080/\": dial tcp 10.217.0.27:8080: connect: connection refused" start-of-body= Oct 14 09:59:27 crc kubenswrapper[4698]: I1014 09:59:27.705495 4698 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-7fm6f" podUID="0bf22386-43f0-4d64-abb0-cdec28434502" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.27:8080/\": dial tcp 10.217.0.27:8080: connect: connection refused" Oct 14 09:59:27 crc kubenswrapper[4698]: I1014 09:59:27.705512 4698 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-7fm6f" podUID="0bf22386-43f0-4d64-abb0-cdec28434502" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.27:8080/\": dial tcp 10.217.0.27:8080: connect: connection refused" Oct 14 09:59:27 crc kubenswrapper[4698]: W1014 09:59:27.710262 4698 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9d751cbb_f2e2_430d_9754_c882a5e924a5.slice/crio-f578d84875f206cebf266b4fdc85f065a619f57bac4862d2835ca9b8b0b8a776 WatchSource:0}: Error finding container f578d84875f206cebf266b4fdc85f065a619f57bac4862d2835ca9b8b0b8a776: Status 404 returned error can't find the container with id 
f578d84875f206cebf266b4fdc85f065a619f57bac4862d2835ca9b8b0b8a776 Oct 14 09:59:27 crc kubenswrapper[4698]: I1014 09:59:27.712661 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-c8pq6" Oct 14 09:59:27 crc kubenswrapper[4698]: I1014 09:59:27.762745 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-t5x7p" Oct 14 09:59:27 crc kubenswrapper[4698]: I1014 09:59:27.763532 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-t5x7p" Oct 14 09:59:27 crc kubenswrapper[4698]: I1014 09:59:27.780283 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-t5x7p" Oct 14 09:59:27 crc kubenswrapper[4698]: W1014 09:59:27.939615 4698 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3b6479f0_333b_4a96_9adf_2099afdc2447.slice/crio-80d85ed5e578974d866f768b16c9d68a4fd7cdcf1f6e810d2afd26bd6abe00a2 WatchSource:0}: Error finding container 80d85ed5e578974d866f768b16c9d68a4fd7cdcf1f6e810d2afd26bd6abe00a2: Status 404 returned error can't find the container with id 80d85ed5e578974d866f768b16c9d68a4fd7cdcf1f6e810d2afd26bd6abe00a2 Oct 14 09:59:28 crc kubenswrapper[4698]: I1014 09:59:28.021708 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-5444994796-qprfx" Oct 14 09:59:28 crc kubenswrapper[4698]: I1014 09:59:28.026796 4698 patch_prober.go:28] interesting pod/router-default-5444994796-qprfx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 14 09:59:28 crc kubenswrapper[4698]: [-]has-synced failed: reason withheld Oct 14 09:59:28 crc 
kubenswrapper[4698]: [+]process-running ok Oct 14 09:59:28 crc kubenswrapper[4698]: healthz check failed Oct 14 09:59:28 crc kubenswrapper[4698]: I1014 09:59:28.026845 4698 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-qprfx" podUID="193ddda3-4410-402c-a198-33ff7ea3a740" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 14 09:59:28 crc kubenswrapper[4698]: I1014 09:59:28.055239 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-2b5sl"] Oct 14 09:59:28 crc kubenswrapper[4698]: I1014 09:59:28.104567 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-scb7r"] Oct 14 09:59:28 crc kubenswrapper[4698]: I1014 09:59:28.109106 4698 generic.go:334] "Generic (PLEG): container finished" podID="bb5e1f30-a5d2-4f1e-a6ff-1975d0558409" containerID="c5ed1e40dff0cfe7a318d8e86eac283ef2353c50fa0e925a92e3000e84f056e2" exitCode=0 Oct 14 09:59:28 crc kubenswrapper[4698]: I1014 09:59:28.109182 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"bb5e1f30-a5d2-4f1e-a6ff-1975d0558409","Type":"ContainerDied","Data":"c5ed1e40dff0cfe7a318d8e86eac283ef2353c50fa0e925a92e3000e84f056e2"} Oct 14 09:59:28 crc kubenswrapper[4698]: W1014 09:59:28.118155 4698 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5fe485a1_e14f_4c09_b5b9_f252bc42b7e8.slice/crio-bd3799f086229aacfbf442c960d97aaa8b1d1cab048475f1dc52f10dfedb108a WatchSource:0}: Error finding container bd3799f086229aacfbf442c960d97aaa8b1d1cab048475f1dc52f10dfedb108a: Status 404 returned error can't find the container with id bd3799f086229aacfbf442c960d97aaa8b1d1cab048475f1dc52f10dfedb108a Oct 14 09:59:28 crc kubenswrapper[4698]: I1014 09:59:28.120359 4698 generic.go:334] "Generic (PLEG): container finished" 
podID="206eaf53-fb36-4881-a42a-0ae968a8f6d5" containerID="7af19986392582641b18a00257b92975e921096c0ef8196ef7b68af11af8dbec" exitCode=0 Oct 14 09:59:28 crc kubenswrapper[4698]: I1014 09:59:28.120450 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mvpk5" event={"ID":"206eaf53-fb36-4881-a42a-0ae968a8f6d5","Type":"ContainerDied","Data":"7af19986392582641b18a00257b92975e921096c0ef8196ef7b68af11af8dbec"} Oct 14 09:59:28 crc kubenswrapper[4698]: W1014 09:59:28.133882 4698 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod36580b93_747e_4338_9c0c_dab49837aa61.slice/crio-1ccf066092a8d2b6a8351dc3285a617741fe9077cf021d905348f69002ea4361 WatchSource:0}: Error finding container 1ccf066092a8d2b6a8351dc3285a617741fe9077cf021d905348f69002ea4361: Status 404 returned error can't find the container with id 1ccf066092a8d2b6a8351dc3285a617741fe9077cf021d905348f69002ea4361 Oct 14 09:59:28 crc kubenswrapper[4698]: I1014 09:59:28.150805 4698 generic.go:334] "Generic (PLEG): container finished" podID="6d303d83-3763-43b6-932e-e467a8ca39a1" containerID="892e8f09dcbb2394761d72fc089e6f113022d758d155e9dfe6baf676154f3aa9" exitCode=0 Oct 14 09:59:28 crc kubenswrapper[4698]: I1014 09:59:28.151067 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"6d303d83-3763-43b6-932e-e467a8ca39a1","Type":"ContainerDied","Data":"892e8f09dcbb2394761d72fc089e6f113022d758d155e9dfe6baf676154f3aa9"} Oct 14 09:59:28 crc kubenswrapper[4698]: I1014 09:59:28.175717 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"66899f2af9cd9d4a8de94f017467ff685f257694f62d219806fbc8856ff040b4"} Oct 14 09:59:28 crc kubenswrapper[4698]: I1014 09:59:28.175909 4698 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"f578d84875f206cebf266b4fdc85f065a619f57bac4862d2835ca9b8b0b8a776"} Oct 14 09:59:28 crc kubenswrapper[4698]: I1014 09:59:28.195504 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"80d85ed5e578974d866f768b16c9d68a4fd7cdcf1f6e810d2afd26bd6abe00a2"} Oct 14 09:59:28 crc kubenswrapper[4698]: I1014 09:59:28.215977 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-t5x7p" Oct 14 09:59:29 crc kubenswrapper[4698]: I1014 09:59:29.025359 4698 patch_prober.go:28] interesting pod/router-default-5444994796-qprfx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 14 09:59:29 crc kubenswrapper[4698]: [-]has-synced failed: reason withheld Oct 14 09:59:29 crc kubenswrapper[4698]: [+]process-running ok Oct 14 09:59:29 crc kubenswrapper[4698]: healthz check failed Oct 14 09:59:29 crc kubenswrapper[4698]: I1014 09:59:29.025645 4698 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-qprfx" podUID="193ddda3-4410-402c-a198-33ff7ea3a740" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 14 09:59:29 crc kubenswrapper[4698]: I1014 09:59:29.209820 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"c89a3074f4a62627c0a93b31c31ea07deaa8a5a751166509d04f5bbb5b6c6a1a"} Oct 14 09:59:29 crc 
kubenswrapper[4698]: I1014 09:59:29.210173 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"bd3799f086229aacfbf442c960d97aaa8b1d1cab048475f1dc52f10dfedb108a"} Oct 14 09:59:29 crc kubenswrapper[4698]: I1014 09:59:29.221489 4698 generic.go:334] "Generic (PLEG): container finished" podID="36580b93-747e-4338-9c0c-dab49837aa61" containerID="8ddeb1fd5a4e4c0528a7ec8b53bee293c7c52e05c707d1dce6074240a79d127a" exitCode=0 Oct 14 09:59:29 crc kubenswrapper[4698]: I1014 09:59:29.221556 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2b5sl" event={"ID":"36580b93-747e-4338-9c0c-dab49837aa61","Type":"ContainerDied","Data":"8ddeb1fd5a4e4c0528a7ec8b53bee293c7c52e05c707d1dce6074240a79d127a"} Oct 14 09:59:29 crc kubenswrapper[4698]: I1014 09:59:29.221583 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2b5sl" event={"ID":"36580b93-747e-4338-9c0c-dab49837aa61","Type":"ContainerStarted","Data":"1ccf066092a8d2b6a8351dc3285a617741fe9077cf021d905348f69002ea4361"} Oct 14 09:59:29 crc kubenswrapper[4698]: I1014 09:59:29.238352 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"ff49c7d6ffaf280a9ee2f4581a4edf300298114acce48f6ebe22317941581ada"} Oct 14 09:59:29 crc kubenswrapper[4698]: I1014 09:59:29.239038 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 14 09:59:29 crc kubenswrapper[4698]: I1014 09:59:29.255637 4698 generic.go:334] "Generic (PLEG): container finished" podID="60bf1aee-2ab1-4378-af5d-362bbe403adf" containerID="5e000c0ba50debae060ca33816dde852e55902c2c90f2cce55ce0a2364e2eed4" 
exitCode=0 Oct 14 09:59:29 crc kubenswrapper[4698]: I1014 09:59:29.256462 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-scb7r" event={"ID":"60bf1aee-2ab1-4378-af5d-362bbe403adf","Type":"ContainerDied","Data":"5e000c0ba50debae060ca33816dde852e55902c2c90f2cce55ce0a2364e2eed4"} Oct 14 09:59:29 crc kubenswrapper[4698]: I1014 09:59:29.256514 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-scb7r" event={"ID":"60bf1aee-2ab1-4378-af5d-362bbe403adf","Type":"ContainerStarted","Data":"7695f519e84cec2f88c2152069c13467a0c82a2422aa570ed9494132e6934d9a"} Oct 14 09:59:29 crc kubenswrapper[4698]: I1014 09:59:29.642487 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Oct 14 09:59:29 crc kubenswrapper[4698]: I1014 09:59:29.684142 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/6d303d83-3763-43b6-932e-e467a8ca39a1-kube-api-access\") pod \"6d303d83-3763-43b6-932e-e467a8ca39a1\" (UID: \"6d303d83-3763-43b6-932e-e467a8ca39a1\") " Oct 14 09:59:29 crc kubenswrapper[4698]: I1014 09:59:29.684253 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/6d303d83-3763-43b6-932e-e467a8ca39a1-kubelet-dir\") pod \"6d303d83-3763-43b6-932e-e467a8ca39a1\" (UID: \"6d303d83-3763-43b6-932e-e467a8ca39a1\") " Oct 14 09:59:29 crc kubenswrapper[4698]: I1014 09:59:29.684347 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6d303d83-3763-43b6-932e-e467a8ca39a1-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "6d303d83-3763-43b6-932e-e467a8ca39a1" (UID: "6d303d83-3763-43b6-932e-e467a8ca39a1"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 14 09:59:29 crc kubenswrapper[4698]: I1014 09:59:29.684613 4698 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/6d303d83-3763-43b6-932e-e467a8ca39a1-kubelet-dir\") on node \"crc\" DevicePath \"\"" Oct 14 09:59:29 crc kubenswrapper[4698]: I1014 09:59:29.691281 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6d303d83-3763-43b6-932e-e467a8ca39a1-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "6d303d83-3763-43b6-932e-e467a8ca39a1" (UID: "6d303d83-3763-43b6-932e-e467a8ca39a1"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 09:59:29 crc kubenswrapper[4698]: I1014 09:59:29.747806 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Oct 14 09:59:29 crc kubenswrapper[4698]: I1014 09:59:29.785833 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/bb5e1f30-a5d2-4f1e-a6ff-1975d0558409-kube-api-access\") pod \"bb5e1f30-a5d2-4f1e-a6ff-1975d0558409\" (UID: \"bb5e1f30-a5d2-4f1e-a6ff-1975d0558409\") " Oct 14 09:59:29 crc kubenswrapper[4698]: I1014 09:59:29.785908 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/bb5e1f30-a5d2-4f1e-a6ff-1975d0558409-kubelet-dir\") pod \"bb5e1f30-a5d2-4f1e-a6ff-1975d0558409\" (UID: \"bb5e1f30-a5d2-4f1e-a6ff-1975d0558409\") " Oct 14 09:59:29 crc kubenswrapper[4698]: I1014 09:59:29.786487 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/6d303d83-3763-43b6-932e-e467a8ca39a1-kube-api-access\") on node \"crc\" DevicePath \"\"" Oct 14 09:59:29 crc kubenswrapper[4698]: I1014 09:59:29.786534 
4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bb5e1f30-a5d2-4f1e-a6ff-1975d0558409-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "bb5e1f30-a5d2-4f1e-a6ff-1975d0558409" (UID: "bb5e1f30-a5d2-4f1e-a6ff-1975d0558409"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 14 09:59:29 crc kubenswrapper[4698]: I1014 09:59:29.790343 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bb5e1f30-a5d2-4f1e-a6ff-1975d0558409-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "bb5e1f30-a5d2-4f1e-a6ff-1975d0558409" (UID: "bb5e1f30-a5d2-4f1e-a6ff-1975d0558409"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 09:59:29 crc kubenswrapper[4698]: I1014 09:59:29.888722 4698 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/bb5e1f30-a5d2-4f1e-a6ff-1975d0558409-kubelet-dir\") on node \"crc\" DevicePath \"\"" Oct 14 09:59:29 crc kubenswrapper[4698]: I1014 09:59:29.888752 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/bb5e1f30-a5d2-4f1e-a6ff-1975d0558409-kube-api-access\") on node \"crc\" DevicePath \"\"" Oct 14 09:59:30 crc kubenswrapper[4698]: I1014 09:59:30.023868 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-5444994796-qprfx" Oct 14 09:59:30 crc kubenswrapper[4698]: I1014 09:59:30.026522 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-5444994796-qprfx" Oct 14 09:59:30 crc kubenswrapper[4698]: I1014 09:59:30.292747 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" 
event={"ID":"bb5e1f30-a5d2-4f1e-a6ff-1975d0558409","Type":"ContainerDied","Data":"092117a1226c8595d4b6d0f70e10bde6e9bfde1e27aeb778bb940c2d34239140"} Oct 14 09:59:30 crc kubenswrapper[4698]: I1014 09:59:30.292796 4698 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="092117a1226c8595d4b6d0f70e10bde6e9bfde1e27aeb778bb940c2d34239140" Oct 14 09:59:30 crc kubenswrapper[4698]: I1014 09:59:30.293156 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Oct 14 09:59:30 crc kubenswrapper[4698]: I1014 09:59:30.299971 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Oct 14 09:59:30 crc kubenswrapper[4698]: I1014 09:59:30.301248 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"6d303d83-3763-43b6-932e-e467a8ca39a1","Type":"ContainerDied","Data":"af680c08167617d86e40ac4d3b9ea84e6a1285cc16a57a0f31da79b11683a4dc"} Oct 14 09:59:30 crc kubenswrapper[4698]: I1014 09:59:30.301325 4698 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="af680c08167617d86e40ac4d3b9ea84e6a1285cc16a57a0f31da79b11683a4dc" Oct 14 09:59:32 crc kubenswrapper[4698]: I1014 09:59:32.810961 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-nt998" Oct 14 09:59:37 crc kubenswrapper[4698]: I1014 09:59:37.474635 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-f9d7485db-f47kf" Oct 14 09:59:37 crc kubenswrapper[4698]: I1014 09:59:37.479366 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-f9d7485db-f47kf" Oct 14 09:59:37 crc kubenswrapper[4698]: I1014 09:59:37.700858 4698 patch_prober.go:28] interesting pod/downloads-7954f5f757-7fm6f 
container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.27:8080/\": dial tcp 10.217.0.27:8080: connect: connection refused" start-of-body= Oct 14 09:59:37 crc kubenswrapper[4698]: I1014 09:59:37.701276 4698 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-7fm6f" podUID="0bf22386-43f0-4d64-abb0-cdec28434502" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.27:8080/\": dial tcp 10.217.0.27:8080: connect: connection refused" Oct 14 09:59:37 crc kubenswrapper[4698]: I1014 09:59:37.701139 4698 patch_prober.go:28] interesting pod/downloads-7954f5f757-7fm6f container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.27:8080/\": dial tcp 10.217.0.27:8080: connect: connection refused" start-of-body= Oct 14 09:59:37 crc kubenswrapper[4698]: I1014 09:59:37.702697 4698 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-7fm6f" podUID="0bf22386-43f0-4d64-abb0-cdec28434502" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.27:8080/\": dial tcp 10.217.0.27:8080: connect: connection refused" Oct 14 09:59:40 crc kubenswrapper[4698]: I1014 09:59:40.072141 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/41f5ac86-35f8-416c-bbfe-1e182975ec5c-metrics-certs\") pod \"network-metrics-daemon-jbpnj\" (UID: \"41f5ac86-35f8-416c-bbfe-1e182975ec5c\") " pod="openshift-multus/network-metrics-daemon-jbpnj" Oct 14 09:59:40 crc kubenswrapper[4698]: I1014 09:59:40.082305 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/41f5ac86-35f8-416c-bbfe-1e182975ec5c-metrics-certs\") pod \"network-metrics-daemon-jbpnj\" (UID: \"41f5ac86-35f8-416c-bbfe-1e182975ec5c\") " 
pod="openshift-multus/network-metrics-daemon-jbpnj" Oct 14 09:59:40 crc kubenswrapper[4698]: I1014 09:59:40.143320 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jbpnj" Oct 14 09:59:45 crc kubenswrapper[4698]: I1014 09:59:45.344402 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-697d97f7c8-nthfk" Oct 14 09:59:47 crc kubenswrapper[4698]: I1014 09:59:47.713122 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-7954f5f757-7fm6f" Oct 14 09:59:50 crc kubenswrapper[4698]: I1014 09:59:50.335620 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-jbpnj"] Oct 14 09:59:53 crc kubenswrapper[4698]: I1014 09:59:53.908186 4698 patch_prober.go:28] interesting pod/machine-config-daemon-lp4sk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 14 09:59:53 crc kubenswrapper[4698]: I1014 09:59:53.908574 4698 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-lp4sk" podUID="c359a8fc-1e2f-49af-8da2-719d52bd969a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 14 09:59:54 crc kubenswrapper[4698]: E1014 09:59:54.866985 4698 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Oct 14 09:59:54 crc kubenswrapper[4698]: E1014 09:59:54.867221 4698 kuberuntime_manager.go:1274] "Unhandled Error" err="init container 
&Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rmrtt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-w6fjg_openshift-marketplace(04793f3a-4ff7-4fab-b3cb-756515510f54): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Oct 14 09:59:54 crc kubenswrapper[4698]: E1014 09:59:54.868556 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: 
code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-w6fjg" podUID="04793f3a-4ff7-4fab-b3cb-756515510f54" Oct 14 09:59:58 crc kubenswrapper[4698]: I1014 09:59:58.036126 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-8g7h6" Oct 14 10:00:00 crc kubenswrapper[4698]: I1014 10:00:00.144422 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29340600-6tccj"] Oct 14 10:00:00 crc kubenswrapper[4698]: E1014 10:00:00.145390 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6d303d83-3763-43b6-932e-e467a8ca39a1" containerName="pruner" Oct 14 10:00:00 crc kubenswrapper[4698]: I1014 10:00:00.145413 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="6d303d83-3763-43b6-932e-e467a8ca39a1" containerName="pruner" Oct 14 10:00:00 crc kubenswrapper[4698]: E1014 10:00:00.145447 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bb5e1f30-a5d2-4f1e-a6ff-1975d0558409" containerName="pruner" Oct 14 10:00:00 crc kubenswrapper[4698]: I1014 10:00:00.145457 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="bb5e1f30-a5d2-4f1e-a6ff-1975d0558409" containerName="pruner" Oct 14 10:00:00 crc kubenswrapper[4698]: I1014 10:00:00.145598 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="bb5e1f30-a5d2-4f1e-a6ff-1975d0558409" containerName="pruner" Oct 14 10:00:00 crc kubenswrapper[4698]: I1014 10:00:00.145635 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="6d303d83-3763-43b6-932e-e467a8ca39a1" containerName="pruner" Oct 14 10:00:00 crc kubenswrapper[4698]: I1014 10:00:00.146232 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29340600-6tccj" Oct 14 10:00:00 crc kubenswrapper[4698]: I1014 10:00:00.149145 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Oct 14 10:00:00 crc kubenswrapper[4698]: I1014 10:00:00.152000 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Oct 14 10:00:00 crc kubenswrapper[4698]: I1014 10:00:00.154103 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29340600-6tccj"] Oct 14 10:00:00 crc kubenswrapper[4698]: I1014 10:00:00.196352 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jjw7x\" (UniqueName: \"kubernetes.io/projected/121bbec2-1aed-4e03-b35c-1c93b5dbddd2-kube-api-access-jjw7x\") pod \"collect-profiles-29340600-6tccj\" (UID: \"121bbec2-1aed-4e03-b35c-1c93b5dbddd2\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29340600-6tccj" Oct 14 10:00:00 crc kubenswrapper[4698]: I1014 10:00:00.196415 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/121bbec2-1aed-4e03-b35c-1c93b5dbddd2-secret-volume\") pod \"collect-profiles-29340600-6tccj\" (UID: \"121bbec2-1aed-4e03-b35c-1c93b5dbddd2\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29340600-6tccj" Oct 14 10:00:00 crc kubenswrapper[4698]: I1014 10:00:00.196438 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/121bbec2-1aed-4e03-b35c-1c93b5dbddd2-config-volume\") pod \"collect-profiles-29340600-6tccj\" (UID: \"121bbec2-1aed-4e03-b35c-1c93b5dbddd2\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29340600-6tccj" Oct 14 10:00:00 crc kubenswrapper[4698]: I1014 10:00:00.297599 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jjw7x\" (UniqueName: \"kubernetes.io/projected/121bbec2-1aed-4e03-b35c-1c93b5dbddd2-kube-api-access-jjw7x\") pod \"collect-profiles-29340600-6tccj\" (UID: \"121bbec2-1aed-4e03-b35c-1c93b5dbddd2\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29340600-6tccj" Oct 14 10:00:00 crc kubenswrapper[4698]: I1014 10:00:00.297696 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/121bbec2-1aed-4e03-b35c-1c93b5dbddd2-secret-volume\") pod \"collect-profiles-29340600-6tccj\" (UID: \"121bbec2-1aed-4e03-b35c-1c93b5dbddd2\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29340600-6tccj" Oct 14 10:00:00 crc kubenswrapper[4698]: I1014 10:00:00.297732 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/121bbec2-1aed-4e03-b35c-1c93b5dbddd2-config-volume\") pod \"collect-profiles-29340600-6tccj\" (UID: \"121bbec2-1aed-4e03-b35c-1c93b5dbddd2\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29340600-6tccj" Oct 14 10:00:00 crc kubenswrapper[4698]: I1014 10:00:00.298745 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/121bbec2-1aed-4e03-b35c-1c93b5dbddd2-config-volume\") pod \"collect-profiles-29340600-6tccj\" (UID: \"121bbec2-1aed-4e03-b35c-1c93b5dbddd2\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29340600-6tccj" Oct 14 10:00:00 crc kubenswrapper[4698]: I1014 10:00:00.303911 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/121bbec2-1aed-4e03-b35c-1c93b5dbddd2-secret-volume\") pod \"collect-profiles-29340600-6tccj\" (UID: \"121bbec2-1aed-4e03-b35c-1c93b5dbddd2\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29340600-6tccj" Oct 14 10:00:00 crc kubenswrapper[4698]: I1014 10:00:00.323715 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jjw7x\" (UniqueName: \"kubernetes.io/projected/121bbec2-1aed-4e03-b35c-1c93b5dbddd2-kube-api-access-jjw7x\") pod \"collect-profiles-29340600-6tccj\" (UID: \"121bbec2-1aed-4e03-b35c-1c93b5dbddd2\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29340600-6tccj" Oct 14 10:00:00 crc kubenswrapper[4698]: I1014 10:00:00.474445 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29340600-6tccj" Oct 14 10:00:03 crc kubenswrapper[4698]: E1014 10:00:03.262459 4698 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Oct 14 10:00:03 crc kubenswrapper[4698]: E1014 10:00:03.262691 4698 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qn5bk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-scb7r_openshift-marketplace(60bf1aee-2ab1-4378-af5d-362bbe403adf): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Oct 14 10:00:03 crc kubenswrapper[4698]: E1014 10:00:03.264143 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-scb7r" podUID="60bf1aee-2ab1-4378-af5d-362bbe403adf" Oct 14 10:00:04 crc 
kubenswrapper[4698]: E1014 10:00:04.049052 4698 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Oct 14 10:00:04 crc kubenswrapper[4698]: E1014 10:00:04.049244 4698 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-66lgt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
redhat-operators-2b5sl_openshift-marketplace(36580b93-747e-4338-9c0c-dab49837aa61): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Oct 14 10:00:04 crc kubenswrapper[4698]: E1014 10:00:04.050412 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-2b5sl" podUID="36580b93-747e-4338-9c0c-dab49837aa61" Oct 14 10:00:04 crc kubenswrapper[4698]: E1014 10:00:04.098397 4698 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Oct 14 10:00:04 crc kubenswrapper[4698]: E1014 10:00:04.099324 4698 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8fcns,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-mvpk5_openshift-marketplace(206eaf53-fb36-4881-a42a-0ae968a8f6d5): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Oct 14 10:00:04 crc kubenswrapper[4698]: E1014 10:00:04.101606 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-mvpk5" podUID="206eaf53-fb36-4881-a42a-0ae968a8f6d5" Oct 14 10:00:05 crc 
kubenswrapper[4698]: E1014 10:00:05.432024 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-mvpk5" podUID="206eaf53-fb36-4881-a42a-0ae968a8f6d5" Oct 14 10:00:05 crc kubenswrapper[4698]: E1014 10:00:05.432356 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-scb7r" podUID="60bf1aee-2ab1-4378-af5d-362bbe403adf" Oct 14 10:00:05 crc kubenswrapper[4698]: E1014 10:00:05.432484 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-2b5sl" podUID="36580b93-747e-4338-9c0c-dab49837aa61" Oct 14 10:00:05 crc kubenswrapper[4698]: W1014 10:00:05.434548 4698 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod41f5ac86_35f8_416c_bbfe_1e182975ec5c.slice/crio-a103644efac490be01fb82b8945663b73cfb6b0766caf32f5e7147c4e4c7cc4a WatchSource:0}: Error finding container a103644efac490be01fb82b8945663b73cfb6b0766caf32f5e7147c4e4c7cc4a: Status 404 returned error can't find the container with id a103644efac490be01fb82b8945663b73cfb6b0766caf32f5e7147c4e4c7cc4a Oct 14 10:00:05 crc kubenswrapper[4698]: I1014 10:00:05.539158 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-jbpnj" 
event={"ID":"41f5ac86-35f8-416c-bbfe-1e182975ec5c","Type":"ContainerStarted","Data":"a103644efac490be01fb82b8945663b73cfb6b0766caf32f5e7147c4e4c7cc4a"} Oct 14 10:00:05 crc kubenswrapper[4698]: E1014 10:00:05.602945 4698 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Oct 14 10:00:05 crc kubenswrapper[4698]: E1014 10:00:05.603121 4698 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4vcl8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLog
sOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-86gfn_openshift-marketplace(d1e0097e-6470-4dd4-b86c-e5f1cecf6759): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Oct 14 10:00:05 crc kubenswrapper[4698]: E1014 10:00:05.604324 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-86gfn" podUID="d1e0097e-6470-4dd4-b86c-e5f1cecf6759" Oct 14 10:00:05 crc kubenswrapper[4698]: I1014 10:00:05.675069 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29340600-6tccj"] Oct 14 10:00:05 crc kubenswrapper[4698]: W1014 10:00:05.685999 4698 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod121bbec2_1aed_4e03_b35c_1c93b5dbddd2.slice/crio-ecc1452685ffdbd57c26fc63c0ac689be162bd1c34740ba57133f64e01c318b1 WatchSource:0}: Error finding container ecc1452685ffdbd57c26fc63c0ac689be162bd1c34740ba57133f64e01c318b1: Status 404 returned error can't find the container with id ecc1452685ffdbd57c26fc63c0ac689be162bd1c34740ba57133f64e01c318b1 Oct 14 10:00:05 crc kubenswrapper[4698]: E1014 10:00:05.698032 4698 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Oct 14 10:00:05 crc kubenswrapper[4698]: E1014 10:00:05.698182 4698 kuberuntime_manager.go:1274] "Unhandled Error" err="init container 
&Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-dhhzg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-cl9gm_openshift-marketplace(97d2636c-36ca-4957-9ebe-8cc679ca9e01): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Oct 14 10:00:05 crc kubenswrapper[4698]: E1014 10:00:05.699295 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code 
= Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-cl9gm" podUID="97d2636c-36ca-4957-9ebe-8cc679ca9e01" Oct 14 10:00:05 crc kubenswrapper[4698]: E1014 10:00:05.773253 4698 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Oct 14 10:00:05 crc kubenswrapper[4698]: E1014 10:00:05.773449 4698 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ssfrc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSourc
e{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-gld8c_openshift-marketplace(062fcf04-080a-4aaf-9ca0-0bb57f1ff97d): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Oct 14 10:00:05 crc kubenswrapper[4698]: E1014 10:00:05.774733 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-gld8c" podUID="062fcf04-080a-4aaf-9ca0-0bb57f1ff97d" Oct 14 10:00:05 crc kubenswrapper[4698]: E1014 10:00:05.996113 4698 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Oct 14 10:00:05 crc kubenswrapper[4698]: E1014 10:00:05.996275 4698 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5j5z8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-fmbnz_openshift-marketplace(1884ea04-91e0-48d5-aa12-ff1375921a10): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Oct 14 10:00:05 crc kubenswrapper[4698]: E1014 10:00:05.998315 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-fmbnz" podUID="1884ea04-91e0-48d5-aa12-ff1375921a10" Oct 14 10:00:06 crc 
kubenswrapper[4698]: I1014 10:00:06.548551 4698 generic.go:334] "Generic (PLEG): container finished" podID="121bbec2-1aed-4e03-b35c-1c93b5dbddd2" containerID="41d54af16289f8cafe6cb08a4797c7a589689d8568aeb4fbd04d8a1505495907" exitCode=0 Oct 14 10:00:06 crc kubenswrapper[4698]: I1014 10:00:06.548676 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29340600-6tccj" event={"ID":"121bbec2-1aed-4e03-b35c-1c93b5dbddd2","Type":"ContainerDied","Data":"41d54af16289f8cafe6cb08a4797c7a589689d8568aeb4fbd04d8a1505495907"} Oct 14 10:00:06 crc kubenswrapper[4698]: I1014 10:00:06.548746 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29340600-6tccj" event={"ID":"121bbec2-1aed-4e03-b35c-1c93b5dbddd2","Type":"ContainerStarted","Data":"ecc1452685ffdbd57c26fc63c0ac689be162bd1c34740ba57133f64e01c318b1"} Oct 14 10:00:06 crc kubenswrapper[4698]: I1014 10:00:06.552532 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-jbpnj" event={"ID":"41f5ac86-35f8-416c-bbfe-1e182975ec5c","Type":"ContainerStarted","Data":"8878ca024dae05ff20be25821050805fc38a7f3278b2afba5c2ee2efd38e9101"} Oct 14 10:00:06 crc kubenswrapper[4698]: I1014 10:00:06.552585 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-jbpnj" event={"ID":"41f5ac86-35f8-416c-bbfe-1e182975ec5c","Type":"ContainerStarted","Data":"22d6ca8f3defd73124329e6e2225a802cb7b37be3c669e641f6a11a2d28b41af"} Oct 14 10:00:06 crc kubenswrapper[4698]: I1014 10:00:06.634117 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/network-metrics-daemon-jbpnj" podStartSLOduration=168.634092446 podStartE2EDuration="2m48.634092446s" podCreationTimestamp="2025-10-14 09:57:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2025-10-14 10:00:06.628611345 +0000 UTC m=+188.325910791" watchObservedRunningTime="2025-10-14 10:00:06.634092446 +0000 UTC m=+188.331391872" Oct 14 10:00:06 crc kubenswrapper[4698]: E1014 10:00:06.639503 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-cl9gm" podUID="97d2636c-36ca-4957-9ebe-8cc679ca9e01" Oct 14 10:00:06 crc kubenswrapper[4698]: E1014 10:00:06.639711 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-86gfn" podUID="d1e0097e-6470-4dd4-b86c-e5f1cecf6759" Oct 14 10:00:06 crc kubenswrapper[4698]: E1014 10:00:06.650456 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-fmbnz" podUID="1884ea04-91e0-48d5-aa12-ff1375921a10" Oct 14 10:00:06 crc kubenswrapper[4698]: E1014 10:00:06.654097 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-gld8c" podUID="062fcf04-080a-4aaf-9ca0-0bb57f1ff97d" Oct 14 10:00:07 crc kubenswrapper[4698]: I1014 10:00:07.280058 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 14 10:00:07 crc kubenswrapper[4698]: I1014 10:00:07.558233 
4698 generic.go:334] "Generic (PLEG): container finished" podID="04793f3a-4ff7-4fab-b3cb-756515510f54" containerID="6e7381c6a14f7ba2a3101494094e42ea6e0454e92f0e8f22140449f19b530a8e" exitCode=0 Oct 14 10:00:07 crc kubenswrapper[4698]: I1014 10:00:07.558468 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-w6fjg" event={"ID":"04793f3a-4ff7-4fab-b3cb-756515510f54","Type":"ContainerDied","Data":"6e7381c6a14f7ba2a3101494094e42ea6e0454e92f0e8f22140449f19b530a8e"} Oct 14 10:00:07 crc kubenswrapper[4698]: I1014 10:00:07.740338 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29340600-6tccj" Oct 14 10:00:07 crc kubenswrapper[4698]: I1014 10:00:07.793314 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jjw7x\" (UniqueName: \"kubernetes.io/projected/121bbec2-1aed-4e03-b35c-1c93b5dbddd2-kube-api-access-jjw7x\") pod \"121bbec2-1aed-4e03-b35c-1c93b5dbddd2\" (UID: \"121bbec2-1aed-4e03-b35c-1c93b5dbddd2\") " Oct 14 10:00:07 crc kubenswrapper[4698]: I1014 10:00:07.793431 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/121bbec2-1aed-4e03-b35c-1c93b5dbddd2-config-volume\") pod \"121bbec2-1aed-4e03-b35c-1c93b5dbddd2\" (UID: \"121bbec2-1aed-4e03-b35c-1c93b5dbddd2\") " Oct 14 10:00:07 crc kubenswrapper[4698]: I1014 10:00:07.793473 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/121bbec2-1aed-4e03-b35c-1c93b5dbddd2-secret-volume\") pod \"121bbec2-1aed-4e03-b35c-1c93b5dbddd2\" (UID: \"121bbec2-1aed-4e03-b35c-1c93b5dbddd2\") " Oct 14 10:00:07 crc kubenswrapper[4698]: I1014 10:00:07.795563 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/121bbec2-1aed-4e03-b35c-1c93b5dbddd2-config-volume" (OuterVolumeSpecName: "config-volume") pod "121bbec2-1aed-4e03-b35c-1c93b5dbddd2" (UID: "121bbec2-1aed-4e03-b35c-1c93b5dbddd2"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 10:00:07 crc kubenswrapper[4698]: I1014 10:00:07.798915 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/121bbec2-1aed-4e03-b35c-1c93b5dbddd2-kube-api-access-jjw7x" (OuterVolumeSpecName: "kube-api-access-jjw7x") pod "121bbec2-1aed-4e03-b35c-1c93b5dbddd2" (UID: "121bbec2-1aed-4e03-b35c-1c93b5dbddd2"). InnerVolumeSpecName "kube-api-access-jjw7x". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 10:00:07 crc kubenswrapper[4698]: I1014 10:00:07.799041 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/121bbec2-1aed-4e03-b35c-1c93b5dbddd2-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "121bbec2-1aed-4e03-b35c-1c93b5dbddd2" (UID: "121bbec2-1aed-4e03-b35c-1c93b5dbddd2"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 10:00:07 crc kubenswrapper[4698]: I1014 10:00:07.894704 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jjw7x\" (UniqueName: \"kubernetes.io/projected/121bbec2-1aed-4e03-b35c-1c93b5dbddd2-kube-api-access-jjw7x\") on node \"crc\" DevicePath \"\"" Oct 14 10:00:07 crc kubenswrapper[4698]: I1014 10:00:07.894806 4698 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/121bbec2-1aed-4e03-b35c-1c93b5dbddd2-config-volume\") on node \"crc\" DevicePath \"\"" Oct 14 10:00:07 crc kubenswrapper[4698]: I1014 10:00:07.894819 4698 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/121bbec2-1aed-4e03-b35c-1c93b5dbddd2-secret-volume\") on node \"crc\" DevicePath \"\"" Oct 14 10:00:08 crc kubenswrapper[4698]: I1014 10:00:08.568316 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29340600-6tccj" event={"ID":"121bbec2-1aed-4e03-b35c-1c93b5dbddd2","Type":"ContainerDied","Data":"ecc1452685ffdbd57c26fc63c0ac689be162bd1c34740ba57133f64e01c318b1"} Oct 14 10:00:08 crc kubenswrapper[4698]: I1014 10:00:08.568754 4698 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ecc1452685ffdbd57c26fc63c0ac689be162bd1c34740ba57133f64e01c318b1" Oct 14 10:00:08 crc kubenswrapper[4698]: I1014 10:00:08.568375 4698 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29340600-6tccj" Oct 14 10:00:09 crc kubenswrapper[4698]: I1014 10:00:09.577845 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-w6fjg" event={"ID":"04793f3a-4ff7-4fab-b3cb-756515510f54","Type":"ContainerStarted","Data":"37de6fab9b0c6263e37d7b6d0983695393ea67262be78a67d59745b1f698b290"} Oct 14 10:00:09 crc kubenswrapper[4698]: I1014 10:00:09.613109 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-w6fjg" podStartSLOduration=4.287146546 podStartE2EDuration="46.613078833s" podCreationTimestamp="2025-10-14 09:59:23 +0000 UTC" firstStartedPulling="2025-10-14 09:59:26.00627604 +0000 UTC m=+147.703575496" lastFinishedPulling="2025-10-14 10:00:08.332208337 +0000 UTC m=+190.029507783" observedRunningTime="2025-10-14 10:00:09.610127996 +0000 UTC m=+191.307427472" watchObservedRunningTime="2025-10-14 10:00:09.613078833 +0000 UTC m=+191.310378289" Oct 14 10:00:14 crc kubenswrapper[4698]: I1014 10:00:14.269879 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-w6fjg" Oct 14 10:00:14 crc kubenswrapper[4698]: I1014 10:00:14.270246 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-w6fjg" Oct 14 10:00:14 crc kubenswrapper[4698]: I1014 10:00:14.406185 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-w6fjg" Oct 14 10:00:14 crc kubenswrapper[4698]: I1014 10:00:14.668015 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-w6fjg" Oct 14 10:00:18 crc kubenswrapper[4698]: I1014 10:00:18.630804 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-scb7r" 
event={"ID":"60bf1aee-2ab1-4378-af5d-362bbe403adf","Type":"ContainerStarted","Data":"67a28b91f5a21ea6635f4a011cbf7718385bc9c6b86d629e2cfa32e26afec5ad"} Oct 14 10:00:19 crc kubenswrapper[4698]: I1014 10:00:19.645695 4698 generic.go:334] "Generic (PLEG): container finished" podID="60bf1aee-2ab1-4378-af5d-362bbe403adf" containerID="67a28b91f5a21ea6635f4a011cbf7718385bc9c6b86d629e2cfa32e26afec5ad" exitCode=0 Oct 14 10:00:19 crc kubenswrapper[4698]: I1014 10:00:19.645868 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-scb7r" event={"ID":"60bf1aee-2ab1-4378-af5d-362bbe403adf","Type":"ContainerDied","Data":"67a28b91f5a21ea6635f4a011cbf7718385bc9c6b86d629e2cfa32e26afec5ad"} Oct 14 10:00:19 crc kubenswrapper[4698]: I1014 10:00:19.652954 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2b5sl" event={"ID":"36580b93-747e-4338-9c0c-dab49837aa61","Type":"ContainerStarted","Data":"3675dfb353e6cbfe6bd704869d6eac3551e420e8f6a0b9f89b7ee4f8898408df"} Oct 14 10:00:20 crc kubenswrapper[4698]: I1014 10:00:20.662722 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-scb7r" event={"ID":"60bf1aee-2ab1-4378-af5d-362bbe403adf","Type":"ContainerStarted","Data":"b231164c35321b86255a11003a05b9d68729786ce64fb2135d0643df0a7d9712"} Oct 14 10:00:20 crc kubenswrapper[4698]: I1014 10:00:20.664702 4698 generic.go:334] "Generic (PLEG): container finished" podID="206eaf53-fb36-4881-a42a-0ae968a8f6d5" containerID="21112852e5b6b951a9da937fe47c3a3bce1e3cd97137af8a82f657a954186ad6" exitCode=0 Oct 14 10:00:20 crc kubenswrapper[4698]: I1014 10:00:20.664791 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mvpk5" event={"ID":"206eaf53-fb36-4881-a42a-0ae968a8f6d5","Type":"ContainerDied","Data":"21112852e5b6b951a9da937fe47c3a3bce1e3cd97137af8a82f657a954186ad6"} Oct 14 10:00:20 crc kubenswrapper[4698]: I1014 
10:00:20.667314 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gld8c" event={"ID":"062fcf04-080a-4aaf-9ca0-0bb57f1ff97d","Type":"ContainerStarted","Data":"dc464456d082fb4086d0befc43629f0297452bbbddef907e12fb60ea8538cdb2"} Oct 14 10:00:20 crc kubenswrapper[4698]: I1014 10:00:20.669519 4698 generic.go:334] "Generic (PLEG): container finished" podID="36580b93-747e-4338-9c0c-dab49837aa61" containerID="3675dfb353e6cbfe6bd704869d6eac3551e420e8f6a0b9f89b7ee4f8898408df" exitCode=0 Oct 14 10:00:20 crc kubenswrapper[4698]: I1014 10:00:20.669574 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2b5sl" event={"ID":"36580b93-747e-4338-9c0c-dab49837aa61","Type":"ContainerDied","Data":"3675dfb353e6cbfe6bd704869d6eac3551e420e8f6a0b9f89b7ee4f8898408df"} Oct 14 10:00:20 crc kubenswrapper[4698]: I1014 10:00:20.673671 4698 generic.go:334] "Generic (PLEG): container finished" podID="d1e0097e-6470-4dd4-b86c-e5f1cecf6759" containerID="1b1896ebf7df6ba8b64fcf8241a71ab06c0211222137dc39c91d0aa452847308" exitCode=0 Oct 14 10:00:20 crc kubenswrapper[4698]: I1014 10:00:20.673715 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-86gfn" event={"ID":"d1e0097e-6470-4dd4-b86c-e5f1cecf6759","Type":"ContainerDied","Data":"1b1896ebf7df6ba8b64fcf8241a71ab06c0211222137dc39c91d0aa452847308"} Oct 14 10:00:20 crc kubenswrapper[4698]: I1014 10:00:20.692453 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-scb7r" podStartSLOduration=2.689889535 podStartE2EDuration="53.692430665s" podCreationTimestamp="2025-10-14 09:59:27 +0000 UTC" firstStartedPulling="2025-10-14 09:59:29.275066574 +0000 UTC m=+150.972365990" lastFinishedPulling="2025-10-14 10:00:20.277607694 +0000 UTC m=+201.974907120" observedRunningTime="2025-10-14 10:00:20.689583282 +0000 UTC m=+202.386882758" 
watchObservedRunningTime="2025-10-14 10:00:20.692430665 +0000 UTC m=+202.389730091" Oct 14 10:00:20 crc kubenswrapper[4698]: E1014 10:00:20.914715 4698 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod062fcf04_080a_4aaf_9ca0_0bb57f1ff97d.slice/crio-dc464456d082fb4086d0befc43629f0297452bbbddef907e12fb60ea8538cdb2.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod062fcf04_080a_4aaf_9ca0_0bb57f1ff97d.slice/crio-conmon-dc464456d082fb4086d0befc43629f0297452bbbddef907e12fb60ea8538cdb2.scope\": RecentStats: unable to find data in memory cache]" Oct 14 10:00:21 crc kubenswrapper[4698]: I1014 10:00:21.679997 4698 generic.go:334] "Generic (PLEG): container finished" podID="97d2636c-36ca-4957-9ebe-8cc679ca9e01" containerID="09a6105179e418a45045a37adf2803ce27c4c685e49a149b5755a9fd8e6b302c" exitCode=0 Oct 14 10:00:21 crc kubenswrapper[4698]: I1014 10:00:21.680222 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-cl9gm" event={"ID":"97d2636c-36ca-4957-9ebe-8cc679ca9e01","Type":"ContainerDied","Data":"09a6105179e418a45045a37adf2803ce27c4c685e49a149b5755a9fd8e6b302c"} Oct 14 10:00:21 crc kubenswrapper[4698]: I1014 10:00:21.683314 4698 generic.go:334] "Generic (PLEG): container finished" podID="1884ea04-91e0-48d5-aa12-ff1375921a10" containerID="92375e5a0a59f7d2621d3d034f440e961896d0edf926cabd1219ec2d36cff2ef" exitCode=0 Oct 14 10:00:21 crc kubenswrapper[4698]: I1014 10:00:21.683397 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fmbnz" event={"ID":"1884ea04-91e0-48d5-aa12-ff1375921a10","Type":"ContainerDied","Data":"92375e5a0a59f7d2621d3d034f440e961896d0edf926cabd1219ec2d36cff2ef"} Oct 14 10:00:21 crc kubenswrapper[4698]: I1014 10:00:21.686641 4698 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-marketplace/certified-operators-86gfn" event={"ID":"d1e0097e-6470-4dd4-b86c-e5f1cecf6759","Type":"ContainerStarted","Data":"143b35c26fab04170e1ed6b9f60bd3bd83123b614bd67301c776b08ca9bbffa5"} Oct 14 10:00:21 crc kubenswrapper[4698]: I1014 10:00:21.688397 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mvpk5" event={"ID":"206eaf53-fb36-4881-a42a-0ae968a8f6d5","Type":"ContainerStarted","Data":"ad43f94379dbbc4a9a59944d066616006b6983c3a9d0cf43b4faf727aa2a9118"} Oct 14 10:00:21 crc kubenswrapper[4698]: I1014 10:00:21.689652 4698 generic.go:334] "Generic (PLEG): container finished" podID="062fcf04-080a-4aaf-9ca0-0bb57f1ff97d" containerID="dc464456d082fb4086d0befc43629f0297452bbbddef907e12fb60ea8538cdb2" exitCode=0 Oct 14 10:00:21 crc kubenswrapper[4698]: I1014 10:00:21.689691 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gld8c" event={"ID":"062fcf04-080a-4aaf-9ca0-0bb57f1ff97d","Type":"ContainerDied","Data":"dc464456d082fb4086d0befc43629f0297452bbbddef907e12fb60ea8538cdb2"} Oct 14 10:00:21 crc kubenswrapper[4698]: I1014 10:00:21.724326 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-86gfn" podStartSLOduration=3.544067134 podStartE2EDuration="58.724311121s" podCreationTimestamp="2025-10-14 09:59:23 +0000 UTC" firstStartedPulling="2025-10-14 09:59:26.011708819 +0000 UTC m=+147.709008275" lastFinishedPulling="2025-10-14 10:00:21.191952836 +0000 UTC m=+202.889252262" observedRunningTime="2025-10-14 10:00:21.72291285 +0000 UTC m=+203.420212316" watchObservedRunningTime="2025-10-14 10:00:21.724311121 +0000 UTC m=+203.421610537" Oct 14 10:00:22 crc kubenswrapper[4698]: I1014 10:00:22.696279 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fmbnz" 
event={"ID":"1884ea04-91e0-48d5-aa12-ff1375921a10","Type":"ContainerStarted","Data":"650029cd1dafe2ff907f31486a0e8347f308a82f0a11b5fbd2192e1bbaba8aa8"} Oct 14 10:00:22 crc kubenswrapper[4698]: I1014 10:00:22.698552 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-cl9gm" event={"ID":"97d2636c-36ca-4957-9ebe-8cc679ca9e01","Type":"ContainerStarted","Data":"fde572e0b18d8bcbaf8adb68beed0a782039edd96ba39f8555e00876fc2560a6"} Oct 14 10:00:22 crc kubenswrapper[4698]: I1014 10:00:22.702335 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2b5sl" event={"ID":"36580b93-747e-4338-9c0c-dab49837aa61","Type":"ContainerStarted","Data":"6ebdc58247cadf3aba5ca70d95b77dfcd4a9bbce6d45161fe02a4450448c19fa"} Oct 14 10:00:22 crc kubenswrapper[4698]: I1014 10:00:22.704460 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gld8c" event={"ID":"062fcf04-080a-4aaf-9ca0-0bb57f1ff97d","Type":"ContainerStarted","Data":"1f4355abd1804cb0bfa938de87b7a20a55258c94cad64265dd9bdc87a6ff1904"} Oct 14 10:00:22 crc kubenswrapper[4698]: I1014 10:00:22.719912 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-fmbnz" podStartSLOduration=2.368897877 podStartE2EDuration="58.719898481s" podCreationTimestamp="2025-10-14 09:59:24 +0000 UTC" firstStartedPulling="2025-10-14 09:59:26.025924507 +0000 UTC m=+147.723223933" lastFinishedPulling="2025-10-14 10:00:22.376925111 +0000 UTC m=+204.074224537" observedRunningTime="2025-10-14 10:00:22.716569353 +0000 UTC m=+204.413868769" watchObservedRunningTime="2025-10-14 10:00:22.719898481 +0000 UTC m=+204.417197897" Oct 14 10:00:22 crc kubenswrapper[4698]: I1014 10:00:22.720702 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-mvpk5" podStartSLOduration=3.628721546 
podStartE2EDuration="56.720697604s" podCreationTimestamp="2025-10-14 09:59:26 +0000 UTC" firstStartedPulling="2025-10-14 09:59:28.123969683 +0000 UTC m=+149.821269099" lastFinishedPulling="2025-10-14 10:00:21.215945701 +0000 UTC m=+202.913245157" observedRunningTime="2025-10-14 10:00:21.783278924 +0000 UTC m=+203.480578340" watchObservedRunningTime="2025-10-14 10:00:22.720697604 +0000 UTC m=+204.417997020" Oct 14 10:00:22 crc kubenswrapper[4698]: I1014 10:00:22.734433 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-cl9gm" podStartSLOduration=2.438982348 podStartE2EDuration="57.734415767s" podCreationTimestamp="2025-10-14 09:59:25 +0000 UTC" firstStartedPulling="2025-10-14 09:59:27.071688694 +0000 UTC m=+148.768988110" lastFinishedPulling="2025-10-14 10:00:22.367122103 +0000 UTC m=+204.064421529" observedRunningTime="2025-10-14 10:00:22.733734487 +0000 UTC m=+204.431033903" watchObservedRunningTime="2025-10-14 10:00:22.734415767 +0000 UTC m=+204.431715183" Oct 14 10:00:22 crc kubenswrapper[4698]: I1014 10:00:22.753311 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-2b5sl" podStartSLOduration=4.439369302 podStartE2EDuration="56.753291932s" podCreationTimestamp="2025-10-14 09:59:26 +0000 UTC" firstStartedPulling="2025-10-14 09:59:29.233488202 +0000 UTC m=+150.930787628" lastFinishedPulling="2025-10-14 10:00:21.547410842 +0000 UTC m=+203.244710258" observedRunningTime="2025-10-14 10:00:22.7498175 +0000 UTC m=+204.447116946" watchObservedRunningTime="2025-10-14 10:00:22.753291932 +0000 UTC m=+204.450591348" Oct 14 10:00:22 crc kubenswrapper[4698]: I1014 10:00:22.773349 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-gld8c" podStartSLOduration=2.591216781 podStartE2EDuration="58.773333891s" podCreationTimestamp="2025-10-14 09:59:24 +0000 UTC" 
firstStartedPulling="2025-10-14 09:59:26.018737876 +0000 UTC m=+147.716037292" lastFinishedPulling="2025-10-14 10:00:22.200854976 +0000 UTC m=+203.898154402" observedRunningTime="2025-10-14 10:00:22.768756337 +0000 UTC m=+204.466055753" watchObservedRunningTime="2025-10-14 10:00:22.773333891 +0000 UTC m=+204.470633307" Oct 14 10:00:23 crc kubenswrapper[4698]: I1014 10:00:23.907646 4698 patch_prober.go:28] interesting pod/machine-config-daemon-lp4sk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 14 10:00:23 crc kubenswrapper[4698]: I1014 10:00:23.907947 4698 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-lp4sk" podUID="c359a8fc-1e2f-49af-8da2-719d52bd969a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 14 10:00:23 crc kubenswrapper[4698]: I1014 10:00:23.907990 4698 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-lp4sk" Oct 14 10:00:23 crc kubenswrapper[4698]: I1014 10:00:23.908492 4698 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"8068aecfbfc156636e404dba3daa36e0bebb92ecf3c63158eaebf9a03cdc96b1"} pod="openshift-machine-config-operator/machine-config-daemon-lp4sk" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Oct 14 10:00:23 crc kubenswrapper[4698]: I1014 10:00:23.908575 4698 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-lp4sk" podUID="c359a8fc-1e2f-49af-8da2-719d52bd969a" containerName="machine-config-daemon" 
containerID="cri-o://8068aecfbfc156636e404dba3daa36e0bebb92ecf3c63158eaebf9a03cdc96b1" gracePeriod=600 Oct 14 10:00:24 crc kubenswrapper[4698]: I1014 10:00:24.040922 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-86gfn" Oct 14 10:00:24 crc kubenswrapper[4698]: I1014 10:00:24.041266 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-86gfn" Oct 14 10:00:24 crc kubenswrapper[4698]: I1014 10:00:24.092239 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-86gfn" Oct 14 10:00:24 crc kubenswrapper[4698]: I1014 10:00:24.482820 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-fmbnz" Oct 14 10:00:24 crc kubenswrapper[4698]: I1014 10:00:24.482883 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-fmbnz" Oct 14 10:00:24 crc kubenswrapper[4698]: I1014 10:00:24.683969 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-gld8c" Oct 14 10:00:24 crc kubenswrapper[4698]: I1014 10:00:24.684043 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-gld8c" Oct 14 10:00:24 crc kubenswrapper[4698]: I1014 10:00:24.715925 4698 generic.go:334] "Generic (PLEG): container finished" podID="c359a8fc-1e2f-49af-8da2-719d52bd969a" containerID="8068aecfbfc156636e404dba3daa36e0bebb92ecf3c63158eaebf9a03cdc96b1" exitCode=0 Oct 14 10:00:24 crc kubenswrapper[4698]: I1014 10:00:24.715980 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-lp4sk" 
event={"ID":"c359a8fc-1e2f-49af-8da2-719d52bd969a","Type":"ContainerDied","Data":"8068aecfbfc156636e404dba3daa36e0bebb92ecf3c63158eaebf9a03cdc96b1"} Oct 14 10:00:24 crc kubenswrapper[4698]: I1014 10:00:24.716619 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-lp4sk" event={"ID":"c359a8fc-1e2f-49af-8da2-719d52bd969a","Type":"ContainerStarted","Data":"023a5dc316be8020894e4c5c93e1b936d78922591d1d7856a49d373ddec6f38b"} Oct 14 10:00:25 crc kubenswrapper[4698]: I1014 10:00:25.517388 4698 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-fmbnz" podUID="1884ea04-91e0-48d5-aa12-ff1375921a10" containerName="registry-server" probeResult="failure" output=< Oct 14 10:00:25 crc kubenswrapper[4698]: timeout: failed to connect service ":50051" within 1s Oct 14 10:00:25 crc kubenswrapper[4698]: > Oct 14 10:00:25 crc kubenswrapper[4698]: I1014 10:00:25.741796 4698 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-gld8c" podUID="062fcf04-080a-4aaf-9ca0-0bb57f1ff97d" containerName="registry-server" probeResult="failure" output=< Oct 14 10:00:25 crc kubenswrapper[4698]: timeout: failed to connect service ":50051" within 1s Oct 14 10:00:25 crc kubenswrapper[4698]: > Oct 14 10:00:26 crc kubenswrapper[4698]: I1014 10:00:26.208980 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-cl9gm" Oct 14 10:00:26 crc kubenswrapper[4698]: I1014 10:00:26.209729 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-cl9gm" Oct 14 10:00:26 crc kubenswrapper[4698]: I1014 10:00:26.285981 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-cl9gm" Oct 14 10:00:26 crc kubenswrapper[4698]: I1014 10:00:26.622684 4698 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-mvpk5" Oct 14 10:00:26 crc kubenswrapper[4698]: I1014 10:00:26.623972 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-mvpk5" Oct 14 10:00:26 crc kubenswrapper[4698]: I1014 10:00:26.667387 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-mvpk5" Oct 14 10:00:26 crc kubenswrapper[4698]: I1014 10:00:26.763961 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-mvpk5" Oct 14 10:00:27 crc kubenswrapper[4698]: I1014 10:00:27.274357 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-2b5sl" Oct 14 10:00:27 crc kubenswrapper[4698]: I1014 10:00:27.274446 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-2b5sl" Oct 14 10:00:27 crc kubenswrapper[4698]: I1014 10:00:27.343386 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-2b5sl" Oct 14 10:00:27 crc kubenswrapper[4698]: I1014 10:00:27.660602 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-scb7r" Oct 14 10:00:27 crc kubenswrapper[4698]: I1014 10:00:27.662447 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-scb7r" Oct 14 10:00:27 crc kubenswrapper[4698]: I1014 10:00:27.707355 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-scb7r" Oct 14 10:00:27 crc kubenswrapper[4698]: I1014 10:00:27.769680 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-cl9gm" Oct 14 10:00:27 crc 
kubenswrapper[4698]: I1014 10:00:27.774890 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-2b5sl" Oct 14 10:00:27 crc kubenswrapper[4698]: I1014 10:00:27.811084 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-scb7r" Oct 14 10:00:28 crc kubenswrapper[4698]: I1014 10:00:28.435157 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-scb7r"] Oct 14 10:00:29 crc kubenswrapper[4698]: I1014 10:00:29.744866 4698 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-scb7r" podUID="60bf1aee-2ab1-4378-af5d-362bbe403adf" containerName="registry-server" containerID="cri-o://b231164c35321b86255a11003a05b9d68729786ce64fb2135d0643df0a7d9712" gracePeriod=2 Oct 14 10:00:30 crc kubenswrapper[4698]: I1014 10:00:30.740119 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-scb7r" Oct 14 10:00:30 crc kubenswrapper[4698]: I1014 10:00:30.752648 4698 generic.go:334] "Generic (PLEG): container finished" podID="60bf1aee-2ab1-4378-af5d-362bbe403adf" containerID="b231164c35321b86255a11003a05b9d68729786ce64fb2135d0643df0a7d9712" exitCode=0 Oct 14 10:00:30 crc kubenswrapper[4698]: I1014 10:00:30.752708 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-scb7r" event={"ID":"60bf1aee-2ab1-4378-af5d-362bbe403adf","Type":"ContainerDied","Data":"b231164c35321b86255a11003a05b9d68729786ce64fb2135d0643df0a7d9712"} Oct 14 10:00:30 crc kubenswrapper[4698]: I1014 10:00:30.752747 4698 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-scb7r" Oct 14 10:00:30 crc kubenswrapper[4698]: I1014 10:00:30.752758 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-scb7r" event={"ID":"60bf1aee-2ab1-4378-af5d-362bbe403adf","Type":"ContainerDied","Data":"7695f519e84cec2f88c2152069c13467a0c82a2422aa570ed9494132e6934d9a"} Oct 14 10:00:30 crc kubenswrapper[4698]: I1014 10:00:30.752799 4698 scope.go:117] "RemoveContainer" containerID="b231164c35321b86255a11003a05b9d68729786ce64fb2135d0643df0a7d9712" Oct 14 10:00:30 crc kubenswrapper[4698]: I1014 10:00:30.770685 4698 scope.go:117] "RemoveContainer" containerID="67a28b91f5a21ea6635f4a011cbf7718385bc9c6b86d629e2cfa32e26afec5ad" Oct 14 10:00:30 crc kubenswrapper[4698]: I1014 10:00:30.799872 4698 scope.go:117] "RemoveContainer" containerID="5e000c0ba50debae060ca33816dde852e55902c2c90f2cce55ce0a2364e2eed4" Oct 14 10:00:30 crc kubenswrapper[4698]: I1014 10:00:30.832202 4698 scope.go:117] "RemoveContainer" containerID="b231164c35321b86255a11003a05b9d68729786ce64fb2135d0643df0a7d9712" Oct 14 10:00:30 crc kubenswrapper[4698]: E1014 10:00:30.832839 4698 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b231164c35321b86255a11003a05b9d68729786ce64fb2135d0643df0a7d9712\": container with ID starting with b231164c35321b86255a11003a05b9d68729786ce64fb2135d0643df0a7d9712 not found: ID does not exist" containerID="b231164c35321b86255a11003a05b9d68729786ce64fb2135d0643df0a7d9712" Oct 14 10:00:30 crc kubenswrapper[4698]: I1014 10:00:30.832880 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b231164c35321b86255a11003a05b9d68729786ce64fb2135d0643df0a7d9712"} err="failed to get container status \"b231164c35321b86255a11003a05b9d68729786ce64fb2135d0643df0a7d9712\": rpc error: code = NotFound desc = could not find container 
\"b231164c35321b86255a11003a05b9d68729786ce64fb2135d0643df0a7d9712\": container with ID starting with b231164c35321b86255a11003a05b9d68729786ce64fb2135d0643df0a7d9712 not found: ID does not exist" Oct 14 10:00:30 crc kubenswrapper[4698]: I1014 10:00:30.832908 4698 scope.go:117] "RemoveContainer" containerID="67a28b91f5a21ea6635f4a011cbf7718385bc9c6b86d629e2cfa32e26afec5ad" Oct 14 10:00:30 crc kubenswrapper[4698]: E1014 10:00:30.833168 4698 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"67a28b91f5a21ea6635f4a011cbf7718385bc9c6b86d629e2cfa32e26afec5ad\": container with ID starting with 67a28b91f5a21ea6635f4a011cbf7718385bc9c6b86d629e2cfa32e26afec5ad not found: ID does not exist" containerID="67a28b91f5a21ea6635f4a011cbf7718385bc9c6b86d629e2cfa32e26afec5ad" Oct 14 10:00:30 crc kubenswrapper[4698]: I1014 10:00:30.833197 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"67a28b91f5a21ea6635f4a011cbf7718385bc9c6b86d629e2cfa32e26afec5ad"} err="failed to get container status \"67a28b91f5a21ea6635f4a011cbf7718385bc9c6b86d629e2cfa32e26afec5ad\": rpc error: code = NotFound desc = could not find container \"67a28b91f5a21ea6635f4a011cbf7718385bc9c6b86d629e2cfa32e26afec5ad\": container with ID starting with 67a28b91f5a21ea6635f4a011cbf7718385bc9c6b86d629e2cfa32e26afec5ad not found: ID does not exist" Oct 14 10:00:30 crc kubenswrapper[4698]: I1014 10:00:30.833215 4698 scope.go:117] "RemoveContainer" containerID="5e000c0ba50debae060ca33816dde852e55902c2c90f2cce55ce0a2364e2eed4" Oct 14 10:00:30 crc kubenswrapper[4698]: E1014 10:00:30.833462 4698 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5e000c0ba50debae060ca33816dde852e55902c2c90f2cce55ce0a2364e2eed4\": container with ID starting with 5e000c0ba50debae060ca33816dde852e55902c2c90f2cce55ce0a2364e2eed4 not found: ID does not exist" 
containerID="5e000c0ba50debae060ca33816dde852e55902c2c90f2cce55ce0a2364e2eed4" Oct 14 10:00:30 crc kubenswrapper[4698]: I1014 10:00:30.833489 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5e000c0ba50debae060ca33816dde852e55902c2c90f2cce55ce0a2364e2eed4"} err="failed to get container status \"5e000c0ba50debae060ca33816dde852e55902c2c90f2cce55ce0a2364e2eed4\": rpc error: code = NotFound desc = could not find container \"5e000c0ba50debae060ca33816dde852e55902c2c90f2cce55ce0a2364e2eed4\": container with ID starting with 5e000c0ba50debae060ca33816dde852e55902c2c90f2cce55ce0a2364e2eed4 not found: ID does not exist" Oct 14 10:00:30 crc kubenswrapper[4698]: I1014 10:00:30.881218 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/60bf1aee-2ab1-4378-af5d-362bbe403adf-utilities\") pod \"60bf1aee-2ab1-4378-af5d-362bbe403adf\" (UID: \"60bf1aee-2ab1-4378-af5d-362bbe403adf\") " Oct 14 10:00:30 crc kubenswrapper[4698]: I1014 10:00:30.881267 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/60bf1aee-2ab1-4378-af5d-362bbe403adf-catalog-content\") pod \"60bf1aee-2ab1-4378-af5d-362bbe403adf\" (UID: \"60bf1aee-2ab1-4378-af5d-362bbe403adf\") " Oct 14 10:00:30 crc kubenswrapper[4698]: I1014 10:00:30.881301 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qn5bk\" (UniqueName: \"kubernetes.io/projected/60bf1aee-2ab1-4378-af5d-362bbe403adf-kube-api-access-qn5bk\") pod \"60bf1aee-2ab1-4378-af5d-362bbe403adf\" (UID: \"60bf1aee-2ab1-4378-af5d-362bbe403adf\") " Oct 14 10:00:30 crc kubenswrapper[4698]: I1014 10:00:30.882302 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/60bf1aee-2ab1-4378-af5d-362bbe403adf-utilities" (OuterVolumeSpecName: "utilities") pod 
"60bf1aee-2ab1-4378-af5d-362bbe403adf" (UID: "60bf1aee-2ab1-4378-af5d-362bbe403adf"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 10:00:30 crc kubenswrapper[4698]: I1014 10:00:30.887991 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/60bf1aee-2ab1-4378-af5d-362bbe403adf-kube-api-access-qn5bk" (OuterVolumeSpecName: "kube-api-access-qn5bk") pod "60bf1aee-2ab1-4378-af5d-362bbe403adf" (UID: "60bf1aee-2ab1-4378-af5d-362bbe403adf"). InnerVolumeSpecName "kube-api-access-qn5bk". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 10:00:30 crc kubenswrapper[4698]: I1014 10:00:30.962833 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/60bf1aee-2ab1-4378-af5d-362bbe403adf-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "60bf1aee-2ab1-4378-af5d-362bbe403adf" (UID: "60bf1aee-2ab1-4378-af5d-362bbe403adf"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 10:00:30 crc kubenswrapper[4698]: I1014 10:00:30.983076 4698 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/60bf1aee-2ab1-4378-af5d-362bbe403adf-utilities\") on node \"crc\" DevicePath \"\"" Oct 14 10:00:30 crc kubenswrapper[4698]: I1014 10:00:30.983119 4698 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/60bf1aee-2ab1-4378-af5d-362bbe403adf-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 14 10:00:30 crc kubenswrapper[4698]: I1014 10:00:30.983134 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qn5bk\" (UniqueName: \"kubernetes.io/projected/60bf1aee-2ab1-4378-af5d-362bbe403adf-kube-api-access-qn5bk\") on node \"crc\" DevicePath \"\"" Oct 14 10:00:31 crc kubenswrapper[4698]: I1014 10:00:31.038003 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-mvpk5"] Oct 14 10:00:31 crc kubenswrapper[4698]: I1014 10:00:31.038310 4698 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-mvpk5" podUID="206eaf53-fb36-4881-a42a-0ae968a8f6d5" containerName="registry-server" containerID="cri-o://ad43f94379dbbc4a9a59944d066616006b6983c3a9d0cf43b4faf727aa2a9118" gracePeriod=2 Oct 14 10:00:31 crc kubenswrapper[4698]: I1014 10:00:31.069965 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-scb7r"] Oct 14 10:00:31 crc kubenswrapper[4698]: I1014 10:00:31.073627 4698 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-scb7r"] Oct 14 10:00:31 crc kubenswrapper[4698]: E1014 10:00:31.089696 4698 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: 
[\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod60bf1aee_2ab1_4378_af5d_362bbe403adf.slice\": RecentStats: unable to find data in memory cache]" Oct 14 10:00:31 crc kubenswrapper[4698]: I1014 10:00:31.760806 4698 generic.go:334] "Generic (PLEG): container finished" podID="206eaf53-fb36-4881-a42a-0ae968a8f6d5" containerID="ad43f94379dbbc4a9a59944d066616006b6983c3a9d0cf43b4faf727aa2a9118" exitCode=0 Oct 14 10:00:31 crc kubenswrapper[4698]: I1014 10:00:31.760898 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mvpk5" event={"ID":"206eaf53-fb36-4881-a42a-0ae968a8f6d5","Type":"ContainerDied","Data":"ad43f94379dbbc4a9a59944d066616006b6983c3a9d0cf43b4faf727aa2a9118"} Oct 14 10:00:32 crc kubenswrapper[4698]: I1014 10:00:32.323033 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-mvpk5" Oct 14 10:00:32 crc kubenswrapper[4698]: I1014 10:00:32.501786 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8fcns\" (UniqueName: \"kubernetes.io/projected/206eaf53-fb36-4881-a42a-0ae968a8f6d5-kube-api-access-8fcns\") pod \"206eaf53-fb36-4881-a42a-0ae968a8f6d5\" (UID: \"206eaf53-fb36-4881-a42a-0ae968a8f6d5\") " Oct 14 10:00:32 crc kubenswrapper[4698]: I1014 10:00:32.501890 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/206eaf53-fb36-4881-a42a-0ae968a8f6d5-catalog-content\") pod \"206eaf53-fb36-4881-a42a-0ae968a8f6d5\" (UID: \"206eaf53-fb36-4881-a42a-0ae968a8f6d5\") " Oct 14 10:00:32 crc kubenswrapper[4698]: I1014 10:00:32.501942 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/206eaf53-fb36-4881-a42a-0ae968a8f6d5-utilities\") pod \"206eaf53-fb36-4881-a42a-0ae968a8f6d5\" (UID: 
\"206eaf53-fb36-4881-a42a-0ae968a8f6d5\") " Oct 14 10:00:32 crc kubenswrapper[4698]: I1014 10:00:32.504664 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/206eaf53-fb36-4881-a42a-0ae968a8f6d5-utilities" (OuterVolumeSpecName: "utilities") pod "206eaf53-fb36-4881-a42a-0ae968a8f6d5" (UID: "206eaf53-fb36-4881-a42a-0ae968a8f6d5"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 10:00:32 crc kubenswrapper[4698]: I1014 10:00:32.517331 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/206eaf53-fb36-4881-a42a-0ae968a8f6d5-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "206eaf53-fb36-4881-a42a-0ae968a8f6d5" (UID: "206eaf53-fb36-4881-a42a-0ae968a8f6d5"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 10:00:32 crc kubenswrapper[4698]: I1014 10:00:32.527377 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/206eaf53-fb36-4881-a42a-0ae968a8f6d5-kube-api-access-8fcns" (OuterVolumeSpecName: "kube-api-access-8fcns") pod "206eaf53-fb36-4881-a42a-0ae968a8f6d5" (UID: "206eaf53-fb36-4881-a42a-0ae968a8f6d5"). InnerVolumeSpecName "kube-api-access-8fcns". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 10:00:32 crc kubenswrapper[4698]: I1014 10:00:32.603921 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8fcns\" (UniqueName: \"kubernetes.io/projected/206eaf53-fb36-4881-a42a-0ae968a8f6d5-kube-api-access-8fcns\") on node \"crc\" DevicePath \"\"" Oct 14 10:00:32 crc kubenswrapper[4698]: I1014 10:00:32.603958 4698 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/206eaf53-fb36-4881-a42a-0ae968a8f6d5-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 14 10:00:32 crc kubenswrapper[4698]: I1014 10:00:32.603969 4698 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/206eaf53-fb36-4881-a42a-0ae968a8f6d5-utilities\") on node \"crc\" DevicePath \"\"" Oct 14 10:00:32 crc kubenswrapper[4698]: I1014 10:00:32.768085 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mvpk5" event={"ID":"206eaf53-fb36-4881-a42a-0ae968a8f6d5","Type":"ContainerDied","Data":"4348cd396762790080949ea61c5a889e0eede9335f1c5a24d709f21a863ec6b9"} Oct 14 10:00:32 crc kubenswrapper[4698]: I1014 10:00:32.768137 4698 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-mvpk5" Oct 14 10:00:32 crc kubenswrapper[4698]: I1014 10:00:32.768168 4698 scope.go:117] "RemoveContainer" containerID="ad43f94379dbbc4a9a59944d066616006b6983c3a9d0cf43b4faf727aa2a9118" Oct 14 10:00:32 crc kubenswrapper[4698]: I1014 10:00:32.804873 4698 scope.go:117] "RemoveContainer" containerID="21112852e5b6b951a9da937fe47c3a3bce1e3cd97137af8a82f657a954186ad6" Oct 14 10:00:32 crc kubenswrapper[4698]: I1014 10:00:32.820780 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-mvpk5"] Oct 14 10:00:32 crc kubenswrapper[4698]: I1014 10:00:32.823818 4698 scope.go:117] "RemoveContainer" containerID="7af19986392582641b18a00257b92975e921096c0ef8196ef7b68af11af8dbec" Oct 14 10:00:32 crc kubenswrapper[4698]: I1014 10:00:32.826465 4698 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-mvpk5"] Oct 14 10:00:33 crc kubenswrapper[4698]: I1014 10:00:33.022661 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="206eaf53-fb36-4881-a42a-0ae968a8f6d5" path="/var/lib/kubelet/pods/206eaf53-fb36-4881-a42a-0ae968a8f6d5/volumes" Oct 14 10:00:33 crc kubenswrapper[4698]: I1014 10:00:33.023617 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="60bf1aee-2ab1-4378-af5d-362bbe403adf" path="/var/lib/kubelet/pods/60bf1aee-2ab1-4378-af5d-362bbe403adf/volumes" Oct 14 10:00:34 crc kubenswrapper[4698]: I1014 10:00:34.085579 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-86gfn" Oct 14 10:00:34 crc kubenswrapper[4698]: I1014 10:00:34.525656 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-fmbnz" Oct 14 10:00:34 crc kubenswrapper[4698]: I1014 10:00:34.565623 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-marketplace/certified-operators-fmbnz" Oct 14 10:00:34 crc kubenswrapper[4698]: I1014 10:00:34.724182 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-gld8c" Oct 14 10:00:34 crc kubenswrapper[4698]: I1014 10:00:34.761799 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-gld8c" Oct 14 10:00:36 crc kubenswrapper[4698]: I1014 10:00:36.443132 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-kqg88"] Oct 14 10:00:36 crc kubenswrapper[4698]: I1014 10:00:36.833777 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-fmbnz"] Oct 14 10:00:36 crc kubenswrapper[4698]: I1014 10:00:36.834288 4698 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-fmbnz" podUID="1884ea04-91e0-48d5-aa12-ff1375921a10" containerName="registry-server" containerID="cri-o://650029cd1dafe2ff907f31486a0e8347f308a82f0a11b5fbd2192e1bbaba8aa8" gracePeriod=2 Oct 14 10:00:37 crc kubenswrapper[4698]: I1014 10:00:37.178090 4698 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-fmbnz" Oct 14 10:00:37 crc kubenswrapper[4698]: I1014 10:00:37.360923 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1884ea04-91e0-48d5-aa12-ff1375921a10-catalog-content\") pod \"1884ea04-91e0-48d5-aa12-ff1375921a10\" (UID: \"1884ea04-91e0-48d5-aa12-ff1375921a10\") " Oct 14 10:00:37 crc kubenswrapper[4698]: I1014 10:00:37.361258 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1884ea04-91e0-48d5-aa12-ff1375921a10-utilities\") pod \"1884ea04-91e0-48d5-aa12-ff1375921a10\" (UID: \"1884ea04-91e0-48d5-aa12-ff1375921a10\") " Oct 14 10:00:37 crc kubenswrapper[4698]: I1014 10:00:37.361280 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5j5z8\" (UniqueName: \"kubernetes.io/projected/1884ea04-91e0-48d5-aa12-ff1375921a10-kube-api-access-5j5z8\") pod \"1884ea04-91e0-48d5-aa12-ff1375921a10\" (UID: \"1884ea04-91e0-48d5-aa12-ff1375921a10\") " Oct 14 10:00:37 crc kubenswrapper[4698]: I1014 10:00:37.362139 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1884ea04-91e0-48d5-aa12-ff1375921a10-utilities" (OuterVolumeSpecName: "utilities") pod "1884ea04-91e0-48d5-aa12-ff1375921a10" (UID: "1884ea04-91e0-48d5-aa12-ff1375921a10"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 10:00:37 crc kubenswrapper[4698]: I1014 10:00:37.365875 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1884ea04-91e0-48d5-aa12-ff1375921a10-kube-api-access-5j5z8" (OuterVolumeSpecName: "kube-api-access-5j5z8") pod "1884ea04-91e0-48d5-aa12-ff1375921a10" (UID: "1884ea04-91e0-48d5-aa12-ff1375921a10"). InnerVolumeSpecName "kube-api-access-5j5z8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 10:00:37 crc kubenswrapper[4698]: I1014 10:00:37.411361 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1884ea04-91e0-48d5-aa12-ff1375921a10-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1884ea04-91e0-48d5-aa12-ff1375921a10" (UID: "1884ea04-91e0-48d5-aa12-ff1375921a10"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 10:00:37 crc kubenswrapper[4698]: I1014 10:00:37.439314 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-gld8c"] Oct 14 10:00:37 crc kubenswrapper[4698]: I1014 10:00:37.439736 4698 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-gld8c" podUID="062fcf04-080a-4aaf-9ca0-0bb57f1ff97d" containerName="registry-server" containerID="cri-o://1f4355abd1804cb0bfa938de87b7a20a55258c94cad64265dd9bdc87a6ff1904" gracePeriod=2 Oct 14 10:00:37 crc kubenswrapper[4698]: I1014 10:00:37.462637 4698 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1884ea04-91e0-48d5-aa12-ff1375921a10-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 14 10:00:37 crc kubenswrapper[4698]: I1014 10:00:37.463536 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5j5z8\" (UniqueName: \"kubernetes.io/projected/1884ea04-91e0-48d5-aa12-ff1375921a10-kube-api-access-5j5z8\") on node \"crc\" DevicePath \"\"" Oct 14 10:00:37 crc kubenswrapper[4698]: I1014 10:00:37.463633 4698 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1884ea04-91e0-48d5-aa12-ff1375921a10-utilities\") on node \"crc\" DevicePath \"\"" Oct 14 10:00:37 crc kubenswrapper[4698]: I1014 10:00:37.794581 4698 generic.go:334] "Generic (PLEG): container finished" 
podID="062fcf04-080a-4aaf-9ca0-0bb57f1ff97d" containerID="1f4355abd1804cb0bfa938de87b7a20a55258c94cad64265dd9bdc87a6ff1904" exitCode=0 Oct 14 10:00:37 crc kubenswrapper[4698]: I1014 10:00:37.794719 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gld8c" event={"ID":"062fcf04-080a-4aaf-9ca0-0bb57f1ff97d","Type":"ContainerDied","Data":"1f4355abd1804cb0bfa938de87b7a20a55258c94cad64265dd9bdc87a6ff1904"} Oct 14 10:00:37 crc kubenswrapper[4698]: I1014 10:00:37.797309 4698 generic.go:334] "Generic (PLEG): container finished" podID="1884ea04-91e0-48d5-aa12-ff1375921a10" containerID="650029cd1dafe2ff907f31486a0e8347f308a82f0a11b5fbd2192e1bbaba8aa8" exitCode=0 Oct 14 10:00:37 crc kubenswrapper[4698]: I1014 10:00:37.797351 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fmbnz" event={"ID":"1884ea04-91e0-48d5-aa12-ff1375921a10","Type":"ContainerDied","Data":"650029cd1dafe2ff907f31486a0e8347f308a82f0a11b5fbd2192e1bbaba8aa8"} Oct 14 10:00:37 crc kubenswrapper[4698]: I1014 10:00:37.797379 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fmbnz" event={"ID":"1884ea04-91e0-48d5-aa12-ff1375921a10","Type":"ContainerDied","Data":"eda060176265af6d414056ba9d9f1b43be82c0bea2bbc1bcf957c63a4c8ac914"} Oct 14 10:00:37 crc kubenswrapper[4698]: I1014 10:00:37.797396 4698 scope.go:117] "RemoveContainer" containerID="650029cd1dafe2ff907f31486a0e8347f308a82f0a11b5fbd2192e1bbaba8aa8" Oct 14 10:00:37 crc kubenswrapper[4698]: I1014 10:00:37.797462 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-fmbnz" Oct 14 10:00:37 crc kubenswrapper[4698]: I1014 10:00:37.808899 4698 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-gld8c" Oct 14 10:00:37 crc kubenswrapper[4698]: I1014 10:00:37.829308 4698 scope.go:117] "RemoveContainer" containerID="92375e5a0a59f7d2621d3d034f440e961896d0edf926cabd1219ec2d36cff2ef" Oct 14 10:00:37 crc kubenswrapper[4698]: I1014 10:00:37.848005 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-fmbnz"] Oct 14 10:00:37 crc kubenswrapper[4698]: I1014 10:00:37.848081 4698 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-fmbnz"] Oct 14 10:00:37 crc kubenswrapper[4698]: I1014 10:00:37.858701 4698 scope.go:117] "RemoveContainer" containerID="c5c7c858f121c79444d0f85469fab0ad94212f484a59345f8c3ba6967b5f35d7" Oct 14 10:00:37 crc kubenswrapper[4698]: I1014 10:00:37.870735 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/062fcf04-080a-4aaf-9ca0-0bb57f1ff97d-catalog-content\") pod \"062fcf04-080a-4aaf-9ca0-0bb57f1ff97d\" (UID: \"062fcf04-080a-4aaf-9ca0-0bb57f1ff97d\") " Oct 14 10:00:37 crc kubenswrapper[4698]: I1014 10:00:37.870953 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ssfrc\" (UniqueName: \"kubernetes.io/projected/062fcf04-080a-4aaf-9ca0-0bb57f1ff97d-kube-api-access-ssfrc\") pod \"062fcf04-080a-4aaf-9ca0-0bb57f1ff97d\" (UID: \"062fcf04-080a-4aaf-9ca0-0bb57f1ff97d\") " Oct 14 10:00:37 crc kubenswrapper[4698]: I1014 10:00:37.871000 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/062fcf04-080a-4aaf-9ca0-0bb57f1ff97d-utilities\") pod \"062fcf04-080a-4aaf-9ca0-0bb57f1ff97d\" (UID: \"062fcf04-080a-4aaf-9ca0-0bb57f1ff97d\") " Oct 14 10:00:37 crc kubenswrapper[4698]: I1014 10:00:37.871979 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/062fcf04-080a-4aaf-9ca0-0bb57f1ff97d-utilities" (OuterVolumeSpecName: "utilities") pod "062fcf04-080a-4aaf-9ca0-0bb57f1ff97d" (UID: "062fcf04-080a-4aaf-9ca0-0bb57f1ff97d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 10:00:37 crc kubenswrapper[4698]: I1014 10:00:37.876854 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/062fcf04-080a-4aaf-9ca0-0bb57f1ff97d-kube-api-access-ssfrc" (OuterVolumeSpecName: "kube-api-access-ssfrc") pod "062fcf04-080a-4aaf-9ca0-0bb57f1ff97d" (UID: "062fcf04-080a-4aaf-9ca0-0bb57f1ff97d"). InnerVolumeSpecName "kube-api-access-ssfrc". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 10:00:37 crc kubenswrapper[4698]: I1014 10:00:37.879913 4698 scope.go:117] "RemoveContainer" containerID="650029cd1dafe2ff907f31486a0e8347f308a82f0a11b5fbd2192e1bbaba8aa8" Oct 14 10:00:37 crc kubenswrapper[4698]: E1014 10:00:37.880633 4698 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"650029cd1dafe2ff907f31486a0e8347f308a82f0a11b5fbd2192e1bbaba8aa8\": container with ID starting with 650029cd1dafe2ff907f31486a0e8347f308a82f0a11b5fbd2192e1bbaba8aa8 not found: ID does not exist" containerID="650029cd1dafe2ff907f31486a0e8347f308a82f0a11b5fbd2192e1bbaba8aa8" Oct 14 10:00:37 crc kubenswrapper[4698]: I1014 10:00:37.880695 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"650029cd1dafe2ff907f31486a0e8347f308a82f0a11b5fbd2192e1bbaba8aa8"} err="failed to get container status \"650029cd1dafe2ff907f31486a0e8347f308a82f0a11b5fbd2192e1bbaba8aa8\": rpc error: code = NotFound desc = could not find container \"650029cd1dafe2ff907f31486a0e8347f308a82f0a11b5fbd2192e1bbaba8aa8\": container with ID starting with 650029cd1dafe2ff907f31486a0e8347f308a82f0a11b5fbd2192e1bbaba8aa8 not found: ID does not exist" Oct 14 
10:00:37 crc kubenswrapper[4698]: I1014 10:00:37.880728 4698 scope.go:117] "RemoveContainer" containerID="92375e5a0a59f7d2621d3d034f440e961896d0edf926cabd1219ec2d36cff2ef" Oct 14 10:00:37 crc kubenswrapper[4698]: E1014 10:00:37.881473 4698 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"92375e5a0a59f7d2621d3d034f440e961896d0edf926cabd1219ec2d36cff2ef\": container with ID starting with 92375e5a0a59f7d2621d3d034f440e961896d0edf926cabd1219ec2d36cff2ef not found: ID does not exist" containerID="92375e5a0a59f7d2621d3d034f440e961896d0edf926cabd1219ec2d36cff2ef" Oct 14 10:00:37 crc kubenswrapper[4698]: I1014 10:00:37.881518 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"92375e5a0a59f7d2621d3d034f440e961896d0edf926cabd1219ec2d36cff2ef"} err="failed to get container status \"92375e5a0a59f7d2621d3d034f440e961896d0edf926cabd1219ec2d36cff2ef\": rpc error: code = NotFound desc = could not find container \"92375e5a0a59f7d2621d3d034f440e961896d0edf926cabd1219ec2d36cff2ef\": container with ID starting with 92375e5a0a59f7d2621d3d034f440e961896d0edf926cabd1219ec2d36cff2ef not found: ID does not exist" Oct 14 10:00:37 crc kubenswrapper[4698]: I1014 10:00:37.881554 4698 scope.go:117] "RemoveContainer" containerID="c5c7c858f121c79444d0f85469fab0ad94212f484a59345f8c3ba6967b5f35d7" Oct 14 10:00:37 crc kubenswrapper[4698]: E1014 10:00:37.882149 4698 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c5c7c858f121c79444d0f85469fab0ad94212f484a59345f8c3ba6967b5f35d7\": container with ID starting with c5c7c858f121c79444d0f85469fab0ad94212f484a59345f8c3ba6967b5f35d7 not found: ID does not exist" containerID="c5c7c858f121c79444d0f85469fab0ad94212f484a59345f8c3ba6967b5f35d7" Oct 14 10:00:37 crc kubenswrapper[4698]: I1014 10:00:37.882190 4698 pod_container_deletor.go:53] "DeleteContainer returned 
error" containerID={"Type":"cri-o","ID":"c5c7c858f121c79444d0f85469fab0ad94212f484a59345f8c3ba6967b5f35d7"} err="failed to get container status \"c5c7c858f121c79444d0f85469fab0ad94212f484a59345f8c3ba6967b5f35d7\": rpc error: code = NotFound desc = could not find container \"c5c7c858f121c79444d0f85469fab0ad94212f484a59345f8c3ba6967b5f35d7\": container with ID starting with c5c7c858f121c79444d0f85469fab0ad94212f484a59345f8c3ba6967b5f35d7 not found: ID does not exist" Oct 14 10:00:37 crc kubenswrapper[4698]: I1014 10:00:37.931834 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/062fcf04-080a-4aaf-9ca0-0bb57f1ff97d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "062fcf04-080a-4aaf-9ca0-0bb57f1ff97d" (UID: "062fcf04-080a-4aaf-9ca0-0bb57f1ff97d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 10:00:37 crc kubenswrapper[4698]: I1014 10:00:37.971706 4698 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/062fcf04-080a-4aaf-9ca0-0bb57f1ff97d-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 14 10:00:37 crc kubenswrapper[4698]: I1014 10:00:37.971748 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ssfrc\" (UniqueName: \"kubernetes.io/projected/062fcf04-080a-4aaf-9ca0-0bb57f1ff97d-kube-api-access-ssfrc\") on node \"crc\" DevicePath \"\"" Oct 14 10:00:37 crc kubenswrapper[4698]: I1014 10:00:37.971780 4698 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/062fcf04-080a-4aaf-9ca0-0bb57f1ff97d-utilities\") on node \"crc\" DevicePath \"\"" Oct 14 10:00:38 crc kubenswrapper[4698]: I1014 10:00:38.807961 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gld8c" 
event={"ID":"062fcf04-080a-4aaf-9ca0-0bb57f1ff97d","Type":"ContainerDied","Data":"c3407ee27d141caad1004875aa78916f33c1a402f582f7ffbcde4a13c7567eb8"} Oct 14 10:00:38 crc kubenswrapper[4698]: I1014 10:00:38.808015 4698 scope.go:117] "RemoveContainer" containerID="1f4355abd1804cb0bfa938de87b7a20a55258c94cad64265dd9bdc87a6ff1904" Oct 14 10:00:38 crc kubenswrapper[4698]: I1014 10:00:38.808127 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-gld8c" Oct 14 10:00:38 crc kubenswrapper[4698]: I1014 10:00:38.823167 4698 scope.go:117] "RemoveContainer" containerID="dc464456d082fb4086d0befc43629f0297452bbbddef907e12fb60ea8538cdb2" Oct 14 10:00:38 crc kubenswrapper[4698]: I1014 10:00:38.845884 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-gld8c"] Oct 14 10:00:38 crc kubenswrapper[4698]: I1014 10:00:38.849277 4698 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-gld8c"] Oct 14 10:00:38 crc kubenswrapper[4698]: I1014 10:00:38.856263 4698 scope.go:117] "RemoveContainer" containerID="c80364756403e216d42610557d86837104d9bf1f5b7a9728d61ec9d183318cfd" Oct 14 10:00:39 crc kubenswrapper[4698]: I1014 10:00:39.025080 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="062fcf04-080a-4aaf-9ca0-0bb57f1ff97d" path="/var/lib/kubelet/pods/062fcf04-080a-4aaf-9ca0-0bb57f1ff97d/volumes" Oct 14 10:00:39 crc kubenswrapper[4698]: I1014 10:00:39.027794 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1884ea04-91e0-48d5-aa12-ff1375921a10" path="/var/lib/kubelet/pods/1884ea04-91e0-48d5-aa12-ff1375921a10/volumes" Oct 14 10:01:01 crc kubenswrapper[4698]: I1014 10:01:01.483489 4698 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-558db77b4-kqg88" podUID="5eed092c-2837-48df-8eb4-8759235349b6" 
containerName="oauth-openshift" containerID="cri-o://dfd8aa294be8ea39a8e5dab884ececd482de4b00af9774137229de1408a473e4" gracePeriod=15 Oct 14 10:01:01 crc kubenswrapper[4698]: I1014 10:01:01.805118 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-kqg88" Oct 14 10:01:01 crc kubenswrapper[4698]: I1014 10:01:01.844628 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-66c58d94fc-lpbpr"] Oct 14 10:01:01 crc kubenswrapper[4698]: E1014 10:01:01.845049 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1884ea04-91e0-48d5-aa12-ff1375921a10" containerName="extract-content" Oct 14 10:01:01 crc kubenswrapper[4698]: I1014 10:01:01.845084 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="1884ea04-91e0-48d5-aa12-ff1375921a10" containerName="extract-content" Oct 14 10:01:01 crc kubenswrapper[4698]: E1014 10:01:01.845114 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="062fcf04-080a-4aaf-9ca0-0bb57f1ff97d" containerName="extract-content" Oct 14 10:01:01 crc kubenswrapper[4698]: I1014 10:01:01.845130 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="062fcf04-080a-4aaf-9ca0-0bb57f1ff97d" containerName="extract-content" Oct 14 10:01:01 crc kubenswrapper[4698]: E1014 10:01:01.845152 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="60bf1aee-2ab1-4378-af5d-362bbe403adf" containerName="registry-server" Oct 14 10:01:01 crc kubenswrapper[4698]: I1014 10:01:01.845170 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="60bf1aee-2ab1-4378-af5d-362bbe403adf" containerName="registry-server" Oct 14 10:01:01 crc kubenswrapper[4698]: E1014 10:01:01.845192 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="60bf1aee-2ab1-4378-af5d-362bbe403adf" containerName="extract-content" Oct 14 10:01:01 crc kubenswrapper[4698]: I1014 10:01:01.845209 4698 
state_mem.go:107] "Deleted CPUSet assignment" podUID="60bf1aee-2ab1-4378-af5d-362bbe403adf" containerName="extract-content" Oct 14 10:01:01 crc kubenswrapper[4698]: E1014 10:01:01.845231 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="062fcf04-080a-4aaf-9ca0-0bb57f1ff97d" containerName="registry-server" Oct 14 10:01:01 crc kubenswrapper[4698]: I1014 10:01:01.845245 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="062fcf04-080a-4aaf-9ca0-0bb57f1ff97d" containerName="registry-server" Oct 14 10:01:01 crc kubenswrapper[4698]: E1014 10:01:01.845265 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1884ea04-91e0-48d5-aa12-ff1375921a10" containerName="extract-utilities" Oct 14 10:01:01 crc kubenswrapper[4698]: I1014 10:01:01.845282 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="1884ea04-91e0-48d5-aa12-ff1375921a10" containerName="extract-utilities" Oct 14 10:01:01 crc kubenswrapper[4698]: E1014 10:01:01.845301 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="121bbec2-1aed-4e03-b35c-1c93b5dbddd2" containerName="collect-profiles" Oct 14 10:01:01 crc kubenswrapper[4698]: I1014 10:01:01.845319 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="121bbec2-1aed-4e03-b35c-1c93b5dbddd2" containerName="collect-profiles" Oct 14 10:01:01 crc kubenswrapper[4698]: E1014 10:01:01.845337 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="206eaf53-fb36-4881-a42a-0ae968a8f6d5" containerName="extract-content" Oct 14 10:01:01 crc kubenswrapper[4698]: I1014 10:01:01.845352 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="206eaf53-fb36-4881-a42a-0ae968a8f6d5" containerName="extract-content" Oct 14 10:01:01 crc kubenswrapper[4698]: E1014 10:01:01.845376 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="60bf1aee-2ab1-4378-af5d-362bbe403adf" containerName="extract-utilities" Oct 14 10:01:01 crc kubenswrapper[4698]: I1014 10:01:01.845392 4698 
state_mem.go:107] "Deleted CPUSet assignment" podUID="60bf1aee-2ab1-4378-af5d-362bbe403adf" containerName="extract-utilities" Oct 14 10:01:01 crc kubenswrapper[4698]: E1014 10:01:01.845416 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1884ea04-91e0-48d5-aa12-ff1375921a10" containerName="registry-server" Oct 14 10:01:01 crc kubenswrapper[4698]: I1014 10:01:01.845431 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="1884ea04-91e0-48d5-aa12-ff1375921a10" containerName="registry-server" Oct 14 10:01:01 crc kubenswrapper[4698]: E1014 10:01:01.845459 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="206eaf53-fb36-4881-a42a-0ae968a8f6d5" containerName="registry-server" Oct 14 10:01:01 crc kubenswrapper[4698]: I1014 10:01:01.845475 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="206eaf53-fb36-4881-a42a-0ae968a8f6d5" containerName="registry-server" Oct 14 10:01:01 crc kubenswrapper[4698]: E1014 10:01:01.845498 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5eed092c-2837-48df-8eb4-8759235349b6" containerName="oauth-openshift" Oct 14 10:01:01 crc kubenswrapper[4698]: I1014 10:01:01.845513 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="5eed092c-2837-48df-8eb4-8759235349b6" containerName="oauth-openshift" Oct 14 10:01:01 crc kubenswrapper[4698]: E1014 10:01:01.845538 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="206eaf53-fb36-4881-a42a-0ae968a8f6d5" containerName="extract-utilities" Oct 14 10:01:01 crc kubenswrapper[4698]: I1014 10:01:01.845554 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="206eaf53-fb36-4881-a42a-0ae968a8f6d5" containerName="extract-utilities" Oct 14 10:01:01 crc kubenswrapper[4698]: E1014 10:01:01.845574 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="062fcf04-080a-4aaf-9ca0-0bb57f1ff97d" containerName="extract-utilities" Oct 14 10:01:01 crc kubenswrapper[4698]: I1014 10:01:01.845590 4698 
state_mem.go:107] "Deleted CPUSet assignment" podUID="062fcf04-080a-4aaf-9ca0-0bb57f1ff97d" containerName="extract-utilities" Oct 14 10:01:01 crc kubenswrapper[4698]: I1014 10:01:01.845822 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="1884ea04-91e0-48d5-aa12-ff1375921a10" containerName="registry-server" Oct 14 10:01:01 crc kubenswrapper[4698]: I1014 10:01:01.846024 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="5eed092c-2837-48df-8eb4-8759235349b6" containerName="oauth-openshift" Oct 14 10:01:01 crc kubenswrapper[4698]: I1014 10:01:01.846051 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="206eaf53-fb36-4881-a42a-0ae968a8f6d5" containerName="registry-server" Oct 14 10:01:01 crc kubenswrapper[4698]: I1014 10:01:01.846070 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="60bf1aee-2ab1-4378-af5d-362bbe403adf" containerName="registry-server" Oct 14 10:01:01 crc kubenswrapper[4698]: I1014 10:01:01.846090 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="062fcf04-080a-4aaf-9ca0-0bb57f1ff97d" containerName="registry-server" Oct 14 10:01:01 crc kubenswrapper[4698]: I1014 10:01:01.846119 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="121bbec2-1aed-4e03-b35c-1c93b5dbddd2" containerName="collect-profiles" Oct 14 10:01:01 crc kubenswrapper[4698]: I1014 10:01:01.846819 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-66c58d94fc-lpbpr" Oct 14 10:01:01 crc kubenswrapper[4698]: I1014 10:01:01.869503 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-66c58d94fc-lpbpr"] Oct 14 10:01:01 crc kubenswrapper[4698]: I1014 10:01:01.932532 4698 generic.go:334] "Generic (PLEG): container finished" podID="5eed092c-2837-48df-8eb4-8759235349b6" containerID="dfd8aa294be8ea39a8e5dab884ececd482de4b00af9774137229de1408a473e4" exitCode=0 Oct 14 10:01:01 crc kubenswrapper[4698]: I1014 10:01:01.932581 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-kqg88" event={"ID":"5eed092c-2837-48df-8eb4-8759235349b6","Type":"ContainerDied","Data":"dfd8aa294be8ea39a8e5dab884ececd482de4b00af9774137229de1408a473e4"} Oct 14 10:01:01 crc kubenswrapper[4698]: I1014 10:01:01.932586 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-kqg88" Oct 14 10:01:01 crc kubenswrapper[4698]: I1014 10:01:01.932610 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-kqg88" event={"ID":"5eed092c-2837-48df-8eb4-8759235349b6","Type":"ContainerDied","Data":"cc16641dc5082e4efa0f7365a32a696363b9b97f1a69878d59e1c8c9fc61f74a"} Oct 14 10:01:01 crc kubenswrapper[4698]: I1014 10:01:01.932622 4698 scope.go:117] "RemoveContainer" containerID="dfd8aa294be8ea39a8e5dab884ececd482de4b00af9774137229de1408a473e4" Oct 14 10:01:01 crc kubenswrapper[4698]: I1014 10:01:01.950713 4698 scope.go:117] "RemoveContainer" containerID="dfd8aa294be8ea39a8e5dab884ececd482de4b00af9774137229de1408a473e4" Oct 14 10:01:01 crc kubenswrapper[4698]: E1014 10:01:01.951028 4698 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"dfd8aa294be8ea39a8e5dab884ececd482de4b00af9774137229de1408a473e4\": container with ID starting with dfd8aa294be8ea39a8e5dab884ececd482de4b00af9774137229de1408a473e4 not found: ID does not exist" containerID="dfd8aa294be8ea39a8e5dab884ececd482de4b00af9774137229de1408a473e4" Oct 14 10:01:01 crc kubenswrapper[4698]: I1014 10:01:01.951058 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dfd8aa294be8ea39a8e5dab884ececd482de4b00af9774137229de1408a473e4"} err="failed to get container status \"dfd8aa294be8ea39a8e5dab884ececd482de4b00af9774137229de1408a473e4\": rpc error: code = NotFound desc = could not find container \"dfd8aa294be8ea39a8e5dab884ececd482de4b00af9774137229de1408a473e4\": container with ID starting with dfd8aa294be8ea39a8e5dab884ececd482de4b00af9774137229de1408a473e4 not found: ID does not exist" Oct 14 10:01:01 crc kubenswrapper[4698]: I1014 10:01:01.989470 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/5eed092c-2837-48df-8eb4-8759235349b6-v4-0-config-system-ocp-branding-template\") pod \"5eed092c-2837-48df-8eb4-8759235349b6\" (UID: \"5eed092c-2837-48df-8eb4-8759235349b6\") " Oct 14 10:01:01 crc kubenswrapper[4698]: I1014 10:01:01.989528 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/5eed092c-2837-48df-8eb4-8759235349b6-v4-0-config-system-session\") pod \"5eed092c-2837-48df-8eb4-8759235349b6\" (UID: \"5eed092c-2837-48df-8eb4-8759235349b6\") " Oct 14 10:01:01 crc kubenswrapper[4698]: I1014 10:01:01.989562 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/5eed092c-2837-48df-8eb4-8759235349b6-v4-0-config-user-template-login\") pod \"5eed092c-2837-48df-8eb4-8759235349b6\" (UID: 
\"5eed092c-2837-48df-8eb4-8759235349b6\") " Oct 14 10:01:01 crc kubenswrapper[4698]: I1014 10:01:01.989580 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/5eed092c-2837-48df-8eb4-8759235349b6-v4-0-config-system-service-ca\") pod \"5eed092c-2837-48df-8eb4-8759235349b6\" (UID: \"5eed092c-2837-48df-8eb4-8759235349b6\") " Oct 14 10:01:01 crc kubenswrapper[4698]: I1014 10:01:01.989597 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5eed092c-2837-48df-8eb4-8759235349b6-v4-0-config-system-trusted-ca-bundle\") pod \"5eed092c-2837-48df-8eb4-8759235349b6\" (UID: \"5eed092c-2837-48df-8eb4-8759235349b6\") " Oct 14 10:01:01 crc kubenswrapper[4698]: I1014 10:01:01.989628 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/5eed092c-2837-48df-8eb4-8759235349b6-v4-0-config-user-template-provider-selection\") pod \"5eed092c-2837-48df-8eb4-8759235349b6\" (UID: \"5eed092c-2837-48df-8eb4-8759235349b6\") " Oct 14 10:01:01 crc kubenswrapper[4698]: I1014 10:01:01.989651 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/5eed092c-2837-48df-8eb4-8759235349b6-v4-0-config-user-idp-0-file-data\") pod \"5eed092c-2837-48df-8eb4-8759235349b6\" (UID: \"5eed092c-2837-48df-8eb4-8759235349b6\") " Oct 14 10:01:01 crc kubenswrapper[4698]: I1014 10:01:01.989676 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/5eed092c-2837-48df-8eb4-8759235349b6-v4-0-config-system-serving-cert\") pod \"5eed092c-2837-48df-8eb4-8759235349b6\" (UID: 
\"5eed092c-2837-48df-8eb4-8759235349b6\") " Oct 14 10:01:01 crc kubenswrapper[4698]: I1014 10:01:01.989694 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/5eed092c-2837-48df-8eb4-8759235349b6-v4-0-config-user-template-error\") pod \"5eed092c-2837-48df-8eb4-8759235349b6\" (UID: \"5eed092c-2837-48df-8eb4-8759235349b6\") " Oct 14 10:01:01 crc kubenswrapper[4698]: I1014 10:01:01.989709 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/5eed092c-2837-48df-8eb4-8759235349b6-audit-policies\") pod \"5eed092c-2837-48df-8eb4-8759235349b6\" (UID: \"5eed092c-2837-48df-8eb4-8759235349b6\") " Oct 14 10:01:01 crc kubenswrapper[4698]: I1014 10:01:01.989742 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/5eed092c-2837-48df-8eb4-8759235349b6-v4-0-config-system-cliconfig\") pod \"5eed092c-2837-48df-8eb4-8759235349b6\" (UID: \"5eed092c-2837-48df-8eb4-8759235349b6\") " Oct 14 10:01:01 crc kubenswrapper[4698]: I1014 10:01:01.989782 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/5eed092c-2837-48df-8eb4-8759235349b6-audit-dir\") pod \"5eed092c-2837-48df-8eb4-8759235349b6\" (UID: \"5eed092c-2837-48df-8eb4-8759235349b6\") " Oct 14 10:01:01 crc kubenswrapper[4698]: I1014 10:01:01.989799 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/5eed092c-2837-48df-8eb4-8759235349b6-v4-0-config-system-router-certs\") pod \"5eed092c-2837-48df-8eb4-8759235349b6\" (UID: \"5eed092c-2837-48df-8eb4-8759235349b6\") " Oct 14 10:01:01 crc kubenswrapper[4698]: I1014 10:01:01.989815 4698 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-tthwp\" (UniqueName: \"kubernetes.io/projected/5eed092c-2837-48df-8eb4-8759235349b6-kube-api-access-tthwp\") pod \"5eed092c-2837-48df-8eb4-8759235349b6\" (UID: \"5eed092c-2837-48df-8eb4-8759235349b6\") " Oct 14 10:01:01 crc kubenswrapper[4698]: I1014 10:01:01.989893 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/b930ad82-3c54-40c9-b332-70aae927bcbb-audit-dir\") pod \"oauth-openshift-66c58d94fc-lpbpr\" (UID: \"b930ad82-3c54-40c9-b332-70aae927bcbb\") " pod="openshift-authentication/oauth-openshift-66c58d94fc-lpbpr" Oct 14 10:01:01 crc kubenswrapper[4698]: I1014 10:01:01.989925 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/b930ad82-3c54-40c9-b332-70aae927bcbb-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-66c58d94fc-lpbpr\" (UID: \"b930ad82-3c54-40c9-b332-70aae927bcbb\") " pod="openshift-authentication/oauth-openshift-66c58d94fc-lpbpr" Oct 14 10:01:01 crc kubenswrapper[4698]: I1014 10:01:01.989955 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/b930ad82-3c54-40c9-b332-70aae927bcbb-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-66c58d94fc-lpbpr\" (UID: \"b930ad82-3c54-40c9-b332-70aae927bcbb\") " pod="openshift-authentication/oauth-openshift-66c58d94fc-lpbpr" Oct 14 10:01:01 crc kubenswrapper[4698]: I1014 10:01:01.989981 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/b930ad82-3c54-40c9-b332-70aae927bcbb-v4-0-config-system-router-certs\") pod \"oauth-openshift-66c58d94fc-lpbpr\" 
(UID: \"b930ad82-3c54-40c9-b332-70aae927bcbb\") " pod="openshift-authentication/oauth-openshift-66c58d94fc-lpbpr" Oct 14 10:01:01 crc kubenswrapper[4698]: I1014 10:01:01.989998 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/b930ad82-3c54-40c9-b332-70aae927bcbb-v4-0-config-system-cliconfig\") pod \"oauth-openshift-66c58d94fc-lpbpr\" (UID: \"b930ad82-3c54-40c9-b332-70aae927bcbb\") " pod="openshift-authentication/oauth-openshift-66c58d94fc-lpbpr" Oct 14 10:01:01 crc kubenswrapper[4698]: I1014 10:01:01.990014 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/b930ad82-3c54-40c9-b332-70aae927bcbb-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-66c58d94fc-lpbpr\" (UID: \"b930ad82-3c54-40c9-b332-70aae927bcbb\") " pod="openshift-authentication/oauth-openshift-66c58d94fc-lpbpr" Oct 14 10:01:01 crc kubenswrapper[4698]: I1014 10:01:01.990030 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/b930ad82-3c54-40c9-b332-70aae927bcbb-v4-0-config-system-service-ca\") pod \"oauth-openshift-66c58d94fc-lpbpr\" (UID: \"b930ad82-3c54-40c9-b332-70aae927bcbb\") " pod="openshift-authentication/oauth-openshift-66c58d94fc-lpbpr" Oct 14 10:01:01 crc kubenswrapper[4698]: I1014 10:01:01.990059 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vtc8h\" (UniqueName: \"kubernetes.io/projected/b930ad82-3c54-40c9-b332-70aae927bcbb-kube-api-access-vtc8h\") pod \"oauth-openshift-66c58d94fc-lpbpr\" (UID: \"b930ad82-3c54-40c9-b332-70aae927bcbb\") " pod="openshift-authentication/oauth-openshift-66c58d94fc-lpbpr" Oct 14 10:01:01 crc kubenswrapper[4698]: 
I1014 10:01:01.990220 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/b930ad82-3c54-40c9-b332-70aae927bcbb-v4-0-config-user-template-error\") pod \"oauth-openshift-66c58d94fc-lpbpr\" (UID: \"b930ad82-3c54-40c9-b332-70aae927bcbb\") " pod="openshift-authentication/oauth-openshift-66c58d94fc-lpbpr" Oct 14 10:01:01 crc kubenswrapper[4698]: I1014 10:01:01.990242 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/b930ad82-3c54-40c9-b332-70aae927bcbb-v4-0-config-system-session\") pod \"oauth-openshift-66c58d94fc-lpbpr\" (UID: \"b930ad82-3c54-40c9-b332-70aae927bcbb\") " pod="openshift-authentication/oauth-openshift-66c58d94fc-lpbpr" Oct 14 10:01:01 crc kubenswrapper[4698]: I1014 10:01:01.990342 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/b930ad82-3c54-40c9-b332-70aae927bcbb-v4-0-config-system-serving-cert\") pod \"oauth-openshift-66c58d94fc-lpbpr\" (UID: \"b930ad82-3c54-40c9-b332-70aae927bcbb\") " pod="openshift-authentication/oauth-openshift-66c58d94fc-lpbpr" Oct 14 10:01:01 crc kubenswrapper[4698]: I1014 10:01:01.990490 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/b930ad82-3c54-40c9-b332-70aae927bcbb-audit-policies\") pod \"oauth-openshift-66c58d94fc-lpbpr\" (UID: \"b930ad82-3c54-40c9-b332-70aae927bcbb\") " pod="openshift-authentication/oauth-openshift-66c58d94fc-lpbpr" Oct 14 10:01:01 crc kubenswrapper[4698]: I1014 10:01:01.990518 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/b930ad82-3c54-40c9-b332-70aae927bcbb-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-66c58d94fc-lpbpr\" (UID: \"b930ad82-3c54-40c9-b332-70aae927bcbb\") " pod="openshift-authentication/oauth-openshift-66c58d94fc-lpbpr" Oct 14 10:01:01 crc kubenswrapper[4698]: I1014 10:01:01.990551 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/b930ad82-3c54-40c9-b332-70aae927bcbb-v4-0-config-user-template-login\") pod \"oauth-openshift-66c58d94fc-lpbpr\" (UID: \"b930ad82-3c54-40c9-b332-70aae927bcbb\") " pod="openshift-authentication/oauth-openshift-66c58d94fc-lpbpr" Oct 14 10:01:01 crc kubenswrapper[4698]: I1014 10:01:01.990598 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5eed092c-2837-48df-8eb4-8759235349b6-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "5eed092c-2837-48df-8eb4-8759235349b6" (UID: "5eed092c-2837-48df-8eb4-8759235349b6"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 10:01:01 crc kubenswrapper[4698]: I1014 10:01:01.990619 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5eed092c-2837-48df-8eb4-8759235349b6-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "5eed092c-2837-48df-8eb4-8759235349b6" (UID: "5eed092c-2837-48df-8eb4-8759235349b6"). InnerVolumeSpecName "audit-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 14 10:01:01 crc kubenswrapper[4698]: I1014 10:01:01.990644 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5eed092c-2837-48df-8eb4-8759235349b6-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "5eed092c-2837-48df-8eb4-8759235349b6" (UID: "5eed092c-2837-48df-8eb4-8759235349b6"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 10:01:01 crc kubenswrapper[4698]: I1014 10:01:01.991585 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5eed092c-2837-48df-8eb4-8759235349b6-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "5eed092c-2837-48df-8eb4-8759235349b6" (UID: "5eed092c-2837-48df-8eb4-8759235349b6"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 10:01:01 crc kubenswrapper[4698]: I1014 10:01:01.992077 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5eed092c-2837-48df-8eb4-8759235349b6-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "5eed092c-2837-48df-8eb4-8759235349b6" (UID: "5eed092c-2837-48df-8eb4-8759235349b6"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 10:01:01 crc kubenswrapper[4698]: I1014 10:01:01.996182 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5eed092c-2837-48df-8eb4-8759235349b6-kube-api-access-tthwp" (OuterVolumeSpecName: "kube-api-access-tthwp") pod "5eed092c-2837-48df-8eb4-8759235349b6" (UID: "5eed092c-2837-48df-8eb4-8759235349b6"). InnerVolumeSpecName "kube-api-access-tthwp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 10:01:01 crc kubenswrapper[4698]: I1014 10:01:01.996239 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5eed092c-2837-48df-8eb4-8759235349b6-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "5eed092c-2837-48df-8eb4-8759235349b6" (UID: "5eed092c-2837-48df-8eb4-8759235349b6"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 10:01:02 crc kubenswrapper[4698]: I1014 10:01:02.001203 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5eed092c-2837-48df-8eb4-8759235349b6-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "5eed092c-2837-48df-8eb4-8759235349b6" (UID: "5eed092c-2837-48df-8eb4-8759235349b6"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 10:01:02 crc kubenswrapper[4698]: I1014 10:01:02.001360 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5eed092c-2837-48df-8eb4-8759235349b6-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "5eed092c-2837-48df-8eb4-8759235349b6" (UID: "5eed092c-2837-48df-8eb4-8759235349b6"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 10:01:02 crc kubenswrapper[4698]: I1014 10:01:02.003268 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5eed092c-2837-48df-8eb4-8759235349b6-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "5eed092c-2837-48df-8eb4-8759235349b6" (UID: "5eed092c-2837-48df-8eb4-8759235349b6"). InnerVolumeSpecName "v4-0-config-system-session". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 10:01:02 crc kubenswrapper[4698]: I1014 10:01:02.006242 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5eed092c-2837-48df-8eb4-8759235349b6-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "5eed092c-2837-48df-8eb4-8759235349b6" (UID: "5eed092c-2837-48df-8eb4-8759235349b6"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 10:01:02 crc kubenswrapper[4698]: I1014 10:01:02.006243 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5eed092c-2837-48df-8eb4-8759235349b6-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "5eed092c-2837-48df-8eb4-8759235349b6" (UID: "5eed092c-2837-48df-8eb4-8759235349b6"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 10:01:02 crc kubenswrapper[4698]: I1014 10:01:02.006579 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5eed092c-2837-48df-8eb4-8759235349b6-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "5eed092c-2837-48df-8eb4-8759235349b6" (UID: "5eed092c-2837-48df-8eb4-8759235349b6"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 10:01:02 crc kubenswrapper[4698]: I1014 10:01:02.006848 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5eed092c-2837-48df-8eb4-8759235349b6-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "5eed092c-2837-48df-8eb4-8759235349b6" (UID: "5eed092c-2837-48df-8eb4-8759235349b6"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 10:01:02 crc kubenswrapper[4698]: I1014 10:01:02.092158 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/b930ad82-3c54-40c9-b332-70aae927bcbb-audit-dir\") pod \"oauth-openshift-66c58d94fc-lpbpr\" (UID: \"b930ad82-3c54-40c9-b332-70aae927bcbb\") " pod="openshift-authentication/oauth-openshift-66c58d94fc-lpbpr" Oct 14 10:01:02 crc kubenswrapper[4698]: I1014 10:01:02.092268 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/b930ad82-3c54-40c9-b332-70aae927bcbb-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-66c58d94fc-lpbpr\" (UID: \"b930ad82-3c54-40c9-b332-70aae927bcbb\") " pod="openshift-authentication/oauth-openshift-66c58d94fc-lpbpr" Oct 14 10:01:02 crc kubenswrapper[4698]: I1014 10:01:02.092302 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/b930ad82-3c54-40c9-b332-70aae927bcbb-audit-dir\") pod \"oauth-openshift-66c58d94fc-lpbpr\" (UID: \"b930ad82-3c54-40c9-b332-70aae927bcbb\") " pod="openshift-authentication/oauth-openshift-66c58d94fc-lpbpr" Oct 14 10:01:02 crc kubenswrapper[4698]: I1014 10:01:02.092314 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/b930ad82-3c54-40c9-b332-70aae927bcbb-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-66c58d94fc-lpbpr\" (UID: \"b930ad82-3c54-40c9-b332-70aae927bcbb\") " pod="openshift-authentication/oauth-openshift-66c58d94fc-lpbpr" Oct 14 10:01:02 crc kubenswrapper[4698]: I1014 10:01:02.092440 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: 
\"kubernetes.io/secret/b930ad82-3c54-40c9-b332-70aae927bcbb-v4-0-config-system-router-certs\") pod \"oauth-openshift-66c58d94fc-lpbpr\" (UID: \"b930ad82-3c54-40c9-b332-70aae927bcbb\") " pod="openshift-authentication/oauth-openshift-66c58d94fc-lpbpr" Oct 14 10:01:02 crc kubenswrapper[4698]: I1014 10:01:02.092491 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/b930ad82-3c54-40c9-b332-70aae927bcbb-v4-0-config-system-cliconfig\") pod \"oauth-openshift-66c58d94fc-lpbpr\" (UID: \"b930ad82-3c54-40c9-b332-70aae927bcbb\") " pod="openshift-authentication/oauth-openshift-66c58d94fc-lpbpr" Oct 14 10:01:02 crc kubenswrapper[4698]: I1014 10:01:02.092551 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/b930ad82-3c54-40c9-b332-70aae927bcbb-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-66c58d94fc-lpbpr\" (UID: \"b930ad82-3c54-40c9-b332-70aae927bcbb\") " pod="openshift-authentication/oauth-openshift-66c58d94fc-lpbpr" Oct 14 10:01:02 crc kubenswrapper[4698]: I1014 10:01:02.092583 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/b930ad82-3c54-40c9-b332-70aae927bcbb-v4-0-config-system-service-ca\") pod \"oauth-openshift-66c58d94fc-lpbpr\" (UID: \"b930ad82-3c54-40c9-b332-70aae927bcbb\") " pod="openshift-authentication/oauth-openshift-66c58d94fc-lpbpr" Oct 14 10:01:02 crc kubenswrapper[4698]: I1014 10:01:02.092641 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vtc8h\" (UniqueName: \"kubernetes.io/projected/b930ad82-3c54-40c9-b332-70aae927bcbb-kube-api-access-vtc8h\") pod \"oauth-openshift-66c58d94fc-lpbpr\" (UID: \"b930ad82-3c54-40c9-b332-70aae927bcbb\") " 
pod="openshift-authentication/oauth-openshift-66c58d94fc-lpbpr" Oct 14 10:01:02 crc kubenswrapper[4698]: I1014 10:01:02.092683 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/b930ad82-3c54-40c9-b332-70aae927bcbb-v4-0-config-user-template-error\") pod \"oauth-openshift-66c58d94fc-lpbpr\" (UID: \"b930ad82-3c54-40c9-b332-70aae927bcbb\") " pod="openshift-authentication/oauth-openshift-66c58d94fc-lpbpr" Oct 14 10:01:02 crc kubenswrapper[4698]: I1014 10:01:02.092712 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/b930ad82-3c54-40c9-b332-70aae927bcbb-v4-0-config-system-session\") pod \"oauth-openshift-66c58d94fc-lpbpr\" (UID: \"b930ad82-3c54-40c9-b332-70aae927bcbb\") " pod="openshift-authentication/oauth-openshift-66c58d94fc-lpbpr" Oct 14 10:01:02 crc kubenswrapper[4698]: I1014 10:01:02.092735 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/b930ad82-3c54-40c9-b332-70aae927bcbb-v4-0-config-system-serving-cert\") pod \"oauth-openshift-66c58d94fc-lpbpr\" (UID: \"b930ad82-3c54-40c9-b332-70aae927bcbb\") " pod="openshift-authentication/oauth-openshift-66c58d94fc-lpbpr" Oct 14 10:01:02 crc kubenswrapper[4698]: I1014 10:01:02.092792 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/b930ad82-3c54-40c9-b332-70aae927bcbb-audit-policies\") pod \"oauth-openshift-66c58d94fc-lpbpr\" (UID: \"b930ad82-3c54-40c9-b332-70aae927bcbb\") " pod="openshift-authentication/oauth-openshift-66c58d94fc-lpbpr" Oct 14 10:01:02 crc kubenswrapper[4698]: I1014 10:01:02.092816 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/b930ad82-3c54-40c9-b332-70aae927bcbb-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-66c58d94fc-lpbpr\" (UID: \"b930ad82-3c54-40c9-b332-70aae927bcbb\") " pod="openshift-authentication/oauth-openshift-66c58d94fc-lpbpr" Oct 14 10:01:02 crc kubenswrapper[4698]: I1014 10:01:02.092853 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/b930ad82-3c54-40c9-b332-70aae927bcbb-v4-0-config-user-template-login\") pod \"oauth-openshift-66c58d94fc-lpbpr\" (UID: \"b930ad82-3c54-40c9-b332-70aae927bcbb\") " pod="openshift-authentication/oauth-openshift-66c58d94fc-lpbpr" Oct 14 10:01:02 crc kubenswrapper[4698]: I1014 10:01:02.092937 4698 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/5eed092c-2837-48df-8eb4-8759235349b6-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Oct 14 10:01:02 crc kubenswrapper[4698]: I1014 10:01:02.093300 4698 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/5eed092c-2837-48df-8eb4-8759235349b6-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Oct 14 10:01:02 crc kubenswrapper[4698]: I1014 10:01:02.093555 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/b930ad82-3c54-40c9-b332-70aae927bcbb-v4-0-config-system-service-ca\") pod \"oauth-openshift-66c58d94fc-lpbpr\" (UID: \"b930ad82-3c54-40c9-b332-70aae927bcbb\") " pod="openshift-authentication/oauth-openshift-66c58d94fc-lpbpr" Oct 14 10:01:02 crc kubenswrapper[4698]: I1014 10:01:02.093613 4698 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: 
\"kubernetes.io/secret/5eed092c-2837-48df-8eb4-8759235349b6-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Oct 14 10:01:02 crc kubenswrapper[4698]: I1014 10:01:02.093630 4698 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/5eed092c-2837-48df-8eb4-8759235349b6-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Oct 14 10:01:02 crc kubenswrapper[4698]: I1014 10:01:02.093640 4698 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/5eed092c-2837-48df-8eb4-8759235349b6-audit-policies\") on node \"crc\" DevicePath \"\"" Oct 14 10:01:02 crc kubenswrapper[4698]: I1014 10:01:02.093650 4698 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/5eed092c-2837-48df-8eb4-8759235349b6-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Oct 14 10:01:02 crc kubenswrapper[4698]: I1014 10:01:02.093659 4698 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/5eed092c-2837-48df-8eb4-8759235349b6-audit-dir\") on node \"crc\" DevicePath \"\"" Oct 14 10:01:02 crc kubenswrapper[4698]: I1014 10:01:02.093686 4698 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/5eed092c-2837-48df-8eb4-8759235349b6-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Oct 14 10:01:02 crc kubenswrapper[4698]: I1014 10:01:02.093699 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tthwp\" (UniqueName: \"kubernetes.io/projected/5eed092c-2837-48df-8eb4-8759235349b6-kube-api-access-tthwp\") on node \"crc\" DevicePath \"\"" Oct 14 10:01:02 crc kubenswrapper[4698]: I1014 10:01:02.093711 4698 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" 
(UniqueName: \"kubernetes.io/secret/5eed092c-2837-48df-8eb4-8759235349b6-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Oct 14 10:01:02 crc kubenswrapper[4698]: I1014 10:01:02.093725 4698 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/5eed092c-2837-48df-8eb4-8759235349b6-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Oct 14 10:01:02 crc kubenswrapper[4698]: I1014 10:01:02.093738 4698 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/5eed092c-2837-48df-8eb4-8759235349b6-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Oct 14 10:01:02 crc kubenswrapper[4698]: I1014 10:01:02.093751 4698 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/5eed092c-2837-48df-8eb4-8759235349b6-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Oct 14 10:01:02 crc kubenswrapper[4698]: I1014 10:01:02.093778 4698 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5eed092c-2837-48df-8eb4-8759235349b6-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 14 10:01:02 crc kubenswrapper[4698]: I1014 10:01:02.093826 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/b930ad82-3c54-40c9-b332-70aae927bcbb-audit-policies\") pod \"oauth-openshift-66c58d94fc-lpbpr\" (UID: \"b930ad82-3c54-40c9-b332-70aae927bcbb\") " pod="openshift-authentication/oauth-openshift-66c58d94fc-lpbpr" Oct 14 10:01:02 crc kubenswrapper[4698]: I1014 10:01:02.093854 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: 
\"kubernetes.io/configmap/b930ad82-3c54-40c9-b332-70aae927bcbb-v4-0-config-system-cliconfig\") pod \"oauth-openshift-66c58d94fc-lpbpr\" (UID: \"b930ad82-3c54-40c9-b332-70aae927bcbb\") " pod="openshift-authentication/oauth-openshift-66c58d94fc-lpbpr" Oct 14 10:01:02 crc kubenswrapper[4698]: I1014 10:01:02.094252 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b930ad82-3c54-40c9-b332-70aae927bcbb-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-66c58d94fc-lpbpr\" (UID: \"b930ad82-3c54-40c9-b332-70aae927bcbb\") " pod="openshift-authentication/oauth-openshift-66c58d94fc-lpbpr" Oct 14 10:01:02 crc kubenswrapper[4698]: I1014 10:01:02.098310 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/b930ad82-3c54-40c9-b332-70aae927bcbb-v4-0-config-user-template-error\") pod \"oauth-openshift-66c58d94fc-lpbpr\" (UID: \"b930ad82-3c54-40c9-b332-70aae927bcbb\") " pod="openshift-authentication/oauth-openshift-66c58d94fc-lpbpr" Oct 14 10:01:02 crc kubenswrapper[4698]: I1014 10:01:02.098354 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/b930ad82-3c54-40c9-b332-70aae927bcbb-v4-0-config-user-template-login\") pod \"oauth-openshift-66c58d94fc-lpbpr\" (UID: \"b930ad82-3c54-40c9-b332-70aae927bcbb\") " pod="openshift-authentication/oauth-openshift-66c58d94fc-lpbpr" Oct 14 10:01:02 crc kubenswrapper[4698]: I1014 10:01:02.098979 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/b930ad82-3c54-40c9-b332-70aae927bcbb-v4-0-config-system-session\") pod \"oauth-openshift-66c58d94fc-lpbpr\" (UID: \"b930ad82-3c54-40c9-b332-70aae927bcbb\") " pod="openshift-authentication/oauth-openshift-66c58d94fc-lpbpr" Oct 14 
10:01:02 crc kubenswrapper[4698]: I1014 10:01:02.099145 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/b930ad82-3c54-40c9-b332-70aae927bcbb-v4-0-config-system-serving-cert\") pod \"oauth-openshift-66c58d94fc-lpbpr\" (UID: \"b930ad82-3c54-40c9-b332-70aae927bcbb\") " pod="openshift-authentication/oauth-openshift-66c58d94fc-lpbpr" Oct 14 10:01:02 crc kubenswrapper[4698]: I1014 10:01:02.099575 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/b930ad82-3c54-40c9-b332-70aae927bcbb-v4-0-config-system-router-certs\") pod \"oauth-openshift-66c58d94fc-lpbpr\" (UID: \"b930ad82-3c54-40c9-b332-70aae927bcbb\") " pod="openshift-authentication/oauth-openshift-66c58d94fc-lpbpr" Oct 14 10:01:02 crc kubenswrapper[4698]: I1014 10:01:02.099630 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/b930ad82-3c54-40c9-b332-70aae927bcbb-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-66c58d94fc-lpbpr\" (UID: \"b930ad82-3c54-40c9-b332-70aae927bcbb\") " pod="openshift-authentication/oauth-openshift-66c58d94fc-lpbpr" Oct 14 10:01:02 crc kubenswrapper[4698]: I1014 10:01:02.100427 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/b930ad82-3c54-40c9-b332-70aae927bcbb-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-66c58d94fc-lpbpr\" (UID: \"b930ad82-3c54-40c9-b332-70aae927bcbb\") " pod="openshift-authentication/oauth-openshift-66c58d94fc-lpbpr" Oct 14 10:01:02 crc kubenswrapper[4698]: I1014 10:01:02.107744 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: 
\"kubernetes.io/secret/b930ad82-3c54-40c9-b332-70aae927bcbb-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-66c58d94fc-lpbpr\" (UID: \"b930ad82-3c54-40c9-b332-70aae927bcbb\") " pod="openshift-authentication/oauth-openshift-66c58d94fc-lpbpr" Oct 14 10:01:02 crc kubenswrapper[4698]: I1014 10:01:02.109059 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vtc8h\" (UniqueName: \"kubernetes.io/projected/b930ad82-3c54-40c9-b332-70aae927bcbb-kube-api-access-vtc8h\") pod \"oauth-openshift-66c58d94fc-lpbpr\" (UID: \"b930ad82-3c54-40c9-b332-70aae927bcbb\") " pod="openshift-authentication/oauth-openshift-66c58d94fc-lpbpr" Oct 14 10:01:02 crc kubenswrapper[4698]: I1014 10:01:02.178465 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-66c58d94fc-lpbpr" Oct 14 10:01:02 crc kubenswrapper[4698]: I1014 10:01:02.261245 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-kqg88"] Oct 14 10:01:02 crc kubenswrapper[4698]: I1014 10:01:02.263477 4698 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-kqg88"] Oct 14 10:01:02 crc kubenswrapper[4698]: I1014 10:01:02.414397 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-66c58d94fc-lpbpr"] Oct 14 10:01:02 crc kubenswrapper[4698]: I1014 10:01:02.945747 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-66c58d94fc-lpbpr" event={"ID":"b930ad82-3c54-40c9-b332-70aae927bcbb","Type":"ContainerStarted","Data":"e2ca0f96f73c8857fa1bffd3442593a6dcb5d41b7fa3f905c51af8cfcdde3dc0"} Oct 14 10:01:02 crc kubenswrapper[4698]: I1014 10:01:02.945815 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-66c58d94fc-lpbpr" 
event={"ID":"b930ad82-3c54-40c9-b332-70aae927bcbb","Type":"ContainerStarted","Data":"39b0807e34e2428a6bb74ec0c883604ba0ebe4810f60654b62d42a3799fdb720"} Oct 14 10:01:02 crc kubenswrapper[4698]: I1014 10:01:02.946224 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-66c58d94fc-lpbpr" Oct 14 10:01:03 crc kubenswrapper[4698]: I1014 10:01:03.042455 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5eed092c-2837-48df-8eb4-8759235349b6" path="/var/lib/kubelet/pods/5eed092c-2837-48df-8eb4-8759235349b6/volumes" Oct 14 10:01:03 crc kubenswrapper[4698]: I1014 10:01:03.297975 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-66c58d94fc-lpbpr" Oct 14 10:01:03 crc kubenswrapper[4698]: I1014 10:01:03.326654 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-66c58d94fc-lpbpr" podStartSLOduration=27.326635689 podStartE2EDuration="27.326635689s" podCreationTimestamp="2025-10-14 10:00:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 10:01:02.996149987 +0000 UTC m=+244.693449433" watchObservedRunningTime="2025-10-14 10:01:03.326635689 +0000 UTC m=+245.023935105" Oct 14 10:01:21 crc kubenswrapper[4698]: I1014 10:01:21.493684 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-86gfn"] Oct 14 10:01:21 crc kubenswrapper[4698]: I1014 10:01:21.494541 4698 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-86gfn" podUID="d1e0097e-6470-4dd4-b86c-e5f1cecf6759" containerName="registry-server" containerID="cri-o://143b35c26fab04170e1ed6b9f60bd3bd83123b614bd67301c776b08ca9bbffa5" gracePeriod=30 Oct 14 10:01:21 crc kubenswrapper[4698]: I1014 10:01:21.504683 
4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-w6fjg"] Oct 14 10:01:21 crc kubenswrapper[4698]: I1014 10:01:21.504926 4698 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-w6fjg" podUID="04793f3a-4ff7-4fab-b3cb-756515510f54" containerName="registry-server" containerID="cri-o://37de6fab9b0c6263e37d7b6d0983695393ea67262be78a67d59745b1f698b290" gracePeriod=30 Oct 14 10:01:21 crc kubenswrapper[4698]: I1014 10:01:21.520250 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-c8pq6"] Oct 14 10:01:21 crc kubenswrapper[4698]: I1014 10:01:21.520522 4698 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/marketplace-operator-79b997595-c8pq6" podUID="08880ea8-e0f2-4963-826f-9bee32ca8a64" containerName="marketplace-operator" containerID="cri-o://a077c8f9ca77be05190fce67330ef6bed473fbda5d0fe33a3aec498588952dc3" gracePeriod=30 Oct 14 10:01:21 crc kubenswrapper[4698]: I1014 10:01:21.536717 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-cl9gm"] Oct 14 10:01:21 crc kubenswrapper[4698]: I1014 10:01:21.537165 4698 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-cl9gm" podUID="97d2636c-36ca-4957-9ebe-8cc679ca9e01" containerName="registry-server" containerID="cri-o://fde572e0b18d8bcbaf8adb68beed0a782039edd96ba39f8555e00876fc2560a6" gracePeriod=30 Oct 14 10:01:21 crc kubenswrapper[4698]: I1014 10:01:21.542806 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-n6rkb"] Oct 14 10:01:21 crc kubenswrapper[4698]: I1014 10:01:21.543606 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-n6rkb" Oct 14 10:01:21 crc kubenswrapper[4698]: I1014 10:01:21.545822 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-2b5sl"] Oct 14 10:01:21 crc kubenswrapper[4698]: I1014 10:01:21.546210 4698 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-2b5sl" podUID="36580b93-747e-4338-9c0c-dab49837aa61" containerName="registry-server" containerID="cri-o://6ebdc58247cadf3aba5ca70d95b77dfcd4a9bbce6d45161fe02a4450448c19fa" gracePeriod=30 Oct 14 10:01:21 crc kubenswrapper[4698]: I1014 10:01:21.550625 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-n6rkb"] Oct 14 10:01:21 crc kubenswrapper[4698]: I1014 10:01:21.641671 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/717ff5f8-f2f0-46ca-86e2-dba0533d1f69-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-n6rkb\" (UID: \"717ff5f8-f2f0-46ca-86e2-dba0533d1f69\") " pod="openshift-marketplace/marketplace-operator-79b997595-n6rkb" Oct 14 10:01:21 crc kubenswrapper[4698]: I1014 10:01:21.641713 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9st74\" (UniqueName: \"kubernetes.io/projected/717ff5f8-f2f0-46ca-86e2-dba0533d1f69-kube-api-access-9st74\") pod \"marketplace-operator-79b997595-n6rkb\" (UID: \"717ff5f8-f2f0-46ca-86e2-dba0533d1f69\") " pod="openshift-marketplace/marketplace-operator-79b997595-n6rkb" Oct 14 10:01:21 crc kubenswrapper[4698]: I1014 10:01:21.641736 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: 
\"kubernetes.io/configmap/717ff5f8-f2f0-46ca-86e2-dba0533d1f69-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-n6rkb\" (UID: \"717ff5f8-f2f0-46ca-86e2-dba0533d1f69\") " pod="openshift-marketplace/marketplace-operator-79b997595-n6rkb" Oct 14 10:01:21 crc kubenswrapper[4698]: I1014 10:01:21.742541 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/717ff5f8-f2f0-46ca-86e2-dba0533d1f69-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-n6rkb\" (UID: \"717ff5f8-f2f0-46ca-86e2-dba0533d1f69\") " pod="openshift-marketplace/marketplace-operator-79b997595-n6rkb" Oct 14 10:01:21 crc kubenswrapper[4698]: I1014 10:01:21.742594 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9st74\" (UniqueName: \"kubernetes.io/projected/717ff5f8-f2f0-46ca-86e2-dba0533d1f69-kube-api-access-9st74\") pod \"marketplace-operator-79b997595-n6rkb\" (UID: \"717ff5f8-f2f0-46ca-86e2-dba0533d1f69\") " pod="openshift-marketplace/marketplace-operator-79b997595-n6rkb" Oct 14 10:01:21 crc kubenswrapper[4698]: I1014 10:01:21.742627 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/717ff5f8-f2f0-46ca-86e2-dba0533d1f69-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-n6rkb\" (UID: \"717ff5f8-f2f0-46ca-86e2-dba0533d1f69\") " pod="openshift-marketplace/marketplace-operator-79b997595-n6rkb" Oct 14 10:01:21 crc kubenswrapper[4698]: I1014 10:01:21.744409 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/717ff5f8-f2f0-46ca-86e2-dba0533d1f69-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-n6rkb\" (UID: \"717ff5f8-f2f0-46ca-86e2-dba0533d1f69\") " pod="openshift-marketplace/marketplace-operator-79b997595-n6rkb" Oct 14 10:01:21 crc 
kubenswrapper[4698]: I1014 10:01:21.759584 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/717ff5f8-f2f0-46ca-86e2-dba0533d1f69-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-n6rkb\" (UID: \"717ff5f8-f2f0-46ca-86e2-dba0533d1f69\") " pod="openshift-marketplace/marketplace-operator-79b997595-n6rkb" Oct 14 10:01:21 crc kubenswrapper[4698]: I1014 10:01:21.767483 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9st74\" (UniqueName: \"kubernetes.io/projected/717ff5f8-f2f0-46ca-86e2-dba0533d1f69-kube-api-access-9st74\") pod \"marketplace-operator-79b997595-n6rkb\" (UID: \"717ff5f8-f2f0-46ca-86e2-dba0533d1f69\") " pod="openshift-marketplace/marketplace-operator-79b997595-n6rkb" Oct 14 10:01:21 crc kubenswrapper[4698]: E1014 10:01:21.777951 4698 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod36580b93_747e_4338_9c0c_dab49837aa61.slice/crio-6ebdc58247cadf3aba5ca70d95b77dfcd4a9bbce6d45161fe02a4450448c19fa.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod04793f3a_4ff7_4fab_b3cb_756515510f54.slice/crio-37de6fab9b0c6263e37d7b6d0983695393ea67262be78a67d59745b1f698b290.scope\": RecentStats: unable to find data in memory cache]" Oct 14 10:01:21 crc kubenswrapper[4698]: I1014 10:01:21.932262 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-n6rkb" Oct 14 10:01:21 crc kubenswrapper[4698]: I1014 10:01:21.934452 4698 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-w6fjg" Oct 14 10:01:21 crc kubenswrapper[4698]: I1014 10:01:21.960699 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-c8pq6" Oct 14 10:01:21 crc kubenswrapper[4698]: I1014 10:01:21.982629 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-86gfn" Oct 14 10:01:21 crc kubenswrapper[4698]: I1014 10:01:21.986934 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-2b5sl" Oct 14 10:01:22 crc kubenswrapper[4698]: I1014 10:01:22.023294 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-cl9gm" Oct 14 10:01:22 crc kubenswrapper[4698]: I1014 10:01:22.054504 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/04793f3a-4ff7-4fab-b3cb-756515510f54-utilities\") pod \"04793f3a-4ff7-4fab-b3cb-756515510f54\" (UID: \"04793f3a-4ff7-4fab-b3cb-756515510f54\") " Oct 14 10:01:22 crc kubenswrapper[4698]: I1014 10:01:22.054622 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/04793f3a-4ff7-4fab-b3cb-756515510f54-catalog-content\") pod \"04793f3a-4ff7-4fab-b3cb-756515510f54\" (UID: \"04793f3a-4ff7-4fab-b3cb-756515510f54\") " Oct 14 10:01:22 crc kubenswrapper[4698]: I1014 10:01:22.054824 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rmrtt\" (UniqueName: \"kubernetes.io/projected/04793f3a-4ff7-4fab-b3cb-756515510f54-kube-api-access-rmrtt\") pod \"04793f3a-4ff7-4fab-b3cb-756515510f54\" (UID: \"04793f3a-4ff7-4fab-b3cb-756515510f54\") " Oct 14 10:01:22 crc kubenswrapper[4698]: I1014 
10:01:22.054854 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/97d2636c-36ca-4957-9ebe-8cc679ca9e01-catalog-content\") pod \"97d2636c-36ca-4957-9ebe-8cc679ca9e01\" (UID: \"97d2636c-36ca-4957-9ebe-8cc679ca9e01\") " Oct 14 10:01:22 crc kubenswrapper[4698]: I1014 10:01:22.054873 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-66lgt\" (UniqueName: \"kubernetes.io/projected/36580b93-747e-4338-9c0c-dab49837aa61-kube-api-access-66lgt\") pod \"36580b93-747e-4338-9c0c-dab49837aa61\" (UID: \"36580b93-747e-4338-9c0c-dab49837aa61\") " Oct 14 10:01:22 crc kubenswrapper[4698]: I1014 10:01:22.055868 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/04793f3a-4ff7-4fab-b3cb-756515510f54-utilities" (OuterVolumeSpecName: "utilities") pod "04793f3a-4ff7-4fab-b3cb-756515510f54" (UID: "04793f3a-4ff7-4fab-b3cb-756515510f54"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 10:01:22 crc kubenswrapper[4698]: I1014 10:01:22.058391 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/04793f3a-4ff7-4fab-b3cb-756515510f54-kube-api-access-rmrtt" (OuterVolumeSpecName: "kube-api-access-rmrtt") pod "04793f3a-4ff7-4fab-b3cb-756515510f54" (UID: "04793f3a-4ff7-4fab-b3cb-756515510f54"). InnerVolumeSpecName "kube-api-access-rmrtt". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 10:01:22 crc kubenswrapper[4698]: I1014 10:01:22.060691 4698 generic.go:334] "Generic (PLEG): container finished" podID="08880ea8-e0f2-4963-826f-9bee32ca8a64" containerID="a077c8f9ca77be05190fce67330ef6bed473fbda5d0fe33a3aec498588952dc3" exitCode=0 Oct 14 10:01:22 crc kubenswrapper[4698]: I1014 10:01:22.060783 4698 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-c8pq6" Oct 14 10:01:22 crc kubenswrapper[4698]: I1014 10:01:22.060842 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-c8pq6" event={"ID":"08880ea8-e0f2-4963-826f-9bee32ca8a64","Type":"ContainerDied","Data":"a077c8f9ca77be05190fce67330ef6bed473fbda5d0fe33a3aec498588952dc3"} Oct 14 10:01:22 crc kubenswrapper[4698]: I1014 10:01:22.060871 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-c8pq6" event={"ID":"08880ea8-e0f2-4963-826f-9bee32ca8a64","Type":"ContainerDied","Data":"07a26df22b4777f0659a8d4fce06d34a140c28086f4bc0a59e133f77d2991274"} Oct 14 10:01:22 crc kubenswrapper[4698]: I1014 10:01:22.060889 4698 scope.go:117] "RemoveContainer" containerID="a077c8f9ca77be05190fce67330ef6bed473fbda5d0fe33a3aec498588952dc3" Oct 14 10:01:22 crc kubenswrapper[4698]: I1014 10:01:22.061001 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/36580b93-747e-4338-9c0c-dab49837aa61-kube-api-access-66lgt" (OuterVolumeSpecName: "kube-api-access-66lgt") pod "36580b93-747e-4338-9c0c-dab49837aa61" (UID: "36580b93-747e-4338-9c0c-dab49837aa61"). InnerVolumeSpecName "kube-api-access-66lgt". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 10:01:22 crc kubenswrapper[4698]: I1014 10:01:22.078275 4698 generic.go:334] "Generic (PLEG): container finished" podID="97d2636c-36ca-4957-9ebe-8cc679ca9e01" containerID="fde572e0b18d8bcbaf8adb68beed0a782039edd96ba39f8555e00876fc2560a6" exitCode=0 Oct 14 10:01:22 crc kubenswrapper[4698]: I1014 10:01:22.078357 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-cl9gm" event={"ID":"97d2636c-36ca-4957-9ebe-8cc679ca9e01","Type":"ContainerDied","Data":"fde572e0b18d8bcbaf8adb68beed0a782039edd96ba39f8555e00876fc2560a6"} Oct 14 10:01:22 crc kubenswrapper[4698]: I1014 10:01:22.078376 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-cl9gm" event={"ID":"97d2636c-36ca-4957-9ebe-8cc679ca9e01","Type":"ContainerDied","Data":"ccc33948dd7cd862c0a2f5d1491b927db94cf81d3d26dcc84f1c6461e2af7cc5"} Oct 14 10:01:22 crc kubenswrapper[4698]: I1014 10:01:22.078475 4698 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-cl9gm" Oct 14 10:01:22 crc kubenswrapper[4698]: I1014 10:01:22.098822 4698 generic.go:334] "Generic (PLEG): container finished" podID="36580b93-747e-4338-9c0c-dab49837aa61" containerID="6ebdc58247cadf3aba5ca70d95b77dfcd4a9bbce6d45161fe02a4450448c19fa" exitCode=0 Oct 14 10:01:22 crc kubenswrapper[4698]: I1014 10:01:22.098983 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2b5sl" event={"ID":"36580b93-747e-4338-9c0c-dab49837aa61","Type":"ContainerDied","Data":"6ebdc58247cadf3aba5ca70d95b77dfcd4a9bbce6d45161fe02a4450448c19fa"} Oct 14 10:01:22 crc kubenswrapper[4698]: I1014 10:01:22.099058 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2b5sl" event={"ID":"36580b93-747e-4338-9c0c-dab49837aa61","Type":"ContainerDied","Data":"1ccf066092a8d2b6a8351dc3285a617741fe9077cf021d905348f69002ea4361"} Oct 14 10:01:22 crc kubenswrapper[4698]: I1014 10:01:22.099225 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-2b5sl" Oct 14 10:01:22 crc kubenswrapper[4698]: I1014 10:01:22.104028 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/97d2636c-36ca-4957-9ebe-8cc679ca9e01-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "97d2636c-36ca-4957-9ebe-8cc679ca9e01" (UID: "97d2636c-36ca-4957-9ebe-8cc679ca9e01"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 10:01:22 crc kubenswrapper[4698]: I1014 10:01:22.107505 4698 generic.go:334] "Generic (PLEG): container finished" podID="04793f3a-4ff7-4fab-b3cb-756515510f54" containerID="37de6fab9b0c6263e37d7b6d0983695393ea67262be78a67d59745b1f698b290" exitCode=0 Oct 14 10:01:22 crc kubenswrapper[4698]: I1014 10:01:22.107551 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-w6fjg" event={"ID":"04793f3a-4ff7-4fab-b3cb-756515510f54","Type":"ContainerDied","Data":"37de6fab9b0c6263e37d7b6d0983695393ea67262be78a67d59745b1f698b290"} Oct 14 10:01:22 crc kubenswrapper[4698]: I1014 10:01:22.107728 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-w6fjg" event={"ID":"04793f3a-4ff7-4fab-b3cb-756515510f54","Type":"ContainerDied","Data":"1b3e0c0d41be0ecf4d84b3db0dea1468fa34c5fc75d44c76e7dc4d124494e7e4"} Oct 14 10:01:22 crc kubenswrapper[4698]: I1014 10:01:22.107581 4698 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-w6fjg" Oct 14 10:01:22 crc kubenswrapper[4698]: I1014 10:01:22.112512 4698 generic.go:334] "Generic (PLEG): container finished" podID="d1e0097e-6470-4dd4-b86c-e5f1cecf6759" containerID="143b35c26fab04170e1ed6b9f60bd3bd83123b614bd67301c776b08ca9bbffa5" exitCode=0 Oct 14 10:01:22 crc kubenswrapper[4698]: I1014 10:01:22.112562 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-86gfn" event={"ID":"d1e0097e-6470-4dd4-b86c-e5f1cecf6759","Type":"ContainerDied","Data":"143b35c26fab04170e1ed6b9f60bd3bd83123b614bd67301c776b08ca9bbffa5"} Oct 14 10:01:22 crc kubenswrapper[4698]: I1014 10:01:22.113197 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-86gfn" event={"ID":"d1e0097e-6470-4dd4-b86c-e5f1cecf6759","Type":"ContainerDied","Data":"af2097d7e17f7017bb749b1369c056d05ec5d6b50106b77f8b57f759e594c82b"} Oct 14 10:01:22 crc kubenswrapper[4698]: I1014 10:01:22.112594 4698 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-86gfn" Oct 14 10:01:22 crc kubenswrapper[4698]: I1014 10:01:22.115038 4698 scope.go:117] "RemoveContainer" containerID="a077c8f9ca77be05190fce67330ef6bed473fbda5d0fe33a3aec498588952dc3" Oct 14 10:01:22 crc kubenswrapper[4698]: E1014 10:01:22.115364 4698 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a077c8f9ca77be05190fce67330ef6bed473fbda5d0fe33a3aec498588952dc3\": container with ID starting with a077c8f9ca77be05190fce67330ef6bed473fbda5d0fe33a3aec498588952dc3 not found: ID does not exist" containerID="a077c8f9ca77be05190fce67330ef6bed473fbda5d0fe33a3aec498588952dc3" Oct 14 10:01:22 crc kubenswrapper[4698]: I1014 10:01:22.115534 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a077c8f9ca77be05190fce67330ef6bed473fbda5d0fe33a3aec498588952dc3"} err="failed to get container status \"a077c8f9ca77be05190fce67330ef6bed473fbda5d0fe33a3aec498588952dc3\": rpc error: code = NotFound desc = could not find container \"a077c8f9ca77be05190fce67330ef6bed473fbda5d0fe33a3aec498588952dc3\": container with ID starting with a077c8f9ca77be05190fce67330ef6bed473fbda5d0fe33a3aec498588952dc3 not found: ID does not exist" Oct 14 10:01:22 crc kubenswrapper[4698]: I1014 10:01:22.115697 4698 scope.go:117] "RemoveContainer" containerID="fde572e0b18d8bcbaf8adb68beed0a782039edd96ba39f8555e00876fc2560a6" Oct 14 10:01:22 crc kubenswrapper[4698]: I1014 10:01:22.157086 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/97d2636c-36ca-4957-9ebe-8cc679ca9e01-utilities\") pod \"97d2636c-36ca-4957-9ebe-8cc679ca9e01\" (UID: \"97d2636c-36ca-4957-9ebe-8cc679ca9e01\") " Oct 14 10:01:22 crc kubenswrapper[4698]: I1014 10:01:22.157156 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/08880ea8-e0f2-4963-826f-9bee32ca8a64-marketplace-trusted-ca\") pod \"08880ea8-e0f2-4963-826f-9bee32ca8a64\" (UID: \"08880ea8-e0f2-4963-826f-9bee32ca8a64\") " Oct 14 10:01:22 crc kubenswrapper[4698]: I1014 10:01:22.157195 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/36580b93-747e-4338-9c0c-dab49837aa61-catalog-content\") pod \"36580b93-747e-4338-9c0c-dab49837aa61\" (UID: \"36580b93-747e-4338-9c0c-dab49837aa61\") " Oct 14 10:01:22 crc kubenswrapper[4698]: I1014 10:01:22.157232 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/36580b93-747e-4338-9c0c-dab49837aa61-utilities\") pod \"36580b93-747e-4338-9c0c-dab49837aa61\" (UID: \"36580b93-747e-4338-9c0c-dab49837aa61\") " Oct 14 10:01:22 crc kubenswrapper[4698]: I1014 10:01:22.157256 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d1e0097e-6470-4dd4-b86c-e5f1cecf6759-catalog-content\") pod \"d1e0097e-6470-4dd4-b86c-e5f1cecf6759\" (UID: \"d1e0097e-6470-4dd4-b86c-e5f1cecf6759\") " Oct 14 10:01:22 crc kubenswrapper[4698]: I1014 10:01:22.157283 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dhhzg\" (UniqueName: \"kubernetes.io/projected/97d2636c-36ca-4957-9ebe-8cc679ca9e01-kube-api-access-dhhzg\") pod \"97d2636c-36ca-4957-9ebe-8cc679ca9e01\" (UID: \"97d2636c-36ca-4957-9ebe-8cc679ca9e01\") " Oct 14 10:01:22 crc kubenswrapper[4698]: I1014 10:01:22.157302 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d1e0097e-6470-4dd4-b86c-e5f1cecf6759-utilities\") pod \"d1e0097e-6470-4dd4-b86c-e5f1cecf6759\" (UID: \"d1e0097e-6470-4dd4-b86c-e5f1cecf6759\") " Oct 14 
10:01:22 crc kubenswrapper[4698]: I1014 10:01:22.157326 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gz9m8\" (UniqueName: \"kubernetes.io/projected/08880ea8-e0f2-4963-826f-9bee32ca8a64-kube-api-access-gz9m8\") pod \"08880ea8-e0f2-4963-826f-9bee32ca8a64\" (UID: \"08880ea8-e0f2-4963-826f-9bee32ca8a64\") " Oct 14 10:01:22 crc kubenswrapper[4698]: I1014 10:01:22.157346 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/08880ea8-e0f2-4963-826f-9bee32ca8a64-marketplace-operator-metrics\") pod \"08880ea8-e0f2-4963-826f-9bee32ca8a64\" (UID: \"08880ea8-e0f2-4963-826f-9bee32ca8a64\") " Oct 14 10:01:22 crc kubenswrapper[4698]: I1014 10:01:22.157376 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4vcl8\" (UniqueName: \"kubernetes.io/projected/d1e0097e-6470-4dd4-b86c-e5f1cecf6759-kube-api-access-4vcl8\") pod \"d1e0097e-6470-4dd4-b86c-e5f1cecf6759\" (UID: \"d1e0097e-6470-4dd4-b86c-e5f1cecf6759\") " Oct 14 10:01:22 crc kubenswrapper[4698]: I1014 10:01:22.157655 4698 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/04793f3a-4ff7-4fab-b3cb-756515510f54-utilities\") on node \"crc\" DevicePath \"\"" Oct 14 10:01:22 crc kubenswrapper[4698]: I1014 10:01:22.157670 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rmrtt\" (UniqueName: \"kubernetes.io/projected/04793f3a-4ff7-4fab-b3cb-756515510f54-kube-api-access-rmrtt\") on node \"crc\" DevicePath \"\"" Oct 14 10:01:22 crc kubenswrapper[4698]: I1014 10:01:22.157682 4698 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/97d2636c-36ca-4957-9ebe-8cc679ca9e01-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 14 10:01:22 crc kubenswrapper[4698]: I1014 10:01:22.157691 4698 
reconciler_common.go:293] "Volume detached for volume \"kube-api-access-66lgt\" (UniqueName: \"kubernetes.io/projected/36580b93-747e-4338-9c0c-dab49837aa61-kube-api-access-66lgt\") on node \"crc\" DevicePath \"\"" Oct 14 10:01:22 crc kubenswrapper[4698]: I1014 10:01:22.157945 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/36580b93-747e-4338-9c0c-dab49837aa61-utilities" (OuterVolumeSpecName: "utilities") pod "36580b93-747e-4338-9c0c-dab49837aa61" (UID: "36580b93-747e-4338-9c0c-dab49837aa61"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 10:01:22 crc kubenswrapper[4698]: I1014 10:01:22.158955 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d1e0097e-6470-4dd4-b86c-e5f1cecf6759-utilities" (OuterVolumeSpecName: "utilities") pod "d1e0097e-6470-4dd4-b86c-e5f1cecf6759" (UID: "d1e0097e-6470-4dd4-b86c-e5f1cecf6759"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 10:01:22 crc kubenswrapper[4698]: I1014 10:01:22.162962 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/97d2636c-36ca-4957-9ebe-8cc679ca9e01-kube-api-access-dhhzg" (OuterVolumeSpecName: "kube-api-access-dhhzg") pod "97d2636c-36ca-4957-9ebe-8cc679ca9e01" (UID: "97d2636c-36ca-4957-9ebe-8cc679ca9e01"). InnerVolumeSpecName "kube-api-access-dhhzg". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 10:01:22 crc kubenswrapper[4698]: I1014 10:01:22.163483 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d1e0097e-6470-4dd4-b86c-e5f1cecf6759-kube-api-access-4vcl8" (OuterVolumeSpecName: "kube-api-access-4vcl8") pod "d1e0097e-6470-4dd4-b86c-e5f1cecf6759" (UID: "d1e0097e-6470-4dd4-b86c-e5f1cecf6759"). InnerVolumeSpecName "kube-api-access-4vcl8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 10:01:22 crc kubenswrapper[4698]: I1014 10:01:22.164655 4698 scope.go:117] "RemoveContainer" containerID="09a6105179e418a45045a37adf2803ce27c4c685e49a149b5755a9fd8e6b302c" Oct 14 10:01:22 crc kubenswrapper[4698]: I1014 10:01:22.165271 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/08880ea8-e0f2-4963-826f-9bee32ca8a64-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "08880ea8-e0f2-4963-826f-9bee32ca8a64" (UID: "08880ea8-e0f2-4963-826f-9bee32ca8a64"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 10:01:22 crc kubenswrapper[4698]: I1014 10:01:22.170645 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/08880ea8-e0f2-4963-826f-9bee32ca8a64-kube-api-access-gz9m8" (OuterVolumeSpecName: "kube-api-access-gz9m8") pod "08880ea8-e0f2-4963-826f-9bee32ca8a64" (UID: "08880ea8-e0f2-4963-826f-9bee32ca8a64"). InnerVolumeSpecName "kube-api-access-gz9m8". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 10:01:22 crc kubenswrapper[4698]: I1014 10:01:22.172808 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/08880ea8-e0f2-4963-826f-9bee32ca8a64-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "08880ea8-e0f2-4963-826f-9bee32ca8a64" (UID: "08880ea8-e0f2-4963-826f-9bee32ca8a64"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 10:01:22 crc kubenswrapper[4698]: I1014 10:01:22.176441 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/97d2636c-36ca-4957-9ebe-8cc679ca9e01-utilities" (OuterVolumeSpecName: "utilities") pod "97d2636c-36ca-4957-9ebe-8cc679ca9e01" (UID: "97d2636c-36ca-4957-9ebe-8cc679ca9e01"). 
InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 10:01:22 crc kubenswrapper[4698]: I1014 10:01:22.180267 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/04793f3a-4ff7-4fab-b3cb-756515510f54-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "04793f3a-4ff7-4fab-b3cb-756515510f54" (UID: "04793f3a-4ff7-4fab-b3cb-756515510f54"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 10:01:22 crc kubenswrapper[4698]: I1014 10:01:22.189426 4698 scope.go:117] "RemoveContainer" containerID="f01a4b7b39f6eb1e7d98cc1be9828061864aa5eae7e7b35fc374ea5ecab4895c" Oct 14 10:01:22 crc kubenswrapper[4698]: I1014 10:01:22.211683 4698 scope.go:117] "RemoveContainer" containerID="fde572e0b18d8bcbaf8adb68beed0a782039edd96ba39f8555e00876fc2560a6" Oct 14 10:01:22 crc kubenswrapper[4698]: E1014 10:01:22.212293 4698 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fde572e0b18d8bcbaf8adb68beed0a782039edd96ba39f8555e00876fc2560a6\": container with ID starting with fde572e0b18d8bcbaf8adb68beed0a782039edd96ba39f8555e00876fc2560a6 not found: ID does not exist" containerID="fde572e0b18d8bcbaf8adb68beed0a782039edd96ba39f8555e00876fc2560a6" Oct 14 10:01:22 crc kubenswrapper[4698]: I1014 10:01:22.212386 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fde572e0b18d8bcbaf8adb68beed0a782039edd96ba39f8555e00876fc2560a6"} err="failed to get container status \"fde572e0b18d8bcbaf8adb68beed0a782039edd96ba39f8555e00876fc2560a6\": rpc error: code = NotFound desc = could not find container \"fde572e0b18d8bcbaf8adb68beed0a782039edd96ba39f8555e00876fc2560a6\": container with ID starting with fde572e0b18d8bcbaf8adb68beed0a782039edd96ba39f8555e00876fc2560a6 not found: ID does not exist" Oct 14 10:01:22 crc kubenswrapper[4698]: I1014 
10:01:22.212454 4698 scope.go:117] "RemoveContainer" containerID="09a6105179e418a45045a37adf2803ce27c4c685e49a149b5755a9fd8e6b302c" Oct 14 10:01:22 crc kubenswrapper[4698]: E1014 10:01:22.213372 4698 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"09a6105179e418a45045a37adf2803ce27c4c685e49a149b5755a9fd8e6b302c\": container with ID starting with 09a6105179e418a45045a37adf2803ce27c4c685e49a149b5755a9fd8e6b302c not found: ID does not exist" containerID="09a6105179e418a45045a37adf2803ce27c4c685e49a149b5755a9fd8e6b302c" Oct 14 10:01:22 crc kubenswrapper[4698]: I1014 10:01:22.213404 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"09a6105179e418a45045a37adf2803ce27c4c685e49a149b5755a9fd8e6b302c"} err="failed to get container status \"09a6105179e418a45045a37adf2803ce27c4c685e49a149b5755a9fd8e6b302c\": rpc error: code = NotFound desc = could not find container \"09a6105179e418a45045a37adf2803ce27c4c685e49a149b5755a9fd8e6b302c\": container with ID starting with 09a6105179e418a45045a37adf2803ce27c4c685e49a149b5755a9fd8e6b302c not found: ID does not exist" Oct 14 10:01:22 crc kubenswrapper[4698]: I1014 10:01:22.213442 4698 scope.go:117] "RemoveContainer" containerID="f01a4b7b39f6eb1e7d98cc1be9828061864aa5eae7e7b35fc374ea5ecab4895c" Oct 14 10:01:22 crc kubenswrapper[4698]: E1014 10:01:22.213712 4698 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f01a4b7b39f6eb1e7d98cc1be9828061864aa5eae7e7b35fc374ea5ecab4895c\": container with ID starting with f01a4b7b39f6eb1e7d98cc1be9828061864aa5eae7e7b35fc374ea5ecab4895c not found: ID does not exist" containerID="f01a4b7b39f6eb1e7d98cc1be9828061864aa5eae7e7b35fc374ea5ecab4895c" Oct 14 10:01:22 crc kubenswrapper[4698]: I1014 10:01:22.213729 4698 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"f01a4b7b39f6eb1e7d98cc1be9828061864aa5eae7e7b35fc374ea5ecab4895c"} err="failed to get container status \"f01a4b7b39f6eb1e7d98cc1be9828061864aa5eae7e7b35fc374ea5ecab4895c\": rpc error: code = NotFound desc = could not find container \"f01a4b7b39f6eb1e7d98cc1be9828061864aa5eae7e7b35fc374ea5ecab4895c\": container with ID starting with f01a4b7b39f6eb1e7d98cc1be9828061864aa5eae7e7b35fc374ea5ecab4895c not found: ID does not exist" Oct 14 10:01:22 crc kubenswrapper[4698]: I1014 10:01:22.213741 4698 scope.go:117] "RemoveContainer" containerID="6ebdc58247cadf3aba5ca70d95b77dfcd4a9bbce6d45161fe02a4450448c19fa" Oct 14 10:01:22 crc kubenswrapper[4698]: I1014 10:01:22.233278 4698 scope.go:117] "RemoveContainer" containerID="3675dfb353e6cbfe6bd704869d6eac3551e420e8f6a0b9f89b7ee4f8898408df" Oct 14 10:01:22 crc kubenswrapper[4698]: I1014 10:01:22.236889 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-n6rkb"] Oct 14 10:01:22 crc kubenswrapper[4698]: I1014 10:01:22.259007 4698 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/04793f3a-4ff7-4fab-b3cb-756515510f54-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 14 10:01:22 crc kubenswrapper[4698]: I1014 10:01:22.259052 4698 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/97d2636c-36ca-4957-9ebe-8cc679ca9e01-utilities\") on node \"crc\" DevicePath \"\"" Oct 14 10:01:22 crc kubenswrapper[4698]: I1014 10:01:22.259066 4698 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/08880ea8-e0f2-4963-826f-9bee32ca8a64-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Oct 14 10:01:22 crc kubenswrapper[4698]: I1014 10:01:22.259079 4698 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/36580b93-747e-4338-9c0c-dab49837aa61-utilities\") on node \"crc\" DevicePath \"\"" Oct 14 10:01:22 crc kubenswrapper[4698]: I1014 10:01:22.259091 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dhhzg\" (UniqueName: \"kubernetes.io/projected/97d2636c-36ca-4957-9ebe-8cc679ca9e01-kube-api-access-dhhzg\") on node \"crc\" DevicePath \"\"" Oct 14 10:01:22 crc kubenswrapper[4698]: I1014 10:01:22.259102 4698 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d1e0097e-6470-4dd4-b86c-e5f1cecf6759-utilities\") on node \"crc\" DevicePath \"\"" Oct 14 10:01:22 crc kubenswrapper[4698]: I1014 10:01:22.259113 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gz9m8\" (UniqueName: \"kubernetes.io/projected/08880ea8-e0f2-4963-826f-9bee32ca8a64-kube-api-access-gz9m8\") on node \"crc\" DevicePath \"\"" Oct 14 10:01:22 crc kubenswrapper[4698]: I1014 10:01:22.259122 4698 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/08880ea8-e0f2-4963-826f-9bee32ca8a64-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Oct 14 10:01:22 crc kubenswrapper[4698]: I1014 10:01:22.259133 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4vcl8\" (UniqueName: \"kubernetes.io/projected/d1e0097e-6470-4dd4-b86c-e5f1cecf6759-kube-api-access-4vcl8\") on node \"crc\" DevicePath \"\"" Oct 14 10:01:22 crc kubenswrapper[4698]: I1014 10:01:22.262441 4698 scope.go:117] "RemoveContainer" containerID="8ddeb1fd5a4e4c0528a7ec8b53bee293c7c52e05c707d1dce6074240a79d127a" Oct 14 10:01:22 crc kubenswrapper[4698]: I1014 10:01:22.263762 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d1e0097e-6470-4dd4-b86c-e5f1cecf6759-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d1e0097e-6470-4dd4-b86c-e5f1cecf6759" (UID: 
"d1e0097e-6470-4dd4-b86c-e5f1cecf6759"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 10:01:22 crc kubenswrapper[4698]: I1014 10:01:22.285289 4698 scope.go:117] "RemoveContainer" containerID="6ebdc58247cadf3aba5ca70d95b77dfcd4a9bbce6d45161fe02a4450448c19fa" Oct 14 10:01:22 crc kubenswrapper[4698]: E1014 10:01:22.285867 4698 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6ebdc58247cadf3aba5ca70d95b77dfcd4a9bbce6d45161fe02a4450448c19fa\": container with ID starting with 6ebdc58247cadf3aba5ca70d95b77dfcd4a9bbce6d45161fe02a4450448c19fa not found: ID does not exist" containerID="6ebdc58247cadf3aba5ca70d95b77dfcd4a9bbce6d45161fe02a4450448c19fa" Oct 14 10:01:22 crc kubenswrapper[4698]: I1014 10:01:22.285923 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6ebdc58247cadf3aba5ca70d95b77dfcd4a9bbce6d45161fe02a4450448c19fa"} err="failed to get container status \"6ebdc58247cadf3aba5ca70d95b77dfcd4a9bbce6d45161fe02a4450448c19fa\": rpc error: code = NotFound desc = could not find container \"6ebdc58247cadf3aba5ca70d95b77dfcd4a9bbce6d45161fe02a4450448c19fa\": container with ID starting with 6ebdc58247cadf3aba5ca70d95b77dfcd4a9bbce6d45161fe02a4450448c19fa not found: ID does not exist" Oct 14 10:01:22 crc kubenswrapper[4698]: I1014 10:01:22.285950 4698 scope.go:117] "RemoveContainer" containerID="3675dfb353e6cbfe6bd704869d6eac3551e420e8f6a0b9f89b7ee4f8898408df" Oct 14 10:01:22 crc kubenswrapper[4698]: E1014 10:01:22.286277 4698 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3675dfb353e6cbfe6bd704869d6eac3551e420e8f6a0b9f89b7ee4f8898408df\": container with ID starting with 3675dfb353e6cbfe6bd704869d6eac3551e420e8f6a0b9f89b7ee4f8898408df not found: ID does not exist" 
containerID="3675dfb353e6cbfe6bd704869d6eac3551e420e8f6a0b9f89b7ee4f8898408df" Oct 14 10:01:22 crc kubenswrapper[4698]: I1014 10:01:22.286303 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3675dfb353e6cbfe6bd704869d6eac3551e420e8f6a0b9f89b7ee4f8898408df"} err="failed to get container status \"3675dfb353e6cbfe6bd704869d6eac3551e420e8f6a0b9f89b7ee4f8898408df\": rpc error: code = NotFound desc = could not find container \"3675dfb353e6cbfe6bd704869d6eac3551e420e8f6a0b9f89b7ee4f8898408df\": container with ID starting with 3675dfb353e6cbfe6bd704869d6eac3551e420e8f6a0b9f89b7ee4f8898408df not found: ID does not exist" Oct 14 10:01:22 crc kubenswrapper[4698]: I1014 10:01:22.286319 4698 scope.go:117] "RemoveContainer" containerID="8ddeb1fd5a4e4c0528a7ec8b53bee293c7c52e05c707d1dce6074240a79d127a" Oct 14 10:01:22 crc kubenswrapper[4698]: E1014 10:01:22.287019 4698 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8ddeb1fd5a4e4c0528a7ec8b53bee293c7c52e05c707d1dce6074240a79d127a\": container with ID starting with 8ddeb1fd5a4e4c0528a7ec8b53bee293c7c52e05c707d1dce6074240a79d127a not found: ID does not exist" containerID="8ddeb1fd5a4e4c0528a7ec8b53bee293c7c52e05c707d1dce6074240a79d127a" Oct 14 10:01:22 crc kubenswrapper[4698]: I1014 10:01:22.287075 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8ddeb1fd5a4e4c0528a7ec8b53bee293c7c52e05c707d1dce6074240a79d127a"} err="failed to get container status \"8ddeb1fd5a4e4c0528a7ec8b53bee293c7c52e05c707d1dce6074240a79d127a\": rpc error: code = NotFound desc = could not find container \"8ddeb1fd5a4e4c0528a7ec8b53bee293c7c52e05c707d1dce6074240a79d127a\": container with ID starting with 8ddeb1fd5a4e4c0528a7ec8b53bee293c7c52e05c707d1dce6074240a79d127a not found: ID does not exist" Oct 14 10:01:22 crc kubenswrapper[4698]: I1014 10:01:22.287108 4698 scope.go:117] 
"RemoveContainer" containerID="37de6fab9b0c6263e37d7b6d0983695393ea67262be78a67d59745b1f698b290" Oct 14 10:01:22 crc kubenswrapper[4698]: I1014 10:01:22.304831 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/36580b93-747e-4338-9c0c-dab49837aa61-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "36580b93-747e-4338-9c0c-dab49837aa61" (UID: "36580b93-747e-4338-9c0c-dab49837aa61"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 10:01:22 crc kubenswrapper[4698]: I1014 10:01:22.307052 4698 scope.go:117] "RemoveContainer" containerID="6e7381c6a14f7ba2a3101494094e42ea6e0454e92f0e8f22140449f19b530a8e" Oct 14 10:01:22 crc kubenswrapper[4698]: I1014 10:01:22.347083 4698 scope.go:117] "RemoveContainer" containerID="02e13f5a663d66ace45d6930955348a281c39c54dfffac8727afb15f278e0469" Oct 14 10:01:22 crc kubenswrapper[4698]: I1014 10:01:22.359530 4698 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/36580b93-747e-4338-9c0c-dab49837aa61-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 14 10:01:22 crc kubenswrapper[4698]: I1014 10:01:22.359552 4698 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d1e0097e-6470-4dd4-b86c-e5f1cecf6759-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 14 10:01:22 crc kubenswrapper[4698]: I1014 10:01:22.361847 4698 scope.go:117] "RemoveContainer" containerID="37de6fab9b0c6263e37d7b6d0983695393ea67262be78a67d59745b1f698b290" Oct 14 10:01:22 crc kubenswrapper[4698]: E1014 10:01:22.368595 4698 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"37de6fab9b0c6263e37d7b6d0983695393ea67262be78a67d59745b1f698b290\": container with ID starting with 37de6fab9b0c6263e37d7b6d0983695393ea67262be78a67d59745b1f698b290 not found: ID does not 
exist" containerID="37de6fab9b0c6263e37d7b6d0983695393ea67262be78a67d59745b1f698b290" Oct 14 10:01:22 crc kubenswrapper[4698]: I1014 10:01:22.368657 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"37de6fab9b0c6263e37d7b6d0983695393ea67262be78a67d59745b1f698b290"} err="failed to get container status \"37de6fab9b0c6263e37d7b6d0983695393ea67262be78a67d59745b1f698b290\": rpc error: code = NotFound desc = could not find container \"37de6fab9b0c6263e37d7b6d0983695393ea67262be78a67d59745b1f698b290\": container with ID starting with 37de6fab9b0c6263e37d7b6d0983695393ea67262be78a67d59745b1f698b290 not found: ID does not exist" Oct 14 10:01:22 crc kubenswrapper[4698]: I1014 10:01:22.368700 4698 scope.go:117] "RemoveContainer" containerID="6e7381c6a14f7ba2a3101494094e42ea6e0454e92f0e8f22140449f19b530a8e" Oct 14 10:01:22 crc kubenswrapper[4698]: E1014 10:01:22.369148 4698 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6e7381c6a14f7ba2a3101494094e42ea6e0454e92f0e8f22140449f19b530a8e\": container with ID starting with 6e7381c6a14f7ba2a3101494094e42ea6e0454e92f0e8f22140449f19b530a8e not found: ID does not exist" containerID="6e7381c6a14f7ba2a3101494094e42ea6e0454e92f0e8f22140449f19b530a8e" Oct 14 10:01:22 crc kubenswrapper[4698]: I1014 10:01:22.369189 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6e7381c6a14f7ba2a3101494094e42ea6e0454e92f0e8f22140449f19b530a8e"} err="failed to get container status \"6e7381c6a14f7ba2a3101494094e42ea6e0454e92f0e8f22140449f19b530a8e\": rpc error: code = NotFound desc = could not find container \"6e7381c6a14f7ba2a3101494094e42ea6e0454e92f0e8f22140449f19b530a8e\": container with ID starting with 6e7381c6a14f7ba2a3101494094e42ea6e0454e92f0e8f22140449f19b530a8e not found: ID does not exist" Oct 14 10:01:22 crc kubenswrapper[4698]: I1014 10:01:22.369220 4698 scope.go:117] 
"RemoveContainer" containerID="02e13f5a663d66ace45d6930955348a281c39c54dfffac8727afb15f278e0469" Oct 14 10:01:22 crc kubenswrapper[4698]: E1014 10:01:22.369577 4698 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"02e13f5a663d66ace45d6930955348a281c39c54dfffac8727afb15f278e0469\": container with ID starting with 02e13f5a663d66ace45d6930955348a281c39c54dfffac8727afb15f278e0469 not found: ID does not exist" containerID="02e13f5a663d66ace45d6930955348a281c39c54dfffac8727afb15f278e0469" Oct 14 10:01:22 crc kubenswrapper[4698]: I1014 10:01:22.369648 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"02e13f5a663d66ace45d6930955348a281c39c54dfffac8727afb15f278e0469"} err="failed to get container status \"02e13f5a663d66ace45d6930955348a281c39c54dfffac8727afb15f278e0469\": rpc error: code = NotFound desc = could not find container \"02e13f5a663d66ace45d6930955348a281c39c54dfffac8727afb15f278e0469\": container with ID starting with 02e13f5a663d66ace45d6930955348a281c39c54dfffac8727afb15f278e0469 not found: ID does not exist" Oct 14 10:01:22 crc kubenswrapper[4698]: I1014 10:01:22.369678 4698 scope.go:117] "RemoveContainer" containerID="143b35c26fab04170e1ed6b9f60bd3bd83123b614bd67301c776b08ca9bbffa5" Oct 14 10:01:22 crc kubenswrapper[4698]: I1014 10:01:22.387392 4698 scope.go:117] "RemoveContainer" containerID="1b1896ebf7df6ba8b64fcf8241a71ab06c0211222137dc39c91d0aa452847308" Oct 14 10:01:22 crc kubenswrapper[4698]: I1014 10:01:22.392320 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-c8pq6"] Oct 14 10:01:22 crc kubenswrapper[4698]: I1014 10:01:22.397008 4698 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-c8pq6"] Oct 14 10:01:22 crc kubenswrapper[4698]: I1014 10:01:22.412009 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/redhat-marketplace-cl9gm"] Oct 14 10:01:22 crc kubenswrapper[4698]: I1014 10:01:22.414554 4698 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-cl9gm"] Oct 14 10:01:22 crc kubenswrapper[4698]: I1014 10:01:22.424580 4698 scope.go:117] "RemoveContainer" containerID="3b8c28348d84f89d3dfa5f6f94662530c18687bdd5e35fb6615ec1f88f187161" Oct 14 10:01:22 crc kubenswrapper[4698]: I1014 10:01:22.442279 4698 scope.go:117] "RemoveContainer" containerID="143b35c26fab04170e1ed6b9f60bd3bd83123b614bd67301c776b08ca9bbffa5" Oct 14 10:01:22 crc kubenswrapper[4698]: E1014 10:01:22.442809 4698 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"143b35c26fab04170e1ed6b9f60bd3bd83123b614bd67301c776b08ca9bbffa5\": container with ID starting with 143b35c26fab04170e1ed6b9f60bd3bd83123b614bd67301c776b08ca9bbffa5 not found: ID does not exist" containerID="143b35c26fab04170e1ed6b9f60bd3bd83123b614bd67301c776b08ca9bbffa5" Oct 14 10:01:22 crc kubenswrapper[4698]: I1014 10:01:22.443013 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"143b35c26fab04170e1ed6b9f60bd3bd83123b614bd67301c776b08ca9bbffa5"} err="failed to get container status \"143b35c26fab04170e1ed6b9f60bd3bd83123b614bd67301c776b08ca9bbffa5\": rpc error: code = NotFound desc = could not find container \"143b35c26fab04170e1ed6b9f60bd3bd83123b614bd67301c776b08ca9bbffa5\": container with ID starting with 143b35c26fab04170e1ed6b9f60bd3bd83123b614bd67301c776b08ca9bbffa5 not found: ID does not exist" Oct 14 10:01:22 crc kubenswrapper[4698]: I1014 10:01:22.443190 4698 scope.go:117] "RemoveContainer" containerID="1b1896ebf7df6ba8b64fcf8241a71ab06c0211222137dc39c91d0aa452847308" Oct 14 10:01:22 crc kubenswrapper[4698]: E1014 10:01:22.443814 4698 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not 
find container \"1b1896ebf7df6ba8b64fcf8241a71ab06c0211222137dc39c91d0aa452847308\": container with ID starting with 1b1896ebf7df6ba8b64fcf8241a71ab06c0211222137dc39c91d0aa452847308 not found: ID does not exist" containerID="1b1896ebf7df6ba8b64fcf8241a71ab06c0211222137dc39c91d0aa452847308" Oct 14 10:01:22 crc kubenswrapper[4698]: I1014 10:01:22.443926 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1b1896ebf7df6ba8b64fcf8241a71ab06c0211222137dc39c91d0aa452847308"} err="failed to get container status \"1b1896ebf7df6ba8b64fcf8241a71ab06c0211222137dc39c91d0aa452847308\": rpc error: code = NotFound desc = could not find container \"1b1896ebf7df6ba8b64fcf8241a71ab06c0211222137dc39c91d0aa452847308\": container with ID starting with 1b1896ebf7df6ba8b64fcf8241a71ab06c0211222137dc39c91d0aa452847308 not found: ID does not exist" Oct 14 10:01:22 crc kubenswrapper[4698]: I1014 10:01:22.444056 4698 scope.go:117] "RemoveContainer" containerID="3b8c28348d84f89d3dfa5f6f94662530c18687bdd5e35fb6615ec1f88f187161" Oct 14 10:01:22 crc kubenswrapper[4698]: I1014 10:01:22.445170 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-2b5sl"] Oct 14 10:01:22 crc kubenswrapper[4698]: E1014 10:01:22.446265 4698 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3b8c28348d84f89d3dfa5f6f94662530c18687bdd5e35fb6615ec1f88f187161\": container with ID starting with 3b8c28348d84f89d3dfa5f6f94662530c18687bdd5e35fb6615ec1f88f187161 not found: ID does not exist" containerID="3b8c28348d84f89d3dfa5f6f94662530c18687bdd5e35fb6615ec1f88f187161" Oct 14 10:01:22 crc kubenswrapper[4698]: I1014 10:01:22.446318 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3b8c28348d84f89d3dfa5f6f94662530c18687bdd5e35fb6615ec1f88f187161"} err="failed to get container status 
\"3b8c28348d84f89d3dfa5f6f94662530c18687bdd5e35fb6615ec1f88f187161\": rpc error: code = NotFound desc = could not find container \"3b8c28348d84f89d3dfa5f6f94662530c18687bdd5e35fb6615ec1f88f187161\": container with ID starting with 3b8c28348d84f89d3dfa5f6f94662530c18687bdd5e35fb6615ec1f88f187161 not found: ID does not exist" Oct 14 10:01:22 crc kubenswrapper[4698]: I1014 10:01:22.451043 4698 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-2b5sl"] Oct 14 10:01:22 crc kubenswrapper[4698]: I1014 10:01:22.466982 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-w6fjg"] Oct 14 10:01:22 crc kubenswrapper[4698]: I1014 10:01:22.470886 4698 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-w6fjg"] Oct 14 10:01:22 crc kubenswrapper[4698]: I1014 10:01:22.481068 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-86gfn"] Oct 14 10:01:22 crc kubenswrapper[4698]: I1014 10:01:22.485431 4698 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-86gfn"] Oct 14 10:01:23 crc kubenswrapper[4698]: I1014 10:01:23.025216 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="04793f3a-4ff7-4fab-b3cb-756515510f54" path="/var/lib/kubelet/pods/04793f3a-4ff7-4fab-b3cb-756515510f54/volumes" Oct 14 10:01:23 crc kubenswrapper[4698]: I1014 10:01:23.026883 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="08880ea8-e0f2-4963-826f-9bee32ca8a64" path="/var/lib/kubelet/pods/08880ea8-e0f2-4963-826f-9bee32ca8a64/volumes" Oct 14 10:01:23 crc kubenswrapper[4698]: I1014 10:01:23.027618 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="36580b93-747e-4338-9c0c-dab49837aa61" path="/var/lib/kubelet/pods/36580b93-747e-4338-9c0c-dab49837aa61/volumes" Oct 14 10:01:23 crc kubenswrapper[4698]: I1014 
10:01:23.029663 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="97d2636c-36ca-4957-9ebe-8cc679ca9e01" path="/var/lib/kubelet/pods/97d2636c-36ca-4957-9ebe-8cc679ca9e01/volumes" Oct 14 10:01:23 crc kubenswrapper[4698]: I1014 10:01:23.033287 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d1e0097e-6470-4dd4-b86c-e5f1cecf6759" path="/var/lib/kubelet/pods/d1e0097e-6470-4dd4-b86c-e5f1cecf6759/volumes" Oct 14 10:01:23 crc kubenswrapper[4698]: I1014 10:01:23.122749 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-n6rkb" event={"ID":"717ff5f8-f2f0-46ca-86e2-dba0533d1f69","Type":"ContainerStarted","Data":"2370a25b76658c22a3a4f84946d0efb8aff7d62d1e5d3654adc61d7374e03796"} Oct 14 10:01:23 crc kubenswrapper[4698]: I1014 10:01:23.122839 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-n6rkb" event={"ID":"717ff5f8-f2f0-46ca-86e2-dba0533d1f69","Type":"ContainerStarted","Data":"88fbb330274b7631330f02b0a717595eada20b018e42a79fdd63fdaf4d93de4c"} Oct 14 10:01:23 crc kubenswrapper[4698]: I1014 10:01:23.123015 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-n6rkb" Oct 14 10:01:23 crc kubenswrapper[4698]: I1014 10:01:23.127262 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-n6rkb" Oct 14 10:01:23 crc kubenswrapper[4698]: I1014 10:01:23.155451 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-n6rkb" podStartSLOduration=2.1554286129999998 podStartE2EDuration="2.155428613s" podCreationTimestamp="2025-10-14 10:01:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 
10:01:23.139122104 +0000 UTC m=+264.836421530" watchObservedRunningTime="2025-10-14 10:01:23.155428613 +0000 UTC m=+264.852728029" Oct 14 10:01:23 crc kubenswrapper[4698]: I1014 10:01:23.720643 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-lqffv"] Oct 14 10:01:23 crc kubenswrapper[4698]: E1014 10:01:23.720901 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d1e0097e-6470-4dd4-b86c-e5f1cecf6759" containerName="extract-content" Oct 14 10:01:23 crc kubenswrapper[4698]: I1014 10:01:23.720918 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="d1e0097e-6470-4dd4-b86c-e5f1cecf6759" containerName="extract-content" Oct 14 10:01:23 crc kubenswrapper[4698]: E1014 10:01:23.720933 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="04793f3a-4ff7-4fab-b3cb-756515510f54" containerName="extract-content" Oct 14 10:01:23 crc kubenswrapper[4698]: I1014 10:01:23.720942 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="04793f3a-4ff7-4fab-b3cb-756515510f54" containerName="extract-content" Oct 14 10:01:23 crc kubenswrapper[4698]: E1014 10:01:23.720956 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="36580b93-747e-4338-9c0c-dab49837aa61" containerName="extract-content" Oct 14 10:01:23 crc kubenswrapper[4698]: I1014 10:01:23.720968 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="36580b93-747e-4338-9c0c-dab49837aa61" containerName="extract-content" Oct 14 10:01:23 crc kubenswrapper[4698]: E1014 10:01:23.720984 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="04793f3a-4ff7-4fab-b3cb-756515510f54" containerName="extract-utilities" Oct 14 10:01:23 crc kubenswrapper[4698]: I1014 10:01:23.720994 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="04793f3a-4ff7-4fab-b3cb-756515510f54" containerName="extract-utilities" Oct 14 10:01:23 crc kubenswrapper[4698]: E1014 10:01:23.721006 4698 cpu_manager.go:410] "RemoveStaleState: 
removing container" podUID="97d2636c-36ca-4957-9ebe-8cc679ca9e01" containerName="extract-content" Oct 14 10:01:23 crc kubenswrapper[4698]: I1014 10:01:23.721016 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="97d2636c-36ca-4957-9ebe-8cc679ca9e01" containerName="extract-content" Oct 14 10:01:23 crc kubenswrapper[4698]: E1014 10:01:23.721039 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d1e0097e-6470-4dd4-b86c-e5f1cecf6759" containerName="extract-utilities" Oct 14 10:01:23 crc kubenswrapper[4698]: I1014 10:01:23.721050 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="d1e0097e-6470-4dd4-b86c-e5f1cecf6759" containerName="extract-utilities" Oct 14 10:01:23 crc kubenswrapper[4698]: E1014 10:01:23.721062 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="04793f3a-4ff7-4fab-b3cb-756515510f54" containerName="registry-server" Oct 14 10:01:23 crc kubenswrapper[4698]: I1014 10:01:23.721074 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="04793f3a-4ff7-4fab-b3cb-756515510f54" containerName="registry-server" Oct 14 10:01:23 crc kubenswrapper[4698]: E1014 10:01:23.721091 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d1e0097e-6470-4dd4-b86c-e5f1cecf6759" containerName="registry-server" Oct 14 10:01:23 crc kubenswrapper[4698]: I1014 10:01:23.721102 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="d1e0097e-6470-4dd4-b86c-e5f1cecf6759" containerName="registry-server" Oct 14 10:01:23 crc kubenswrapper[4698]: E1014 10:01:23.721119 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="36580b93-747e-4338-9c0c-dab49837aa61" containerName="registry-server" Oct 14 10:01:23 crc kubenswrapper[4698]: I1014 10:01:23.721129 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="36580b93-747e-4338-9c0c-dab49837aa61" containerName="registry-server" Oct 14 10:01:23 crc kubenswrapper[4698]: E1014 10:01:23.721143 4698 cpu_manager.go:410] "RemoveStaleState: removing 
container" podUID="08880ea8-e0f2-4963-826f-9bee32ca8a64" containerName="marketplace-operator" Oct 14 10:01:23 crc kubenswrapper[4698]: I1014 10:01:23.721156 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="08880ea8-e0f2-4963-826f-9bee32ca8a64" containerName="marketplace-operator" Oct 14 10:01:23 crc kubenswrapper[4698]: E1014 10:01:23.721172 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="36580b93-747e-4338-9c0c-dab49837aa61" containerName="extract-utilities" Oct 14 10:01:23 crc kubenswrapper[4698]: I1014 10:01:23.721182 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="36580b93-747e-4338-9c0c-dab49837aa61" containerName="extract-utilities" Oct 14 10:01:23 crc kubenswrapper[4698]: E1014 10:01:23.721198 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="97d2636c-36ca-4957-9ebe-8cc679ca9e01" containerName="extract-utilities" Oct 14 10:01:23 crc kubenswrapper[4698]: I1014 10:01:23.721208 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="97d2636c-36ca-4957-9ebe-8cc679ca9e01" containerName="extract-utilities" Oct 14 10:01:23 crc kubenswrapper[4698]: E1014 10:01:23.721222 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="97d2636c-36ca-4957-9ebe-8cc679ca9e01" containerName="registry-server" Oct 14 10:01:23 crc kubenswrapper[4698]: I1014 10:01:23.721232 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="97d2636c-36ca-4957-9ebe-8cc679ca9e01" containerName="registry-server" Oct 14 10:01:23 crc kubenswrapper[4698]: I1014 10:01:23.721367 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="36580b93-747e-4338-9c0c-dab49837aa61" containerName="registry-server" Oct 14 10:01:23 crc kubenswrapper[4698]: I1014 10:01:23.721386 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="08880ea8-e0f2-4963-826f-9bee32ca8a64" containerName="marketplace-operator" Oct 14 10:01:23 crc kubenswrapper[4698]: I1014 10:01:23.721405 4698 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="97d2636c-36ca-4957-9ebe-8cc679ca9e01" containerName="registry-server" Oct 14 10:01:23 crc kubenswrapper[4698]: I1014 10:01:23.721421 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="04793f3a-4ff7-4fab-b3cb-756515510f54" containerName="registry-server" Oct 14 10:01:23 crc kubenswrapper[4698]: I1014 10:01:23.721433 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="d1e0097e-6470-4dd4-b86c-e5f1cecf6759" containerName="registry-server" Oct 14 10:01:23 crc kubenswrapper[4698]: I1014 10:01:23.722411 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-lqffv" Oct 14 10:01:23 crc kubenswrapper[4698]: I1014 10:01:23.729505 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Oct 14 10:01:23 crc kubenswrapper[4698]: I1014 10:01:23.737071 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-lqffv"] Oct 14 10:01:23 crc kubenswrapper[4698]: I1014 10:01:23.878108 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/027d093d-8507-4449-9248-3c1da8a30e2e-utilities\") pod \"redhat-marketplace-lqffv\" (UID: \"027d093d-8507-4449-9248-3c1da8a30e2e\") " pod="openshift-marketplace/redhat-marketplace-lqffv" Oct 14 10:01:23 crc kubenswrapper[4698]: I1014 10:01:23.878403 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/027d093d-8507-4449-9248-3c1da8a30e2e-catalog-content\") pod \"redhat-marketplace-lqffv\" (UID: \"027d093d-8507-4449-9248-3c1da8a30e2e\") " pod="openshift-marketplace/redhat-marketplace-lqffv" Oct 14 10:01:23 crc kubenswrapper[4698]: I1014 10:01:23.878615 4698 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2mgbm\" (UniqueName: \"kubernetes.io/projected/027d093d-8507-4449-9248-3c1da8a30e2e-kube-api-access-2mgbm\") pod \"redhat-marketplace-lqffv\" (UID: \"027d093d-8507-4449-9248-3c1da8a30e2e\") " pod="openshift-marketplace/redhat-marketplace-lqffv" Oct 14 10:01:23 crc kubenswrapper[4698]: I1014 10:01:23.909618 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-2q44t"] Oct 14 10:01:23 crc kubenswrapper[4698]: I1014 10:01:23.910614 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-2q44t" Oct 14 10:01:23 crc kubenswrapper[4698]: I1014 10:01:23.915586 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Oct 14 10:01:23 crc kubenswrapper[4698]: I1014 10:01:23.923567 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-2q44t"] Oct 14 10:01:23 crc kubenswrapper[4698]: I1014 10:01:23.979471 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/027d093d-8507-4449-9248-3c1da8a30e2e-utilities\") pod \"redhat-marketplace-lqffv\" (UID: \"027d093d-8507-4449-9248-3c1da8a30e2e\") " pod="openshift-marketplace/redhat-marketplace-lqffv" Oct 14 10:01:23 crc kubenswrapper[4698]: I1014 10:01:23.979532 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/027d093d-8507-4449-9248-3c1da8a30e2e-catalog-content\") pod \"redhat-marketplace-lqffv\" (UID: \"027d093d-8507-4449-9248-3c1da8a30e2e\") " pod="openshift-marketplace/redhat-marketplace-lqffv" Oct 14 10:01:23 crc kubenswrapper[4698]: I1014 10:01:23.979574 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2mgbm\" 
(UniqueName: \"kubernetes.io/projected/027d093d-8507-4449-9248-3c1da8a30e2e-kube-api-access-2mgbm\") pod \"redhat-marketplace-lqffv\" (UID: \"027d093d-8507-4449-9248-3c1da8a30e2e\") " pod="openshift-marketplace/redhat-marketplace-lqffv" Oct 14 10:01:23 crc kubenswrapper[4698]: I1014 10:01:23.980414 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/027d093d-8507-4449-9248-3c1da8a30e2e-utilities\") pod \"redhat-marketplace-lqffv\" (UID: \"027d093d-8507-4449-9248-3c1da8a30e2e\") " pod="openshift-marketplace/redhat-marketplace-lqffv" Oct 14 10:01:23 crc kubenswrapper[4698]: I1014 10:01:23.980646 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/027d093d-8507-4449-9248-3c1da8a30e2e-catalog-content\") pod \"redhat-marketplace-lqffv\" (UID: \"027d093d-8507-4449-9248-3c1da8a30e2e\") " pod="openshift-marketplace/redhat-marketplace-lqffv" Oct 14 10:01:24 crc kubenswrapper[4698]: I1014 10:01:24.002957 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2mgbm\" (UniqueName: \"kubernetes.io/projected/027d093d-8507-4449-9248-3c1da8a30e2e-kube-api-access-2mgbm\") pod \"redhat-marketplace-lqffv\" (UID: \"027d093d-8507-4449-9248-3c1da8a30e2e\") " pod="openshift-marketplace/redhat-marketplace-lqffv" Oct 14 10:01:24 crc kubenswrapper[4698]: I1014 10:01:24.040642 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-lqffv" Oct 14 10:01:24 crc kubenswrapper[4698]: I1014 10:01:24.080632 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b2f2ecea-7bd2-4f73-84c5-16b5e65a0d01-utilities\") pod \"redhat-operators-2q44t\" (UID: \"b2f2ecea-7bd2-4f73-84c5-16b5e65a0d01\") " pod="openshift-marketplace/redhat-operators-2q44t" Oct 14 10:01:24 crc kubenswrapper[4698]: I1014 10:01:24.081383 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qfjvx\" (UniqueName: \"kubernetes.io/projected/b2f2ecea-7bd2-4f73-84c5-16b5e65a0d01-kube-api-access-qfjvx\") pod \"redhat-operators-2q44t\" (UID: \"b2f2ecea-7bd2-4f73-84c5-16b5e65a0d01\") " pod="openshift-marketplace/redhat-operators-2q44t" Oct 14 10:01:24 crc kubenswrapper[4698]: I1014 10:01:24.081472 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b2f2ecea-7bd2-4f73-84c5-16b5e65a0d01-catalog-content\") pod \"redhat-operators-2q44t\" (UID: \"b2f2ecea-7bd2-4f73-84c5-16b5e65a0d01\") " pod="openshift-marketplace/redhat-operators-2q44t" Oct 14 10:01:24 crc kubenswrapper[4698]: I1014 10:01:24.182531 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b2f2ecea-7bd2-4f73-84c5-16b5e65a0d01-utilities\") pod \"redhat-operators-2q44t\" (UID: \"b2f2ecea-7bd2-4f73-84c5-16b5e65a0d01\") " pod="openshift-marketplace/redhat-operators-2q44t" Oct 14 10:01:24 crc kubenswrapper[4698]: I1014 10:01:24.182590 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qfjvx\" (UniqueName: \"kubernetes.io/projected/b2f2ecea-7bd2-4f73-84c5-16b5e65a0d01-kube-api-access-qfjvx\") pod \"redhat-operators-2q44t\" (UID: 
\"b2f2ecea-7bd2-4f73-84c5-16b5e65a0d01\") " pod="openshift-marketplace/redhat-operators-2q44t"
Oct 14 10:01:24 crc kubenswrapper[4698]: I1014 10:01:24.182639 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b2f2ecea-7bd2-4f73-84c5-16b5e65a0d01-catalog-content\") pod \"redhat-operators-2q44t\" (UID: \"b2f2ecea-7bd2-4f73-84c5-16b5e65a0d01\") " pod="openshift-marketplace/redhat-operators-2q44t"
Oct 14 10:01:24 crc kubenswrapper[4698]: I1014 10:01:24.183184 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b2f2ecea-7bd2-4f73-84c5-16b5e65a0d01-catalog-content\") pod \"redhat-operators-2q44t\" (UID: \"b2f2ecea-7bd2-4f73-84c5-16b5e65a0d01\") " pod="openshift-marketplace/redhat-operators-2q44t"
Oct 14 10:01:24 crc kubenswrapper[4698]: I1014 10:01:24.183814 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b2f2ecea-7bd2-4f73-84c5-16b5e65a0d01-utilities\") pod \"redhat-operators-2q44t\" (UID: \"b2f2ecea-7bd2-4f73-84c5-16b5e65a0d01\") " pod="openshift-marketplace/redhat-operators-2q44t"
Oct 14 10:01:24 crc kubenswrapper[4698]: I1014 10:01:24.199290 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qfjvx\" (UniqueName: \"kubernetes.io/projected/b2f2ecea-7bd2-4f73-84c5-16b5e65a0d01-kube-api-access-qfjvx\") pod \"redhat-operators-2q44t\" (UID: \"b2f2ecea-7bd2-4f73-84c5-16b5e65a0d01\") " pod="openshift-marketplace/redhat-operators-2q44t"
Oct 14 10:01:24 crc kubenswrapper[4698]: I1014 10:01:24.234157 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-2q44t"
Oct 14 10:01:24 crc kubenswrapper[4698]: I1014 10:01:24.439466 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-lqffv"]
Oct 14 10:01:24 crc kubenswrapper[4698]: W1014 10:01:24.442969 4698 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod027d093d_8507_4449_9248_3c1da8a30e2e.slice/crio-7f4941cac24a3f8a2b42d6a1367232ab3afdede673641a5645b5ac2510583466 WatchSource:0}: Error finding container 7f4941cac24a3f8a2b42d6a1367232ab3afdede673641a5645b5ac2510583466: Status 404 returned error can't find the container with id 7f4941cac24a3f8a2b42d6a1367232ab3afdede673641a5645b5ac2510583466
Oct 14 10:01:24 crc kubenswrapper[4698]: I1014 10:01:24.463439 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-2q44t"]
Oct 14 10:01:24 crc kubenswrapper[4698]: W1014 10:01:24.472468 4698 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb2f2ecea_7bd2_4f73_84c5_16b5e65a0d01.slice/crio-598e35ee9aae805a4d54ced6efd192448709628431a5de7e2654d7272168228b WatchSource:0}: Error finding container 598e35ee9aae805a4d54ced6efd192448709628431a5de7e2654d7272168228b: Status 404 returned error can't find the container with id 598e35ee9aae805a4d54ced6efd192448709628431a5de7e2654d7272168228b
Oct 14 10:01:25 crc kubenswrapper[4698]: I1014 10:01:25.141756 4698 generic.go:334] "Generic (PLEG): container finished" podID="b2f2ecea-7bd2-4f73-84c5-16b5e65a0d01" containerID="aecdd8963aa9ee4ad54beaacd9c9e0243702fcac5ee0f323fa517fc3768bab24" exitCode=0
Oct 14 10:01:25 crc kubenswrapper[4698]: I1014 10:01:25.141879 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2q44t" event={"ID":"b2f2ecea-7bd2-4f73-84c5-16b5e65a0d01","Type":"ContainerDied","Data":"aecdd8963aa9ee4ad54beaacd9c9e0243702fcac5ee0f323fa517fc3768bab24"}
Oct 14 10:01:25 crc kubenswrapper[4698]: I1014 10:01:25.142072 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2q44t" event={"ID":"b2f2ecea-7bd2-4f73-84c5-16b5e65a0d01","Type":"ContainerStarted","Data":"598e35ee9aae805a4d54ced6efd192448709628431a5de7e2654d7272168228b"}
Oct 14 10:01:25 crc kubenswrapper[4698]: I1014 10:01:25.143741 4698 generic.go:334] "Generic (PLEG): container finished" podID="027d093d-8507-4449-9248-3c1da8a30e2e" containerID="ddd2cb07cfe64779f1b0ce169402e25cae53faaae8feb390ebe6065f649daacd" exitCode=0
Oct 14 10:01:25 crc kubenswrapper[4698]: I1014 10:01:25.144302 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lqffv" event={"ID":"027d093d-8507-4449-9248-3c1da8a30e2e","Type":"ContainerDied","Data":"ddd2cb07cfe64779f1b0ce169402e25cae53faaae8feb390ebe6065f649daacd"}
Oct 14 10:01:25 crc kubenswrapper[4698]: I1014 10:01:25.144393 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lqffv" event={"ID":"027d093d-8507-4449-9248-3c1da8a30e2e","Type":"ContainerStarted","Data":"7f4941cac24a3f8a2b42d6a1367232ab3afdede673641a5645b5ac2510583466"}
Oct 14 10:01:26 crc kubenswrapper[4698]: I1014 10:01:26.121393 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-dj96p"]
Oct 14 10:01:26 crc kubenswrapper[4698]: I1014 10:01:26.122374 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-dj96p"
Oct 14 10:01:26 crc kubenswrapper[4698]: I1014 10:01:26.126366 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl"
Oct 14 10:01:26 crc kubenswrapper[4698]: I1014 10:01:26.129811 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-dj96p"]
Oct 14 10:01:26 crc kubenswrapper[4698]: I1014 10:01:26.153386 4698 generic.go:334] "Generic (PLEG): container finished" podID="027d093d-8507-4449-9248-3c1da8a30e2e" containerID="e45c4a99a5b1e0dc569218eeaaf7aeda6a7a91dbf9ab8691e24c49fa4bc8b333" exitCode=0
Oct 14 10:01:26 crc kubenswrapper[4698]: I1014 10:01:26.153439 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lqffv" event={"ID":"027d093d-8507-4449-9248-3c1da8a30e2e","Type":"ContainerDied","Data":"e45c4a99a5b1e0dc569218eeaaf7aeda6a7a91dbf9ab8691e24c49fa4bc8b333"}
Oct 14 10:01:26 crc kubenswrapper[4698]: I1014 10:01:26.155276 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2q44t" event={"ID":"b2f2ecea-7bd2-4f73-84c5-16b5e65a0d01","Type":"ContainerStarted","Data":"3f97fb12cfa6569761381253ac201b1e5bbd4e554caa438a8a6e12d5b06f701b"}
Oct 14 10:01:26 crc kubenswrapper[4698]: I1014 10:01:26.213639 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bhsbz\" (UniqueName: \"kubernetes.io/projected/237bd431-a961-4f87-a13c-2278c27b67e0-kube-api-access-bhsbz\") pod \"community-operators-dj96p\" (UID: \"237bd431-a961-4f87-a13c-2278c27b67e0\") " pod="openshift-marketplace/community-operators-dj96p"
Oct 14 10:01:26 crc kubenswrapper[4698]: I1014 10:01:26.213701 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/237bd431-a961-4f87-a13c-2278c27b67e0-utilities\") pod \"community-operators-dj96p\" (UID: \"237bd431-a961-4f87-a13c-2278c27b67e0\") " pod="openshift-marketplace/community-operators-dj96p"
Oct 14 10:01:26 crc kubenswrapper[4698]: I1014 10:01:26.213927 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/237bd431-a961-4f87-a13c-2278c27b67e0-catalog-content\") pod \"community-operators-dj96p\" (UID: \"237bd431-a961-4f87-a13c-2278c27b67e0\") " pod="openshift-marketplace/community-operators-dj96p"
Oct 14 10:01:26 crc kubenswrapper[4698]: I1014 10:01:26.309945 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-wpc8b"]
Oct 14 10:01:26 crc kubenswrapper[4698]: I1014 10:01:26.311233 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-wpc8b"
Oct 14 10:01:26 crc kubenswrapper[4698]: I1014 10:01:26.313833 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g"
Oct 14 10:01:26 crc kubenswrapper[4698]: I1014 10:01:26.314801 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1e06ea61-0f4c-4611-a4d8-dcf08a89c881-catalog-content\") pod \"certified-operators-wpc8b\" (UID: \"1e06ea61-0f4c-4611-a4d8-dcf08a89c881\") " pod="openshift-marketplace/certified-operators-wpc8b"
Oct 14 10:01:26 crc kubenswrapper[4698]: I1014 10:01:26.314841 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bhsbz\" (UniqueName: \"kubernetes.io/projected/237bd431-a961-4f87-a13c-2278c27b67e0-kube-api-access-bhsbz\") pod \"community-operators-dj96p\" (UID: \"237bd431-a961-4f87-a13c-2278c27b67e0\") " pod="openshift-marketplace/community-operators-dj96p"
Oct 14 10:01:26 crc kubenswrapper[4698]: I1014 10:01:26.314873 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c6j79\" (UniqueName: \"kubernetes.io/projected/1e06ea61-0f4c-4611-a4d8-dcf08a89c881-kube-api-access-c6j79\") pod \"certified-operators-wpc8b\" (UID: \"1e06ea61-0f4c-4611-a4d8-dcf08a89c881\") " pod="openshift-marketplace/certified-operators-wpc8b"
Oct 14 10:01:26 crc kubenswrapper[4698]: I1014 10:01:26.314926 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/237bd431-a961-4f87-a13c-2278c27b67e0-utilities\") pod \"community-operators-dj96p\" (UID: \"237bd431-a961-4f87-a13c-2278c27b67e0\") " pod="openshift-marketplace/community-operators-dj96p"
Oct 14 10:01:26 crc kubenswrapper[4698]: I1014 10:01:26.314955 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1e06ea61-0f4c-4611-a4d8-dcf08a89c881-utilities\") pod \"certified-operators-wpc8b\" (UID: \"1e06ea61-0f4c-4611-a4d8-dcf08a89c881\") " pod="openshift-marketplace/certified-operators-wpc8b"
Oct 14 10:01:26 crc kubenswrapper[4698]: I1014 10:01:26.315010 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/237bd431-a961-4f87-a13c-2278c27b67e0-catalog-content\") pod \"community-operators-dj96p\" (UID: \"237bd431-a961-4f87-a13c-2278c27b67e0\") " pod="openshift-marketplace/community-operators-dj96p"
Oct 14 10:01:26 crc kubenswrapper[4698]: I1014 10:01:26.315440 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/237bd431-a961-4f87-a13c-2278c27b67e0-utilities\") pod \"community-operators-dj96p\" (UID: \"237bd431-a961-4f87-a13c-2278c27b67e0\") " pod="openshift-marketplace/community-operators-dj96p"
Oct 14 10:01:26 crc kubenswrapper[4698]: I1014 10:01:26.315619 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/237bd431-a961-4f87-a13c-2278c27b67e0-catalog-content\") pod \"community-operators-dj96p\" (UID: \"237bd431-a961-4f87-a13c-2278c27b67e0\") " pod="openshift-marketplace/community-operators-dj96p"
Oct 14 10:01:26 crc kubenswrapper[4698]: I1014 10:01:26.321001 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-wpc8b"]
Oct 14 10:01:26 crc kubenswrapper[4698]: I1014 10:01:26.345536 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bhsbz\" (UniqueName: \"kubernetes.io/projected/237bd431-a961-4f87-a13c-2278c27b67e0-kube-api-access-bhsbz\") pod \"community-operators-dj96p\" (UID: \"237bd431-a961-4f87-a13c-2278c27b67e0\") " pod="openshift-marketplace/community-operators-dj96p"
Oct 14 10:01:26 crc kubenswrapper[4698]: I1014 10:01:26.415703 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1e06ea61-0f4c-4611-a4d8-dcf08a89c881-utilities\") pod \"certified-operators-wpc8b\" (UID: \"1e06ea61-0f4c-4611-a4d8-dcf08a89c881\") " pod="openshift-marketplace/certified-operators-wpc8b"
Oct 14 10:01:26 crc kubenswrapper[4698]: I1014 10:01:26.416009 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1e06ea61-0f4c-4611-a4d8-dcf08a89c881-catalog-content\") pod \"certified-operators-wpc8b\" (UID: \"1e06ea61-0f4c-4611-a4d8-dcf08a89c881\") " pod="openshift-marketplace/certified-operators-wpc8b"
Oct 14 10:01:26 crc kubenswrapper[4698]: I1014 10:01:26.416043 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c6j79\" (UniqueName: \"kubernetes.io/projected/1e06ea61-0f4c-4611-a4d8-dcf08a89c881-kube-api-access-c6j79\") pod \"certified-operators-wpc8b\" (UID: \"1e06ea61-0f4c-4611-a4d8-dcf08a89c881\") " pod="openshift-marketplace/certified-operators-wpc8b"
Oct 14 10:01:26 crc kubenswrapper[4698]: I1014 10:01:26.416604 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1e06ea61-0f4c-4611-a4d8-dcf08a89c881-utilities\") pod \"certified-operators-wpc8b\" (UID: \"1e06ea61-0f4c-4611-a4d8-dcf08a89c881\") " pod="openshift-marketplace/certified-operators-wpc8b"
Oct 14 10:01:26 crc kubenswrapper[4698]: I1014 10:01:26.417053 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1e06ea61-0f4c-4611-a4d8-dcf08a89c881-catalog-content\") pod \"certified-operators-wpc8b\" (UID: \"1e06ea61-0f4c-4611-a4d8-dcf08a89c881\") " pod="openshift-marketplace/certified-operators-wpc8b"
Oct 14 10:01:26 crc kubenswrapper[4698]: I1014 10:01:26.434102 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c6j79\" (UniqueName: \"kubernetes.io/projected/1e06ea61-0f4c-4611-a4d8-dcf08a89c881-kube-api-access-c6j79\") pod \"certified-operators-wpc8b\" (UID: \"1e06ea61-0f4c-4611-a4d8-dcf08a89c881\") " pod="openshift-marketplace/certified-operators-wpc8b"
Oct 14 10:01:26 crc kubenswrapper[4698]: I1014 10:01:26.442021 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-dj96p"
Oct 14 10:01:26 crc kubenswrapper[4698]: I1014 10:01:26.675242 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-wpc8b"
Oct 14 10:01:26 crc kubenswrapper[4698]: I1014 10:01:26.845367 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-dj96p"]
Oct 14 10:01:26 crc kubenswrapper[4698]: W1014 10:01:26.853220 4698 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod237bd431_a961_4f87_a13c_2278c27b67e0.slice/crio-8a59cc73c9eef710b25476591a2a38454169139ad49089138745fee66e31506f WatchSource:0}: Error finding container 8a59cc73c9eef710b25476591a2a38454169139ad49089138745fee66e31506f: Status 404 returned error can't find the container with id 8a59cc73c9eef710b25476591a2a38454169139ad49089138745fee66e31506f
Oct 14 10:01:27 crc kubenswrapper[4698]: I1014 10:01:27.053265 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-wpc8b"]
Oct 14 10:01:27 crc kubenswrapper[4698]: W1014 10:01:27.056638 4698 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1e06ea61_0f4c_4611_a4d8_dcf08a89c881.slice/crio-70d6d49b3ff66e74cfcb196171ff4e64466b93a99ed0871c5a052f394125d6a1 WatchSource:0}: Error finding container 70d6d49b3ff66e74cfcb196171ff4e64466b93a99ed0871c5a052f394125d6a1: Status 404 returned error can't find the container with id 70d6d49b3ff66e74cfcb196171ff4e64466b93a99ed0871c5a052f394125d6a1
Oct 14 10:01:27 crc kubenswrapper[4698]: I1014 10:01:27.162330 4698 generic.go:334] "Generic (PLEG): container finished" podID="b2f2ecea-7bd2-4f73-84c5-16b5e65a0d01" containerID="3f97fb12cfa6569761381253ac201b1e5bbd4e554caa438a8a6e12d5b06f701b" exitCode=0
Oct 14 10:01:27 crc kubenswrapper[4698]: I1014 10:01:27.162381 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2q44t" event={"ID":"b2f2ecea-7bd2-4f73-84c5-16b5e65a0d01","Type":"ContainerDied","Data":"3f97fb12cfa6569761381253ac201b1e5bbd4e554caa438a8a6e12d5b06f701b"}
Oct 14 10:01:27 crc kubenswrapper[4698]: I1014 10:01:27.164813 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wpc8b" event={"ID":"1e06ea61-0f4c-4611-a4d8-dcf08a89c881","Type":"ContainerStarted","Data":"29078e1c2f5ac8fe3ba75a42a4199649b2b6cadfa2005c421636f059a94aaec7"}
Oct 14 10:01:27 crc kubenswrapper[4698]: I1014 10:01:27.164842 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wpc8b" event={"ID":"1e06ea61-0f4c-4611-a4d8-dcf08a89c881","Type":"ContainerStarted","Data":"70d6d49b3ff66e74cfcb196171ff4e64466b93a99ed0871c5a052f394125d6a1"}
Oct 14 10:01:27 crc kubenswrapper[4698]: I1014 10:01:27.167601 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lqffv" event={"ID":"027d093d-8507-4449-9248-3c1da8a30e2e","Type":"ContainerStarted","Data":"151e73c570bb2954891924b8a18e21f2468205692402890d6c49b0e4dcb39dd5"}
Oct 14 10:01:27 crc kubenswrapper[4698]: I1014 10:01:27.169869 4698 generic.go:334] "Generic (PLEG): container finished" podID="237bd431-a961-4f87-a13c-2278c27b67e0" containerID="3f96ecc6bebd6407d051803a2d9772ec1e2616fcaf31c3e9ad66d57ba2b80c65" exitCode=0
Oct 14 10:01:27 crc kubenswrapper[4698]: I1014 10:01:27.169894 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dj96p" event={"ID":"237bd431-a961-4f87-a13c-2278c27b67e0","Type":"ContainerDied","Data":"3f96ecc6bebd6407d051803a2d9772ec1e2616fcaf31c3e9ad66d57ba2b80c65"}
Oct 14 10:01:27 crc kubenswrapper[4698]: I1014 10:01:27.169907 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dj96p" event={"ID":"237bd431-a961-4f87-a13c-2278c27b67e0","Type":"ContainerStarted","Data":"8a59cc73c9eef710b25476591a2a38454169139ad49089138745fee66e31506f"}
Oct 14 10:01:27 crc kubenswrapper[4698]: I1014 10:01:27.237380 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-lqffv" podStartSLOduration=2.711449021 podStartE2EDuration="4.237367226s" podCreationTimestamp="2025-10-14 10:01:23 +0000 UTC" firstStartedPulling="2025-10-14 10:01:25.146136997 +0000 UTC m=+266.843436413" lastFinishedPulling="2025-10-14 10:01:26.672055202 +0000 UTC m=+268.369354618" observedRunningTime="2025-10-14 10:01:27.234869613 +0000 UTC m=+268.932169039" watchObservedRunningTime="2025-10-14 10:01:27.237367226 +0000 UTC m=+268.934666642"
Oct 14 10:01:28 crc kubenswrapper[4698]: I1014 10:01:28.176648 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wpc8b" event={"ID":"1e06ea61-0f4c-4611-a4d8-dcf08a89c881","Type":"ContainerDied","Data":"29078e1c2f5ac8fe3ba75a42a4199649b2b6cadfa2005c421636f059a94aaec7"}
Oct 14 10:01:28 crc kubenswrapper[4698]: I1014 10:01:28.176605 4698 generic.go:334] "Generic (PLEG): container finished" podID="1e06ea61-0f4c-4611-a4d8-dcf08a89c881" containerID="29078e1c2f5ac8fe3ba75a42a4199649b2b6cadfa2005c421636f059a94aaec7" exitCode=0
Oct 14 10:01:29 crc kubenswrapper[4698]: I1014 10:01:29.183443 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2q44t" event={"ID":"b2f2ecea-7bd2-4f73-84c5-16b5e65a0d01","Type":"ContainerStarted","Data":"5ebfb750dc4abd2501a3b51c10acb8adb9119acc736bd840f2cb016c64f9bb45"}
Oct 14 10:01:29 crc kubenswrapper[4698]: I1014 10:01:29.186749 4698 generic.go:334] "Generic (PLEG): container finished" podID="237bd431-a961-4f87-a13c-2278c27b67e0" containerID="0deaf20d2720788024f4bd829e95b15d9e3740df01e18f2db5f553c0cfb881fc" exitCode=0
Oct 14 10:01:29 crc kubenswrapper[4698]: I1014 10:01:29.186796 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dj96p" event={"ID":"237bd431-a961-4f87-a13c-2278c27b67e0","Type":"ContainerDied","Data":"0deaf20d2720788024f4bd829e95b15d9e3740df01e18f2db5f553c0cfb881fc"}
Oct 14 10:01:29 crc kubenswrapper[4698]: I1014 10:01:29.210106 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-2q44t" podStartSLOduration=3.147228168 podStartE2EDuration="6.210085361s" podCreationTimestamp="2025-10-14 10:01:23 +0000 UTC" firstStartedPulling="2025-10-14 10:01:25.144324424 +0000 UTC m=+266.841623840" lastFinishedPulling="2025-10-14 10:01:28.207181607 +0000 UTC m=+269.904481033" observedRunningTime="2025-10-14 10:01:29.204203398 +0000 UTC m=+270.901502814" watchObservedRunningTime="2025-10-14 10:01:29.210085361 +0000 UTC m=+270.907384787"
Oct 14 10:01:30 crc kubenswrapper[4698]: I1014 10:01:30.197371 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wpc8b" event={"ID":"1e06ea61-0f4c-4611-a4d8-dcf08a89c881","Type":"ContainerStarted","Data":"4da3d2a7a63cf0c94d79ea416df14b5c8183f0f4512b5d2eb1ffc12a4011cb9d"}
Oct 14 10:01:31 crc kubenswrapper[4698]: I1014 10:01:31.203142 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dj96p" event={"ID":"237bd431-a961-4f87-a13c-2278c27b67e0","Type":"ContainerStarted","Data":"7b77e86553db3348e2661f58c37a54fa73092428a96b6c1d16e5b71fb0042c75"}
Oct 14 10:01:31 crc kubenswrapper[4698]: I1014 10:01:31.208097 4698 generic.go:334] "Generic (PLEG): container finished" podID="1e06ea61-0f4c-4611-a4d8-dcf08a89c881" containerID="4da3d2a7a63cf0c94d79ea416df14b5c8183f0f4512b5d2eb1ffc12a4011cb9d" exitCode=0
Oct 14 10:01:31 crc kubenswrapper[4698]: I1014 10:01:31.208137 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wpc8b" event={"ID":"1e06ea61-0f4c-4611-a4d8-dcf08a89c881","Type":"ContainerDied","Data":"4da3d2a7a63cf0c94d79ea416df14b5c8183f0f4512b5d2eb1ffc12a4011cb9d"}
Oct 14 10:01:31 crc kubenswrapper[4698]: I1014 10:01:31.208163 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wpc8b" event={"ID":"1e06ea61-0f4c-4611-a4d8-dcf08a89c881","Type":"ContainerStarted","Data":"c37302c34f51f37b9c34b4df00c9765777bca2e3b9d231d91bcb711bb61a86ef"}
Oct 14 10:01:31 crc kubenswrapper[4698]: I1014 10:01:31.225529 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-dj96p" podStartSLOduration=2.413697257 podStartE2EDuration="5.225511902s" podCreationTimestamp="2025-10-14 10:01:26 +0000 UTC" firstStartedPulling="2025-10-14 10:01:27.170872952 +0000 UTC m=+268.868172368" lastFinishedPulling="2025-10-14 10:01:29.982687597 +0000 UTC m=+271.679987013" observedRunningTime="2025-10-14 10:01:31.22069723 +0000 UTC m=+272.917996656" watchObservedRunningTime="2025-10-14 10:01:31.225511902 +0000 UTC m=+272.922811318"
Oct 14 10:01:31 crc kubenswrapper[4698]: I1014 10:01:31.242715 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-wpc8b" podStartSLOduration=2.7757033250000003 podStartE2EDuration="5.242693187s" podCreationTimestamp="2025-10-14 10:01:26 +0000 UTC" firstStartedPulling="2025-10-14 10:01:28.201798289 +0000 UTC m=+269.899097705" lastFinishedPulling="2025-10-14 10:01:30.668788151 +0000 UTC m=+272.366087567" observedRunningTime="2025-10-14 10:01:31.240563784 +0000 UTC m=+272.937863220" watchObservedRunningTime="2025-10-14 10:01:31.242693187 +0000 UTC m=+272.939992613"
Oct 14 10:01:34 crc kubenswrapper[4698]: I1014 10:01:34.040977 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-lqffv"
Oct 14 10:01:34 crc kubenswrapper[4698]: I1014 10:01:34.041292 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-lqffv"
Oct 14 10:01:34 crc kubenswrapper[4698]: I1014 10:01:34.090821 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-lqffv"
Oct 14 10:01:34 crc kubenswrapper[4698]: I1014 10:01:34.234813 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-2q44t"
Oct 14 10:01:34 crc kubenswrapper[4698]: I1014 10:01:34.234853 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-2q44t"
Oct 14 10:01:34 crc kubenswrapper[4698]: I1014 10:01:34.256588 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-lqffv"
Oct 14 10:01:34 crc kubenswrapper[4698]: I1014 10:01:34.290116 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-2q44t"
Oct 14 10:01:35 crc kubenswrapper[4698]: I1014 10:01:35.269371 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-2q44t"
Oct 14 10:01:36 crc kubenswrapper[4698]: I1014 10:01:36.442447 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-dj96p"
Oct 14 10:01:36 crc kubenswrapper[4698]: I1014 10:01:36.442524 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-dj96p"
Oct 14 10:01:36 crc kubenswrapper[4698]: I1014 10:01:36.511195 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-dj96p"
Oct 14 10:01:36 crc kubenswrapper[4698]: I1014 10:01:36.676507 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-wpc8b"
Oct 14 10:01:36 crc kubenswrapper[4698]: I1014 10:01:36.676577 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-wpc8b"
Oct 14 10:01:36 crc kubenswrapper[4698]: I1014 10:01:36.735790 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-wpc8b"
Oct 14 10:01:37 crc kubenswrapper[4698]: I1014 10:01:37.281688 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-dj96p"
Oct 14 10:01:37 crc kubenswrapper[4698]: I1014 10:01:37.284659 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-wpc8b"
Oct 14 10:02:53 crc kubenswrapper[4698]: I1014 10:02:53.908821 4698 patch_prober.go:28] interesting pod/machine-config-daemon-lp4sk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Oct 14 10:02:53 crc kubenswrapper[4698]: I1014 10:02:53.909513 4698 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-lp4sk" podUID="c359a8fc-1e2f-49af-8da2-719d52bd969a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Oct 14 10:03:23 crc kubenswrapper[4698]: I1014 10:03:23.908863 4698 patch_prober.go:28] interesting pod/machine-config-daemon-lp4sk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Oct 14 10:03:23 crc kubenswrapper[4698]: I1014 10:03:23.909692 4698 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-lp4sk" podUID="c359a8fc-1e2f-49af-8da2-719d52bd969a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Oct 14 10:03:32 crc kubenswrapper[4698]: I1014 10:03:32.828435 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-5v276"]
Oct 14 10:03:32 crc kubenswrapper[4698]: I1014 10:03:32.829602 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-5v276"
Oct 14 10:03:32 crc kubenswrapper[4698]: I1014 10:03:32.848013 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-5v276"]
Oct 14 10:03:32 crc kubenswrapper[4698]: I1014 10:03:32.939846 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mzln4\" (UniqueName: \"kubernetes.io/projected/43625508-c3ee-479a-a8e6-5e7eb7d73f4b-kube-api-access-mzln4\") pod \"image-registry-66df7c8f76-5v276\" (UID: \"43625508-c3ee-479a-a8e6-5e7eb7d73f4b\") " pod="openshift-image-registry/image-registry-66df7c8f76-5v276"
Oct 14 10:03:32 crc kubenswrapper[4698]: I1014 10:03:32.939918 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-5v276\" (UID: \"43625508-c3ee-479a-a8e6-5e7eb7d73f4b\") " pod="openshift-image-registry/image-registry-66df7c8f76-5v276"
Oct 14 10:03:32 crc kubenswrapper[4698]: I1014 10:03:32.939984 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/43625508-c3ee-479a-a8e6-5e7eb7d73f4b-ca-trust-extracted\") pod \"image-registry-66df7c8f76-5v276\" (UID: \"43625508-c3ee-479a-a8e6-5e7eb7d73f4b\") " pod="openshift-image-registry/image-registry-66df7c8f76-5v276"
Oct 14 10:03:32 crc kubenswrapper[4698]: I1014 10:03:32.940025 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/43625508-c3ee-479a-a8e6-5e7eb7d73f4b-trusted-ca\") pod \"image-registry-66df7c8f76-5v276\" (UID: \"43625508-c3ee-479a-a8e6-5e7eb7d73f4b\") " pod="openshift-image-registry/image-registry-66df7c8f76-5v276"
Oct 14 10:03:32 crc kubenswrapper[4698]: I1014 10:03:32.940048 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/43625508-c3ee-479a-a8e6-5e7eb7d73f4b-installation-pull-secrets\") pod \"image-registry-66df7c8f76-5v276\" (UID: \"43625508-c3ee-479a-a8e6-5e7eb7d73f4b\") " pod="openshift-image-registry/image-registry-66df7c8f76-5v276"
Oct 14 10:03:32 crc kubenswrapper[4698]: I1014 10:03:32.940083 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/43625508-c3ee-479a-a8e6-5e7eb7d73f4b-bound-sa-token\") pod \"image-registry-66df7c8f76-5v276\" (UID: \"43625508-c3ee-479a-a8e6-5e7eb7d73f4b\") " pod="openshift-image-registry/image-registry-66df7c8f76-5v276"
Oct 14 10:03:32 crc kubenswrapper[4698]: I1014 10:03:32.940250 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/43625508-c3ee-479a-a8e6-5e7eb7d73f4b-registry-tls\") pod \"image-registry-66df7c8f76-5v276\" (UID: \"43625508-c3ee-479a-a8e6-5e7eb7d73f4b\") " pod="openshift-image-registry/image-registry-66df7c8f76-5v276"
Oct 14 10:03:32 crc kubenswrapper[4698]: I1014 10:03:32.940305 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/43625508-c3ee-479a-a8e6-5e7eb7d73f4b-registry-certificates\") pod \"image-registry-66df7c8f76-5v276\" (UID: \"43625508-c3ee-479a-a8e6-5e7eb7d73f4b\") " pod="openshift-image-registry/image-registry-66df7c8f76-5v276"
Oct 14 10:03:32 crc kubenswrapper[4698]: I1014 10:03:32.964834 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-5v276\" (UID: \"43625508-c3ee-479a-a8e6-5e7eb7d73f4b\") " pod="openshift-image-registry/image-registry-66df7c8f76-5v276"
Oct 14 10:03:33 crc kubenswrapper[4698]: I1014 10:03:33.041025 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/43625508-c3ee-479a-a8e6-5e7eb7d73f4b-bound-sa-token\") pod \"image-registry-66df7c8f76-5v276\" (UID: \"43625508-c3ee-479a-a8e6-5e7eb7d73f4b\") " pod="openshift-image-registry/image-registry-66df7c8f76-5v276"
Oct 14 10:03:33 crc kubenswrapper[4698]: I1014 10:03:33.041084 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/43625508-c3ee-479a-a8e6-5e7eb7d73f4b-registry-tls\") pod \"image-registry-66df7c8f76-5v276\" (UID: \"43625508-c3ee-479a-a8e6-5e7eb7d73f4b\") " pod="openshift-image-registry/image-registry-66df7c8f76-5v276"
Oct 14 10:03:33 crc kubenswrapper[4698]: I1014 10:03:33.041107 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/43625508-c3ee-479a-a8e6-5e7eb7d73f4b-registry-certificates\") pod \"image-registry-66df7c8f76-5v276\" (UID: \"43625508-c3ee-479a-a8e6-5e7eb7d73f4b\") " pod="openshift-image-registry/image-registry-66df7c8f76-5v276"
Oct 14 10:03:33 crc kubenswrapper[4698]: I1014 10:03:33.041149 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mzln4\" (UniqueName: \"kubernetes.io/projected/43625508-c3ee-479a-a8e6-5e7eb7d73f4b-kube-api-access-mzln4\") pod \"image-registry-66df7c8f76-5v276\" (UID: \"43625508-c3ee-479a-a8e6-5e7eb7d73f4b\") " pod="openshift-image-registry/image-registry-66df7c8f76-5v276"
Oct 14 10:03:33 crc kubenswrapper[4698]: I1014 10:03:33.041188 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/43625508-c3ee-479a-a8e6-5e7eb7d73f4b-ca-trust-extracted\") pod \"image-registry-66df7c8f76-5v276\" (UID: \"43625508-c3ee-479a-a8e6-5e7eb7d73f4b\") " pod="openshift-image-registry/image-registry-66df7c8f76-5v276"
Oct 14 10:03:33 crc kubenswrapper[4698]: I1014 10:03:33.041213 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/43625508-c3ee-479a-a8e6-5e7eb7d73f4b-installation-pull-secrets\") pod \"image-registry-66df7c8f76-5v276\" (UID: \"43625508-c3ee-479a-a8e6-5e7eb7d73f4b\") " pod="openshift-image-registry/image-registry-66df7c8f76-5v276"
Oct 14 10:03:33 crc kubenswrapper[4698]: I1014 10:03:33.041231 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/43625508-c3ee-479a-a8e6-5e7eb7d73f4b-trusted-ca\") pod \"image-registry-66df7c8f76-5v276\" (UID: \"43625508-c3ee-479a-a8e6-5e7eb7d73f4b\") " pod="openshift-image-registry/image-registry-66df7c8f76-5v276"
Oct 14 10:03:33 crc kubenswrapper[4698]: I1014 10:03:33.042264 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/43625508-c3ee-479a-a8e6-5e7eb7d73f4b-ca-trust-extracted\") pod \"image-registry-66df7c8f76-5v276\" (UID: \"43625508-c3ee-479a-a8e6-5e7eb7d73f4b\") " pod="openshift-image-registry/image-registry-66df7c8f76-5v276"
Oct 14 10:03:33 crc kubenswrapper[4698]: I1014 10:03:33.042842 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/43625508-c3ee-479a-a8e6-5e7eb7d73f4b-trusted-ca\") pod \"image-registry-66df7c8f76-5v276\" (UID: \"43625508-c3ee-479a-a8e6-5e7eb7d73f4b\") " pod="openshift-image-registry/image-registry-66df7c8f76-5v276"
Oct 14 10:03:33 crc kubenswrapper[4698]: I1014 10:03:33.044014 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/43625508-c3ee-479a-a8e6-5e7eb7d73f4b-registry-certificates\") pod \"image-registry-66df7c8f76-5v276\" (UID: \"43625508-c3ee-479a-a8e6-5e7eb7d73f4b\") " pod="openshift-image-registry/image-registry-66df7c8f76-5v276"
Oct 14 10:03:33 crc kubenswrapper[4698]: I1014 10:03:33.049493 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/43625508-c3ee-479a-a8e6-5e7eb7d73f4b-installation-pull-secrets\") pod \"image-registry-66df7c8f76-5v276\" (UID: \"43625508-c3ee-479a-a8e6-5e7eb7d73f4b\") " pod="openshift-image-registry/image-registry-66df7c8f76-5v276"
Oct 14 10:03:33 crc kubenswrapper[4698]: I1014 10:03:33.049814 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/43625508-c3ee-479a-a8e6-5e7eb7d73f4b-registry-tls\") pod \"image-registry-66df7c8f76-5v276\" (UID: \"43625508-c3ee-479a-a8e6-5e7eb7d73f4b\") " pod="openshift-image-registry/image-registry-66df7c8f76-5v276"
Oct 14 10:03:33 crc kubenswrapper[4698]: I1014 10:03:33.060698 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mzln4\" (UniqueName: \"kubernetes.io/projected/43625508-c3ee-479a-a8e6-5e7eb7d73f4b-kube-api-access-mzln4\") pod \"image-registry-66df7c8f76-5v276\" (UID: \"43625508-c3ee-479a-a8e6-5e7eb7d73f4b\") " pod="openshift-image-registry/image-registry-66df7c8f76-5v276"
Oct 14 10:03:33 crc kubenswrapper[4698]: I1014 10:03:33.073374 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/43625508-c3ee-479a-a8e6-5e7eb7d73f4b-bound-sa-token\") pod \"image-registry-66df7c8f76-5v276\" (UID: \"43625508-c3ee-479a-a8e6-5e7eb7d73f4b\") " pod="openshift-image-registry/image-registry-66df7c8f76-5v276"
Oct 14 10:03:33 crc kubenswrapper[4698]: I1014 10:03:33.149671 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-5v276"
Oct 14 10:03:33 crc kubenswrapper[4698]: I1014 10:03:33.398268 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-5v276"]
Oct 14 10:03:33 crc kubenswrapper[4698]: I1014 10:03:33.950934 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-5v276" event={"ID":"43625508-c3ee-479a-a8e6-5e7eb7d73f4b","Type":"ContainerStarted","Data":"46e9eb97d4e92adf47030c5e3294d5a1a072e8a6c600baa2849e376da3284530"}
Oct 14 10:03:33 crc kubenswrapper[4698]: I1014 10:03:33.951238 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-5v276" event={"ID":"43625508-c3ee-479a-a8e6-5e7eb7d73f4b","Type":"ContainerStarted","Data":"2248aaf91dd99147bd292029e01bb35a28510319d1b1d30e87e58e553ec02633"}
Oct 14 10:03:33 crc kubenswrapper[4698]: I1014 10:03:33.951277 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-66df7c8f76-5v276"
Oct 14 10:03:33 crc kubenswrapper[4698]: I1014 10:03:33.979652
4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-66df7c8f76-5v276" podStartSLOduration=1.9796317989999999 podStartE2EDuration="1.979631799s" podCreationTimestamp="2025-10-14 10:03:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 10:03:33.978937787 +0000 UTC m=+395.676237223" watchObservedRunningTime="2025-10-14 10:03:33.979631799 +0000 UTC m=+395.676931225" Oct 14 10:03:53 crc kubenswrapper[4698]: I1014 10:03:53.155944 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-66df7c8f76-5v276" Oct 14 10:03:53 crc kubenswrapper[4698]: I1014 10:03:53.228827 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-nthfk"] Oct 14 10:03:53 crc kubenswrapper[4698]: I1014 10:03:53.908037 4698 patch_prober.go:28] interesting pod/machine-config-daemon-lp4sk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 14 10:03:53 crc kubenswrapper[4698]: I1014 10:03:53.908546 4698 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-lp4sk" podUID="c359a8fc-1e2f-49af-8da2-719d52bd969a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 14 10:03:53 crc kubenswrapper[4698]: I1014 10:03:53.908643 4698 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-lp4sk" Oct 14 10:03:53 crc kubenswrapper[4698]: I1014 10:03:53.909832 4698 kuberuntime_manager.go:1027] "Message for Container of pod" 
containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"023a5dc316be8020894e4c5c93e1b936d78922591d1d7856a49d373ddec6f38b"} pod="openshift-machine-config-operator/machine-config-daemon-lp4sk" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Oct 14 10:03:53 crc kubenswrapper[4698]: I1014 10:03:53.909954 4698 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-lp4sk" podUID="c359a8fc-1e2f-49af-8da2-719d52bd969a" containerName="machine-config-daemon" containerID="cri-o://023a5dc316be8020894e4c5c93e1b936d78922591d1d7856a49d373ddec6f38b" gracePeriod=600 Oct 14 10:03:54 crc kubenswrapper[4698]: I1014 10:03:54.081071 4698 generic.go:334] "Generic (PLEG): container finished" podID="c359a8fc-1e2f-49af-8da2-719d52bd969a" containerID="023a5dc316be8020894e4c5c93e1b936d78922591d1d7856a49d373ddec6f38b" exitCode=0 Oct 14 10:03:54 crc kubenswrapper[4698]: I1014 10:03:54.081112 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-lp4sk" event={"ID":"c359a8fc-1e2f-49af-8da2-719d52bd969a","Type":"ContainerDied","Data":"023a5dc316be8020894e4c5c93e1b936d78922591d1d7856a49d373ddec6f38b"} Oct 14 10:03:54 crc kubenswrapper[4698]: I1014 10:03:54.081162 4698 scope.go:117] "RemoveContainer" containerID="8068aecfbfc156636e404dba3daa36e0bebb92ecf3c63158eaebf9a03cdc96b1" Oct 14 10:03:55 crc kubenswrapper[4698]: I1014 10:03:55.092230 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-lp4sk" event={"ID":"c359a8fc-1e2f-49af-8da2-719d52bd969a","Type":"ContainerStarted","Data":"7a202e01825f368630a72ec8a287e248e2293fb7679cffb1159219e4901ff7f5"} Oct 14 10:04:18 crc kubenswrapper[4698]: I1014 10:04:18.294006 4698 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-image-registry/image-registry-697d97f7c8-nthfk" podUID="c6555f7d-6e37-41d2-8f98-b02aba5270ab" containerName="registry" containerID="cri-o://51ac4ad459c5d328d47d9481b390e9fd0f1c2e26a925009a43e02f38e8f05ca5" gracePeriod=30 Oct 14 10:04:18 crc kubenswrapper[4698]: I1014 10:04:18.664962 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-nthfk" Oct 14 10:04:18 crc kubenswrapper[4698]: I1014 10:04:18.834671 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-storage\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"c6555f7d-6e37-41d2-8f98-b02aba5270ab\" (UID: \"c6555f7d-6e37-41d2-8f98-b02aba5270ab\") " Oct 14 10:04:18 crc kubenswrapper[4698]: I1014 10:04:18.834846 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/c6555f7d-6e37-41d2-8f98-b02aba5270ab-registry-certificates\") pod \"c6555f7d-6e37-41d2-8f98-b02aba5270ab\" (UID: \"c6555f7d-6e37-41d2-8f98-b02aba5270ab\") " Oct 14 10:04:18 crc kubenswrapper[4698]: I1014 10:04:18.834897 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/c6555f7d-6e37-41d2-8f98-b02aba5270ab-ca-trust-extracted\") pod \"c6555f7d-6e37-41d2-8f98-b02aba5270ab\" (UID: \"c6555f7d-6e37-41d2-8f98-b02aba5270ab\") " Oct 14 10:04:18 crc kubenswrapper[4698]: I1014 10:04:18.834989 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c6555f7d-6e37-41d2-8f98-b02aba5270ab-trusted-ca\") pod \"c6555f7d-6e37-41d2-8f98-b02aba5270ab\" (UID: \"c6555f7d-6e37-41d2-8f98-b02aba5270ab\") " Oct 14 10:04:18 crc kubenswrapper[4698]: I1014 10:04:18.835071 4698 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/c6555f7d-6e37-41d2-8f98-b02aba5270ab-bound-sa-token\") pod \"c6555f7d-6e37-41d2-8f98-b02aba5270ab\" (UID: \"c6555f7d-6e37-41d2-8f98-b02aba5270ab\") " Oct 14 10:04:18 crc kubenswrapper[4698]: I1014 10:04:18.835119 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/c6555f7d-6e37-41d2-8f98-b02aba5270ab-registry-tls\") pod \"c6555f7d-6e37-41d2-8f98-b02aba5270ab\" (UID: \"c6555f7d-6e37-41d2-8f98-b02aba5270ab\") " Oct 14 10:04:18 crc kubenswrapper[4698]: I1014 10:04:18.835171 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/c6555f7d-6e37-41d2-8f98-b02aba5270ab-installation-pull-secrets\") pod \"c6555f7d-6e37-41d2-8f98-b02aba5270ab\" (UID: \"c6555f7d-6e37-41d2-8f98-b02aba5270ab\") " Oct 14 10:04:18 crc kubenswrapper[4698]: I1014 10:04:18.835227 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jw22h\" (UniqueName: \"kubernetes.io/projected/c6555f7d-6e37-41d2-8f98-b02aba5270ab-kube-api-access-jw22h\") pod \"c6555f7d-6e37-41d2-8f98-b02aba5270ab\" (UID: \"c6555f7d-6e37-41d2-8f98-b02aba5270ab\") " Oct 14 10:04:18 crc kubenswrapper[4698]: I1014 10:04:18.835987 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c6555f7d-6e37-41d2-8f98-b02aba5270ab-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "c6555f7d-6e37-41d2-8f98-b02aba5270ab" (UID: "c6555f7d-6e37-41d2-8f98-b02aba5270ab"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 10:04:18 crc kubenswrapper[4698]: I1014 10:04:18.836032 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c6555f7d-6e37-41d2-8f98-b02aba5270ab-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "c6555f7d-6e37-41d2-8f98-b02aba5270ab" (UID: "c6555f7d-6e37-41d2-8f98-b02aba5270ab"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 10:04:18 crc kubenswrapper[4698]: I1014 10:04:18.841385 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c6555f7d-6e37-41d2-8f98-b02aba5270ab-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "c6555f7d-6e37-41d2-8f98-b02aba5270ab" (UID: "c6555f7d-6e37-41d2-8f98-b02aba5270ab"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 10:04:18 crc kubenswrapper[4698]: I1014 10:04:18.841508 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c6555f7d-6e37-41d2-8f98-b02aba5270ab-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "c6555f7d-6e37-41d2-8f98-b02aba5270ab" (UID: "c6555f7d-6e37-41d2-8f98-b02aba5270ab"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 10:04:18 crc kubenswrapper[4698]: I1014 10:04:18.844100 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "registry-storage") pod "c6555f7d-6e37-41d2-8f98-b02aba5270ab" (UID: "c6555f7d-6e37-41d2-8f98-b02aba5270ab"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". 
PluginName "kubernetes.io/csi", VolumeGidValue "" Oct 14 10:04:18 crc kubenswrapper[4698]: I1014 10:04:18.846082 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c6555f7d-6e37-41d2-8f98-b02aba5270ab-kube-api-access-jw22h" (OuterVolumeSpecName: "kube-api-access-jw22h") pod "c6555f7d-6e37-41d2-8f98-b02aba5270ab" (UID: "c6555f7d-6e37-41d2-8f98-b02aba5270ab"). InnerVolumeSpecName "kube-api-access-jw22h". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 10:04:18 crc kubenswrapper[4698]: I1014 10:04:18.846412 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c6555f7d-6e37-41d2-8f98-b02aba5270ab-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "c6555f7d-6e37-41d2-8f98-b02aba5270ab" (UID: "c6555f7d-6e37-41d2-8f98-b02aba5270ab"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 10:04:18 crc kubenswrapper[4698]: I1014 10:04:18.874544 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c6555f7d-6e37-41d2-8f98-b02aba5270ab-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "c6555f7d-6e37-41d2-8f98-b02aba5270ab" (UID: "c6555f7d-6e37-41d2-8f98-b02aba5270ab"). InnerVolumeSpecName "ca-trust-extracted". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 10:04:18 crc kubenswrapper[4698]: I1014 10:04:18.936674 4698 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/c6555f7d-6e37-41d2-8f98-b02aba5270ab-registry-certificates\") on node \"crc\" DevicePath \"\"" Oct 14 10:04:18 crc kubenswrapper[4698]: I1014 10:04:18.936721 4698 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/c6555f7d-6e37-41d2-8f98-b02aba5270ab-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Oct 14 10:04:18 crc kubenswrapper[4698]: I1014 10:04:18.936739 4698 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c6555f7d-6e37-41d2-8f98-b02aba5270ab-trusted-ca\") on node \"crc\" DevicePath \"\"" Oct 14 10:04:18 crc kubenswrapper[4698]: I1014 10:04:18.936757 4698 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/c6555f7d-6e37-41d2-8f98-b02aba5270ab-registry-tls\") on node \"crc\" DevicePath \"\"" Oct 14 10:04:18 crc kubenswrapper[4698]: I1014 10:04:18.936801 4698 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/c6555f7d-6e37-41d2-8f98-b02aba5270ab-bound-sa-token\") on node \"crc\" DevicePath \"\"" Oct 14 10:04:18 crc kubenswrapper[4698]: I1014 10:04:18.936819 4698 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/c6555f7d-6e37-41d2-8f98-b02aba5270ab-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Oct 14 10:04:18 crc kubenswrapper[4698]: I1014 10:04:18.936836 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jw22h\" (UniqueName: \"kubernetes.io/projected/c6555f7d-6e37-41d2-8f98-b02aba5270ab-kube-api-access-jw22h\") on node \"crc\" DevicePath \"\"" Oct 14 10:04:19 crc 
kubenswrapper[4698]: I1014 10:04:19.242161 4698 generic.go:334] "Generic (PLEG): container finished" podID="c6555f7d-6e37-41d2-8f98-b02aba5270ab" containerID="51ac4ad459c5d328d47d9481b390e9fd0f1c2e26a925009a43e02f38e8f05ca5" exitCode=0 Oct 14 10:04:19 crc kubenswrapper[4698]: I1014 10:04:19.242211 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-nthfk" event={"ID":"c6555f7d-6e37-41d2-8f98-b02aba5270ab","Type":"ContainerDied","Data":"51ac4ad459c5d328d47d9481b390e9fd0f1c2e26a925009a43e02f38e8f05ca5"} Oct 14 10:04:19 crc kubenswrapper[4698]: I1014 10:04:19.242276 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-nthfk" event={"ID":"c6555f7d-6e37-41d2-8f98-b02aba5270ab","Type":"ContainerDied","Data":"1611aa72c8b049bb25aeacbf71e6df36d06fe3345a9f8bbb78c5fbc5a9c8d3fc"} Oct 14 10:04:19 crc kubenswrapper[4698]: I1014 10:04:19.242308 4698 scope.go:117] "RemoveContainer" containerID="51ac4ad459c5d328d47d9481b390e9fd0f1c2e26a925009a43e02f38e8f05ca5" Oct 14 10:04:19 crc kubenswrapper[4698]: I1014 10:04:19.242311 4698 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-nthfk" Oct 14 10:04:19 crc kubenswrapper[4698]: I1014 10:04:19.260638 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-nthfk"] Oct 14 10:04:19 crc kubenswrapper[4698]: I1014 10:04:19.267977 4698 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-nthfk"] Oct 14 10:04:19 crc kubenswrapper[4698]: I1014 10:04:19.273222 4698 scope.go:117] "RemoveContainer" containerID="51ac4ad459c5d328d47d9481b390e9fd0f1c2e26a925009a43e02f38e8f05ca5" Oct 14 10:04:19 crc kubenswrapper[4698]: E1014 10:04:19.273510 4698 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"51ac4ad459c5d328d47d9481b390e9fd0f1c2e26a925009a43e02f38e8f05ca5\": container with ID starting with 51ac4ad459c5d328d47d9481b390e9fd0f1c2e26a925009a43e02f38e8f05ca5 not found: ID does not exist" containerID="51ac4ad459c5d328d47d9481b390e9fd0f1c2e26a925009a43e02f38e8f05ca5" Oct 14 10:04:19 crc kubenswrapper[4698]: I1014 10:04:19.273537 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"51ac4ad459c5d328d47d9481b390e9fd0f1c2e26a925009a43e02f38e8f05ca5"} err="failed to get container status \"51ac4ad459c5d328d47d9481b390e9fd0f1c2e26a925009a43e02f38e8f05ca5\": rpc error: code = NotFound desc = could not find container \"51ac4ad459c5d328d47d9481b390e9fd0f1c2e26a925009a43e02f38e8f05ca5\": container with ID starting with 51ac4ad459c5d328d47d9481b390e9fd0f1c2e26a925009a43e02f38e8f05ca5 not found: ID does not exist" Oct 14 10:04:21 crc kubenswrapper[4698]: I1014 10:04:21.027832 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c6555f7d-6e37-41d2-8f98-b02aba5270ab" path="/var/lib/kubelet/pods/c6555f7d-6e37-41d2-8f98-b02aba5270ab/volumes" Oct 14 10:06:23 crc kubenswrapper[4698]: I1014 
10:06:23.908177 4698 patch_prober.go:28] interesting pod/machine-config-daemon-lp4sk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 14 10:06:23 crc kubenswrapper[4698]: I1014 10:06:23.908844 4698 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-lp4sk" podUID="c359a8fc-1e2f-49af-8da2-719d52bd969a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 14 10:06:25 crc kubenswrapper[4698]: I1014 10:06:25.726817 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-5b446d88c5-6hh8m"] Oct 14 10:06:25 crc kubenswrapper[4698]: E1014 10:06:25.727364 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c6555f7d-6e37-41d2-8f98-b02aba5270ab" containerName="registry" Oct 14 10:06:25 crc kubenswrapper[4698]: I1014 10:06:25.727379 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="c6555f7d-6e37-41d2-8f98-b02aba5270ab" containerName="registry" Oct 14 10:06:25 crc kubenswrapper[4698]: I1014 10:06:25.727488 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="c6555f7d-6e37-41d2-8f98-b02aba5270ab" containerName="registry" Oct 14 10:06:25 crc kubenswrapper[4698]: I1014 10:06:25.727920 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-5b446d88c5-6hh8m" Oct 14 10:06:25 crc kubenswrapper[4698]: I1014 10:06:25.731554 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-cainjector-7f985d654d-f44qv"] Oct 14 10:06:25 crc kubenswrapper[4698]: I1014 10:06:25.732246 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-cainjector-7f985d654d-f44qv" Oct 14 10:06:25 crc kubenswrapper[4698]: I1014 10:06:25.743579 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"openshift-service-ca.crt" Oct 14 10:06:25 crc kubenswrapper[4698]: I1014 10:06:25.743587 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"kube-root-ca.crt" Oct 14 10:06:25 crc kubenswrapper[4698]: I1014 10:06:25.744060 4698 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-dockercfg-krg2d" Oct 14 10:06:25 crc kubenswrapper[4698]: I1014 10:06:25.748826 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-5b446d88c5-6hh8m"] Oct 14 10:06:25 crc kubenswrapper[4698]: I1014 10:06:25.756577 4698 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-cainjector-dockercfg-5c6vd" Oct 14 10:06:25 crc kubenswrapper[4698]: I1014 10:06:25.760477 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-7f985d654d-f44qv"] Oct 14 10:06:25 crc kubenswrapper[4698]: I1014 10:06:25.763427 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-webhook-5655c58dd6-92ws8"] Oct 14 10:06:25 crc kubenswrapper[4698]: I1014 10:06:25.764218 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-webhook-5655c58dd6-92ws8" Oct 14 10:06:25 crc kubenswrapper[4698]: I1014 10:06:25.769567 4698 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-webhook-dockercfg-rlh6x" Oct 14 10:06:25 crc kubenswrapper[4698]: I1014 10:06:25.791182 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-5655c58dd6-92ws8"] Oct 14 10:06:25 crc kubenswrapper[4698]: I1014 10:06:25.879245 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gv7ws\" (UniqueName: \"kubernetes.io/projected/4e2060ed-feb8-4937-a34d-58686e380b4b-kube-api-access-gv7ws\") pod \"cert-manager-5b446d88c5-6hh8m\" (UID: \"4e2060ed-feb8-4937-a34d-58686e380b4b\") " pod="cert-manager/cert-manager-5b446d88c5-6hh8m" Oct 14 10:06:25 crc kubenswrapper[4698]: I1014 10:06:25.879327 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zm8sh\" (UniqueName: \"kubernetes.io/projected/1e6c98fe-c2ad-4723-ab60-1af5e7e3e58c-kube-api-access-zm8sh\") pod \"cert-manager-webhook-5655c58dd6-92ws8\" (UID: \"1e6c98fe-c2ad-4723-ab60-1af5e7e3e58c\") " pod="cert-manager/cert-manager-webhook-5655c58dd6-92ws8" Oct 14 10:06:25 crc kubenswrapper[4698]: I1014 10:06:25.879359 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-khk5c\" (UniqueName: \"kubernetes.io/projected/c425683e-dad1-4ebb-8992-8a979383addb-kube-api-access-khk5c\") pod \"cert-manager-cainjector-7f985d654d-f44qv\" (UID: \"c425683e-dad1-4ebb-8992-8a979383addb\") " pod="cert-manager/cert-manager-cainjector-7f985d654d-f44qv" Oct 14 10:06:25 crc kubenswrapper[4698]: I1014 10:06:25.980260 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-khk5c\" (UniqueName: 
\"kubernetes.io/projected/c425683e-dad1-4ebb-8992-8a979383addb-kube-api-access-khk5c\") pod \"cert-manager-cainjector-7f985d654d-f44qv\" (UID: \"c425683e-dad1-4ebb-8992-8a979383addb\") " pod="cert-manager/cert-manager-cainjector-7f985d654d-f44qv" Oct 14 10:06:25 crc kubenswrapper[4698]: I1014 10:06:25.980307 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gv7ws\" (UniqueName: \"kubernetes.io/projected/4e2060ed-feb8-4937-a34d-58686e380b4b-kube-api-access-gv7ws\") pod \"cert-manager-5b446d88c5-6hh8m\" (UID: \"4e2060ed-feb8-4937-a34d-58686e380b4b\") " pod="cert-manager/cert-manager-5b446d88c5-6hh8m" Oct 14 10:06:25 crc kubenswrapper[4698]: I1014 10:06:25.980370 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zm8sh\" (UniqueName: \"kubernetes.io/projected/1e6c98fe-c2ad-4723-ab60-1af5e7e3e58c-kube-api-access-zm8sh\") pod \"cert-manager-webhook-5655c58dd6-92ws8\" (UID: \"1e6c98fe-c2ad-4723-ab60-1af5e7e3e58c\") " pod="cert-manager/cert-manager-webhook-5655c58dd6-92ws8" Oct 14 10:06:25 crc kubenswrapper[4698]: I1014 10:06:25.999196 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gv7ws\" (UniqueName: \"kubernetes.io/projected/4e2060ed-feb8-4937-a34d-58686e380b4b-kube-api-access-gv7ws\") pod \"cert-manager-5b446d88c5-6hh8m\" (UID: \"4e2060ed-feb8-4937-a34d-58686e380b4b\") " pod="cert-manager/cert-manager-5b446d88c5-6hh8m" Oct 14 10:06:26 crc kubenswrapper[4698]: I1014 10:06:26.000180 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zm8sh\" (UniqueName: \"kubernetes.io/projected/1e6c98fe-c2ad-4723-ab60-1af5e7e3e58c-kube-api-access-zm8sh\") pod \"cert-manager-webhook-5655c58dd6-92ws8\" (UID: \"1e6c98fe-c2ad-4723-ab60-1af5e7e3e58c\") " pod="cert-manager/cert-manager-webhook-5655c58dd6-92ws8" Oct 14 10:06:26 crc kubenswrapper[4698]: I1014 10:06:26.003517 4698 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-khk5c\" (UniqueName: \"kubernetes.io/projected/c425683e-dad1-4ebb-8992-8a979383addb-kube-api-access-khk5c\") pod \"cert-manager-cainjector-7f985d654d-f44qv\" (UID: \"c425683e-dad1-4ebb-8992-8a979383addb\") " pod="cert-manager/cert-manager-cainjector-7f985d654d-f44qv" Oct 14 10:06:26 crc kubenswrapper[4698]: I1014 10:06:26.047713 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-5b446d88c5-6hh8m" Oct 14 10:06:26 crc kubenswrapper[4698]: I1014 10:06:26.057456 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-7f985d654d-f44qv" Oct 14 10:06:26 crc kubenswrapper[4698]: I1014 10:06:26.077051 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-5655c58dd6-92ws8" Oct 14 10:06:26 crc kubenswrapper[4698]: I1014 10:06:26.336170 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-5655c58dd6-92ws8"] Oct 14 10:06:26 crc kubenswrapper[4698]: W1014 10:06:26.345532 4698 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1e6c98fe_c2ad_4723_ab60_1af5e7e3e58c.slice/crio-a2f5612860880a91705b0998a2a07fabacd2bc96c2aaac0fb68911d4cb93410a WatchSource:0}: Error finding container a2f5612860880a91705b0998a2a07fabacd2bc96c2aaac0fb68911d4cb93410a: Status 404 returned error can't find the container with id a2f5612860880a91705b0998a2a07fabacd2bc96c2aaac0fb68911d4cb93410a Oct 14 10:06:26 crc kubenswrapper[4698]: I1014 10:06:26.348730 4698 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Oct 14 10:06:26 crc kubenswrapper[4698]: I1014 10:06:26.488417 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-7f985d654d-f44qv"] Oct 14 10:06:26 crc 
kubenswrapper[4698]: I1014 10:06:26.491656 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-5b446d88c5-6hh8m"] Oct 14 10:06:26 crc kubenswrapper[4698]: W1014 10:06:26.492296 4698 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4e2060ed_feb8_4937_a34d_58686e380b4b.slice/crio-54e1a53e4cf3c0d3033ec3f2ae4cb4b70ddd3e037f322dc01795fa244c9ce7a7 WatchSource:0}: Error finding container 54e1a53e4cf3c0d3033ec3f2ae4cb4b70ddd3e037f322dc01795fa244c9ce7a7: Status 404 returned error can't find the container with id 54e1a53e4cf3c0d3033ec3f2ae4cb4b70ddd3e037f322dc01795fa244c9ce7a7 Oct 14 10:06:27 crc kubenswrapper[4698]: I1014 10:06:27.082716 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-5655c58dd6-92ws8" event={"ID":"1e6c98fe-c2ad-4723-ab60-1af5e7e3e58c","Type":"ContainerStarted","Data":"a2f5612860880a91705b0998a2a07fabacd2bc96c2aaac0fb68911d4cb93410a"} Oct 14 10:06:27 crc kubenswrapper[4698]: I1014 10:06:27.085236 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-5b446d88c5-6hh8m" event={"ID":"4e2060ed-feb8-4937-a34d-58686e380b4b","Type":"ContainerStarted","Data":"54e1a53e4cf3c0d3033ec3f2ae4cb4b70ddd3e037f322dc01795fa244c9ce7a7"} Oct 14 10:06:27 crc kubenswrapper[4698]: I1014 10:06:27.086269 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-7f985d654d-f44qv" event={"ID":"c425683e-dad1-4ebb-8992-8a979383addb","Type":"ContainerStarted","Data":"7d7ba47b84c7051e63a65a3315dd3922f7c0d9b1c73ef3d66015799f9a33889c"} Oct 14 10:06:30 crc kubenswrapper[4698]: I1014 10:06:30.108334 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-5655c58dd6-92ws8" event={"ID":"1e6c98fe-c2ad-4723-ab60-1af5e7e3e58c","Type":"ContainerStarted","Data":"83501a552486778eeddd1992a90abc185685716b92db331a8486ae4ae39676d8"} Oct 14 
10:06:30 crc kubenswrapper[4698]: I1014 10:06:30.110851 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="cert-manager/cert-manager-webhook-5655c58dd6-92ws8" Oct 14 10:06:30 crc kubenswrapper[4698]: I1014 10:06:30.112208 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-5b446d88c5-6hh8m" event={"ID":"4e2060ed-feb8-4937-a34d-58686e380b4b","Type":"ContainerStarted","Data":"8f7be84f27405ee6e657ff3876e7225b6b62dd8a5a15669248cc6214b7d0516f"} Oct 14 10:06:30 crc kubenswrapper[4698]: I1014 10:06:30.114711 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-7f985d654d-f44qv" event={"ID":"c425683e-dad1-4ebb-8992-8a979383addb","Type":"ContainerStarted","Data":"d5484dca579dfa614e9c92856d601690bca6e388528c2002793e83439e0122cc"} Oct 14 10:06:30 crc kubenswrapper[4698]: I1014 10:06:30.127823 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-webhook-5655c58dd6-92ws8" podStartSLOduration=2.482765225 podStartE2EDuration="5.127805578s" podCreationTimestamp="2025-10-14 10:06:25 +0000 UTC" firstStartedPulling="2025-10-14 10:06:26.348537251 +0000 UTC m=+568.045836667" lastFinishedPulling="2025-10-14 10:06:28.993577594 +0000 UTC m=+570.690877020" observedRunningTime="2025-10-14 10:06:30.125661194 +0000 UTC m=+571.822960620" watchObservedRunningTime="2025-10-14 10:06:30.127805578 +0000 UTC m=+571.825104994" Oct 14 10:06:30 crc kubenswrapper[4698]: I1014 10:06:30.144547 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-cainjector-7f985d654d-f44qv" podStartSLOduration=2.168821493 podStartE2EDuration="5.144532023s" podCreationTimestamp="2025-10-14 10:06:25 +0000 UTC" firstStartedPulling="2025-10-14 10:06:26.501479619 +0000 UTC m=+568.198779035" lastFinishedPulling="2025-10-14 10:06:29.477190149 +0000 UTC m=+571.174489565" observedRunningTime="2025-10-14 10:06:30.141611086 +0000 UTC 
m=+571.838910512" watchObservedRunningTime="2025-10-14 10:06:30.144532023 +0000 UTC m=+571.841831439" Oct 14 10:06:30 crc kubenswrapper[4698]: I1014 10:06:30.160794 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-5b446d88c5-6hh8m" podStartSLOduration=2.227670024 podStartE2EDuration="5.160755863s" podCreationTimestamp="2025-10-14 10:06:25 +0000 UTC" firstStartedPulling="2025-10-14 10:06:26.494816491 +0000 UTC m=+568.192115907" lastFinishedPulling="2025-10-14 10:06:29.42790233 +0000 UTC m=+571.125201746" observedRunningTime="2025-10-14 10:06:30.157712273 +0000 UTC m=+571.855011699" watchObservedRunningTime="2025-10-14 10:06:30.160755863 +0000 UTC m=+571.858055279" Oct 14 10:06:36 crc kubenswrapper[4698]: I1014 10:06:36.080294 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="cert-manager/cert-manager-webhook-5655c58dd6-92ws8" Oct 14 10:06:36 crc kubenswrapper[4698]: I1014 10:06:36.491696 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-hspfz"] Oct 14 10:06:36 crc kubenswrapper[4698]: I1014 10:06:36.492608 4698 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-hspfz" podUID="d02f5359-81fc-4261-b995-e58c78bcec0e" containerName="ovn-controller" containerID="cri-o://11274a26e2292cdf2b793e6290657ed3cc737d1454b39f33b8f0272f67f70f3e" gracePeriod=30 Oct 14 10:06:36 crc kubenswrapper[4698]: I1014 10:06:36.492681 4698 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-hspfz" podUID="d02f5359-81fc-4261-b995-e58c78bcec0e" containerName="northd" containerID="cri-o://a6e9733c3f4d25662c9da6201d0feee797de24acebb7d6e7c67d72709aae95db" gracePeriod=30 Oct 14 10:06:36 crc kubenswrapper[4698]: I1014 10:06:36.492752 4698 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-ovn-kubernetes/ovnkube-node-hspfz" podUID="d02f5359-81fc-4261-b995-e58c78bcec0e" containerName="ovn-acl-logging" containerID="cri-o://90b4b75613b321be60e549ab6ab2eaaab57fd6b49aa01f5254a0e83ac2e0167e" gracePeriod=30 Oct 14 10:06:36 crc kubenswrapper[4698]: I1014 10:06:36.492793 4698 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-hspfz" podUID="d02f5359-81fc-4261-b995-e58c78bcec0e" containerName="kube-rbac-proxy-ovn-metrics" containerID="cri-o://876998f4dcb79a6bc0faa35871730f5d2f9a07f94baf6da29ed9a9e794705ee5" gracePeriod=30 Oct 14 10:06:36 crc kubenswrapper[4698]: I1014 10:06:36.492870 4698 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-hspfz" podUID="d02f5359-81fc-4261-b995-e58c78bcec0e" containerName="nbdb" containerID="cri-o://7e5759411e3d5ea6fa7515b4cd1255735c870b8c035fe586a38d2f436b752118" gracePeriod=30 Oct 14 10:06:36 crc kubenswrapper[4698]: I1014 10:06:36.492825 4698 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-hspfz" podUID="d02f5359-81fc-4261-b995-e58c78bcec0e" containerName="sbdb" containerID="cri-o://626a1ca2a8aa5d6c1b4078a0bc7b0aa540656abcedd8be206fbb93cdc96c1d95" gracePeriod=30 Oct 14 10:06:36 crc kubenswrapper[4698]: I1014 10:06:36.492759 4698 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-hspfz" podUID="d02f5359-81fc-4261-b995-e58c78bcec0e" containerName="kube-rbac-proxy-node" containerID="cri-o://baefa979aee6f8fbe41df9542964f3c7b2a4331ec5111d2f622ec7ca41a0d96e" gracePeriod=30 Oct 14 10:06:36 crc kubenswrapper[4698]: I1014 10:06:36.541857 4698 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-hspfz" podUID="d02f5359-81fc-4261-b995-e58c78bcec0e" containerName="ovnkube-controller" 
containerID="cri-o://69b90c9c0473f0a1add0c188fd07ba784e387a9fa7de413d857cde1946f25dc6" gracePeriod=30 Oct 14 10:06:36 crc kubenswrapper[4698]: I1014 10:06:36.813099 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-hspfz_d02f5359-81fc-4261-b995-e58c78bcec0e/ovnkube-controller/3.log" Oct 14 10:06:36 crc kubenswrapper[4698]: I1014 10:06:36.816265 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-hspfz_d02f5359-81fc-4261-b995-e58c78bcec0e/ovn-acl-logging/0.log" Oct 14 10:06:36 crc kubenswrapper[4698]: I1014 10:06:36.816827 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-hspfz_d02f5359-81fc-4261-b995-e58c78bcec0e/ovn-controller/0.log" Oct 14 10:06:36 crc kubenswrapper[4698]: I1014 10:06:36.817482 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-hspfz" Oct 14 10:06:36 crc kubenswrapper[4698]: I1014 10:06:36.860459 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-bdmsh"] Oct 14 10:06:36 crc kubenswrapper[4698]: E1014 10:06:36.860654 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d02f5359-81fc-4261-b995-e58c78bcec0e" containerName="kube-rbac-proxy-ovn-metrics" Oct 14 10:06:36 crc kubenswrapper[4698]: I1014 10:06:36.860664 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="d02f5359-81fc-4261-b995-e58c78bcec0e" containerName="kube-rbac-proxy-ovn-metrics" Oct 14 10:06:36 crc kubenswrapper[4698]: E1014 10:06:36.860674 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d02f5359-81fc-4261-b995-e58c78bcec0e" containerName="sbdb" Oct 14 10:06:36 crc kubenswrapper[4698]: I1014 10:06:36.860680 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="d02f5359-81fc-4261-b995-e58c78bcec0e" containerName="sbdb" Oct 14 10:06:36 crc kubenswrapper[4698]: 
E1014 10:06:36.860689 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d02f5359-81fc-4261-b995-e58c78bcec0e" containerName="ovnkube-controller" Oct 14 10:06:36 crc kubenswrapper[4698]: I1014 10:06:36.860695 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="d02f5359-81fc-4261-b995-e58c78bcec0e" containerName="ovnkube-controller" Oct 14 10:06:36 crc kubenswrapper[4698]: E1014 10:06:36.860704 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d02f5359-81fc-4261-b995-e58c78bcec0e" containerName="kube-rbac-proxy-node" Oct 14 10:06:36 crc kubenswrapper[4698]: I1014 10:06:36.860709 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="d02f5359-81fc-4261-b995-e58c78bcec0e" containerName="kube-rbac-proxy-node" Oct 14 10:06:36 crc kubenswrapper[4698]: E1014 10:06:36.860717 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d02f5359-81fc-4261-b995-e58c78bcec0e" containerName="ovn-controller" Oct 14 10:06:36 crc kubenswrapper[4698]: I1014 10:06:36.860722 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="d02f5359-81fc-4261-b995-e58c78bcec0e" containerName="ovn-controller" Oct 14 10:06:36 crc kubenswrapper[4698]: E1014 10:06:36.860728 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d02f5359-81fc-4261-b995-e58c78bcec0e" containerName="ovnkube-controller" Oct 14 10:06:36 crc kubenswrapper[4698]: I1014 10:06:36.860736 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="d02f5359-81fc-4261-b995-e58c78bcec0e" containerName="ovnkube-controller" Oct 14 10:06:36 crc kubenswrapper[4698]: E1014 10:06:36.860743 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d02f5359-81fc-4261-b995-e58c78bcec0e" containerName="northd" Oct 14 10:06:36 crc kubenswrapper[4698]: I1014 10:06:36.860750 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="d02f5359-81fc-4261-b995-e58c78bcec0e" containerName="northd" Oct 14 10:06:36 crc kubenswrapper[4698]: E1014 
10:06:36.860758 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d02f5359-81fc-4261-b995-e58c78bcec0e" containerName="nbdb" Oct 14 10:06:36 crc kubenswrapper[4698]: I1014 10:06:36.860781 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="d02f5359-81fc-4261-b995-e58c78bcec0e" containerName="nbdb" Oct 14 10:06:36 crc kubenswrapper[4698]: E1014 10:06:36.860788 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d02f5359-81fc-4261-b995-e58c78bcec0e" containerName="ovnkube-controller" Oct 14 10:06:36 crc kubenswrapper[4698]: I1014 10:06:36.860794 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="d02f5359-81fc-4261-b995-e58c78bcec0e" containerName="ovnkube-controller" Oct 14 10:06:36 crc kubenswrapper[4698]: E1014 10:06:36.860802 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d02f5359-81fc-4261-b995-e58c78bcec0e" containerName="ovnkube-controller" Oct 14 10:06:36 crc kubenswrapper[4698]: I1014 10:06:36.860808 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="d02f5359-81fc-4261-b995-e58c78bcec0e" containerName="ovnkube-controller" Oct 14 10:06:36 crc kubenswrapper[4698]: E1014 10:06:36.860818 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d02f5359-81fc-4261-b995-e58c78bcec0e" containerName="ovn-acl-logging" Oct 14 10:06:36 crc kubenswrapper[4698]: I1014 10:06:36.860823 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="d02f5359-81fc-4261-b995-e58c78bcec0e" containerName="ovn-acl-logging" Oct 14 10:06:36 crc kubenswrapper[4698]: E1014 10:06:36.860829 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d02f5359-81fc-4261-b995-e58c78bcec0e" containerName="kubecfg-setup" Oct 14 10:06:36 crc kubenswrapper[4698]: I1014 10:06:36.860834 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="d02f5359-81fc-4261-b995-e58c78bcec0e" containerName="kubecfg-setup" Oct 14 10:06:36 crc kubenswrapper[4698]: I1014 10:06:36.860922 4698 
memory_manager.go:354] "RemoveStaleState removing state" podUID="d02f5359-81fc-4261-b995-e58c78bcec0e" containerName="ovnkube-controller" Oct 14 10:06:36 crc kubenswrapper[4698]: I1014 10:06:36.860931 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="d02f5359-81fc-4261-b995-e58c78bcec0e" containerName="ovnkube-controller" Oct 14 10:06:36 crc kubenswrapper[4698]: I1014 10:06:36.860939 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="d02f5359-81fc-4261-b995-e58c78bcec0e" containerName="kube-rbac-proxy-ovn-metrics" Oct 14 10:06:36 crc kubenswrapper[4698]: I1014 10:06:36.860946 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="d02f5359-81fc-4261-b995-e58c78bcec0e" containerName="ovn-acl-logging" Oct 14 10:06:36 crc kubenswrapper[4698]: I1014 10:06:36.860955 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="d02f5359-81fc-4261-b995-e58c78bcec0e" containerName="ovnkube-controller" Oct 14 10:06:36 crc kubenswrapper[4698]: I1014 10:06:36.860963 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="d02f5359-81fc-4261-b995-e58c78bcec0e" containerName="northd" Oct 14 10:06:36 crc kubenswrapper[4698]: I1014 10:06:36.860972 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="d02f5359-81fc-4261-b995-e58c78bcec0e" containerName="sbdb" Oct 14 10:06:36 crc kubenswrapper[4698]: I1014 10:06:36.860979 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="d02f5359-81fc-4261-b995-e58c78bcec0e" containerName="kube-rbac-proxy-node" Oct 14 10:06:36 crc kubenswrapper[4698]: I1014 10:06:36.860988 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="d02f5359-81fc-4261-b995-e58c78bcec0e" containerName="ovn-controller" Oct 14 10:06:36 crc kubenswrapper[4698]: I1014 10:06:36.860995 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="d02f5359-81fc-4261-b995-e58c78bcec0e" containerName="nbdb" Oct 14 10:06:36 crc kubenswrapper[4698]: 
E1014 10:06:36.861083 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d02f5359-81fc-4261-b995-e58c78bcec0e" containerName="ovnkube-controller" Oct 14 10:06:36 crc kubenswrapper[4698]: I1014 10:06:36.861089 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="d02f5359-81fc-4261-b995-e58c78bcec0e" containerName="ovnkube-controller" Oct 14 10:06:36 crc kubenswrapper[4698]: I1014 10:06:36.861164 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="d02f5359-81fc-4261-b995-e58c78bcec0e" containerName="ovnkube-controller" Oct 14 10:06:36 crc kubenswrapper[4698]: I1014 10:06:36.861174 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="d02f5359-81fc-4261-b995-e58c78bcec0e" containerName="ovnkube-controller" Oct 14 10:06:36 crc kubenswrapper[4698]: I1014 10:06:36.862835 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-bdmsh" Oct 14 10:06:36 crc kubenswrapper[4698]: I1014 10:06:36.925952 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/d02f5359-81fc-4261-b995-e58c78bcec0e-host-cni-bin\") pod \"d02f5359-81fc-4261-b995-e58c78bcec0e\" (UID: \"d02f5359-81fc-4261-b995-e58c78bcec0e\") " Oct 14 10:06:36 crc kubenswrapper[4698]: I1014 10:06:36.926023 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/d02f5359-81fc-4261-b995-e58c78bcec0e-env-overrides\") pod \"d02f5359-81fc-4261-b995-e58c78bcec0e\" (UID: \"d02f5359-81fc-4261-b995-e58c78bcec0e\") " Oct 14 10:06:36 crc kubenswrapper[4698]: I1014 10:06:36.926042 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d02f5359-81fc-4261-b995-e58c78bcec0e-run-openvswitch\") pod \"d02f5359-81fc-4261-b995-e58c78bcec0e\" (UID: 
\"d02f5359-81fc-4261-b995-e58c78bcec0e\") " Oct 14 10:06:36 crc kubenswrapper[4698]: I1014 10:06:36.926056 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d02f5359-81fc-4261-b995-e58c78bcec0e-var-lib-openvswitch\") pod \"d02f5359-81fc-4261-b995-e58c78bcec0e\" (UID: \"d02f5359-81fc-4261-b995-e58c78bcec0e\") " Oct 14 10:06:36 crc kubenswrapper[4698]: I1014 10:06:36.926081 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/d02f5359-81fc-4261-b995-e58c78bcec0e-host-run-ovn-kubernetes\") pod \"d02f5359-81fc-4261-b995-e58c78bcec0e\" (UID: \"d02f5359-81fc-4261-b995-e58c78bcec0e\") " Oct 14 10:06:36 crc kubenswrapper[4698]: I1014 10:06:36.926107 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d02f5359-81fc-4261-b995-e58c78bcec0e-etc-openvswitch\") pod \"d02f5359-81fc-4261-b995-e58c78bcec0e\" (UID: \"d02f5359-81fc-4261-b995-e58c78bcec0e\") " Oct 14 10:06:36 crc kubenswrapper[4698]: I1014 10:06:36.926123 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/d02f5359-81fc-4261-b995-e58c78bcec0e-log-socket\") pod \"d02f5359-81fc-4261-b995-e58c78bcec0e\" (UID: \"d02f5359-81fc-4261-b995-e58c78bcec0e\") " Oct 14 10:06:36 crc kubenswrapper[4698]: I1014 10:06:36.926147 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pjlwb\" (UniqueName: \"kubernetes.io/projected/d02f5359-81fc-4261-b995-e58c78bcec0e-kube-api-access-pjlwb\") pod \"d02f5359-81fc-4261-b995-e58c78bcec0e\" (UID: \"d02f5359-81fc-4261-b995-e58c78bcec0e\") " Oct 14 10:06:36 crc kubenswrapper[4698]: I1014 10:06:36.926162 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/d02f5359-81fc-4261-b995-e58c78bcec0e-host-cni-netd\") pod \"d02f5359-81fc-4261-b995-e58c78bcec0e\" (UID: \"d02f5359-81fc-4261-b995-e58c78bcec0e\") " Oct 14 10:06:36 crc kubenswrapper[4698]: I1014 10:06:36.926177 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/d02f5359-81fc-4261-b995-e58c78bcec0e-ovnkube-config\") pod \"d02f5359-81fc-4261-b995-e58c78bcec0e\" (UID: \"d02f5359-81fc-4261-b995-e58c78bcec0e\") " Oct 14 10:06:36 crc kubenswrapper[4698]: I1014 10:06:36.926192 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/d02f5359-81fc-4261-b995-e58c78bcec0e-host-kubelet\") pod \"d02f5359-81fc-4261-b995-e58c78bcec0e\" (UID: \"d02f5359-81fc-4261-b995-e58c78bcec0e\") " Oct 14 10:06:36 crc kubenswrapper[4698]: I1014 10:06:36.926213 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/d02f5359-81fc-4261-b995-e58c78bcec0e-run-ovn\") pod \"d02f5359-81fc-4261-b995-e58c78bcec0e\" (UID: \"d02f5359-81fc-4261-b995-e58c78bcec0e\") " Oct 14 10:06:36 crc kubenswrapper[4698]: I1014 10:06:36.926233 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/d02f5359-81fc-4261-b995-e58c78bcec0e-ovn-node-metrics-cert\") pod \"d02f5359-81fc-4261-b995-e58c78bcec0e\" (UID: \"d02f5359-81fc-4261-b995-e58c78bcec0e\") " Oct 14 10:06:36 crc kubenswrapper[4698]: I1014 10:06:36.926259 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/d02f5359-81fc-4261-b995-e58c78bcec0e-node-log\") pod \"d02f5359-81fc-4261-b995-e58c78bcec0e\" (UID: \"d02f5359-81fc-4261-b995-e58c78bcec0e\") " Oct 14 10:06:36 crc kubenswrapper[4698]: 
I1014 10:06:36.926276 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/d02f5359-81fc-4261-b995-e58c78bcec0e-host-run-netns\") pod \"d02f5359-81fc-4261-b995-e58c78bcec0e\" (UID: \"d02f5359-81fc-4261-b995-e58c78bcec0e\") " Oct 14 10:06:36 crc kubenswrapper[4698]: I1014 10:06:36.926287 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/d02f5359-81fc-4261-b995-e58c78bcec0e-run-systemd\") pod \"d02f5359-81fc-4261-b995-e58c78bcec0e\" (UID: \"d02f5359-81fc-4261-b995-e58c78bcec0e\") " Oct 14 10:06:36 crc kubenswrapper[4698]: I1014 10:06:36.926308 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/d02f5359-81fc-4261-b995-e58c78bcec0e-host-var-lib-cni-networks-ovn-kubernetes\") pod \"d02f5359-81fc-4261-b995-e58c78bcec0e\" (UID: \"d02f5359-81fc-4261-b995-e58c78bcec0e\") " Oct 14 10:06:36 crc kubenswrapper[4698]: I1014 10:06:36.926329 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d02f5359-81fc-4261-b995-e58c78bcec0e-host-slash\") pod \"d02f5359-81fc-4261-b995-e58c78bcec0e\" (UID: \"d02f5359-81fc-4261-b995-e58c78bcec0e\") " Oct 14 10:06:36 crc kubenswrapper[4698]: I1014 10:06:36.926349 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/d02f5359-81fc-4261-b995-e58c78bcec0e-ovnkube-script-lib\") pod \"d02f5359-81fc-4261-b995-e58c78bcec0e\" (UID: \"d02f5359-81fc-4261-b995-e58c78bcec0e\") " Oct 14 10:06:36 crc kubenswrapper[4698]: I1014 10:06:36.926386 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"systemd-units\" (UniqueName: 
\"kubernetes.io/host-path/d02f5359-81fc-4261-b995-e58c78bcec0e-systemd-units\") pod \"d02f5359-81fc-4261-b995-e58c78bcec0e\" (UID: \"d02f5359-81fc-4261-b995-e58c78bcec0e\") " Oct 14 10:06:36 crc kubenswrapper[4698]: I1014 10:06:36.926599 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d02f5359-81fc-4261-b995-e58c78bcec0e-systemd-units" (OuterVolumeSpecName: "systemd-units") pod "d02f5359-81fc-4261-b995-e58c78bcec0e" (UID: "d02f5359-81fc-4261-b995-e58c78bcec0e"). InnerVolumeSpecName "systemd-units". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 14 10:06:36 crc kubenswrapper[4698]: I1014 10:06:36.926629 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d02f5359-81fc-4261-b995-e58c78bcec0e-host-cni-bin" (OuterVolumeSpecName: "host-cni-bin") pod "d02f5359-81fc-4261-b995-e58c78bcec0e" (UID: "d02f5359-81fc-4261-b995-e58c78bcec0e"). InnerVolumeSpecName "host-cni-bin". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 14 10:06:36 crc kubenswrapper[4698]: I1014 10:06:36.926995 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d02f5359-81fc-4261-b995-e58c78bcec0e-host-kubelet" (OuterVolumeSpecName: "host-kubelet") pod "d02f5359-81fc-4261-b995-e58c78bcec0e" (UID: "d02f5359-81fc-4261-b995-e58c78bcec0e"). InnerVolumeSpecName "host-kubelet". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 14 10:06:36 crc kubenswrapper[4698]: I1014 10:06:36.927056 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d02f5359-81fc-4261-b995-e58c78bcec0e-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "d02f5359-81fc-4261-b995-e58c78bcec0e" (UID: "d02f5359-81fc-4261-b995-e58c78bcec0e"). InnerVolumeSpecName "env-overrides". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 10:06:36 crc kubenswrapper[4698]: I1014 10:06:36.927060 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d02f5359-81fc-4261-b995-e58c78bcec0e-run-openvswitch" (OuterVolumeSpecName: "run-openvswitch") pod "d02f5359-81fc-4261-b995-e58c78bcec0e" (UID: "d02f5359-81fc-4261-b995-e58c78bcec0e"). InnerVolumeSpecName "run-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 14 10:06:36 crc kubenswrapper[4698]: I1014 10:06:36.927057 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d02f5359-81fc-4261-b995-e58c78bcec0e-host-run-netns" (OuterVolumeSpecName: "host-run-netns") pod "d02f5359-81fc-4261-b995-e58c78bcec0e" (UID: "d02f5359-81fc-4261-b995-e58c78bcec0e"). InnerVolumeSpecName "host-run-netns". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 14 10:06:36 crc kubenswrapper[4698]: I1014 10:06:36.927075 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d02f5359-81fc-4261-b995-e58c78bcec0e-node-log" (OuterVolumeSpecName: "node-log") pod "d02f5359-81fc-4261-b995-e58c78bcec0e" (UID: "d02f5359-81fc-4261-b995-e58c78bcec0e"). InnerVolumeSpecName "node-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 14 10:06:36 crc kubenswrapper[4698]: I1014 10:06:36.927083 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d02f5359-81fc-4261-b995-e58c78bcec0e-host-slash" (OuterVolumeSpecName: "host-slash") pod "d02f5359-81fc-4261-b995-e58c78bcec0e" (UID: "d02f5359-81fc-4261-b995-e58c78bcec0e"). InnerVolumeSpecName "host-slash". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 14 10:06:36 crc kubenswrapper[4698]: I1014 10:06:36.927090 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d02f5359-81fc-4261-b995-e58c78bcec0e-var-lib-openvswitch" (OuterVolumeSpecName: "var-lib-openvswitch") pod "d02f5359-81fc-4261-b995-e58c78bcec0e" (UID: "d02f5359-81fc-4261-b995-e58c78bcec0e"). InnerVolumeSpecName "var-lib-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 14 10:06:36 crc kubenswrapper[4698]: I1014 10:06:36.927103 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d02f5359-81fc-4261-b995-e58c78bcec0e-log-socket" (OuterVolumeSpecName: "log-socket") pod "d02f5359-81fc-4261-b995-e58c78bcec0e" (UID: "d02f5359-81fc-4261-b995-e58c78bcec0e"). InnerVolumeSpecName "log-socket". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 14 10:06:36 crc kubenswrapper[4698]: I1014 10:06:36.927095 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d02f5359-81fc-4261-b995-e58c78bcec0e-host-var-lib-cni-networks-ovn-kubernetes" (OuterVolumeSpecName: "host-var-lib-cni-networks-ovn-kubernetes") pod "d02f5359-81fc-4261-b995-e58c78bcec0e" (UID: "d02f5359-81fc-4261-b995-e58c78bcec0e"). InnerVolumeSpecName "host-var-lib-cni-networks-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 14 10:06:36 crc kubenswrapper[4698]: I1014 10:06:36.927117 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d02f5359-81fc-4261-b995-e58c78bcec0e-host-run-ovn-kubernetes" (OuterVolumeSpecName: "host-run-ovn-kubernetes") pod "d02f5359-81fc-4261-b995-e58c78bcec0e" (UID: "d02f5359-81fc-4261-b995-e58c78bcec0e"). InnerVolumeSpecName "host-run-ovn-kubernetes". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 14 10:06:36 crc kubenswrapper[4698]: I1014 10:06:36.927126 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d02f5359-81fc-4261-b995-e58c78bcec0e-host-cni-netd" (OuterVolumeSpecName: "host-cni-netd") pod "d02f5359-81fc-4261-b995-e58c78bcec0e" (UID: "d02f5359-81fc-4261-b995-e58c78bcec0e"). InnerVolumeSpecName "host-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 14 10:06:36 crc kubenswrapper[4698]: I1014 10:06:36.927126 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d02f5359-81fc-4261-b995-e58c78bcec0e-etc-openvswitch" (OuterVolumeSpecName: "etc-openvswitch") pod "d02f5359-81fc-4261-b995-e58c78bcec0e" (UID: "d02f5359-81fc-4261-b995-e58c78bcec0e"). InnerVolumeSpecName "etc-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 14 10:06:36 crc kubenswrapper[4698]: I1014 10:06:36.927135 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d02f5359-81fc-4261-b995-e58c78bcec0e-run-ovn" (OuterVolumeSpecName: "run-ovn") pod "d02f5359-81fc-4261-b995-e58c78bcec0e" (UID: "d02f5359-81fc-4261-b995-e58c78bcec0e"). InnerVolumeSpecName "run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 14 10:06:36 crc kubenswrapper[4698]: I1014 10:06:36.927476 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d02f5359-81fc-4261-b995-e58c78bcec0e-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "d02f5359-81fc-4261-b995-e58c78bcec0e" (UID: "d02f5359-81fc-4261-b995-e58c78bcec0e"). InnerVolumeSpecName "ovnkube-script-lib". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 10:06:36 crc kubenswrapper[4698]: I1014 10:06:36.927488 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d02f5359-81fc-4261-b995-e58c78bcec0e-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "d02f5359-81fc-4261-b995-e58c78bcec0e" (UID: "d02f5359-81fc-4261-b995-e58c78bcec0e"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 10:06:36 crc kubenswrapper[4698]: I1014 10:06:36.941167 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d02f5359-81fc-4261-b995-e58c78bcec0e-kube-api-access-pjlwb" (OuterVolumeSpecName: "kube-api-access-pjlwb") pod "d02f5359-81fc-4261-b995-e58c78bcec0e" (UID: "d02f5359-81fc-4261-b995-e58c78bcec0e"). InnerVolumeSpecName "kube-api-access-pjlwb". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 10:06:36 crc kubenswrapper[4698]: I1014 10:06:36.948405 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d02f5359-81fc-4261-b995-e58c78bcec0e-run-systemd" (OuterVolumeSpecName: "run-systemd") pod "d02f5359-81fc-4261-b995-e58c78bcec0e" (UID: "d02f5359-81fc-4261-b995-e58c78bcec0e"). InnerVolumeSpecName "run-systemd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 14 10:06:36 crc kubenswrapper[4698]: I1014 10:06:36.954596 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d02f5359-81fc-4261-b995-e58c78bcec0e-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "d02f5359-81fc-4261-b995-e58c78bcec0e" (UID: "d02f5359-81fc-4261-b995-e58c78bcec0e"). InnerVolumeSpecName "ovn-node-metrics-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 10:06:37 crc kubenswrapper[4698]: I1014 10:06:37.027548 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/c28c0915-2a70-4992-8b12-e9e3d5ba8ab6-env-overrides\") pod \"ovnkube-node-bdmsh\" (UID: \"c28c0915-2a70-4992-8b12-e9e3d5ba8ab6\") " pod="openshift-ovn-kubernetes/ovnkube-node-bdmsh" Oct 14 10:06:37 crc kubenswrapper[4698]: I1014 10:06:37.027584 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/c28c0915-2a70-4992-8b12-e9e3d5ba8ab6-run-systemd\") pod \"ovnkube-node-bdmsh\" (UID: \"c28c0915-2a70-4992-8b12-e9e3d5ba8ab6\") " pod="openshift-ovn-kubernetes/ovnkube-node-bdmsh" Oct 14 10:06:37 crc kubenswrapper[4698]: I1014 10:06:37.027598 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/c28c0915-2a70-4992-8b12-e9e3d5ba8ab6-run-ovn\") pod \"ovnkube-node-bdmsh\" (UID: \"c28c0915-2a70-4992-8b12-e9e3d5ba8ab6\") " pod="openshift-ovn-kubernetes/ovnkube-node-bdmsh" Oct 14 10:06:37 crc kubenswrapper[4698]: I1014 10:06:37.027613 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c28c0915-2a70-4992-8b12-e9e3d5ba8ab6-run-openvswitch\") pod \"ovnkube-node-bdmsh\" (UID: \"c28c0915-2a70-4992-8b12-e9e3d5ba8ab6\") " pod="openshift-ovn-kubernetes/ovnkube-node-bdmsh" Oct 14 10:06:37 crc kubenswrapper[4698]: I1014 10:06:37.027628 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/c28c0915-2a70-4992-8b12-e9e3d5ba8ab6-host-cni-bin\") pod \"ovnkube-node-bdmsh\" (UID: \"c28c0915-2a70-4992-8b12-e9e3d5ba8ab6\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-bdmsh" Oct 14 10:06:37 crc kubenswrapper[4698]: I1014 10:06:37.027644 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/c28c0915-2a70-4992-8b12-e9e3d5ba8ab6-systemd-units\") pod \"ovnkube-node-bdmsh\" (UID: \"c28c0915-2a70-4992-8b12-e9e3d5ba8ab6\") " pod="openshift-ovn-kubernetes/ovnkube-node-bdmsh" Oct 14 10:06:37 crc kubenswrapper[4698]: I1014 10:06:37.027679 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/c28c0915-2a70-4992-8b12-e9e3d5ba8ab6-host-slash\") pod \"ovnkube-node-bdmsh\" (UID: \"c28c0915-2a70-4992-8b12-e9e3d5ba8ab6\") " pod="openshift-ovn-kubernetes/ovnkube-node-bdmsh" Oct 14 10:06:37 crc kubenswrapper[4698]: I1014 10:06:37.027746 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c28c0915-2a70-4992-8b12-e9e3d5ba8ab6-host-cni-netd\") pod \"ovnkube-node-bdmsh\" (UID: \"c28c0915-2a70-4992-8b12-e9e3d5ba8ab6\") " pod="openshift-ovn-kubernetes/ovnkube-node-bdmsh" Oct 14 10:06:37 crc kubenswrapper[4698]: I1014 10:06:37.027821 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c28c0915-2a70-4992-8b12-e9e3d5ba8ab6-var-lib-openvswitch\") pod \"ovnkube-node-bdmsh\" (UID: \"c28c0915-2a70-4992-8b12-e9e3d5ba8ab6\") " pod="openshift-ovn-kubernetes/ovnkube-node-bdmsh" Oct 14 10:06:37 crc kubenswrapper[4698]: I1014 10:06:37.027857 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/c28c0915-2a70-4992-8b12-e9e3d5ba8ab6-ovnkube-config\") pod \"ovnkube-node-bdmsh\" (UID: 
\"c28c0915-2a70-4992-8b12-e9e3d5ba8ab6\") " pod="openshift-ovn-kubernetes/ovnkube-node-bdmsh" Oct 14 10:06:37 crc kubenswrapper[4698]: I1014 10:06:37.027883 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/c28c0915-2a70-4992-8b12-e9e3d5ba8ab6-log-socket\") pod \"ovnkube-node-bdmsh\" (UID: \"c28c0915-2a70-4992-8b12-e9e3d5ba8ab6\") " pod="openshift-ovn-kubernetes/ovnkube-node-bdmsh" Oct 14 10:06:37 crc kubenswrapper[4698]: I1014 10:06:37.027912 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/c28c0915-2a70-4992-8b12-e9e3d5ba8ab6-ovn-node-metrics-cert\") pod \"ovnkube-node-bdmsh\" (UID: \"c28c0915-2a70-4992-8b12-e9e3d5ba8ab6\") " pod="openshift-ovn-kubernetes/ovnkube-node-bdmsh" Oct 14 10:06:37 crc kubenswrapper[4698]: I1014 10:06:37.027957 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6t5gw\" (UniqueName: \"kubernetes.io/projected/c28c0915-2a70-4992-8b12-e9e3d5ba8ab6-kube-api-access-6t5gw\") pod \"ovnkube-node-bdmsh\" (UID: \"c28c0915-2a70-4992-8b12-e9e3d5ba8ab6\") " pod="openshift-ovn-kubernetes/ovnkube-node-bdmsh" Oct 14 10:06:37 crc kubenswrapper[4698]: I1014 10:06:37.027985 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/c28c0915-2a70-4992-8b12-e9e3d5ba8ab6-host-run-netns\") pod \"ovnkube-node-bdmsh\" (UID: \"c28c0915-2a70-4992-8b12-e9e3d5ba8ab6\") " pod="openshift-ovn-kubernetes/ovnkube-node-bdmsh" Oct 14 10:06:37 crc kubenswrapper[4698]: I1014 10:06:37.028037 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: 
\"kubernetes.io/host-path/c28c0915-2a70-4992-8b12-e9e3d5ba8ab6-etc-openvswitch\") pod \"ovnkube-node-bdmsh\" (UID: \"c28c0915-2a70-4992-8b12-e9e3d5ba8ab6\") " pod="openshift-ovn-kubernetes/ovnkube-node-bdmsh" Oct 14 10:06:37 crc kubenswrapper[4698]: I1014 10:06:37.028057 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/c28c0915-2a70-4992-8b12-e9e3d5ba8ab6-ovnkube-script-lib\") pod \"ovnkube-node-bdmsh\" (UID: \"c28c0915-2a70-4992-8b12-e9e3d5ba8ab6\") " pod="openshift-ovn-kubernetes/ovnkube-node-bdmsh" Oct 14 10:06:37 crc kubenswrapper[4698]: I1014 10:06:37.028082 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/c28c0915-2a70-4992-8b12-e9e3d5ba8ab6-node-log\") pod \"ovnkube-node-bdmsh\" (UID: \"c28c0915-2a70-4992-8b12-e9e3d5ba8ab6\") " pod="openshift-ovn-kubernetes/ovnkube-node-bdmsh" Oct 14 10:06:37 crc kubenswrapper[4698]: I1014 10:06:37.028106 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/c28c0915-2a70-4992-8b12-e9e3d5ba8ab6-host-kubelet\") pod \"ovnkube-node-bdmsh\" (UID: \"c28c0915-2a70-4992-8b12-e9e3d5ba8ab6\") " pod="openshift-ovn-kubernetes/ovnkube-node-bdmsh" Oct 14 10:06:37 crc kubenswrapper[4698]: I1014 10:06:37.028127 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c28c0915-2a70-4992-8b12-e9e3d5ba8ab6-host-run-ovn-kubernetes\") pod \"ovnkube-node-bdmsh\" (UID: \"c28c0915-2a70-4992-8b12-e9e3d5ba8ab6\") " pod="openshift-ovn-kubernetes/ovnkube-node-bdmsh" Oct 14 10:06:37 crc kubenswrapper[4698]: I1014 10:06:37.028171 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c28c0915-2a70-4992-8b12-e9e3d5ba8ab6-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-bdmsh\" (UID: \"c28c0915-2a70-4992-8b12-e9e3d5ba8ab6\") " pod="openshift-ovn-kubernetes/ovnkube-node-bdmsh" Oct 14 10:06:37 crc kubenswrapper[4698]: I1014 10:06:37.028223 4698 reconciler_common.go:293] "Volume detached for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d02f5359-81fc-4261-b995-e58c78bcec0e-etc-openvswitch\") on node \"crc\" DevicePath \"\"" Oct 14 10:06:37 crc kubenswrapper[4698]: I1014 10:06:37.028238 4698 reconciler_common.go:293] "Volume detached for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/d02f5359-81fc-4261-b995-e58c78bcec0e-log-socket\") on node \"crc\" DevicePath \"\"" Oct 14 10:06:37 crc kubenswrapper[4698]: I1014 10:06:37.028251 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pjlwb\" (UniqueName: \"kubernetes.io/projected/d02f5359-81fc-4261-b995-e58c78bcec0e-kube-api-access-pjlwb\") on node \"crc\" DevicePath \"\"" Oct 14 10:06:37 crc kubenswrapper[4698]: I1014 10:06:37.028263 4698 reconciler_common.go:293] "Volume detached for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/d02f5359-81fc-4261-b995-e58c78bcec0e-host-cni-netd\") on node \"crc\" DevicePath \"\"" Oct 14 10:06:37 crc kubenswrapper[4698]: I1014 10:06:37.028274 4698 reconciler_common.go:293] "Volume detached for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/d02f5359-81fc-4261-b995-e58c78bcec0e-host-kubelet\") on node \"crc\" DevicePath \"\"" Oct 14 10:06:37 crc kubenswrapper[4698]: I1014 10:06:37.028285 4698 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/d02f5359-81fc-4261-b995-e58c78bcec0e-ovnkube-config\") on node \"crc\" DevicePath \"\"" Oct 14 10:06:37 crc kubenswrapper[4698]: I1014 10:06:37.028324 4698 reconciler_common.go:293] 
"Volume detached for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/d02f5359-81fc-4261-b995-e58c78bcec0e-run-ovn\") on node \"crc\" DevicePath \"\"" Oct 14 10:06:37 crc kubenswrapper[4698]: I1014 10:06:37.028353 4698 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/d02f5359-81fc-4261-b995-e58c78bcec0e-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Oct 14 10:06:37 crc kubenswrapper[4698]: I1014 10:06:37.028375 4698 reconciler_common.go:293] "Volume detached for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/d02f5359-81fc-4261-b995-e58c78bcec0e-node-log\") on node \"crc\" DevicePath \"\"" Oct 14 10:06:37 crc kubenswrapper[4698]: I1014 10:06:37.028392 4698 reconciler_common.go:293] "Volume detached for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/d02f5359-81fc-4261-b995-e58c78bcec0e-host-run-netns\") on node \"crc\" DevicePath \"\"" Oct 14 10:06:37 crc kubenswrapper[4698]: I1014 10:06:37.028409 4698 reconciler_common.go:293] "Volume detached for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/d02f5359-81fc-4261-b995-e58c78bcec0e-run-systemd\") on node \"crc\" DevicePath \"\"" Oct 14 10:06:37 crc kubenswrapper[4698]: I1014 10:06:37.028426 4698 reconciler_common.go:293] "Volume detached for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/d02f5359-81fc-4261-b995-e58c78bcec0e-host-var-lib-cni-networks-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Oct 14 10:06:37 crc kubenswrapper[4698]: I1014 10:06:37.028440 4698 reconciler_common.go:293] "Volume detached for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d02f5359-81fc-4261-b995-e58c78bcec0e-host-slash\") on node \"crc\" DevicePath \"\"" Oct 14 10:06:37 crc kubenswrapper[4698]: I1014 10:06:37.028452 4698 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: 
\"kubernetes.io/configmap/d02f5359-81fc-4261-b995-e58c78bcec0e-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Oct 14 10:06:37 crc kubenswrapper[4698]: I1014 10:06:37.028472 4698 reconciler_common.go:293] "Volume detached for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/d02f5359-81fc-4261-b995-e58c78bcec0e-systemd-units\") on node \"crc\" DevicePath \"\"" Oct 14 10:06:37 crc kubenswrapper[4698]: I1014 10:06:37.028490 4698 reconciler_common.go:293] "Volume detached for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/d02f5359-81fc-4261-b995-e58c78bcec0e-host-cni-bin\") on node \"crc\" DevicePath \"\"" Oct 14 10:06:37 crc kubenswrapper[4698]: I1014 10:06:37.028510 4698 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/d02f5359-81fc-4261-b995-e58c78bcec0e-env-overrides\") on node \"crc\" DevicePath \"\"" Oct 14 10:06:37 crc kubenswrapper[4698]: I1014 10:06:37.028526 4698 reconciler_common.go:293] "Volume detached for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d02f5359-81fc-4261-b995-e58c78bcec0e-run-openvswitch\") on node \"crc\" DevicePath \"\"" Oct 14 10:06:37 crc kubenswrapper[4698]: I1014 10:06:37.028543 4698 reconciler_common.go:293] "Volume detached for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d02f5359-81fc-4261-b995-e58c78bcec0e-var-lib-openvswitch\") on node \"crc\" DevicePath \"\"" Oct 14 10:06:37 crc kubenswrapper[4698]: I1014 10:06:37.028558 4698 reconciler_common.go:293] "Volume detached for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/d02f5359-81fc-4261-b995-e58c78bcec0e-host-run-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Oct 14 10:06:37 crc kubenswrapper[4698]: I1014 10:06:37.130030 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: 
\"kubernetes.io/host-path/c28c0915-2a70-4992-8b12-e9e3d5ba8ab6-host-kubelet\") pod \"ovnkube-node-bdmsh\" (UID: \"c28c0915-2a70-4992-8b12-e9e3d5ba8ab6\") " pod="openshift-ovn-kubernetes/ovnkube-node-bdmsh" Oct 14 10:06:37 crc kubenswrapper[4698]: I1014 10:06:37.130083 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c28c0915-2a70-4992-8b12-e9e3d5ba8ab6-host-run-ovn-kubernetes\") pod \"ovnkube-node-bdmsh\" (UID: \"c28c0915-2a70-4992-8b12-e9e3d5ba8ab6\") " pod="openshift-ovn-kubernetes/ovnkube-node-bdmsh" Oct 14 10:06:37 crc kubenswrapper[4698]: I1014 10:06:37.130107 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c28c0915-2a70-4992-8b12-e9e3d5ba8ab6-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-bdmsh\" (UID: \"c28c0915-2a70-4992-8b12-e9e3d5ba8ab6\") " pod="openshift-ovn-kubernetes/ovnkube-node-bdmsh" Oct 14 10:06:37 crc kubenswrapper[4698]: I1014 10:06:37.130158 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/c28c0915-2a70-4992-8b12-e9e3d5ba8ab6-env-overrides\") pod \"ovnkube-node-bdmsh\" (UID: \"c28c0915-2a70-4992-8b12-e9e3d5ba8ab6\") " pod="openshift-ovn-kubernetes/ovnkube-node-bdmsh" Oct 14 10:06:37 crc kubenswrapper[4698]: I1014 10:06:37.130179 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/c28c0915-2a70-4992-8b12-e9e3d5ba8ab6-run-systemd\") pod \"ovnkube-node-bdmsh\" (UID: \"c28c0915-2a70-4992-8b12-e9e3d5ba8ab6\") " pod="openshift-ovn-kubernetes/ovnkube-node-bdmsh" Oct 14 10:06:37 crc kubenswrapper[4698]: I1014 10:06:37.130193 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: 
\"kubernetes.io/host-path/c28c0915-2a70-4992-8b12-e9e3d5ba8ab6-host-kubelet\") pod \"ovnkube-node-bdmsh\" (UID: \"c28c0915-2a70-4992-8b12-e9e3d5ba8ab6\") " pod="openshift-ovn-kubernetes/ovnkube-node-bdmsh" Oct 14 10:06:37 crc kubenswrapper[4698]: I1014 10:06:37.130193 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/c28c0915-2a70-4992-8b12-e9e3d5ba8ab6-run-ovn\") pod \"ovnkube-node-bdmsh\" (UID: \"c28c0915-2a70-4992-8b12-e9e3d5ba8ab6\") " pod="openshift-ovn-kubernetes/ovnkube-node-bdmsh" Oct 14 10:06:37 crc kubenswrapper[4698]: I1014 10:06:37.130258 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c28c0915-2a70-4992-8b12-e9e3d5ba8ab6-run-openvswitch\") pod \"ovnkube-node-bdmsh\" (UID: \"c28c0915-2a70-4992-8b12-e9e3d5ba8ab6\") " pod="openshift-ovn-kubernetes/ovnkube-node-bdmsh" Oct 14 10:06:37 crc kubenswrapper[4698]: I1014 10:06:37.130280 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/c28c0915-2a70-4992-8b12-e9e3d5ba8ab6-host-cni-bin\") pod \"ovnkube-node-bdmsh\" (UID: \"c28c0915-2a70-4992-8b12-e9e3d5ba8ab6\") " pod="openshift-ovn-kubernetes/ovnkube-node-bdmsh" Oct 14 10:06:37 crc kubenswrapper[4698]: I1014 10:06:37.130278 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c28c0915-2a70-4992-8b12-e9e3d5ba8ab6-host-run-ovn-kubernetes\") pod \"ovnkube-node-bdmsh\" (UID: \"c28c0915-2a70-4992-8b12-e9e3d5ba8ab6\") " pod="openshift-ovn-kubernetes/ovnkube-node-bdmsh" Oct 14 10:06:37 crc kubenswrapper[4698]: I1014 10:06:37.130303 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/c28c0915-2a70-4992-8b12-e9e3d5ba8ab6-run-systemd\") pod 
\"ovnkube-node-bdmsh\" (UID: \"c28c0915-2a70-4992-8b12-e9e3d5ba8ab6\") " pod="openshift-ovn-kubernetes/ovnkube-node-bdmsh" Oct 14 10:06:37 crc kubenswrapper[4698]: I1014 10:06:37.130300 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/c28c0915-2a70-4992-8b12-e9e3d5ba8ab6-systemd-units\") pod \"ovnkube-node-bdmsh\" (UID: \"c28c0915-2a70-4992-8b12-e9e3d5ba8ab6\") " pod="openshift-ovn-kubernetes/ovnkube-node-bdmsh" Oct 14 10:06:37 crc kubenswrapper[4698]: I1014 10:06:37.130345 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/c28c0915-2a70-4992-8b12-e9e3d5ba8ab6-run-ovn\") pod \"ovnkube-node-bdmsh\" (UID: \"c28c0915-2a70-4992-8b12-e9e3d5ba8ab6\") " pod="openshift-ovn-kubernetes/ovnkube-node-bdmsh" Oct 14 10:06:37 crc kubenswrapper[4698]: I1014 10:06:37.130385 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/c28c0915-2a70-4992-8b12-e9e3d5ba8ab6-systemd-units\") pod \"ovnkube-node-bdmsh\" (UID: \"c28c0915-2a70-4992-8b12-e9e3d5ba8ab6\") " pod="openshift-ovn-kubernetes/ovnkube-node-bdmsh" Oct 14 10:06:37 crc kubenswrapper[4698]: I1014 10:06:37.130410 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/c28c0915-2a70-4992-8b12-e9e3d5ba8ab6-host-slash\") pod \"ovnkube-node-bdmsh\" (UID: \"c28c0915-2a70-4992-8b12-e9e3d5ba8ab6\") " pod="openshift-ovn-kubernetes/ovnkube-node-bdmsh" Oct 14 10:06:37 crc kubenswrapper[4698]: I1014 10:06:37.130431 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c28c0915-2a70-4992-8b12-e9e3d5ba8ab6-run-openvswitch\") pod \"ovnkube-node-bdmsh\" (UID: \"c28c0915-2a70-4992-8b12-e9e3d5ba8ab6\") " pod="openshift-ovn-kubernetes/ovnkube-node-bdmsh" Oct 
14 10:06:37 crc kubenswrapper[4698]: I1014 10:06:37.130445 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/c28c0915-2a70-4992-8b12-e9e3d5ba8ab6-host-cni-bin\") pod \"ovnkube-node-bdmsh\" (UID: \"c28c0915-2a70-4992-8b12-e9e3d5ba8ab6\") " pod="openshift-ovn-kubernetes/ovnkube-node-bdmsh" Oct 14 10:06:37 crc kubenswrapper[4698]: I1014 10:06:37.130471 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/c28c0915-2a70-4992-8b12-e9e3d5ba8ab6-host-slash\") pod \"ovnkube-node-bdmsh\" (UID: \"c28c0915-2a70-4992-8b12-e9e3d5ba8ab6\") " pod="openshift-ovn-kubernetes/ovnkube-node-bdmsh" Oct 14 10:06:37 crc kubenswrapper[4698]: I1014 10:06:37.130562 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c28c0915-2a70-4992-8b12-e9e3d5ba8ab6-host-cni-netd\") pod \"ovnkube-node-bdmsh\" (UID: \"c28c0915-2a70-4992-8b12-e9e3d5ba8ab6\") " pod="openshift-ovn-kubernetes/ovnkube-node-bdmsh" Oct 14 10:06:37 crc kubenswrapper[4698]: I1014 10:06:37.130607 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c28c0915-2a70-4992-8b12-e9e3d5ba8ab6-var-lib-openvswitch\") pod \"ovnkube-node-bdmsh\" (UID: \"c28c0915-2a70-4992-8b12-e9e3d5ba8ab6\") " pod="openshift-ovn-kubernetes/ovnkube-node-bdmsh" Oct 14 10:06:37 crc kubenswrapper[4698]: I1014 10:06:37.130644 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/c28c0915-2a70-4992-8b12-e9e3d5ba8ab6-ovnkube-config\") pod \"ovnkube-node-bdmsh\" (UID: \"c28c0915-2a70-4992-8b12-e9e3d5ba8ab6\") " pod="openshift-ovn-kubernetes/ovnkube-node-bdmsh" Oct 14 10:06:37 crc kubenswrapper[4698]: I1014 10:06:37.130686 4698 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/c28c0915-2a70-4992-8b12-e9e3d5ba8ab6-log-socket\") pod \"ovnkube-node-bdmsh\" (UID: \"c28c0915-2a70-4992-8b12-e9e3d5ba8ab6\") " pod="openshift-ovn-kubernetes/ovnkube-node-bdmsh" Oct 14 10:06:37 crc kubenswrapper[4698]: I1014 10:06:37.130690 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c28c0915-2a70-4992-8b12-e9e3d5ba8ab6-host-cni-netd\") pod \"ovnkube-node-bdmsh\" (UID: \"c28c0915-2a70-4992-8b12-e9e3d5ba8ab6\") " pod="openshift-ovn-kubernetes/ovnkube-node-bdmsh" Oct 14 10:06:37 crc kubenswrapper[4698]: I1014 10:06:37.130720 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/c28c0915-2a70-4992-8b12-e9e3d5ba8ab6-ovn-node-metrics-cert\") pod \"ovnkube-node-bdmsh\" (UID: \"c28c0915-2a70-4992-8b12-e9e3d5ba8ab6\") " pod="openshift-ovn-kubernetes/ovnkube-node-bdmsh" Oct 14 10:06:37 crc kubenswrapper[4698]: I1014 10:06:37.130743 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/c28c0915-2a70-4992-8b12-e9e3d5ba8ab6-env-overrides\") pod \"ovnkube-node-bdmsh\" (UID: \"c28c0915-2a70-4992-8b12-e9e3d5ba8ab6\") " pod="openshift-ovn-kubernetes/ovnkube-node-bdmsh" Oct 14 10:06:37 crc kubenswrapper[4698]: I1014 10:06:37.130827 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6t5gw\" (UniqueName: \"kubernetes.io/projected/c28c0915-2a70-4992-8b12-e9e3d5ba8ab6-kube-api-access-6t5gw\") pod \"ovnkube-node-bdmsh\" (UID: \"c28c0915-2a70-4992-8b12-e9e3d5ba8ab6\") " pod="openshift-ovn-kubernetes/ovnkube-node-bdmsh" Oct 14 10:06:37 crc kubenswrapper[4698]: I1014 10:06:37.130864 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: 
\"kubernetes.io/host-path/c28c0915-2a70-4992-8b12-e9e3d5ba8ab6-log-socket\") pod \"ovnkube-node-bdmsh\" (UID: \"c28c0915-2a70-4992-8b12-e9e3d5ba8ab6\") " pod="openshift-ovn-kubernetes/ovnkube-node-bdmsh" Oct 14 10:06:37 crc kubenswrapper[4698]: I1014 10:06:37.130880 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/c28c0915-2a70-4992-8b12-e9e3d5ba8ab6-host-run-netns\") pod \"ovnkube-node-bdmsh\" (UID: \"c28c0915-2a70-4992-8b12-e9e3d5ba8ab6\") " pod="openshift-ovn-kubernetes/ovnkube-node-bdmsh" Oct 14 10:06:37 crc kubenswrapper[4698]: I1014 10:06:37.130902 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c28c0915-2a70-4992-8b12-e9e3d5ba8ab6-var-lib-openvswitch\") pod \"ovnkube-node-bdmsh\" (UID: \"c28c0915-2a70-4992-8b12-e9e3d5ba8ab6\") " pod="openshift-ovn-kubernetes/ovnkube-node-bdmsh" Oct 14 10:06:37 crc kubenswrapper[4698]: I1014 10:06:37.130958 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c28c0915-2a70-4992-8b12-e9e3d5ba8ab6-etc-openvswitch\") pod \"ovnkube-node-bdmsh\" (UID: \"c28c0915-2a70-4992-8b12-e9e3d5ba8ab6\") " pod="openshift-ovn-kubernetes/ovnkube-node-bdmsh" Oct 14 10:06:37 crc kubenswrapper[4698]: I1014 10:06:37.130989 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c28c0915-2a70-4992-8b12-e9e3d5ba8ab6-etc-openvswitch\") pod \"ovnkube-node-bdmsh\" (UID: \"c28c0915-2a70-4992-8b12-e9e3d5ba8ab6\") " pod="openshift-ovn-kubernetes/ovnkube-node-bdmsh" Oct 14 10:06:37 crc kubenswrapper[4698]: I1014 10:06:37.130991 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: 
\"kubernetes.io/configmap/c28c0915-2a70-4992-8b12-e9e3d5ba8ab6-ovnkube-script-lib\") pod \"ovnkube-node-bdmsh\" (UID: \"c28c0915-2a70-4992-8b12-e9e3d5ba8ab6\") " pod="openshift-ovn-kubernetes/ovnkube-node-bdmsh" Oct 14 10:06:37 crc kubenswrapper[4698]: I1014 10:06:37.131025 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/c28c0915-2a70-4992-8b12-e9e3d5ba8ab6-node-log\") pod \"ovnkube-node-bdmsh\" (UID: \"c28c0915-2a70-4992-8b12-e9e3d5ba8ab6\") " pod="openshift-ovn-kubernetes/ovnkube-node-bdmsh" Oct 14 10:06:37 crc kubenswrapper[4698]: I1014 10:06:37.130965 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/c28c0915-2a70-4992-8b12-e9e3d5ba8ab6-host-run-netns\") pod \"ovnkube-node-bdmsh\" (UID: \"c28c0915-2a70-4992-8b12-e9e3d5ba8ab6\") " pod="openshift-ovn-kubernetes/ovnkube-node-bdmsh" Oct 14 10:06:37 crc kubenswrapper[4698]: I1014 10:06:37.131070 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/c28c0915-2a70-4992-8b12-e9e3d5ba8ab6-node-log\") pod \"ovnkube-node-bdmsh\" (UID: \"c28c0915-2a70-4992-8b12-e9e3d5ba8ab6\") " pod="openshift-ovn-kubernetes/ovnkube-node-bdmsh" Oct 14 10:06:37 crc kubenswrapper[4698]: I1014 10:06:37.131215 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c28c0915-2a70-4992-8b12-e9e3d5ba8ab6-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-bdmsh\" (UID: \"c28c0915-2a70-4992-8b12-e9e3d5ba8ab6\") " pod="openshift-ovn-kubernetes/ovnkube-node-bdmsh" Oct 14 10:06:37 crc kubenswrapper[4698]: I1014 10:06:37.131803 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: 
\"kubernetes.io/configmap/c28c0915-2a70-4992-8b12-e9e3d5ba8ab6-ovnkube-config\") pod \"ovnkube-node-bdmsh\" (UID: \"c28c0915-2a70-4992-8b12-e9e3d5ba8ab6\") " pod="openshift-ovn-kubernetes/ovnkube-node-bdmsh" Oct 14 10:06:37 crc kubenswrapper[4698]: I1014 10:06:37.132085 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/c28c0915-2a70-4992-8b12-e9e3d5ba8ab6-ovnkube-script-lib\") pod \"ovnkube-node-bdmsh\" (UID: \"c28c0915-2a70-4992-8b12-e9e3d5ba8ab6\") " pod="openshift-ovn-kubernetes/ovnkube-node-bdmsh" Oct 14 10:06:37 crc kubenswrapper[4698]: I1014 10:06:37.133855 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/c28c0915-2a70-4992-8b12-e9e3d5ba8ab6-ovn-node-metrics-cert\") pod \"ovnkube-node-bdmsh\" (UID: \"c28c0915-2a70-4992-8b12-e9e3d5ba8ab6\") " pod="openshift-ovn-kubernetes/ovnkube-node-bdmsh" Oct 14 10:06:37 crc kubenswrapper[4698]: I1014 10:06:37.146647 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6t5gw\" (UniqueName: \"kubernetes.io/projected/c28c0915-2a70-4992-8b12-e9e3d5ba8ab6-kube-api-access-6t5gw\") pod \"ovnkube-node-bdmsh\" (UID: \"c28c0915-2a70-4992-8b12-e9e3d5ba8ab6\") " pod="openshift-ovn-kubernetes/ovnkube-node-bdmsh" Oct 14 10:06:37 crc kubenswrapper[4698]: I1014 10:06:37.160394 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-hspfz_d02f5359-81fc-4261-b995-e58c78bcec0e/ovnkube-controller/3.log" Oct 14 10:06:37 crc kubenswrapper[4698]: I1014 10:06:37.163189 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-hspfz_d02f5359-81fc-4261-b995-e58c78bcec0e/ovn-acl-logging/0.log" Oct 14 10:06:37 crc kubenswrapper[4698]: I1014 10:06:37.163866 4698 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-hspfz_d02f5359-81fc-4261-b995-e58c78bcec0e/ovn-controller/0.log" Oct 14 10:06:37 crc kubenswrapper[4698]: I1014 10:06:37.164313 4698 generic.go:334] "Generic (PLEG): container finished" podID="d02f5359-81fc-4261-b995-e58c78bcec0e" containerID="69b90c9c0473f0a1add0c188fd07ba784e387a9fa7de413d857cde1946f25dc6" exitCode=0 Oct 14 10:06:37 crc kubenswrapper[4698]: I1014 10:06:37.164348 4698 generic.go:334] "Generic (PLEG): container finished" podID="d02f5359-81fc-4261-b995-e58c78bcec0e" containerID="626a1ca2a8aa5d6c1b4078a0bc7b0aa540656abcedd8be206fbb93cdc96c1d95" exitCode=0 Oct 14 10:06:37 crc kubenswrapper[4698]: I1014 10:06:37.164363 4698 generic.go:334] "Generic (PLEG): container finished" podID="d02f5359-81fc-4261-b995-e58c78bcec0e" containerID="7e5759411e3d5ea6fa7515b4cd1255735c870b8c035fe586a38d2f436b752118" exitCode=0 Oct 14 10:06:37 crc kubenswrapper[4698]: I1014 10:06:37.164378 4698 generic.go:334] "Generic (PLEG): container finished" podID="d02f5359-81fc-4261-b995-e58c78bcec0e" containerID="a6e9733c3f4d25662c9da6201d0feee797de24acebb7d6e7c67d72709aae95db" exitCode=0 Oct 14 10:06:37 crc kubenswrapper[4698]: I1014 10:06:37.164390 4698 generic.go:334] "Generic (PLEG): container finished" podID="d02f5359-81fc-4261-b995-e58c78bcec0e" containerID="876998f4dcb79a6bc0faa35871730f5d2f9a07f94baf6da29ed9a9e794705ee5" exitCode=0 Oct 14 10:06:37 crc kubenswrapper[4698]: I1014 10:06:37.164401 4698 generic.go:334] "Generic (PLEG): container finished" podID="d02f5359-81fc-4261-b995-e58c78bcec0e" containerID="baefa979aee6f8fbe41df9542964f3c7b2a4331ec5111d2f622ec7ca41a0d96e" exitCode=0 Oct 14 10:06:37 crc kubenswrapper[4698]: I1014 10:06:37.164413 4698 generic.go:334] "Generic (PLEG): container finished" podID="d02f5359-81fc-4261-b995-e58c78bcec0e" containerID="90b4b75613b321be60e549ab6ab2eaaab57fd6b49aa01f5254a0e83ac2e0167e" exitCode=143 Oct 14 10:06:37 crc kubenswrapper[4698]: I1014 10:06:37.164424 4698 
generic.go:334] "Generic (PLEG): container finished" podID="d02f5359-81fc-4261-b995-e58c78bcec0e" containerID="11274a26e2292cdf2b793e6290657ed3cc737d1454b39f33b8f0272f67f70f3e" exitCode=143 Oct 14 10:06:37 crc kubenswrapper[4698]: I1014 10:06:37.164437 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-hspfz" Oct 14 10:06:37 crc kubenswrapper[4698]: I1014 10:06:37.164455 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hspfz" event={"ID":"d02f5359-81fc-4261-b995-e58c78bcec0e","Type":"ContainerDied","Data":"69b90c9c0473f0a1add0c188fd07ba784e387a9fa7de413d857cde1946f25dc6"} Oct 14 10:06:37 crc kubenswrapper[4698]: I1014 10:06:37.164509 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hspfz" event={"ID":"d02f5359-81fc-4261-b995-e58c78bcec0e","Type":"ContainerDied","Data":"626a1ca2a8aa5d6c1b4078a0bc7b0aa540656abcedd8be206fbb93cdc96c1d95"} Oct 14 10:06:37 crc kubenswrapper[4698]: I1014 10:06:37.164534 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hspfz" event={"ID":"d02f5359-81fc-4261-b995-e58c78bcec0e","Type":"ContainerDied","Data":"7e5759411e3d5ea6fa7515b4cd1255735c870b8c035fe586a38d2f436b752118"} Oct 14 10:06:37 crc kubenswrapper[4698]: I1014 10:06:37.164556 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hspfz" event={"ID":"d02f5359-81fc-4261-b995-e58c78bcec0e","Type":"ContainerDied","Data":"a6e9733c3f4d25662c9da6201d0feee797de24acebb7d6e7c67d72709aae95db"} Oct 14 10:06:37 crc kubenswrapper[4698]: I1014 10:06:37.164582 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hspfz" event={"ID":"d02f5359-81fc-4261-b995-e58c78bcec0e","Type":"ContainerDied","Data":"876998f4dcb79a6bc0faa35871730f5d2f9a07f94baf6da29ed9a9e794705ee5"} Oct 14 10:06:37 crc 
kubenswrapper[4698]: I1014 10:06:37.164590 4698 scope.go:117] "RemoveContainer" containerID="69b90c9c0473f0a1add0c188fd07ba784e387a9fa7de413d857cde1946f25dc6"
Oct 14 10:06:37 crc kubenswrapper[4698]: I1014 10:06:37.164609 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hspfz" event={"ID":"d02f5359-81fc-4261-b995-e58c78bcec0e","Type":"ContainerDied","Data":"baefa979aee6f8fbe41df9542964f3c7b2a4331ec5111d2f622ec7ca41a0d96e"}
Oct 14 10:06:37 crc kubenswrapper[4698]: I1014 10:06:37.164635 4698 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"6c7231fbb7fd473f2878319c720ec92ea8eade012d5912e8ad80355542ffa3d2"}
Oct 14 10:06:37 crc kubenswrapper[4698]: I1014 10:06:37.164658 4698 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"626a1ca2a8aa5d6c1b4078a0bc7b0aa540656abcedd8be206fbb93cdc96c1d95"}
Oct 14 10:06:37 crc kubenswrapper[4698]: I1014 10:06:37.164673 4698 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"7e5759411e3d5ea6fa7515b4cd1255735c870b8c035fe586a38d2f436b752118"}
Oct 14 10:06:37 crc kubenswrapper[4698]: I1014 10:06:37.164718 4698 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"a6e9733c3f4d25662c9da6201d0feee797de24acebb7d6e7c67d72709aae95db"}
Oct 14 10:06:37 crc kubenswrapper[4698]: I1014 10:06:37.164729 4698 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"876998f4dcb79a6bc0faa35871730f5d2f9a07f94baf6da29ed9a9e794705ee5"}
Oct 14 10:06:37 crc kubenswrapper[4698]: I1014 10:06:37.164743 4698 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"baefa979aee6f8fbe41df9542964f3c7b2a4331ec5111d2f622ec7ca41a0d96e"}
Oct 14 10:06:37 crc kubenswrapper[4698]: I1014 10:06:37.164756 4698 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"90b4b75613b321be60e549ab6ab2eaaab57fd6b49aa01f5254a0e83ac2e0167e"}
Oct 14 10:06:37 crc kubenswrapper[4698]: I1014 10:06:37.164798 4698 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"11274a26e2292cdf2b793e6290657ed3cc737d1454b39f33b8f0272f67f70f3e"}
Oct 14 10:06:37 crc kubenswrapper[4698]: I1014 10:06:37.164813 4698 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"0f7d7f89348e04b617ba4ec3cf76f4fefdd36492c656aed8141f3775a7ca8e3b"}
Oct 14 10:06:37 crc kubenswrapper[4698]: I1014 10:06:37.164834 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hspfz" event={"ID":"d02f5359-81fc-4261-b995-e58c78bcec0e","Type":"ContainerDied","Data":"90b4b75613b321be60e549ab6ab2eaaab57fd6b49aa01f5254a0e83ac2e0167e"}
Oct 14 10:06:37 crc kubenswrapper[4698]: I1014 10:06:37.164860 4698 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"69b90c9c0473f0a1add0c188fd07ba784e387a9fa7de413d857cde1946f25dc6"}
Oct 14 10:06:37 crc kubenswrapper[4698]: I1014 10:06:37.164878 4698 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"6c7231fbb7fd473f2878319c720ec92ea8eade012d5912e8ad80355542ffa3d2"}
Oct 14 10:06:37 crc kubenswrapper[4698]: I1014 10:06:37.164893 4698 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"626a1ca2a8aa5d6c1b4078a0bc7b0aa540656abcedd8be206fbb93cdc96c1d95"}
Oct 14 10:06:37 crc kubenswrapper[4698]: I1014 10:06:37.164908 4698 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"7e5759411e3d5ea6fa7515b4cd1255735c870b8c035fe586a38d2f436b752118"}
Oct 14 10:06:37 crc kubenswrapper[4698]: I1014 10:06:37.164923 4698 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"a6e9733c3f4d25662c9da6201d0feee797de24acebb7d6e7c67d72709aae95db"}
Oct 14 10:06:37 crc kubenswrapper[4698]: I1014 10:06:37.164936 4698 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"876998f4dcb79a6bc0faa35871730f5d2f9a07f94baf6da29ed9a9e794705ee5"}
Oct 14 10:06:37 crc kubenswrapper[4698]: I1014 10:06:37.164950 4698 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"baefa979aee6f8fbe41df9542964f3c7b2a4331ec5111d2f622ec7ca41a0d96e"}
Oct 14 10:06:37 crc kubenswrapper[4698]: I1014 10:06:37.164964 4698 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"90b4b75613b321be60e549ab6ab2eaaab57fd6b49aa01f5254a0e83ac2e0167e"}
Oct 14 10:06:37 crc kubenswrapper[4698]: I1014 10:06:37.164978 4698 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"11274a26e2292cdf2b793e6290657ed3cc737d1454b39f33b8f0272f67f70f3e"}
Oct 14 10:06:37 crc kubenswrapper[4698]: I1014 10:06:37.164993 4698 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"0f7d7f89348e04b617ba4ec3cf76f4fefdd36492c656aed8141f3775a7ca8e3b"}
Oct 14 10:06:37 crc kubenswrapper[4698]: I1014 10:06:37.165015 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hspfz" event={"ID":"d02f5359-81fc-4261-b995-e58c78bcec0e","Type":"ContainerDied","Data":"11274a26e2292cdf2b793e6290657ed3cc737d1454b39f33b8f0272f67f70f3e"}
Oct 14 10:06:37 crc kubenswrapper[4698]: I1014 10:06:37.165038 4698 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"69b90c9c0473f0a1add0c188fd07ba784e387a9fa7de413d857cde1946f25dc6"}
Oct 14 10:06:37 crc kubenswrapper[4698]: I1014 10:06:37.165057 4698 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"6c7231fbb7fd473f2878319c720ec92ea8eade012d5912e8ad80355542ffa3d2"}
Oct 14 10:06:37 crc kubenswrapper[4698]: I1014 10:06:37.165071 4698 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"626a1ca2a8aa5d6c1b4078a0bc7b0aa540656abcedd8be206fbb93cdc96c1d95"}
Oct 14 10:06:37 crc kubenswrapper[4698]: I1014 10:06:37.165086 4698 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"7e5759411e3d5ea6fa7515b4cd1255735c870b8c035fe586a38d2f436b752118"}
Oct 14 10:06:37 crc kubenswrapper[4698]: I1014 10:06:37.165102 4698 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"a6e9733c3f4d25662c9da6201d0feee797de24acebb7d6e7c67d72709aae95db"}
Oct 14 10:06:37 crc kubenswrapper[4698]: I1014 10:06:37.165117 4698 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"876998f4dcb79a6bc0faa35871730f5d2f9a07f94baf6da29ed9a9e794705ee5"}
Oct 14 10:06:37 crc kubenswrapper[4698]: I1014 10:06:37.165131 4698 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"baefa979aee6f8fbe41df9542964f3c7b2a4331ec5111d2f622ec7ca41a0d96e"}
Oct 14 10:06:37 crc kubenswrapper[4698]: I1014 10:06:37.165145 4698 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"90b4b75613b321be60e549ab6ab2eaaab57fd6b49aa01f5254a0e83ac2e0167e"}
Oct 14 10:06:37 crc kubenswrapper[4698]: I1014 10:06:37.165161 4698 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"11274a26e2292cdf2b793e6290657ed3cc737d1454b39f33b8f0272f67f70f3e"}
Oct 14 10:06:37 crc kubenswrapper[4698]: I1014 10:06:37.165175 4698 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"0f7d7f89348e04b617ba4ec3cf76f4fefdd36492c656aed8141f3775a7ca8e3b"}
Oct 14 10:06:37 crc kubenswrapper[4698]: I1014 10:06:37.165200 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hspfz" event={"ID":"d02f5359-81fc-4261-b995-e58c78bcec0e","Type":"ContainerDied","Data":"c889fa6542bed3a81090fc56d086523fb2ad50d74105cfa09a0abf5ecfe4b185"}
Oct 14 10:06:37 crc kubenswrapper[4698]: I1014 10:06:37.165225 4698 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"69b90c9c0473f0a1add0c188fd07ba784e387a9fa7de413d857cde1946f25dc6"}
Oct 14 10:06:37 crc kubenswrapper[4698]: I1014 10:06:37.165242 4698 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"6c7231fbb7fd473f2878319c720ec92ea8eade012d5912e8ad80355542ffa3d2"}
Oct 14 10:06:37 crc kubenswrapper[4698]: I1014 10:06:37.165257 4698 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"626a1ca2a8aa5d6c1b4078a0bc7b0aa540656abcedd8be206fbb93cdc96c1d95"}
Oct 14 10:06:37 crc kubenswrapper[4698]: I1014 10:06:37.165274 4698 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"7e5759411e3d5ea6fa7515b4cd1255735c870b8c035fe586a38d2f436b752118"}
Oct 14 10:06:37 crc kubenswrapper[4698]: I1014 10:06:37.165289 4698 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"a6e9733c3f4d25662c9da6201d0feee797de24acebb7d6e7c67d72709aae95db"}
Oct 14 10:06:37 crc kubenswrapper[4698]: I1014 10:06:37.165303 4698 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"876998f4dcb79a6bc0faa35871730f5d2f9a07f94baf6da29ed9a9e794705ee5"}
Oct 14 10:06:37 crc kubenswrapper[4698]: I1014 10:06:37.165318 4698 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"baefa979aee6f8fbe41df9542964f3c7b2a4331ec5111d2f622ec7ca41a0d96e"}
Oct 14 10:06:37 crc kubenswrapper[4698]: I1014 10:06:37.165332 4698 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"90b4b75613b321be60e549ab6ab2eaaab57fd6b49aa01f5254a0e83ac2e0167e"}
Oct 14 10:06:37 crc kubenswrapper[4698]: I1014 10:06:37.165347 4698 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"11274a26e2292cdf2b793e6290657ed3cc737d1454b39f33b8f0272f67f70f3e"}
Oct 14 10:06:37 crc kubenswrapper[4698]: I1014 10:06:37.165363 4698 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"0f7d7f89348e04b617ba4ec3cf76f4fefdd36492c656aed8141f3775a7ca8e3b"}
Oct 14 10:06:37 crc kubenswrapper[4698]: I1014 10:06:37.166614 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-b7cbk_fbf10bbc-318d-4f46-83a0-fdbad9888201/kube-multus/2.log"
Oct 14 10:06:37 crc kubenswrapper[4698]: I1014 10:06:37.167879 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-b7cbk_fbf10bbc-318d-4f46-83a0-fdbad9888201/kube-multus/1.log"
Oct 14 10:06:37 crc kubenswrapper[4698]: I1014 10:06:37.167924 4698 generic.go:334] "Generic (PLEG): container finished" podID="fbf10bbc-318d-4f46-83a0-fdbad9888201" containerID="229bd4cd219e41d476b3856b757a9ed7e76bd1f073deb35fb68c0de19dbc7bfe" exitCode=2
Oct 14 10:06:37 crc kubenswrapper[4698]: I1014 10:06:37.167956 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-b7cbk" event={"ID":"fbf10bbc-318d-4f46-83a0-fdbad9888201","Type":"ContainerDied","Data":"229bd4cd219e41d476b3856b757a9ed7e76bd1f073deb35fb68c0de19dbc7bfe"}
Oct 14 10:06:37 crc kubenswrapper[4698]: I1014 10:06:37.167979 4698 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"52c9a8ccad3eed5af66c0178544ca46fabcfaab76d88471b2a62606f2f860522"}
Oct 14 10:06:37 crc kubenswrapper[4698]: I1014 10:06:37.168421 4698 scope.go:117] "RemoveContainer" containerID="229bd4cd219e41d476b3856b757a9ed7e76bd1f073deb35fb68c0de19dbc7bfe"
Oct 14 10:06:37 crc kubenswrapper[4698]: E1014 10:06:37.168647 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-b7cbk_openshift-multus(fbf10bbc-318d-4f46-83a0-fdbad9888201)\"" pod="openshift-multus/multus-b7cbk" podUID="fbf10bbc-318d-4f46-83a0-fdbad9888201"
Oct 14 10:06:37 crc kubenswrapper[4698]: I1014 10:06:37.177308 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-bdmsh"
Oct 14 10:06:37 crc kubenswrapper[4698]: I1014 10:06:37.187521 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-hspfz"]
Oct 14 10:06:37 crc kubenswrapper[4698]: I1014 10:06:37.191468 4698 scope.go:117] "RemoveContainer" containerID="6c7231fbb7fd473f2878319c720ec92ea8eade012d5912e8ad80355542ffa3d2"
Oct 14 10:06:37 crc kubenswrapper[4698]: I1014 10:06:37.195196 4698 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-hspfz"]
Oct 14 10:06:37 crc kubenswrapper[4698]: I1014 10:06:37.225719 4698 scope.go:117] "RemoveContainer" containerID="626a1ca2a8aa5d6c1b4078a0bc7b0aa540656abcedd8be206fbb93cdc96c1d95"
Oct 14 10:06:37 crc kubenswrapper[4698]: I1014 10:06:37.251474 4698 scope.go:117] "RemoveContainer" containerID="7e5759411e3d5ea6fa7515b4cd1255735c870b8c035fe586a38d2f436b752118"
Oct 14 10:06:37 crc kubenswrapper[4698]: I1014 10:06:37.270749 4698 scope.go:117] "RemoveContainer" containerID="a6e9733c3f4d25662c9da6201d0feee797de24acebb7d6e7c67d72709aae95db"
Oct 14 10:06:37 crc kubenswrapper[4698]: I1014 10:06:37.290505 4698 scope.go:117] "RemoveContainer" containerID="876998f4dcb79a6bc0faa35871730f5d2f9a07f94baf6da29ed9a9e794705ee5"
Oct 14 10:06:37 crc kubenswrapper[4698]: I1014 10:06:37.307593 4698 scope.go:117] "RemoveContainer" containerID="baefa979aee6f8fbe41df9542964f3c7b2a4331ec5111d2f622ec7ca41a0d96e"
Oct 14 10:06:37 crc kubenswrapper[4698]: I1014 10:06:37.331464 4698 scope.go:117] "RemoveContainer" containerID="90b4b75613b321be60e549ab6ab2eaaab57fd6b49aa01f5254a0e83ac2e0167e"
Oct 14 10:06:37 crc kubenswrapper[4698]: I1014 10:06:37.347875 4698 scope.go:117] "RemoveContainer" containerID="11274a26e2292cdf2b793e6290657ed3cc737d1454b39f33b8f0272f67f70f3e"
Oct 14 10:06:37 crc kubenswrapper[4698]: I1014 10:06:37.363447 4698 scope.go:117] "RemoveContainer" containerID="0f7d7f89348e04b617ba4ec3cf76f4fefdd36492c656aed8141f3775a7ca8e3b"
Oct 14 10:06:37 crc kubenswrapper[4698]: I1014 10:06:37.378659 4698 scope.go:117] "RemoveContainer" containerID="69b90c9c0473f0a1add0c188fd07ba784e387a9fa7de413d857cde1946f25dc6"
Oct 14 10:06:37 crc kubenswrapper[4698]: E1014 10:06:37.378975 4698 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"69b90c9c0473f0a1add0c188fd07ba784e387a9fa7de413d857cde1946f25dc6\": container with ID starting with 69b90c9c0473f0a1add0c188fd07ba784e387a9fa7de413d857cde1946f25dc6 not found: ID does not exist" containerID="69b90c9c0473f0a1add0c188fd07ba784e387a9fa7de413d857cde1946f25dc6"
Oct 14 10:06:37 crc kubenswrapper[4698]: I1014 10:06:37.379006 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"69b90c9c0473f0a1add0c188fd07ba784e387a9fa7de413d857cde1946f25dc6"} err="failed to get container status \"69b90c9c0473f0a1add0c188fd07ba784e387a9fa7de413d857cde1946f25dc6\": rpc error: code = NotFound desc = could not find container \"69b90c9c0473f0a1add0c188fd07ba784e387a9fa7de413d857cde1946f25dc6\": container with ID starting with 69b90c9c0473f0a1add0c188fd07ba784e387a9fa7de413d857cde1946f25dc6 not found: ID does not exist"
Oct 14 10:06:37 crc kubenswrapper[4698]: I1014 10:06:37.379028 4698 scope.go:117] "RemoveContainer" containerID="6c7231fbb7fd473f2878319c720ec92ea8eade012d5912e8ad80355542ffa3d2"
Oct 14 10:06:37 crc kubenswrapper[4698]: E1014 10:06:37.379211 4698 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6c7231fbb7fd473f2878319c720ec92ea8eade012d5912e8ad80355542ffa3d2\": container with ID starting with 6c7231fbb7fd473f2878319c720ec92ea8eade012d5912e8ad80355542ffa3d2 not found: ID does not exist" containerID="6c7231fbb7fd473f2878319c720ec92ea8eade012d5912e8ad80355542ffa3d2"
Oct 14 10:06:37 crc kubenswrapper[4698]: I1014 10:06:37.379242 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6c7231fbb7fd473f2878319c720ec92ea8eade012d5912e8ad80355542ffa3d2"} err="failed to get container status \"6c7231fbb7fd473f2878319c720ec92ea8eade012d5912e8ad80355542ffa3d2\": rpc error: code = NotFound desc = could not find container \"6c7231fbb7fd473f2878319c720ec92ea8eade012d5912e8ad80355542ffa3d2\": container with ID starting with 6c7231fbb7fd473f2878319c720ec92ea8eade012d5912e8ad80355542ffa3d2 not found: ID does not exist"
Oct 14 10:06:37 crc kubenswrapper[4698]: I1014 10:06:37.379259 4698 scope.go:117] "RemoveContainer" containerID="626a1ca2a8aa5d6c1b4078a0bc7b0aa540656abcedd8be206fbb93cdc96c1d95"
Oct 14 10:06:37 crc kubenswrapper[4698]: E1014 10:06:37.379573 4698 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"626a1ca2a8aa5d6c1b4078a0bc7b0aa540656abcedd8be206fbb93cdc96c1d95\": container with ID starting with 626a1ca2a8aa5d6c1b4078a0bc7b0aa540656abcedd8be206fbb93cdc96c1d95 not found: ID does not exist" containerID="626a1ca2a8aa5d6c1b4078a0bc7b0aa540656abcedd8be206fbb93cdc96c1d95"
Oct 14 10:06:37 crc kubenswrapper[4698]: I1014 10:06:37.379603 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"626a1ca2a8aa5d6c1b4078a0bc7b0aa540656abcedd8be206fbb93cdc96c1d95"} err="failed to get container status \"626a1ca2a8aa5d6c1b4078a0bc7b0aa540656abcedd8be206fbb93cdc96c1d95\": rpc error: code = NotFound desc = could not find container \"626a1ca2a8aa5d6c1b4078a0bc7b0aa540656abcedd8be206fbb93cdc96c1d95\": container with ID starting with 626a1ca2a8aa5d6c1b4078a0bc7b0aa540656abcedd8be206fbb93cdc96c1d95 not found: ID does not exist"
Oct 14 10:06:37 crc kubenswrapper[4698]: I1014 10:06:37.379622 4698 scope.go:117] "RemoveContainer" containerID="7e5759411e3d5ea6fa7515b4cd1255735c870b8c035fe586a38d2f436b752118"
Oct 14 10:06:37 crc kubenswrapper[4698]: E1014 10:06:37.379828 4698 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7e5759411e3d5ea6fa7515b4cd1255735c870b8c035fe586a38d2f436b752118\": container with ID starting with 7e5759411e3d5ea6fa7515b4cd1255735c870b8c035fe586a38d2f436b752118 not found: ID does not exist" containerID="7e5759411e3d5ea6fa7515b4cd1255735c870b8c035fe586a38d2f436b752118"
Oct 14 10:06:37 crc kubenswrapper[4698]: I1014 10:06:37.379844 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7e5759411e3d5ea6fa7515b4cd1255735c870b8c035fe586a38d2f436b752118"} err="failed to get container status \"7e5759411e3d5ea6fa7515b4cd1255735c870b8c035fe586a38d2f436b752118\": rpc error: code = NotFound desc = could not find container \"7e5759411e3d5ea6fa7515b4cd1255735c870b8c035fe586a38d2f436b752118\": container with ID starting with 7e5759411e3d5ea6fa7515b4cd1255735c870b8c035fe586a38d2f436b752118 not found: ID does not exist"
Oct 14 10:06:37 crc kubenswrapper[4698]: I1014 10:06:37.379858 4698 scope.go:117] "RemoveContainer" containerID="a6e9733c3f4d25662c9da6201d0feee797de24acebb7d6e7c67d72709aae95db"
Oct 14 10:06:37 crc kubenswrapper[4698]: E1014 10:06:37.380124 4698 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a6e9733c3f4d25662c9da6201d0feee797de24acebb7d6e7c67d72709aae95db\": container with ID starting with a6e9733c3f4d25662c9da6201d0feee797de24acebb7d6e7c67d72709aae95db not found: ID does not exist" containerID="a6e9733c3f4d25662c9da6201d0feee797de24acebb7d6e7c67d72709aae95db"
Oct 14 10:06:37 crc kubenswrapper[4698]: I1014 10:06:37.380155 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a6e9733c3f4d25662c9da6201d0feee797de24acebb7d6e7c67d72709aae95db"} err="failed to get container status \"a6e9733c3f4d25662c9da6201d0feee797de24acebb7d6e7c67d72709aae95db\": rpc error: code = NotFound desc = could not find container \"a6e9733c3f4d25662c9da6201d0feee797de24acebb7d6e7c67d72709aae95db\": container with ID starting with a6e9733c3f4d25662c9da6201d0feee797de24acebb7d6e7c67d72709aae95db not found: ID does not exist"
Oct 14 10:06:37 crc kubenswrapper[4698]: I1014 10:06:37.380175 4698 scope.go:117] "RemoveContainer" containerID="876998f4dcb79a6bc0faa35871730f5d2f9a07f94baf6da29ed9a9e794705ee5"
Oct 14 10:06:37 crc kubenswrapper[4698]: E1014 10:06:37.380440 4698 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"876998f4dcb79a6bc0faa35871730f5d2f9a07f94baf6da29ed9a9e794705ee5\": container with ID starting with 876998f4dcb79a6bc0faa35871730f5d2f9a07f94baf6da29ed9a9e794705ee5 not found: ID does not exist" containerID="876998f4dcb79a6bc0faa35871730f5d2f9a07f94baf6da29ed9a9e794705ee5"
Oct 14 10:06:37 crc kubenswrapper[4698]: I1014 10:06:37.380467 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"876998f4dcb79a6bc0faa35871730f5d2f9a07f94baf6da29ed9a9e794705ee5"} err="failed to get container status \"876998f4dcb79a6bc0faa35871730f5d2f9a07f94baf6da29ed9a9e794705ee5\": rpc error: code = NotFound desc = could not find container \"876998f4dcb79a6bc0faa35871730f5d2f9a07f94baf6da29ed9a9e794705ee5\": container with ID starting with 876998f4dcb79a6bc0faa35871730f5d2f9a07f94baf6da29ed9a9e794705ee5 not found: ID does not exist"
Oct 14 10:06:37 crc kubenswrapper[4698]: I1014 10:06:37.380482 4698 scope.go:117] "RemoveContainer" containerID="baefa979aee6f8fbe41df9542964f3c7b2a4331ec5111d2f622ec7ca41a0d96e"
Oct 14 10:06:37 crc kubenswrapper[4698]: E1014 10:06:37.380707 4698 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"baefa979aee6f8fbe41df9542964f3c7b2a4331ec5111d2f622ec7ca41a0d96e\": container with ID starting with baefa979aee6f8fbe41df9542964f3c7b2a4331ec5111d2f622ec7ca41a0d96e not found: ID does not exist" containerID="baefa979aee6f8fbe41df9542964f3c7b2a4331ec5111d2f622ec7ca41a0d96e"
Oct 14 10:06:37 crc kubenswrapper[4698]: I1014 10:06:37.380732 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"baefa979aee6f8fbe41df9542964f3c7b2a4331ec5111d2f622ec7ca41a0d96e"} err="failed to get container status \"baefa979aee6f8fbe41df9542964f3c7b2a4331ec5111d2f622ec7ca41a0d96e\": rpc error: code = NotFound desc = could not find container \"baefa979aee6f8fbe41df9542964f3c7b2a4331ec5111d2f622ec7ca41a0d96e\": container with ID starting with baefa979aee6f8fbe41df9542964f3c7b2a4331ec5111d2f622ec7ca41a0d96e not found: ID does not exist"
Oct 14 10:06:37 crc kubenswrapper[4698]: I1014 10:06:37.380747 4698 scope.go:117] "RemoveContainer" containerID="90b4b75613b321be60e549ab6ab2eaaab57fd6b49aa01f5254a0e83ac2e0167e"
Oct 14 10:06:37 crc kubenswrapper[4698]: E1014 10:06:37.381352 4698 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"90b4b75613b321be60e549ab6ab2eaaab57fd6b49aa01f5254a0e83ac2e0167e\": container with ID starting with 90b4b75613b321be60e549ab6ab2eaaab57fd6b49aa01f5254a0e83ac2e0167e not found: ID does not exist" containerID="90b4b75613b321be60e549ab6ab2eaaab57fd6b49aa01f5254a0e83ac2e0167e"
Oct 14 10:06:37 crc kubenswrapper[4698]: I1014 10:06:37.381374 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"90b4b75613b321be60e549ab6ab2eaaab57fd6b49aa01f5254a0e83ac2e0167e"} err="failed to get container status \"90b4b75613b321be60e549ab6ab2eaaab57fd6b49aa01f5254a0e83ac2e0167e\": rpc error: code = NotFound desc = could not find container \"90b4b75613b321be60e549ab6ab2eaaab57fd6b49aa01f5254a0e83ac2e0167e\": container with ID starting with 90b4b75613b321be60e549ab6ab2eaaab57fd6b49aa01f5254a0e83ac2e0167e not found: ID does not exist"
Oct 14 10:06:37 crc kubenswrapper[4698]: I1014 10:06:37.381387 4698 scope.go:117] "RemoveContainer" containerID="11274a26e2292cdf2b793e6290657ed3cc737d1454b39f33b8f0272f67f70f3e"
Oct 14 10:06:37 crc kubenswrapper[4698]: E1014 10:06:37.381647 4698 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"11274a26e2292cdf2b793e6290657ed3cc737d1454b39f33b8f0272f67f70f3e\": container with ID starting with 11274a26e2292cdf2b793e6290657ed3cc737d1454b39f33b8f0272f67f70f3e not found: ID does not exist" containerID="11274a26e2292cdf2b793e6290657ed3cc737d1454b39f33b8f0272f67f70f3e"
Oct 14 10:06:37 crc kubenswrapper[4698]: I1014 10:06:37.381669 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"11274a26e2292cdf2b793e6290657ed3cc737d1454b39f33b8f0272f67f70f3e"} err="failed to get container status \"11274a26e2292cdf2b793e6290657ed3cc737d1454b39f33b8f0272f67f70f3e\": rpc error: code = NotFound desc = could not find container \"11274a26e2292cdf2b793e6290657ed3cc737d1454b39f33b8f0272f67f70f3e\": container with ID starting with 11274a26e2292cdf2b793e6290657ed3cc737d1454b39f33b8f0272f67f70f3e not found: ID does not exist"
Oct 14 10:06:37 crc kubenswrapper[4698]: I1014 10:06:37.381685 4698 scope.go:117] "RemoveContainer" containerID="0f7d7f89348e04b617ba4ec3cf76f4fefdd36492c656aed8141f3775a7ca8e3b"
Oct 14 10:06:37 crc kubenswrapper[4698]: E1014 10:06:37.381984 4698 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0f7d7f89348e04b617ba4ec3cf76f4fefdd36492c656aed8141f3775a7ca8e3b\": container with ID starting with 0f7d7f89348e04b617ba4ec3cf76f4fefdd36492c656aed8141f3775a7ca8e3b not found: ID does not exist" containerID="0f7d7f89348e04b617ba4ec3cf76f4fefdd36492c656aed8141f3775a7ca8e3b"
Oct 14 10:06:37 crc kubenswrapper[4698]: I1014 10:06:37.382007 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0f7d7f89348e04b617ba4ec3cf76f4fefdd36492c656aed8141f3775a7ca8e3b"} err="failed to get container status \"0f7d7f89348e04b617ba4ec3cf76f4fefdd36492c656aed8141f3775a7ca8e3b\": rpc error: code = NotFound desc = could not find container \"0f7d7f89348e04b617ba4ec3cf76f4fefdd36492c656aed8141f3775a7ca8e3b\": container with ID starting with 0f7d7f89348e04b617ba4ec3cf76f4fefdd36492c656aed8141f3775a7ca8e3b not found: ID does not exist"
Oct 14 10:06:37 crc kubenswrapper[4698]: I1014 10:06:37.382023 4698 scope.go:117] "RemoveContainer" containerID="69b90c9c0473f0a1add0c188fd07ba784e387a9fa7de413d857cde1946f25dc6"
Oct 14 10:06:37 crc kubenswrapper[4698]: I1014 10:06:37.382331 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"69b90c9c0473f0a1add0c188fd07ba784e387a9fa7de413d857cde1946f25dc6"} err="failed to get container status \"69b90c9c0473f0a1add0c188fd07ba784e387a9fa7de413d857cde1946f25dc6\": rpc error: code = NotFound desc = could not find container \"69b90c9c0473f0a1add0c188fd07ba784e387a9fa7de413d857cde1946f25dc6\": container with ID starting with 69b90c9c0473f0a1add0c188fd07ba784e387a9fa7de413d857cde1946f25dc6 not found: ID does not exist"
Oct 14 10:06:37 crc kubenswrapper[4698]: I1014 10:06:37.382383 4698 scope.go:117] "RemoveContainer" containerID="6c7231fbb7fd473f2878319c720ec92ea8eade012d5912e8ad80355542ffa3d2"
Oct 14 10:06:37 crc kubenswrapper[4698]: I1014 10:06:37.382713 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6c7231fbb7fd473f2878319c720ec92ea8eade012d5912e8ad80355542ffa3d2"} err="failed to get container status \"6c7231fbb7fd473f2878319c720ec92ea8eade012d5912e8ad80355542ffa3d2\": rpc error: code = NotFound desc = could not find container \"6c7231fbb7fd473f2878319c720ec92ea8eade012d5912e8ad80355542ffa3d2\": container with ID starting with 6c7231fbb7fd473f2878319c720ec92ea8eade012d5912e8ad80355542ffa3d2 not found: ID does not exist"
Oct 14 10:06:37 crc kubenswrapper[4698]: I1014 10:06:37.382738 4698 scope.go:117] "RemoveContainer" containerID="626a1ca2a8aa5d6c1b4078a0bc7b0aa540656abcedd8be206fbb93cdc96c1d95"
Oct 14 10:06:37 crc kubenswrapper[4698]: I1014 10:06:37.383002 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"626a1ca2a8aa5d6c1b4078a0bc7b0aa540656abcedd8be206fbb93cdc96c1d95"} err="failed to get container status \"626a1ca2a8aa5d6c1b4078a0bc7b0aa540656abcedd8be206fbb93cdc96c1d95\": rpc error: code = NotFound desc = could not find container \"626a1ca2a8aa5d6c1b4078a0bc7b0aa540656abcedd8be206fbb93cdc96c1d95\": container with ID starting with 626a1ca2a8aa5d6c1b4078a0bc7b0aa540656abcedd8be206fbb93cdc96c1d95 not found: ID does not exist"
Oct 14 10:06:37 crc kubenswrapper[4698]: I1014 10:06:37.383017 4698 scope.go:117] "RemoveContainer" containerID="7e5759411e3d5ea6fa7515b4cd1255735c870b8c035fe586a38d2f436b752118"
Oct 14 10:06:37 crc kubenswrapper[4698]: I1014 10:06:37.383260 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7e5759411e3d5ea6fa7515b4cd1255735c870b8c035fe586a38d2f436b752118"} err="failed to get container status \"7e5759411e3d5ea6fa7515b4cd1255735c870b8c035fe586a38d2f436b752118\": rpc error: code = NotFound desc = could not find container \"7e5759411e3d5ea6fa7515b4cd1255735c870b8c035fe586a38d2f436b752118\": container with ID starting with 7e5759411e3d5ea6fa7515b4cd1255735c870b8c035fe586a38d2f436b752118 not found: ID does not exist"
Oct 14 10:06:37 crc kubenswrapper[4698]: I1014 10:06:37.383281 4698 scope.go:117] "RemoveContainer" containerID="a6e9733c3f4d25662c9da6201d0feee797de24acebb7d6e7c67d72709aae95db"
Oct 14 10:06:37 crc kubenswrapper[4698]: I1014 10:06:37.383466 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a6e9733c3f4d25662c9da6201d0feee797de24acebb7d6e7c67d72709aae95db"} err="failed to get container status \"a6e9733c3f4d25662c9da6201d0feee797de24acebb7d6e7c67d72709aae95db\": rpc error: code = NotFound desc = could not find container \"a6e9733c3f4d25662c9da6201d0feee797de24acebb7d6e7c67d72709aae95db\": container with ID starting with a6e9733c3f4d25662c9da6201d0feee797de24acebb7d6e7c67d72709aae95db not found: ID does not exist"
Oct 14 10:06:37 crc kubenswrapper[4698]: I1014 10:06:37.383489 4698 scope.go:117] "RemoveContainer" containerID="876998f4dcb79a6bc0faa35871730f5d2f9a07f94baf6da29ed9a9e794705ee5"
Oct 14 10:06:37 crc kubenswrapper[4698]: I1014 10:06:37.383636 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"876998f4dcb79a6bc0faa35871730f5d2f9a07f94baf6da29ed9a9e794705ee5"} err="failed to get container status \"876998f4dcb79a6bc0faa35871730f5d2f9a07f94baf6da29ed9a9e794705ee5\": rpc error: code = NotFound desc = could not find container \"876998f4dcb79a6bc0faa35871730f5d2f9a07f94baf6da29ed9a9e794705ee5\": container with ID starting with 876998f4dcb79a6bc0faa35871730f5d2f9a07f94baf6da29ed9a9e794705ee5 not found: ID does not exist"
Oct 14 10:06:37 crc kubenswrapper[4698]: I1014 10:06:37.383661 4698 scope.go:117] "RemoveContainer" containerID="baefa979aee6f8fbe41df9542964f3c7b2a4331ec5111d2f622ec7ca41a0d96e"
Oct 14 10:06:37 crc kubenswrapper[4698]: I1014 10:06:37.383956 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"baefa979aee6f8fbe41df9542964f3c7b2a4331ec5111d2f622ec7ca41a0d96e"} err="failed to get container status \"baefa979aee6f8fbe41df9542964f3c7b2a4331ec5111d2f622ec7ca41a0d96e\": rpc error: code = NotFound desc = could not find container \"baefa979aee6f8fbe41df9542964f3c7b2a4331ec5111d2f622ec7ca41a0d96e\": container with ID starting with baefa979aee6f8fbe41df9542964f3c7b2a4331ec5111d2f622ec7ca41a0d96e not found: ID does not exist"
Oct 14 10:06:37 crc kubenswrapper[4698]: I1014 10:06:37.383975 4698 scope.go:117] "RemoveContainer" containerID="90b4b75613b321be60e549ab6ab2eaaab57fd6b49aa01f5254a0e83ac2e0167e"
Oct 14 10:06:37 crc kubenswrapper[4698]: I1014 10:06:37.384245 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"90b4b75613b321be60e549ab6ab2eaaab57fd6b49aa01f5254a0e83ac2e0167e"} err="failed to get container status \"90b4b75613b321be60e549ab6ab2eaaab57fd6b49aa01f5254a0e83ac2e0167e\": rpc error: code = NotFound desc = could not find container \"90b4b75613b321be60e549ab6ab2eaaab57fd6b49aa01f5254a0e83ac2e0167e\": container with ID starting with 90b4b75613b321be60e549ab6ab2eaaab57fd6b49aa01f5254a0e83ac2e0167e not found: ID does not exist"
Oct 14 10:06:37 crc kubenswrapper[4698]: I1014 10:06:37.384263 4698 scope.go:117] "RemoveContainer" containerID="11274a26e2292cdf2b793e6290657ed3cc737d1454b39f33b8f0272f67f70f3e"
Oct 14 10:06:37 crc kubenswrapper[4698]: I1014 10:06:37.384424 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"11274a26e2292cdf2b793e6290657ed3cc737d1454b39f33b8f0272f67f70f3e"} err="failed to get container status \"11274a26e2292cdf2b793e6290657ed3cc737d1454b39f33b8f0272f67f70f3e\": rpc error: code = NotFound desc = could not find container \"11274a26e2292cdf2b793e6290657ed3cc737d1454b39f33b8f0272f67f70f3e\": container with ID starting with 11274a26e2292cdf2b793e6290657ed3cc737d1454b39f33b8f0272f67f70f3e not found: ID does not exist"
Oct 14 10:06:37 crc kubenswrapper[4698]: I1014 10:06:37.384442 4698 scope.go:117] "RemoveContainer" containerID="0f7d7f89348e04b617ba4ec3cf76f4fefdd36492c656aed8141f3775a7ca8e3b"
Oct 14 10:06:37 crc kubenswrapper[4698]: I1014 10:06:37.384618 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0f7d7f89348e04b617ba4ec3cf76f4fefdd36492c656aed8141f3775a7ca8e3b"} err="failed to get container status \"0f7d7f89348e04b617ba4ec3cf76f4fefdd36492c656aed8141f3775a7ca8e3b\": rpc error: code = NotFound desc = could not find container \"0f7d7f89348e04b617ba4ec3cf76f4fefdd36492c656aed8141f3775a7ca8e3b\": container with ID starting with 0f7d7f89348e04b617ba4ec3cf76f4fefdd36492c656aed8141f3775a7ca8e3b not found: ID does not exist"
Oct 14 10:06:37 crc kubenswrapper[4698]: I1014 10:06:37.384637 4698 scope.go:117] "RemoveContainer" containerID="69b90c9c0473f0a1add0c188fd07ba784e387a9fa7de413d857cde1946f25dc6"
Oct 14 10:06:37 crc kubenswrapper[4698]: I1014 10:06:37.384844 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"69b90c9c0473f0a1add0c188fd07ba784e387a9fa7de413d857cde1946f25dc6"} err="failed to get container status \"69b90c9c0473f0a1add0c188fd07ba784e387a9fa7de413d857cde1946f25dc6\": rpc error: code = NotFound desc = could not find container \"69b90c9c0473f0a1add0c188fd07ba784e387a9fa7de413d857cde1946f25dc6\": container with ID starting with 69b90c9c0473f0a1add0c188fd07ba784e387a9fa7de413d857cde1946f25dc6 not found: ID does not exist"
Oct 14 10:06:37 crc kubenswrapper[4698]: I1014 10:06:37.384865 4698 scope.go:117] "RemoveContainer" containerID="6c7231fbb7fd473f2878319c720ec92ea8eade012d5912e8ad80355542ffa3d2"
Oct 14 10:06:37 crc kubenswrapper[4698]: I1014 10:06:37.385168 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6c7231fbb7fd473f2878319c720ec92ea8eade012d5912e8ad80355542ffa3d2"} err="failed to get container status \"6c7231fbb7fd473f2878319c720ec92ea8eade012d5912e8ad80355542ffa3d2\": rpc error: code = NotFound desc = could not find container \"6c7231fbb7fd473f2878319c720ec92ea8eade012d5912e8ad80355542ffa3d2\": container with ID starting with 6c7231fbb7fd473f2878319c720ec92ea8eade012d5912e8ad80355542ffa3d2 not found: ID does not exist"
Oct 14 10:06:37 crc kubenswrapper[4698]: I1014 10:06:37.385197 4698 scope.go:117] "RemoveContainer" containerID="626a1ca2a8aa5d6c1b4078a0bc7b0aa540656abcedd8be206fbb93cdc96c1d95"
Oct 14 10:06:37 crc kubenswrapper[4698]: I1014 10:06:37.385494 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"626a1ca2a8aa5d6c1b4078a0bc7b0aa540656abcedd8be206fbb93cdc96c1d95"} err="failed to get container status \"626a1ca2a8aa5d6c1b4078a0bc7b0aa540656abcedd8be206fbb93cdc96c1d95\": rpc error: code = NotFound desc = could not find container \"626a1ca2a8aa5d6c1b4078a0bc7b0aa540656abcedd8be206fbb93cdc96c1d95\": container with ID starting with 626a1ca2a8aa5d6c1b4078a0bc7b0aa540656abcedd8be206fbb93cdc96c1d95 not found: ID does not exist"
Oct 14 10:06:37 crc kubenswrapper[4698]: I1014 10:06:37.385534 4698 scope.go:117] "RemoveContainer" containerID="7e5759411e3d5ea6fa7515b4cd1255735c870b8c035fe586a38d2f436b752118"
Oct 14 10:06:37 crc kubenswrapper[4698]: I1014 10:06:37.385910 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7e5759411e3d5ea6fa7515b4cd1255735c870b8c035fe586a38d2f436b752118"} err="failed to get container status \"7e5759411e3d5ea6fa7515b4cd1255735c870b8c035fe586a38d2f436b752118\": rpc error: code = NotFound desc = could not find container \"7e5759411e3d5ea6fa7515b4cd1255735c870b8c035fe586a38d2f436b752118\": container with ID starting with 7e5759411e3d5ea6fa7515b4cd1255735c870b8c035fe586a38d2f436b752118 not found: ID does not exist"
Oct 14 10:06:37 crc kubenswrapper[4698]: I1014 10:06:37.385941 4698 scope.go:117] "RemoveContainer" containerID="a6e9733c3f4d25662c9da6201d0feee797de24acebb7d6e7c67d72709aae95db"
Oct 14 10:06:37 crc kubenswrapper[4698]: I1014 10:06:37.386246 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a6e9733c3f4d25662c9da6201d0feee797de24acebb7d6e7c67d72709aae95db"} err="failed to get container status \"a6e9733c3f4d25662c9da6201d0feee797de24acebb7d6e7c67d72709aae95db\": rpc error: code = NotFound desc = could not find container \"a6e9733c3f4d25662c9da6201d0feee797de24acebb7d6e7c67d72709aae95db\": container with ID starting with a6e9733c3f4d25662c9da6201d0feee797de24acebb7d6e7c67d72709aae95db not found: ID does not exist"
Oct 14 10:06:37 crc kubenswrapper[4698]: I1014 10:06:37.386296 4698 scope.go:117] "RemoveContainer" containerID="876998f4dcb79a6bc0faa35871730f5d2f9a07f94baf6da29ed9a9e794705ee5"
Oct 14 10:06:37 crc kubenswrapper[4698]: I1014 10:06:37.386548 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"876998f4dcb79a6bc0faa35871730f5d2f9a07f94baf6da29ed9a9e794705ee5"} err="failed to get container status \"876998f4dcb79a6bc0faa35871730f5d2f9a07f94baf6da29ed9a9e794705ee5\": rpc error: code = NotFound desc = could not find container \"876998f4dcb79a6bc0faa35871730f5d2f9a07f94baf6da29ed9a9e794705ee5\": container with ID starting with 876998f4dcb79a6bc0faa35871730f5d2f9a07f94baf6da29ed9a9e794705ee5 not found: ID does not exist"
Oct 14 10:06:37 crc kubenswrapper[4698]: I1014 10:06:37.386578 4698 scope.go:117] "RemoveContainer" containerID="baefa979aee6f8fbe41df9542964f3c7b2a4331ec5111d2f622ec7ca41a0d96e"
Oct 14 10:06:37 crc kubenswrapper[4698]: I1014 10:06:37.386817 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"baefa979aee6f8fbe41df9542964f3c7b2a4331ec5111d2f622ec7ca41a0d96e"} err="failed to get container status \"baefa979aee6f8fbe41df9542964f3c7b2a4331ec5111d2f622ec7ca41a0d96e\": rpc error: code = NotFound desc = could not find container \"baefa979aee6f8fbe41df9542964f3c7b2a4331ec5111d2f622ec7ca41a0d96e\": container with ID starting with baefa979aee6f8fbe41df9542964f3c7b2a4331ec5111d2f622ec7ca41a0d96e not found: ID does not exist"
Oct 14 10:06:37 crc kubenswrapper[4698]: I1014 10:06:37.386842 4698 scope.go:117] "RemoveContainer"
containerID="90b4b75613b321be60e549ab6ab2eaaab57fd6b49aa01f5254a0e83ac2e0167e" Oct 14 10:06:37 crc kubenswrapper[4698]: I1014 10:06:37.387068 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"90b4b75613b321be60e549ab6ab2eaaab57fd6b49aa01f5254a0e83ac2e0167e"} err="failed to get container status \"90b4b75613b321be60e549ab6ab2eaaab57fd6b49aa01f5254a0e83ac2e0167e\": rpc error: code = NotFound desc = could not find container \"90b4b75613b321be60e549ab6ab2eaaab57fd6b49aa01f5254a0e83ac2e0167e\": container with ID starting with 90b4b75613b321be60e549ab6ab2eaaab57fd6b49aa01f5254a0e83ac2e0167e not found: ID does not exist" Oct 14 10:06:37 crc kubenswrapper[4698]: I1014 10:06:37.387099 4698 scope.go:117] "RemoveContainer" containerID="11274a26e2292cdf2b793e6290657ed3cc737d1454b39f33b8f0272f67f70f3e" Oct 14 10:06:37 crc kubenswrapper[4698]: I1014 10:06:37.387393 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"11274a26e2292cdf2b793e6290657ed3cc737d1454b39f33b8f0272f67f70f3e"} err="failed to get container status \"11274a26e2292cdf2b793e6290657ed3cc737d1454b39f33b8f0272f67f70f3e\": rpc error: code = NotFound desc = could not find container \"11274a26e2292cdf2b793e6290657ed3cc737d1454b39f33b8f0272f67f70f3e\": container with ID starting with 11274a26e2292cdf2b793e6290657ed3cc737d1454b39f33b8f0272f67f70f3e not found: ID does not exist" Oct 14 10:06:37 crc kubenswrapper[4698]: I1014 10:06:37.387422 4698 scope.go:117] "RemoveContainer" containerID="0f7d7f89348e04b617ba4ec3cf76f4fefdd36492c656aed8141f3775a7ca8e3b" Oct 14 10:06:37 crc kubenswrapper[4698]: I1014 10:06:37.387686 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0f7d7f89348e04b617ba4ec3cf76f4fefdd36492c656aed8141f3775a7ca8e3b"} err="failed to get container status \"0f7d7f89348e04b617ba4ec3cf76f4fefdd36492c656aed8141f3775a7ca8e3b\": rpc error: code = NotFound desc = could 
not find container \"0f7d7f89348e04b617ba4ec3cf76f4fefdd36492c656aed8141f3775a7ca8e3b\": container with ID starting with 0f7d7f89348e04b617ba4ec3cf76f4fefdd36492c656aed8141f3775a7ca8e3b not found: ID does not exist" Oct 14 10:06:37 crc kubenswrapper[4698]: I1014 10:06:37.387715 4698 scope.go:117] "RemoveContainer" containerID="69b90c9c0473f0a1add0c188fd07ba784e387a9fa7de413d857cde1946f25dc6" Oct 14 10:06:37 crc kubenswrapper[4698]: I1014 10:06:37.388029 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"69b90c9c0473f0a1add0c188fd07ba784e387a9fa7de413d857cde1946f25dc6"} err="failed to get container status \"69b90c9c0473f0a1add0c188fd07ba784e387a9fa7de413d857cde1946f25dc6\": rpc error: code = NotFound desc = could not find container \"69b90c9c0473f0a1add0c188fd07ba784e387a9fa7de413d857cde1946f25dc6\": container with ID starting with 69b90c9c0473f0a1add0c188fd07ba784e387a9fa7de413d857cde1946f25dc6 not found: ID does not exist" Oct 14 10:06:37 crc kubenswrapper[4698]: I1014 10:06:37.388074 4698 scope.go:117] "RemoveContainer" containerID="6c7231fbb7fd473f2878319c720ec92ea8eade012d5912e8ad80355542ffa3d2" Oct 14 10:06:37 crc kubenswrapper[4698]: I1014 10:06:37.388381 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6c7231fbb7fd473f2878319c720ec92ea8eade012d5912e8ad80355542ffa3d2"} err="failed to get container status \"6c7231fbb7fd473f2878319c720ec92ea8eade012d5912e8ad80355542ffa3d2\": rpc error: code = NotFound desc = could not find container \"6c7231fbb7fd473f2878319c720ec92ea8eade012d5912e8ad80355542ffa3d2\": container with ID starting with 6c7231fbb7fd473f2878319c720ec92ea8eade012d5912e8ad80355542ffa3d2 not found: ID does not exist" Oct 14 10:06:37 crc kubenswrapper[4698]: I1014 10:06:37.388407 4698 scope.go:117] "RemoveContainer" containerID="626a1ca2a8aa5d6c1b4078a0bc7b0aa540656abcedd8be206fbb93cdc96c1d95" Oct 14 10:06:37 crc kubenswrapper[4698]: I1014 
10:06:37.388717 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"626a1ca2a8aa5d6c1b4078a0bc7b0aa540656abcedd8be206fbb93cdc96c1d95"} err="failed to get container status \"626a1ca2a8aa5d6c1b4078a0bc7b0aa540656abcedd8be206fbb93cdc96c1d95\": rpc error: code = NotFound desc = could not find container \"626a1ca2a8aa5d6c1b4078a0bc7b0aa540656abcedd8be206fbb93cdc96c1d95\": container with ID starting with 626a1ca2a8aa5d6c1b4078a0bc7b0aa540656abcedd8be206fbb93cdc96c1d95 not found: ID does not exist" Oct 14 10:06:37 crc kubenswrapper[4698]: I1014 10:06:37.388744 4698 scope.go:117] "RemoveContainer" containerID="7e5759411e3d5ea6fa7515b4cd1255735c870b8c035fe586a38d2f436b752118" Oct 14 10:06:37 crc kubenswrapper[4698]: I1014 10:06:37.390272 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7e5759411e3d5ea6fa7515b4cd1255735c870b8c035fe586a38d2f436b752118"} err="failed to get container status \"7e5759411e3d5ea6fa7515b4cd1255735c870b8c035fe586a38d2f436b752118\": rpc error: code = NotFound desc = could not find container \"7e5759411e3d5ea6fa7515b4cd1255735c870b8c035fe586a38d2f436b752118\": container with ID starting with 7e5759411e3d5ea6fa7515b4cd1255735c870b8c035fe586a38d2f436b752118 not found: ID does not exist" Oct 14 10:06:37 crc kubenswrapper[4698]: I1014 10:06:37.390320 4698 scope.go:117] "RemoveContainer" containerID="a6e9733c3f4d25662c9da6201d0feee797de24acebb7d6e7c67d72709aae95db" Oct 14 10:06:37 crc kubenswrapper[4698]: I1014 10:06:37.390594 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a6e9733c3f4d25662c9da6201d0feee797de24acebb7d6e7c67d72709aae95db"} err="failed to get container status \"a6e9733c3f4d25662c9da6201d0feee797de24acebb7d6e7c67d72709aae95db\": rpc error: code = NotFound desc = could not find container \"a6e9733c3f4d25662c9da6201d0feee797de24acebb7d6e7c67d72709aae95db\": container with ID starting with 
a6e9733c3f4d25662c9da6201d0feee797de24acebb7d6e7c67d72709aae95db not found: ID does not exist" Oct 14 10:06:37 crc kubenswrapper[4698]: I1014 10:06:37.390619 4698 scope.go:117] "RemoveContainer" containerID="876998f4dcb79a6bc0faa35871730f5d2f9a07f94baf6da29ed9a9e794705ee5" Oct 14 10:06:37 crc kubenswrapper[4698]: I1014 10:06:37.390899 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"876998f4dcb79a6bc0faa35871730f5d2f9a07f94baf6da29ed9a9e794705ee5"} err="failed to get container status \"876998f4dcb79a6bc0faa35871730f5d2f9a07f94baf6da29ed9a9e794705ee5\": rpc error: code = NotFound desc = could not find container \"876998f4dcb79a6bc0faa35871730f5d2f9a07f94baf6da29ed9a9e794705ee5\": container with ID starting with 876998f4dcb79a6bc0faa35871730f5d2f9a07f94baf6da29ed9a9e794705ee5 not found: ID does not exist" Oct 14 10:06:37 crc kubenswrapper[4698]: I1014 10:06:37.390924 4698 scope.go:117] "RemoveContainer" containerID="baefa979aee6f8fbe41df9542964f3c7b2a4331ec5111d2f622ec7ca41a0d96e" Oct 14 10:06:37 crc kubenswrapper[4698]: I1014 10:06:37.391259 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"baefa979aee6f8fbe41df9542964f3c7b2a4331ec5111d2f622ec7ca41a0d96e"} err="failed to get container status \"baefa979aee6f8fbe41df9542964f3c7b2a4331ec5111d2f622ec7ca41a0d96e\": rpc error: code = NotFound desc = could not find container \"baefa979aee6f8fbe41df9542964f3c7b2a4331ec5111d2f622ec7ca41a0d96e\": container with ID starting with baefa979aee6f8fbe41df9542964f3c7b2a4331ec5111d2f622ec7ca41a0d96e not found: ID does not exist" Oct 14 10:06:37 crc kubenswrapper[4698]: I1014 10:06:37.391290 4698 scope.go:117] "RemoveContainer" containerID="90b4b75613b321be60e549ab6ab2eaaab57fd6b49aa01f5254a0e83ac2e0167e" Oct 14 10:06:37 crc kubenswrapper[4698]: I1014 10:06:37.392426 4698 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"90b4b75613b321be60e549ab6ab2eaaab57fd6b49aa01f5254a0e83ac2e0167e"} err="failed to get container status \"90b4b75613b321be60e549ab6ab2eaaab57fd6b49aa01f5254a0e83ac2e0167e\": rpc error: code = NotFound desc = could not find container \"90b4b75613b321be60e549ab6ab2eaaab57fd6b49aa01f5254a0e83ac2e0167e\": container with ID starting with 90b4b75613b321be60e549ab6ab2eaaab57fd6b49aa01f5254a0e83ac2e0167e not found: ID does not exist" Oct 14 10:06:37 crc kubenswrapper[4698]: I1014 10:06:37.392459 4698 scope.go:117] "RemoveContainer" containerID="11274a26e2292cdf2b793e6290657ed3cc737d1454b39f33b8f0272f67f70f3e" Oct 14 10:06:37 crc kubenswrapper[4698]: I1014 10:06:37.392667 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"11274a26e2292cdf2b793e6290657ed3cc737d1454b39f33b8f0272f67f70f3e"} err="failed to get container status \"11274a26e2292cdf2b793e6290657ed3cc737d1454b39f33b8f0272f67f70f3e\": rpc error: code = NotFound desc = could not find container \"11274a26e2292cdf2b793e6290657ed3cc737d1454b39f33b8f0272f67f70f3e\": container with ID starting with 11274a26e2292cdf2b793e6290657ed3cc737d1454b39f33b8f0272f67f70f3e not found: ID does not exist" Oct 14 10:06:37 crc kubenswrapper[4698]: I1014 10:06:37.392695 4698 scope.go:117] "RemoveContainer" containerID="0f7d7f89348e04b617ba4ec3cf76f4fefdd36492c656aed8141f3775a7ca8e3b" Oct 14 10:06:37 crc kubenswrapper[4698]: I1014 10:06:37.393072 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0f7d7f89348e04b617ba4ec3cf76f4fefdd36492c656aed8141f3775a7ca8e3b"} err="failed to get container status \"0f7d7f89348e04b617ba4ec3cf76f4fefdd36492c656aed8141f3775a7ca8e3b\": rpc error: code = NotFound desc = could not find container \"0f7d7f89348e04b617ba4ec3cf76f4fefdd36492c656aed8141f3775a7ca8e3b\": container with ID starting with 0f7d7f89348e04b617ba4ec3cf76f4fefdd36492c656aed8141f3775a7ca8e3b not found: ID does not 
exist" Oct 14 10:06:37 crc kubenswrapper[4698]: I1014 10:06:37.393095 4698 scope.go:117] "RemoveContainer" containerID="69b90c9c0473f0a1add0c188fd07ba784e387a9fa7de413d857cde1946f25dc6" Oct 14 10:06:37 crc kubenswrapper[4698]: I1014 10:06:37.393440 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"69b90c9c0473f0a1add0c188fd07ba784e387a9fa7de413d857cde1946f25dc6"} err="failed to get container status \"69b90c9c0473f0a1add0c188fd07ba784e387a9fa7de413d857cde1946f25dc6\": rpc error: code = NotFound desc = could not find container \"69b90c9c0473f0a1add0c188fd07ba784e387a9fa7de413d857cde1946f25dc6\": container with ID starting with 69b90c9c0473f0a1add0c188fd07ba784e387a9fa7de413d857cde1946f25dc6 not found: ID does not exist" Oct 14 10:06:38 crc kubenswrapper[4698]: I1014 10:06:38.173743 4698 generic.go:334] "Generic (PLEG): container finished" podID="c28c0915-2a70-4992-8b12-e9e3d5ba8ab6" containerID="8ef1f5d3fe4177c4f0cdaec9c67f152fd1b1f175d9566fcd9adf894a28a9e571" exitCode=0 Oct 14 10:06:38 crc kubenswrapper[4698]: I1014 10:06:38.173863 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bdmsh" event={"ID":"c28c0915-2a70-4992-8b12-e9e3d5ba8ab6","Type":"ContainerDied","Data":"8ef1f5d3fe4177c4f0cdaec9c67f152fd1b1f175d9566fcd9adf894a28a9e571"} Oct 14 10:06:38 crc kubenswrapper[4698]: I1014 10:06:38.174205 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bdmsh" event={"ID":"c28c0915-2a70-4992-8b12-e9e3d5ba8ab6","Type":"ContainerStarted","Data":"422c9c61d568fb1eaaa72ca06b833651d8bb685b5c7c30a7d331cf0686670054"} Oct 14 10:06:39 crc kubenswrapper[4698]: I1014 10:06:39.031153 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d02f5359-81fc-4261-b995-e58c78bcec0e" path="/var/lib/kubelet/pods/d02f5359-81fc-4261-b995-e58c78bcec0e/volumes" Oct 14 10:06:39 crc kubenswrapper[4698]: I1014 10:06:39.184800 4698 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bdmsh" event={"ID":"c28c0915-2a70-4992-8b12-e9e3d5ba8ab6","Type":"ContainerStarted","Data":"f9f7e3c976ffabc1b9c94f9189e8d7d35d02010ef5149d5b7d457be74427a305"} Oct 14 10:06:39 crc kubenswrapper[4698]: I1014 10:06:39.184849 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bdmsh" event={"ID":"c28c0915-2a70-4992-8b12-e9e3d5ba8ab6","Type":"ContainerStarted","Data":"63c892026a9aed0a07c757c57f8f3cdaf3bff94fa31b427205fde7b109fbdb57"} Oct 14 10:06:39 crc kubenswrapper[4698]: I1014 10:06:39.184863 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bdmsh" event={"ID":"c28c0915-2a70-4992-8b12-e9e3d5ba8ab6","Type":"ContainerStarted","Data":"24d97d1b7b8f319df40c4e00a2255b3de083bbb35a71e8b1c87a3169b82c00cd"} Oct 14 10:06:39 crc kubenswrapper[4698]: I1014 10:06:39.184874 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bdmsh" event={"ID":"c28c0915-2a70-4992-8b12-e9e3d5ba8ab6","Type":"ContainerStarted","Data":"8261027d70805474175c157faf3acbc105e37bd7adb70313e7e905afc574840c"} Oct 14 10:06:39 crc kubenswrapper[4698]: I1014 10:06:39.184884 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bdmsh" event={"ID":"c28c0915-2a70-4992-8b12-e9e3d5ba8ab6","Type":"ContainerStarted","Data":"224150fe70ba7f0a35ffcd43a4ab12412491dac1d86873e548a346e42ce6795a"} Oct 14 10:06:39 crc kubenswrapper[4698]: I1014 10:06:39.184896 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bdmsh" event={"ID":"c28c0915-2a70-4992-8b12-e9e3d5ba8ab6","Type":"ContainerStarted","Data":"ec36b8f589b7b96e4a1c642f9f3e2b4eead272cd9651a217ab8bd7b16d29f990"} Oct 14 10:06:42 crc kubenswrapper[4698]: I1014 10:06:42.207826 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-ovn-kubernetes/ovnkube-node-bdmsh" event={"ID":"c28c0915-2a70-4992-8b12-e9e3d5ba8ab6","Type":"ContainerStarted","Data":"4086aba17dc5a3f63ebddd5820ed64c485410ca5fc83331e6a3eb551ab6f7081"} Oct 14 10:06:44 crc kubenswrapper[4698]: I1014 10:06:44.227201 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bdmsh" event={"ID":"c28c0915-2a70-4992-8b12-e9e3d5ba8ab6","Type":"ContainerStarted","Data":"950cdc54d3fbe9c3737f61c5024a6481d72bb6909555e58f07e1251e020f0244"} Oct 14 10:06:44 crc kubenswrapper[4698]: I1014 10:06:44.227701 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-bdmsh" Oct 14 10:06:44 crc kubenswrapper[4698]: I1014 10:06:44.253421 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-bdmsh" Oct 14 10:06:44 crc kubenswrapper[4698]: I1014 10:06:44.259512 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-bdmsh" podStartSLOduration=8.259501635 podStartE2EDuration="8.259501635s" podCreationTimestamp="2025-10-14 10:06:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 10:06:44.25798309 +0000 UTC m=+585.955282526" watchObservedRunningTime="2025-10-14 10:06:44.259501635 +0000 UTC m=+585.956801051" Oct 14 10:06:45 crc kubenswrapper[4698]: I1014 10:06:45.232659 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-bdmsh" Oct 14 10:06:45 crc kubenswrapper[4698]: I1014 10:06:45.232740 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-bdmsh" Oct 14 10:06:45 crc kubenswrapper[4698]: I1014 10:06:45.262418 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-ovn-kubernetes/ovnkube-node-bdmsh" Oct 14 10:06:50 crc kubenswrapper[4698]: I1014 10:06:50.017058 4698 scope.go:117] "RemoveContainer" containerID="229bd4cd219e41d476b3856b757a9ed7e76bd1f073deb35fb68c0de19dbc7bfe" Oct 14 10:06:50 crc kubenswrapper[4698]: E1014 10:06:50.017718 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-b7cbk_openshift-multus(fbf10bbc-318d-4f46-83a0-fdbad9888201)\"" pod="openshift-multus/multus-b7cbk" podUID="fbf10bbc-318d-4f46-83a0-fdbad9888201" Oct 14 10:06:53 crc kubenswrapper[4698]: I1014 10:06:53.908200 4698 patch_prober.go:28] interesting pod/machine-config-daemon-lp4sk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 14 10:06:53 crc kubenswrapper[4698]: I1014 10:06:53.908561 4698 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-lp4sk" podUID="c359a8fc-1e2f-49af-8da2-719d52bd969a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 14 10:06:59 crc kubenswrapper[4698]: I1014 10:06:59.242130 4698 scope.go:117] "RemoveContainer" containerID="52c9a8ccad3eed5af66c0178544ca46fabcfaab76d88471b2a62606f2f860522" Oct 14 10:06:59 crc kubenswrapper[4698]: I1014 10:06:59.296350 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceph"] Oct 14 10:06:59 crc kubenswrapper[4698]: I1014 10:06:59.297362 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceph" Oct 14 10:06:59 crc kubenswrapper[4698]: I1014 10:06:59.300447 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openshift-service-ca.crt" Oct 14 10:06:59 crc kubenswrapper[4698]: I1014 10:06:59.300962 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"kube-root-ca.crt" Oct 14 10:06:59 crc kubenswrapper[4698]: I1014 10:06:59.300964 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-htj7v" Oct 14 10:06:59 crc kubenswrapper[4698]: I1014 10:06:59.329606 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-b7cbk_fbf10bbc-318d-4f46-83a0-fdbad9888201/kube-multus/2.log" Oct 14 10:06:59 crc kubenswrapper[4698]: I1014 10:06:59.428598 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: \"kubernetes.io/empty-dir/fab31a39-0774-45d5-a5cd-cc337066aa80-data\") pod \"ceph\" (UID: \"fab31a39-0774-45d5-a5cd-cc337066aa80\") " pod="openstack/ceph" Oct 14 10:06:59 crc kubenswrapper[4698]: I1014 10:06:59.428691 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log\" (UniqueName: \"kubernetes.io/empty-dir/fab31a39-0774-45d5-a5cd-cc337066aa80-log\") pod \"ceph\" (UID: \"fab31a39-0774-45d5-a5cd-cc337066aa80\") " pod="openstack/ceph" Oct 14 10:06:59 crc kubenswrapper[4698]: I1014 10:06:59.428833 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/empty-dir/fab31a39-0774-45d5-a5cd-cc337066aa80-run\") pod \"ceph\" (UID: \"fab31a39-0774-45d5-a5cd-cc337066aa80\") " pod="openstack/ceph" Oct 14 10:06:59 crc kubenswrapper[4698]: I1014 10:06:59.428866 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j5nrm\" (UniqueName: 
\"kubernetes.io/projected/fab31a39-0774-45d5-a5cd-cc337066aa80-kube-api-access-j5nrm\") pod \"ceph\" (UID: \"fab31a39-0774-45d5-a5cd-cc337066aa80\") " pod="openstack/ceph" Oct 14 10:06:59 crc kubenswrapper[4698]: I1014 10:06:59.530630 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/empty-dir/fab31a39-0774-45d5-a5cd-cc337066aa80-run\") pod \"ceph\" (UID: \"fab31a39-0774-45d5-a5cd-cc337066aa80\") " pod="openstack/ceph" Oct 14 10:06:59 crc kubenswrapper[4698]: I1014 10:06:59.530737 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j5nrm\" (UniqueName: \"kubernetes.io/projected/fab31a39-0774-45d5-a5cd-cc337066aa80-kube-api-access-j5nrm\") pod \"ceph\" (UID: \"fab31a39-0774-45d5-a5cd-cc337066aa80\") " pod="openstack/ceph" Oct 14 10:06:59 crc kubenswrapper[4698]: I1014 10:06:59.531127 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data\" (UniqueName: \"kubernetes.io/empty-dir/fab31a39-0774-45d5-a5cd-cc337066aa80-data\") pod \"ceph\" (UID: \"fab31a39-0774-45d5-a5cd-cc337066aa80\") " pod="openstack/ceph" Oct 14 10:06:59 crc kubenswrapper[4698]: I1014 10:06:59.531195 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run\" (UniqueName: \"kubernetes.io/empty-dir/fab31a39-0774-45d5-a5cd-cc337066aa80-run\") pod \"ceph\" (UID: \"fab31a39-0774-45d5-a5cd-cc337066aa80\") " pod="openstack/ceph" Oct 14 10:06:59 crc kubenswrapper[4698]: I1014 10:06:59.531202 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log\" (UniqueName: \"kubernetes.io/empty-dir/fab31a39-0774-45d5-a5cd-cc337066aa80-log\") pod \"ceph\" (UID: \"fab31a39-0774-45d5-a5cd-cc337066aa80\") " pod="openstack/ceph" Oct 14 10:06:59 crc kubenswrapper[4698]: I1014 10:06:59.531742 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data\" (UniqueName: 
\"kubernetes.io/empty-dir/fab31a39-0774-45d5-a5cd-cc337066aa80-data\") pod \"ceph\" (UID: \"fab31a39-0774-45d5-a5cd-cc337066aa80\") " pod="openstack/ceph" Oct 14 10:06:59 crc kubenswrapper[4698]: I1014 10:06:59.532054 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log\" (UniqueName: \"kubernetes.io/empty-dir/fab31a39-0774-45d5-a5cd-cc337066aa80-log\") pod \"ceph\" (UID: \"fab31a39-0774-45d5-a5cd-cc337066aa80\") " pod="openstack/ceph" Oct 14 10:06:59 crc kubenswrapper[4698]: I1014 10:06:59.565599 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j5nrm\" (UniqueName: \"kubernetes.io/projected/fab31a39-0774-45d5-a5cd-cc337066aa80-kube-api-access-j5nrm\") pod \"ceph\" (UID: \"fab31a39-0774-45d5-a5cd-cc337066aa80\") " pod="openstack/ceph" Oct 14 10:06:59 crc kubenswrapper[4698]: I1014 10:06:59.620547 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceph" Oct 14 10:06:59 crc kubenswrapper[4698]: W1014 10:06:59.649972 4698 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfab31a39_0774_45d5_a5cd_cc337066aa80.slice/crio-2964ffabefa6cb496bd31f0c1706765036c769fa78d0acab18a34e9686f3d946 WatchSource:0}: Error finding container 2964ffabefa6cb496bd31f0c1706765036c769fa78d0acab18a34e9686f3d946: Status 404 returned error can't find the container with id 2964ffabefa6cb496bd31f0c1706765036c769fa78d0acab18a34e9686f3d946 Oct 14 10:07:00 crc kubenswrapper[4698]: I1014 10:07:00.337228 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph" event={"ID":"fab31a39-0774-45d5-a5cd-cc337066aa80","Type":"ContainerStarted","Data":"2964ffabefa6cb496bd31f0c1706765036c769fa78d0acab18a34e9686f3d946"} Oct 14 10:07:04 crc kubenswrapper[4698]: I1014 10:07:04.017260 4698 scope.go:117] "RemoveContainer" containerID="229bd4cd219e41d476b3856b757a9ed7e76bd1f073deb35fb68c0de19dbc7bfe" Oct 14 10:07:04 
crc kubenswrapper[4698]: I1014 10:07:04.365087 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-b7cbk_fbf10bbc-318d-4f46-83a0-fdbad9888201/kube-multus/2.log" Oct 14 10:07:04 crc kubenswrapper[4698]: I1014 10:07:04.365614 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-b7cbk" event={"ID":"fbf10bbc-318d-4f46-83a0-fdbad9888201","Type":"ContainerStarted","Data":"b6060ffedb91fbd409d89b6a05fc956cf85b906188019a58427c0a670dfbfd71"} Oct 14 10:07:07 crc kubenswrapper[4698]: I1014 10:07:07.205271 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-bdmsh" Oct 14 10:07:19 crc kubenswrapper[4698]: E1014 10:07:19.125838 4698 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/ceph/demo:latest-squid" Oct 14 10:07:19 crc kubenswrapper[4698]: E1014 10:07:19.126247 4698 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:ceph,Image:quay.io/ceph/demo:latest-squid,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:MON_IP,Value:192.168.126.11,ValueFrom:nil,},EnvVar{Name:CEPH_DAEMON,Value:demo,ValueFrom:nil,},EnvVar{Name:CEPH_PUBLIC_NETWORK,Value:0.0.0.0/0,ValueFrom:nil,},EnvVar{Name:DEMO_DAEMONS,Value:osd,mds,rgw,ValueFrom:nil,},EnvVar{Name:CEPH_DEMO_UID,Value:0,ValueFrom:nil,},EnvVar{Name:RGW_NAME,Value:ceph,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:data,ReadOnly:false,MountPath:/var/lib/ceph,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:log,ReadOnly:false,MountPath:/var/log/ceph,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:run,ReadOnly:false,MountPath:/run/ceph,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-j5nrm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceph_openstack(fab31a39-0774-45d5-a5cd-cc337066aa80): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Oct 14 10:07:19 crc kubenswrapper[4698]: E1014 10:07:19.127592 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceph\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/ceph" podUID="fab31a39-0774-45d5-a5cd-cc337066aa80" Oct 14 
10:07:19 crc kubenswrapper[4698]: E1014 10:07:19.447097 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceph\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/ceph/demo:latest-squid\\\"\"" pod="openstack/ceph" podUID="fab31a39-0774-45d5-a5cd-cc337066aa80" Oct 14 10:07:23 crc kubenswrapper[4698]: I1014 10:07:23.907811 4698 patch_prober.go:28] interesting pod/machine-config-daemon-lp4sk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 14 10:07:23 crc kubenswrapper[4698]: I1014 10:07:23.909256 4698 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-lp4sk" podUID="c359a8fc-1e2f-49af-8da2-719d52bd969a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 14 10:07:23 crc kubenswrapper[4698]: I1014 10:07:23.909448 4698 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-lp4sk" Oct 14 10:07:23 crc kubenswrapper[4698]: I1014 10:07:23.910167 4698 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"7a202e01825f368630a72ec8a287e248e2293fb7679cffb1159219e4901ff7f5"} pod="openshift-machine-config-operator/machine-config-daemon-lp4sk" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Oct 14 10:07:23 crc kubenswrapper[4698]: I1014 10:07:23.910362 4698 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-lp4sk" podUID="c359a8fc-1e2f-49af-8da2-719d52bd969a" containerName="machine-config-daemon" 
containerID="cri-o://7a202e01825f368630a72ec8a287e248e2293fb7679cffb1159219e4901ff7f5" gracePeriod=600 Oct 14 10:07:24 crc kubenswrapper[4698]: I1014 10:07:24.477850 4698 generic.go:334] "Generic (PLEG): container finished" podID="c359a8fc-1e2f-49af-8da2-719d52bd969a" containerID="7a202e01825f368630a72ec8a287e248e2293fb7679cffb1159219e4901ff7f5" exitCode=0 Oct 14 10:07:24 crc kubenswrapper[4698]: I1014 10:07:24.477916 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-lp4sk" event={"ID":"c359a8fc-1e2f-49af-8da2-719d52bd969a","Type":"ContainerDied","Data":"7a202e01825f368630a72ec8a287e248e2293fb7679cffb1159219e4901ff7f5"} Oct 14 10:07:24 crc kubenswrapper[4698]: I1014 10:07:24.478231 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-lp4sk" event={"ID":"c359a8fc-1e2f-49af-8da2-719d52bd969a","Type":"ContainerStarted","Data":"026bd43a3644ff6f93d5e8e267ea83431aafa74f0511660ce40aba31e77b93d7"} Oct 14 10:07:24 crc kubenswrapper[4698]: I1014 10:07:24.478247 4698 scope.go:117] "RemoveContainer" containerID="023a5dc316be8020894e4c5c93e1b936d78922591d1d7856a49d373ddec6f38b" Oct 14 10:07:36 crc kubenswrapper[4698]: I1014 10:07:36.567706 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph" event={"ID":"fab31a39-0774-45d5-a5cd-cc337066aa80","Type":"ContainerStarted","Data":"a04b95e08fb241fc3e5c67fb3599c610a00852db8271291213f8eeef393c9e0d"} Oct 14 10:07:36 crc kubenswrapper[4698]: I1014 10:07:36.592522 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceph" podStartSLOduration=1.741281024 podStartE2EDuration="37.592493099s" podCreationTimestamp="2025-10-14 10:06:59 +0000 UTC" firstStartedPulling="2025-10-14 10:06:59.653061414 +0000 UTC m=+601.350360850" lastFinishedPulling="2025-10-14 10:07:35.504273449 +0000 UTC m=+637.201572925" observedRunningTime="2025-10-14 10:07:36.589532511 +0000 UTC 
m=+638.286831977" watchObservedRunningTime="2025-10-14 10:07:36.592493099 +0000 UTC m=+638.289792565" Oct 14 10:08:25 crc kubenswrapper[4698]: E1014 10:08:25.882053 4698 upgradeaware.go:441] Error proxying data from backend to client: writeto tcp 38.102.83.188:57462->38.102.83.188:44569: read tcp 38.102.83.188:57462->38.102.83.188:44569: read: connection reset by peer Oct 14 10:08:48 crc kubenswrapper[4698]: I1014 10:08:48.088687 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835crcqqd"] Oct 14 10:08:48 crc kubenswrapper[4698]: I1014 10:08:48.090758 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835crcqqd" Oct 14 10:08:48 crc kubenswrapper[4698]: I1014 10:08:48.094683 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Oct 14 10:08:48 crc kubenswrapper[4698]: I1014 10:08:48.102897 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835crcqqd"] Oct 14 10:08:48 crc kubenswrapper[4698]: I1014 10:08:48.244446 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/51f6a7d5-1c06-4f2b-9f66-322882e6db29-bundle\") pod \"fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835crcqqd\" (UID: \"51f6a7d5-1c06-4f2b-9f66-322882e6db29\") " pod="openshift-marketplace/fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835crcqqd" Oct 14 10:08:48 crc kubenswrapper[4698]: I1014 10:08:48.244505 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/51f6a7d5-1c06-4f2b-9f66-322882e6db29-util\") pod 
\"fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835crcqqd\" (UID: \"51f6a7d5-1c06-4f2b-9f66-322882e6db29\") " pod="openshift-marketplace/fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835crcqqd" Oct 14 10:08:48 crc kubenswrapper[4698]: I1014 10:08:48.244539 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xmcbv\" (UniqueName: \"kubernetes.io/projected/51f6a7d5-1c06-4f2b-9f66-322882e6db29-kube-api-access-xmcbv\") pod \"fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835crcqqd\" (UID: \"51f6a7d5-1c06-4f2b-9f66-322882e6db29\") " pod="openshift-marketplace/fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835crcqqd" Oct 14 10:08:48 crc kubenswrapper[4698]: I1014 10:08:48.347086 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/51f6a7d5-1c06-4f2b-9f66-322882e6db29-bundle\") pod \"fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835crcqqd\" (UID: \"51f6a7d5-1c06-4f2b-9f66-322882e6db29\") " pod="openshift-marketplace/fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835crcqqd" Oct 14 10:08:48 crc kubenswrapper[4698]: I1014 10:08:48.347145 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/51f6a7d5-1c06-4f2b-9f66-322882e6db29-util\") pod \"fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835crcqqd\" (UID: \"51f6a7d5-1c06-4f2b-9f66-322882e6db29\") " pod="openshift-marketplace/fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835crcqqd" Oct 14 10:08:48 crc kubenswrapper[4698]: I1014 10:08:48.347175 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xmcbv\" (UniqueName: \"kubernetes.io/projected/51f6a7d5-1c06-4f2b-9f66-322882e6db29-kube-api-access-xmcbv\") pod \"fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835crcqqd\" (UID: 
\"51f6a7d5-1c06-4f2b-9f66-322882e6db29\") " pod="openshift-marketplace/fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835crcqqd" Oct 14 10:08:48 crc kubenswrapper[4698]: I1014 10:08:48.347857 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/51f6a7d5-1c06-4f2b-9f66-322882e6db29-util\") pod \"fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835crcqqd\" (UID: \"51f6a7d5-1c06-4f2b-9f66-322882e6db29\") " pod="openshift-marketplace/fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835crcqqd" Oct 14 10:08:48 crc kubenswrapper[4698]: I1014 10:08:48.348239 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/51f6a7d5-1c06-4f2b-9f66-322882e6db29-bundle\") pod \"fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835crcqqd\" (UID: \"51f6a7d5-1c06-4f2b-9f66-322882e6db29\") " pod="openshift-marketplace/fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835crcqqd" Oct 14 10:08:48 crc kubenswrapper[4698]: I1014 10:08:48.366699 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xmcbv\" (UniqueName: \"kubernetes.io/projected/51f6a7d5-1c06-4f2b-9f66-322882e6db29-kube-api-access-xmcbv\") pod \"fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835crcqqd\" (UID: \"51f6a7d5-1c06-4f2b-9f66-322882e6db29\") " pod="openshift-marketplace/fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835crcqqd" Oct 14 10:08:48 crc kubenswrapper[4698]: I1014 10:08:48.417125 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835crcqqd" Oct 14 10:08:48 crc kubenswrapper[4698]: I1014 10:08:48.689529 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835crcqqd"] Oct 14 10:08:49 crc kubenswrapper[4698]: I1014 10:08:49.045518 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835crcqqd" event={"ID":"51f6a7d5-1c06-4f2b-9f66-322882e6db29","Type":"ContainerStarted","Data":"f81c3c51d5e83c6d8d529706376f418fb9c3b3257f8afb85f6854f6f2c790b12"} Oct 14 10:08:49 crc kubenswrapper[4698]: I1014 10:08:49.045953 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835crcqqd" event={"ID":"51f6a7d5-1c06-4f2b-9f66-322882e6db29","Type":"ContainerStarted","Data":"6c14f354697bda7c46da541e3719e06b440ac3cf6ea57b80681e28a03eea6eb8"} Oct 14 10:08:50 crc kubenswrapper[4698]: I1014 10:08:50.057395 4698 generic.go:334] "Generic (PLEG): container finished" podID="51f6a7d5-1c06-4f2b-9f66-322882e6db29" containerID="f81c3c51d5e83c6d8d529706376f418fb9c3b3257f8afb85f6854f6f2c790b12" exitCode=0 Oct 14 10:08:50 crc kubenswrapper[4698]: I1014 10:08:50.057448 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835crcqqd" event={"ID":"51f6a7d5-1c06-4f2b-9f66-322882e6db29","Type":"ContainerDied","Data":"f81c3c51d5e83c6d8d529706376f418fb9c3b3257f8afb85f6854f6f2c790b12"} Oct 14 10:08:52 crc kubenswrapper[4698]: I1014 10:08:52.077578 4698 generic.go:334] "Generic (PLEG): container finished" podID="51f6a7d5-1c06-4f2b-9f66-322882e6db29" containerID="54e523f7711f70e2025eb2031b703e0245a67526dec6d5392bc3ea03d4bbbd31" exitCode=0 Oct 14 10:08:52 crc kubenswrapper[4698]: I1014 10:08:52.077680 4698 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835crcqqd" event={"ID":"51f6a7d5-1c06-4f2b-9f66-322882e6db29","Type":"ContainerDied","Data":"54e523f7711f70e2025eb2031b703e0245a67526dec6d5392bc3ea03d4bbbd31"} Oct 14 10:08:53 crc kubenswrapper[4698]: I1014 10:08:53.088089 4698 generic.go:334] "Generic (PLEG): container finished" podID="51f6a7d5-1c06-4f2b-9f66-322882e6db29" containerID="a6f4341bfecc95b882b5f40eee86085f297d1ba2fe07c6d74126bcf249a0107b" exitCode=0 Oct 14 10:08:53 crc kubenswrapper[4698]: I1014 10:08:53.088365 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835crcqqd" event={"ID":"51f6a7d5-1c06-4f2b-9f66-322882e6db29","Type":"ContainerDied","Data":"a6f4341bfecc95b882b5f40eee86085f297d1ba2fe07c6d74126bcf249a0107b"} Oct 14 10:08:54 crc kubenswrapper[4698]: I1014 10:08:54.419244 4698 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835crcqqd" Oct 14 10:08:54 crc kubenswrapper[4698]: I1014 10:08:54.543942 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/51f6a7d5-1c06-4f2b-9f66-322882e6db29-bundle\") pod \"51f6a7d5-1c06-4f2b-9f66-322882e6db29\" (UID: \"51f6a7d5-1c06-4f2b-9f66-322882e6db29\") " Oct 14 10:08:54 crc kubenswrapper[4698]: I1014 10:08:54.544052 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/51f6a7d5-1c06-4f2b-9f66-322882e6db29-util\") pod \"51f6a7d5-1c06-4f2b-9f66-322882e6db29\" (UID: \"51f6a7d5-1c06-4f2b-9f66-322882e6db29\") " Oct 14 10:08:54 crc kubenswrapper[4698]: I1014 10:08:54.544086 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xmcbv\" (UniqueName: \"kubernetes.io/projected/51f6a7d5-1c06-4f2b-9f66-322882e6db29-kube-api-access-xmcbv\") pod \"51f6a7d5-1c06-4f2b-9f66-322882e6db29\" (UID: \"51f6a7d5-1c06-4f2b-9f66-322882e6db29\") " Oct 14 10:08:54 crc kubenswrapper[4698]: I1014 10:08:54.544832 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/51f6a7d5-1c06-4f2b-9f66-322882e6db29-bundle" (OuterVolumeSpecName: "bundle") pod "51f6a7d5-1c06-4f2b-9f66-322882e6db29" (UID: "51f6a7d5-1c06-4f2b-9f66-322882e6db29"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 10:08:54 crc kubenswrapper[4698]: I1014 10:08:54.552055 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/51f6a7d5-1c06-4f2b-9f66-322882e6db29-kube-api-access-xmcbv" (OuterVolumeSpecName: "kube-api-access-xmcbv") pod "51f6a7d5-1c06-4f2b-9f66-322882e6db29" (UID: "51f6a7d5-1c06-4f2b-9f66-322882e6db29"). InnerVolumeSpecName "kube-api-access-xmcbv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 10:08:54 crc kubenswrapper[4698]: I1014 10:08:54.557240 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/51f6a7d5-1c06-4f2b-9f66-322882e6db29-util" (OuterVolumeSpecName: "util") pod "51f6a7d5-1c06-4f2b-9f66-322882e6db29" (UID: "51f6a7d5-1c06-4f2b-9f66-322882e6db29"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 10:08:54 crc kubenswrapper[4698]: I1014 10:08:54.645682 4698 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/51f6a7d5-1c06-4f2b-9f66-322882e6db29-bundle\") on node \"crc\" DevicePath \"\"" Oct 14 10:08:54 crc kubenswrapper[4698]: I1014 10:08:54.645718 4698 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/51f6a7d5-1c06-4f2b-9f66-322882e6db29-util\") on node \"crc\" DevicePath \"\"" Oct 14 10:08:54 crc kubenswrapper[4698]: I1014 10:08:54.645730 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xmcbv\" (UniqueName: \"kubernetes.io/projected/51f6a7d5-1c06-4f2b-9f66-322882e6db29-kube-api-access-xmcbv\") on node \"crc\" DevicePath \"\"" Oct 14 10:08:55 crc kubenswrapper[4698]: I1014 10:08:55.108171 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835crcqqd" event={"ID":"51f6a7d5-1c06-4f2b-9f66-322882e6db29","Type":"ContainerDied","Data":"6c14f354697bda7c46da541e3719e06b440ac3cf6ea57b80681e28a03eea6eb8"} Oct 14 10:08:55 crc kubenswrapper[4698]: I1014 10:08:55.108229 4698 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6c14f354697bda7c46da541e3719e06b440ac3cf6ea57b80681e28a03eea6eb8" Oct 14 10:08:55 crc kubenswrapper[4698]: I1014 10:08:55.108282 4698 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835crcqqd" Oct 14 10:08:59 crc kubenswrapper[4698]: I1014 10:08:59.684699 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-operator-858ddd8f98-669hf"] Oct 14 10:08:59 crc kubenswrapper[4698]: E1014 10:08:59.685847 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="51f6a7d5-1c06-4f2b-9f66-322882e6db29" containerName="extract" Oct 14 10:08:59 crc kubenswrapper[4698]: I1014 10:08:59.685864 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="51f6a7d5-1c06-4f2b-9f66-322882e6db29" containerName="extract" Oct 14 10:08:59 crc kubenswrapper[4698]: E1014 10:08:59.685878 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="51f6a7d5-1c06-4f2b-9f66-322882e6db29" containerName="pull" Oct 14 10:08:59 crc kubenswrapper[4698]: I1014 10:08:59.685886 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="51f6a7d5-1c06-4f2b-9f66-322882e6db29" containerName="pull" Oct 14 10:08:59 crc kubenswrapper[4698]: E1014 10:08:59.685904 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="51f6a7d5-1c06-4f2b-9f66-322882e6db29" containerName="util" Oct 14 10:08:59 crc kubenswrapper[4698]: I1014 10:08:59.685914 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="51f6a7d5-1c06-4f2b-9f66-322882e6db29" containerName="util" Oct 14 10:08:59 crc kubenswrapper[4698]: I1014 10:08:59.686037 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="51f6a7d5-1c06-4f2b-9f66-322882e6db29" containerName="extract" Oct 14 10:08:59 crc kubenswrapper[4698]: I1014 10:08:59.686480 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-operator-858ddd8f98-669hf" Oct 14 10:08:59 crc kubenswrapper[4698]: I1014 10:08:59.690096 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-operator-dockercfg-4zqfc" Oct 14 10:08:59 crc kubenswrapper[4698]: I1014 10:08:59.693478 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"kube-root-ca.crt" Oct 14 10:08:59 crc kubenswrapper[4698]: I1014 10:08:59.694657 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-858ddd8f98-669hf"] Oct 14 10:08:59 crc kubenswrapper[4698]: I1014 10:08:59.717400 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"openshift-service-ca.crt" Oct 14 10:08:59 crc kubenswrapper[4698]: I1014 10:08:59.823337 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jzzwv\" (UniqueName: \"kubernetes.io/projected/fd11f615-dce1-42f4-8470-d1117fe3305b-kube-api-access-jzzwv\") pod \"nmstate-operator-858ddd8f98-669hf\" (UID: \"fd11f615-dce1-42f4-8470-d1117fe3305b\") " pod="openshift-nmstate/nmstate-operator-858ddd8f98-669hf" Oct 14 10:08:59 crc kubenswrapper[4698]: I1014 10:08:59.924046 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jzzwv\" (UniqueName: \"kubernetes.io/projected/fd11f615-dce1-42f4-8470-d1117fe3305b-kube-api-access-jzzwv\") pod \"nmstate-operator-858ddd8f98-669hf\" (UID: \"fd11f615-dce1-42f4-8470-d1117fe3305b\") " pod="openshift-nmstate/nmstate-operator-858ddd8f98-669hf" Oct 14 10:08:59 crc kubenswrapper[4698]: I1014 10:08:59.950658 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jzzwv\" (UniqueName: \"kubernetes.io/projected/fd11f615-dce1-42f4-8470-d1117fe3305b-kube-api-access-jzzwv\") pod \"nmstate-operator-858ddd8f98-669hf\" (UID: 
\"fd11f615-dce1-42f4-8470-d1117fe3305b\") " pod="openshift-nmstate/nmstate-operator-858ddd8f98-669hf" Oct 14 10:09:00 crc kubenswrapper[4698]: I1014 10:09:00.044395 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-operator-858ddd8f98-669hf" Oct 14 10:09:00 crc kubenswrapper[4698]: I1014 10:09:00.327616 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-858ddd8f98-669hf"] Oct 14 10:09:01 crc kubenswrapper[4698]: I1014 10:09:01.159613 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-858ddd8f98-669hf" event={"ID":"fd11f615-dce1-42f4-8470-d1117fe3305b","Type":"ContainerStarted","Data":"c42628d8c1d12ceda978ba441525fb30851a96094f03abbfbea6867bcc1bcc3a"} Oct 14 10:09:04 crc kubenswrapper[4698]: I1014 10:09:04.183918 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-858ddd8f98-669hf" event={"ID":"fd11f615-dce1-42f4-8470-d1117fe3305b","Type":"ContainerStarted","Data":"024dff127b9dab19d63f0a4c1a3996e21130291449be3c531e462e426ce6ef88"} Oct 14 10:09:04 crc kubenswrapper[4698]: I1014 10:09:04.198886 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-operator-858ddd8f98-669hf" podStartSLOduration=2.464047903 podStartE2EDuration="5.198867341s" podCreationTimestamp="2025-10-14 10:08:59 +0000 UTC" firstStartedPulling="2025-10-14 10:09:00.351234415 +0000 UTC m=+722.048533831" lastFinishedPulling="2025-10-14 10:09:03.086053853 +0000 UTC m=+724.783353269" observedRunningTime="2025-10-14 10:09:04.197082779 +0000 UTC m=+725.894382195" watchObservedRunningTime="2025-10-14 10:09:04.198867341 +0000 UTC m=+725.896166757" Oct 14 10:09:08 crc kubenswrapper[4698]: I1014 10:09:08.675581 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-metrics-fdff9cb8d-nq56c"] Oct 14 10:09:08 crc kubenswrapper[4698]: I1014 
10:09:08.677242 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-fdff9cb8d-nq56c" Oct 14 10:09:08 crc kubenswrapper[4698]: I1014 10:09:08.679401 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-handler-dockercfg-vdkrt" Oct 14 10:09:08 crc kubenswrapper[4698]: I1014 10:09:08.683273 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-webhook-6cdbc54649-8vhjp"] Oct 14 10:09:08 crc kubenswrapper[4698]: I1014 10:09:08.684060 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-6cdbc54649-8vhjp" Oct 14 10:09:08 crc kubenswrapper[4698]: I1014 10:09:08.685515 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-fdff9cb8d-nq56c"] Oct 14 10:09:08 crc kubenswrapper[4698]: W1014 10:09:08.685565 4698 reflector.go:561] object-"openshift-nmstate"/"openshift-nmstate-webhook": failed to list *v1.Secret: secrets "openshift-nmstate-webhook" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-nmstate": no relationship found between node 'crc' and this object Oct 14 10:09:08 crc kubenswrapper[4698]: E1014 10:09:08.685591 4698 reflector.go:158] "Unhandled Error" err="object-\"openshift-nmstate\"/\"openshift-nmstate-webhook\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"openshift-nmstate-webhook\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-nmstate\": no relationship found between node 'crc' and this object" logger="UnhandledError" Oct 14 10:09:08 crc kubenswrapper[4698]: I1014 10:09:08.720696 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-handler-tb9f6"] Oct 14 10:09:08 crc kubenswrapper[4698]: I1014 10:09:08.721410 4698 kubelet.go:2428] "SyncLoop UPDATE" 
source="api" pods=["openshift-nmstate/nmstate-webhook-6cdbc54649-8vhjp"] Oct 14 10:09:08 crc kubenswrapper[4698]: I1014 10:09:08.721508 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-handler-tb9f6" Oct 14 10:09:08 crc kubenswrapper[4698]: I1014 10:09:08.771527 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l7lqn\" (UniqueName: \"kubernetes.io/projected/3da8a241-b3ad-480d-aff7-f571b43fb673-kube-api-access-l7lqn\") pod \"nmstate-webhook-6cdbc54649-8vhjp\" (UID: \"3da8a241-b3ad-480d-aff7-f571b43fb673\") " pod="openshift-nmstate/nmstate-webhook-6cdbc54649-8vhjp" Oct 14 10:09:08 crc kubenswrapper[4698]: I1014 10:09:08.771652 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/359df233-fb7a-4d84-888a-d6fa99ed8b55-nmstate-lock\") pod \"nmstate-handler-tb9f6\" (UID: \"359df233-fb7a-4d84-888a-d6fa99ed8b55\") " pod="openshift-nmstate/nmstate-handler-tb9f6" Oct 14 10:09:08 crc kubenswrapper[4698]: I1014 10:09:08.771747 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/359df233-fb7a-4d84-888a-d6fa99ed8b55-dbus-socket\") pod \"nmstate-handler-tb9f6\" (UID: \"359df233-fb7a-4d84-888a-d6fa99ed8b55\") " pod="openshift-nmstate/nmstate-handler-tb9f6" Oct 14 10:09:08 crc kubenswrapper[4698]: I1014 10:09:08.771821 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/3da8a241-b3ad-480d-aff7-f571b43fb673-tls-key-pair\") pod \"nmstate-webhook-6cdbc54649-8vhjp\" (UID: \"3da8a241-b3ad-480d-aff7-f571b43fb673\") " pod="openshift-nmstate/nmstate-webhook-6cdbc54649-8vhjp" Oct 14 10:09:08 crc kubenswrapper[4698]: I1014 10:09:08.771863 4698 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9fz5f\" (UniqueName: \"kubernetes.io/projected/359df233-fb7a-4d84-888a-d6fa99ed8b55-kube-api-access-9fz5f\") pod \"nmstate-handler-tb9f6\" (UID: \"359df233-fb7a-4d84-888a-d6fa99ed8b55\") " pod="openshift-nmstate/nmstate-handler-tb9f6" Oct 14 10:09:08 crc kubenswrapper[4698]: I1014 10:09:08.771901 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pdzpz\" (UniqueName: \"kubernetes.io/projected/a1d21132-5dfd-4813-9c39-d4be39666a38-kube-api-access-pdzpz\") pod \"nmstate-metrics-fdff9cb8d-nq56c\" (UID: \"a1d21132-5dfd-4813-9c39-d4be39666a38\") " pod="openshift-nmstate/nmstate-metrics-fdff9cb8d-nq56c" Oct 14 10:09:08 crc kubenswrapper[4698]: I1014 10:09:08.771930 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/359df233-fb7a-4d84-888a-d6fa99ed8b55-ovs-socket\") pod \"nmstate-handler-tb9f6\" (UID: \"359df233-fb7a-4d84-888a-d6fa99ed8b55\") " pod="openshift-nmstate/nmstate-handler-tb9f6" Oct 14 10:09:08 crc kubenswrapper[4698]: I1014 10:09:08.812180 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-console-plugin-6b874cbd85-5zqgv"] Oct 14 10:09:08 crc kubenswrapper[4698]: I1014 10:09:08.813018 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-6b874cbd85-5zqgv" Oct 14 10:09:08 crc kubenswrapper[4698]: I1014 10:09:08.815251 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"nginx-conf" Oct 14 10:09:08 crc kubenswrapper[4698]: I1014 10:09:08.815420 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"default-dockercfg-hpw5f" Oct 14 10:09:08 crc kubenswrapper[4698]: I1014 10:09:08.815424 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"plugin-serving-cert" Oct 14 10:09:08 crc kubenswrapper[4698]: I1014 10:09:08.828142 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-6b874cbd85-5zqgv"] Oct 14 10:09:08 crc kubenswrapper[4698]: I1014 10:09:08.872598 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l7lqn\" (UniqueName: \"kubernetes.io/projected/3da8a241-b3ad-480d-aff7-f571b43fb673-kube-api-access-l7lqn\") pod \"nmstate-webhook-6cdbc54649-8vhjp\" (UID: \"3da8a241-b3ad-480d-aff7-f571b43fb673\") " pod="openshift-nmstate/nmstate-webhook-6cdbc54649-8vhjp" Oct 14 10:09:08 crc kubenswrapper[4698]: I1014 10:09:08.872674 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/359df233-fb7a-4d84-888a-d6fa99ed8b55-nmstate-lock\") pod \"nmstate-handler-tb9f6\" (UID: \"359df233-fb7a-4d84-888a-d6fa99ed8b55\") " pod="openshift-nmstate/nmstate-handler-tb9f6" Oct 14 10:09:08 crc kubenswrapper[4698]: I1014 10:09:08.872707 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/afaa96d5-b448-47b4-ac36-b8d4d232441b-nginx-conf\") pod \"nmstate-console-plugin-6b874cbd85-5zqgv\" (UID: \"afaa96d5-b448-47b4-ac36-b8d4d232441b\") " 
pod="openshift-nmstate/nmstate-console-plugin-6b874cbd85-5zqgv" Oct 14 10:09:08 crc kubenswrapper[4698]: I1014 10:09:08.872736 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/359df233-fb7a-4d84-888a-d6fa99ed8b55-dbus-socket\") pod \"nmstate-handler-tb9f6\" (UID: \"359df233-fb7a-4d84-888a-d6fa99ed8b55\") " pod="openshift-nmstate/nmstate-handler-tb9f6" Oct 14 10:09:08 crc kubenswrapper[4698]: I1014 10:09:08.872792 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/3da8a241-b3ad-480d-aff7-f571b43fb673-tls-key-pair\") pod \"nmstate-webhook-6cdbc54649-8vhjp\" (UID: \"3da8a241-b3ad-480d-aff7-f571b43fb673\") " pod="openshift-nmstate/nmstate-webhook-6cdbc54649-8vhjp" Oct 14 10:09:08 crc kubenswrapper[4698]: I1014 10:09:08.872823 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4wrg5\" (UniqueName: \"kubernetes.io/projected/afaa96d5-b448-47b4-ac36-b8d4d232441b-kube-api-access-4wrg5\") pod \"nmstate-console-plugin-6b874cbd85-5zqgv\" (UID: \"afaa96d5-b448-47b4-ac36-b8d4d232441b\") " pod="openshift-nmstate/nmstate-console-plugin-6b874cbd85-5zqgv" Oct 14 10:09:08 crc kubenswrapper[4698]: I1014 10:09:08.872853 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9fz5f\" (UniqueName: \"kubernetes.io/projected/359df233-fb7a-4d84-888a-d6fa99ed8b55-kube-api-access-9fz5f\") pod \"nmstate-handler-tb9f6\" (UID: \"359df233-fb7a-4d84-888a-d6fa99ed8b55\") " pod="openshift-nmstate/nmstate-handler-tb9f6" Oct 14 10:09:08 crc kubenswrapper[4698]: I1014 10:09:08.872877 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/afaa96d5-b448-47b4-ac36-b8d4d232441b-plugin-serving-cert\") pod 
\"nmstate-console-plugin-6b874cbd85-5zqgv\" (UID: \"afaa96d5-b448-47b4-ac36-b8d4d232441b\") " pod="openshift-nmstate/nmstate-console-plugin-6b874cbd85-5zqgv" Oct 14 10:09:08 crc kubenswrapper[4698]: I1014 10:09:08.872915 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pdzpz\" (UniqueName: \"kubernetes.io/projected/a1d21132-5dfd-4813-9c39-d4be39666a38-kube-api-access-pdzpz\") pod \"nmstate-metrics-fdff9cb8d-nq56c\" (UID: \"a1d21132-5dfd-4813-9c39-d4be39666a38\") " pod="openshift-nmstate/nmstate-metrics-fdff9cb8d-nq56c" Oct 14 10:09:08 crc kubenswrapper[4698]: I1014 10:09:08.872944 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/359df233-fb7a-4d84-888a-d6fa99ed8b55-ovs-socket\") pod \"nmstate-handler-tb9f6\" (UID: \"359df233-fb7a-4d84-888a-d6fa99ed8b55\") " pod="openshift-nmstate/nmstate-handler-tb9f6" Oct 14 10:09:08 crc kubenswrapper[4698]: I1014 10:09:08.873017 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/359df233-fb7a-4d84-888a-d6fa99ed8b55-ovs-socket\") pod \"nmstate-handler-tb9f6\" (UID: \"359df233-fb7a-4d84-888a-d6fa99ed8b55\") " pod="openshift-nmstate/nmstate-handler-tb9f6" Oct 14 10:09:08 crc kubenswrapper[4698]: I1014 10:09:08.873381 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/359df233-fb7a-4d84-888a-d6fa99ed8b55-nmstate-lock\") pod \"nmstate-handler-tb9f6\" (UID: \"359df233-fb7a-4d84-888a-d6fa99ed8b55\") " pod="openshift-nmstate/nmstate-handler-tb9f6" Oct 14 10:09:08 crc kubenswrapper[4698]: I1014 10:09:08.873727 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/359df233-fb7a-4d84-888a-d6fa99ed8b55-dbus-socket\") pod \"nmstate-handler-tb9f6\" (UID: 
\"359df233-fb7a-4d84-888a-d6fa99ed8b55\") " pod="openshift-nmstate/nmstate-handler-tb9f6" Oct 14 10:09:08 crc kubenswrapper[4698]: I1014 10:09:08.890777 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pdzpz\" (UniqueName: \"kubernetes.io/projected/a1d21132-5dfd-4813-9c39-d4be39666a38-kube-api-access-pdzpz\") pod \"nmstate-metrics-fdff9cb8d-nq56c\" (UID: \"a1d21132-5dfd-4813-9c39-d4be39666a38\") " pod="openshift-nmstate/nmstate-metrics-fdff9cb8d-nq56c" Oct 14 10:09:08 crc kubenswrapper[4698]: I1014 10:09:08.890780 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9fz5f\" (UniqueName: \"kubernetes.io/projected/359df233-fb7a-4d84-888a-d6fa99ed8b55-kube-api-access-9fz5f\") pod \"nmstate-handler-tb9f6\" (UID: \"359df233-fb7a-4d84-888a-d6fa99ed8b55\") " pod="openshift-nmstate/nmstate-handler-tb9f6" Oct 14 10:09:08 crc kubenswrapper[4698]: I1014 10:09:08.894019 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l7lqn\" (UniqueName: \"kubernetes.io/projected/3da8a241-b3ad-480d-aff7-f571b43fb673-kube-api-access-l7lqn\") pod \"nmstate-webhook-6cdbc54649-8vhjp\" (UID: \"3da8a241-b3ad-480d-aff7-f571b43fb673\") " pod="openshift-nmstate/nmstate-webhook-6cdbc54649-8vhjp" Oct 14 10:09:08 crc kubenswrapper[4698]: I1014 10:09:08.973573 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4wrg5\" (UniqueName: \"kubernetes.io/projected/afaa96d5-b448-47b4-ac36-b8d4d232441b-kube-api-access-4wrg5\") pod \"nmstate-console-plugin-6b874cbd85-5zqgv\" (UID: \"afaa96d5-b448-47b4-ac36-b8d4d232441b\") " pod="openshift-nmstate/nmstate-console-plugin-6b874cbd85-5zqgv" Oct 14 10:09:08 crc kubenswrapper[4698]: I1014 10:09:08.973624 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/afaa96d5-b448-47b4-ac36-b8d4d232441b-plugin-serving-cert\") pod 
\"nmstate-console-plugin-6b874cbd85-5zqgv\" (UID: \"afaa96d5-b448-47b4-ac36-b8d4d232441b\") " pod="openshift-nmstate/nmstate-console-plugin-6b874cbd85-5zqgv" Oct 14 10:09:08 crc kubenswrapper[4698]: I1014 10:09:08.973685 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/afaa96d5-b448-47b4-ac36-b8d4d232441b-nginx-conf\") pod \"nmstate-console-plugin-6b874cbd85-5zqgv\" (UID: \"afaa96d5-b448-47b4-ac36-b8d4d232441b\") " pod="openshift-nmstate/nmstate-console-plugin-6b874cbd85-5zqgv" Oct 14 10:09:08 crc kubenswrapper[4698]: I1014 10:09:08.974494 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/afaa96d5-b448-47b4-ac36-b8d4d232441b-nginx-conf\") pod \"nmstate-console-plugin-6b874cbd85-5zqgv\" (UID: \"afaa96d5-b448-47b4-ac36-b8d4d232441b\") " pod="openshift-nmstate/nmstate-console-plugin-6b874cbd85-5zqgv" Oct 14 10:09:08 crc kubenswrapper[4698]: E1014 10:09:08.975066 4698 secret.go:188] Couldn't get secret openshift-nmstate/plugin-serving-cert: secret "plugin-serving-cert" not found Oct 14 10:09:08 crc kubenswrapper[4698]: E1014 10:09:08.975730 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/afaa96d5-b448-47b4-ac36-b8d4d232441b-plugin-serving-cert podName:afaa96d5-b448-47b4-ac36-b8d4d232441b nodeName:}" failed. No retries permitted until 2025-10-14 10:09:09.475706203 +0000 UTC m=+731.173005619 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "plugin-serving-cert" (UniqueName: "kubernetes.io/secret/afaa96d5-b448-47b4-ac36-b8d4d232441b-plugin-serving-cert") pod "nmstate-console-plugin-6b874cbd85-5zqgv" (UID: "afaa96d5-b448-47b4-ac36-b8d4d232441b") : secret "plugin-serving-cert" not found Oct 14 10:09:08 crc kubenswrapper[4698]: I1014 10:09:08.996907 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4wrg5\" (UniqueName: \"kubernetes.io/projected/afaa96d5-b448-47b4-ac36-b8d4d232441b-kube-api-access-4wrg5\") pod \"nmstate-console-plugin-6b874cbd85-5zqgv\" (UID: \"afaa96d5-b448-47b4-ac36-b8d4d232441b\") " pod="openshift-nmstate/nmstate-console-plugin-6b874cbd85-5zqgv" Oct 14 10:09:09 crc kubenswrapper[4698]: I1014 10:09:09.012482 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-55fd867dc-k56rw"] Oct 14 10:09:09 crc kubenswrapper[4698]: I1014 10:09:09.013898 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-55fd867dc-k56rw" Oct 14 10:09:09 crc kubenswrapper[4698]: I1014 10:09:09.014988 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-fdff9cb8d-nq56c" Oct 14 10:09:09 crc kubenswrapper[4698]: I1014 10:09:09.053396 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-handler-tb9f6" Oct 14 10:09:09 crc kubenswrapper[4698]: I1014 10:09:09.055076 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-55fd867dc-k56rw"] Oct 14 10:09:09 crc kubenswrapper[4698]: I1014 10:09:09.176825 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/801fe5ae-4f55-484e-ab07-240ca66bc735-trusted-ca-bundle\") pod \"console-55fd867dc-k56rw\" (UID: \"801fe5ae-4f55-484e-ab07-240ca66bc735\") " pod="openshift-console/console-55fd867dc-k56rw" Oct 14 10:09:09 crc kubenswrapper[4698]: I1014 10:09:09.177290 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/801fe5ae-4f55-484e-ab07-240ca66bc735-console-serving-cert\") pod \"console-55fd867dc-k56rw\" (UID: \"801fe5ae-4f55-484e-ab07-240ca66bc735\") " pod="openshift-console/console-55fd867dc-k56rw" Oct 14 10:09:09 crc kubenswrapper[4698]: I1014 10:09:09.177311 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/801fe5ae-4f55-484e-ab07-240ca66bc735-oauth-serving-cert\") pod \"console-55fd867dc-k56rw\" (UID: \"801fe5ae-4f55-484e-ab07-240ca66bc735\") " pod="openshift-console/console-55fd867dc-k56rw" Oct 14 10:09:09 crc kubenswrapper[4698]: I1014 10:09:09.177335 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/801fe5ae-4f55-484e-ab07-240ca66bc735-console-oauth-config\") pod \"console-55fd867dc-k56rw\" (UID: \"801fe5ae-4f55-484e-ab07-240ca66bc735\") " pod="openshift-console/console-55fd867dc-k56rw" Oct 14 10:09:09 crc kubenswrapper[4698]: I1014 10:09:09.177367 4698 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/801fe5ae-4f55-484e-ab07-240ca66bc735-service-ca\") pod \"console-55fd867dc-k56rw\" (UID: \"801fe5ae-4f55-484e-ab07-240ca66bc735\") " pod="openshift-console/console-55fd867dc-k56rw" Oct 14 10:09:09 crc kubenswrapper[4698]: I1014 10:09:09.177443 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-68gdt\" (UniqueName: \"kubernetes.io/projected/801fe5ae-4f55-484e-ab07-240ca66bc735-kube-api-access-68gdt\") pod \"console-55fd867dc-k56rw\" (UID: \"801fe5ae-4f55-484e-ab07-240ca66bc735\") " pod="openshift-console/console-55fd867dc-k56rw" Oct 14 10:09:09 crc kubenswrapper[4698]: I1014 10:09:09.177515 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/801fe5ae-4f55-484e-ab07-240ca66bc735-console-config\") pod \"console-55fd867dc-k56rw\" (UID: \"801fe5ae-4f55-484e-ab07-240ca66bc735\") " pod="openshift-console/console-55fd867dc-k56rw" Oct 14 10:09:09 crc kubenswrapper[4698]: I1014 10:09:09.223525 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-tb9f6" event={"ID":"359df233-fb7a-4d84-888a-d6fa99ed8b55","Type":"ContainerStarted","Data":"ffe47cd2be7c53a5c41a23c238253060cb59b13dde0813543e3268fcda1de6df"} Oct 14 10:09:09 crc kubenswrapper[4698]: I1014 10:09:09.279065 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/801fe5ae-4f55-484e-ab07-240ca66bc735-console-oauth-config\") pod \"console-55fd867dc-k56rw\" (UID: \"801fe5ae-4f55-484e-ab07-240ca66bc735\") " pod="openshift-console/console-55fd867dc-k56rw" Oct 14 10:09:09 crc kubenswrapper[4698]: I1014 10:09:09.279206 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"service-ca\" (UniqueName: \"kubernetes.io/configmap/801fe5ae-4f55-484e-ab07-240ca66bc735-service-ca\") pod \"console-55fd867dc-k56rw\" (UID: \"801fe5ae-4f55-484e-ab07-240ca66bc735\") " pod="openshift-console/console-55fd867dc-k56rw" Oct 14 10:09:09 crc kubenswrapper[4698]: I1014 10:09:09.279289 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-68gdt\" (UniqueName: \"kubernetes.io/projected/801fe5ae-4f55-484e-ab07-240ca66bc735-kube-api-access-68gdt\") pod \"console-55fd867dc-k56rw\" (UID: \"801fe5ae-4f55-484e-ab07-240ca66bc735\") " pod="openshift-console/console-55fd867dc-k56rw" Oct 14 10:09:09 crc kubenswrapper[4698]: I1014 10:09:09.279368 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/801fe5ae-4f55-484e-ab07-240ca66bc735-console-config\") pod \"console-55fd867dc-k56rw\" (UID: \"801fe5ae-4f55-484e-ab07-240ca66bc735\") " pod="openshift-console/console-55fd867dc-k56rw" Oct 14 10:09:09 crc kubenswrapper[4698]: I1014 10:09:09.279436 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/801fe5ae-4f55-484e-ab07-240ca66bc735-trusted-ca-bundle\") pod \"console-55fd867dc-k56rw\" (UID: \"801fe5ae-4f55-484e-ab07-240ca66bc735\") " pod="openshift-console/console-55fd867dc-k56rw" Oct 14 10:09:09 crc kubenswrapper[4698]: I1014 10:09:09.279498 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/801fe5ae-4f55-484e-ab07-240ca66bc735-console-serving-cert\") pod \"console-55fd867dc-k56rw\" (UID: \"801fe5ae-4f55-484e-ab07-240ca66bc735\") " pod="openshift-console/console-55fd867dc-k56rw" Oct 14 10:09:09 crc kubenswrapper[4698]: I1014 10:09:09.280632 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: 
\"kubernetes.io/configmap/801fe5ae-4f55-484e-ab07-240ca66bc735-oauth-serving-cert\") pod \"console-55fd867dc-k56rw\" (UID: \"801fe5ae-4f55-484e-ab07-240ca66bc735\") " pod="openshift-console/console-55fd867dc-k56rw" Oct 14 10:09:09 crc kubenswrapper[4698]: I1014 10:09:09.280432 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/801fe5ae-4f55-484e-ab07-240ca66bc735-trusted-ca-bundle\") pod \"console-55fd867dc-k56rw\" (UID: \"801fe5ae-4f55-484e-ab07-240ca66bc735\") " pod="openshift-console/console-55fd867dc-k56rw" Oct 14 10:09:09 crc kubenswrapper[4698]: I1014 10:09:09.280511 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/801fe5ae-4f55-484e-ab07-240ca66bc735-service-ca\") pod \"console-55fd867dc-k56rw\" (UID: \"801fe5ae-4f55-484e-ab07-240ca66bc735\") " pod="openshift-console/console-55fd867dc-k56rw" Oct 14 10:09:09 crc kubenswrapper[4698]: I1014 10:09:09.280880 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/801fe5ae-4f55-484e-ab07-240ca66bc735-console-config\") pod \"console-55fd867dc-k56rw\" (UID: \"801fe5ae-4f55-484e-ab07-240ca66bc735\") " pod="openshift-console/console-55fd867dc-k56rw" Oct 14 10:09:09 crc kubenswrapper[4698]: I1014 10:09:09.281794 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/801fe5ae-4f55-484e-ab07-240ca66bc735-oauth-serving-cert\") pod \"console-55fd867dc-k56rw\" (UID: \"801fe5ae-4f55-484e-ab07-240ca66bc735\") " pod="openshift-console/console-55fd867dc-k56rw" Oct 14 10:09:09 crc kubenswrapper[4698]: I1014 10:09:09.283380 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/801fe5ae-4f55-484e-ab07-240ca66bc735-console-oauth-config\") pod 
\"console-55fd867dc-k56rw\" (UID: \"801fe5ae-4f55-484e-ab07-240ca66bc735\") " pod="openshift-console/console-55fd867dc-k56rw" Oct 14 10:09:09 crc kubenswrapper[4698]: I1014 10:09:09.283388 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/801fe5ae-4f55-484e-ab07-240ca66bc735-console-serving-cert\") pod \"console-55fd867dc-k56rw\" (UID: \"801fe5ae-4f55-484e-ab07-240ca66bc735\") " pod="openshift-console/console-55fd867dc-k56rw" Oct 14 10:09:09 crc kubenswrapper[4698]: I1014 10:09:09.296866 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-68gdt\" (UniqueName: \"kubernetes.io/projected/801fe5ae-4f55-484e-ab07-240ca66bc735-kube-api-access-68gdt\") pod \"console-55fd867dc-k56rw\" (UID: \"801fe5ae-4f55-484e-ab07-240ca66bc735\") " pod="openshift-console/console-55fd867dc-k56rw" Oct 14 10:09:09 crc kubenswrapper[4698]: I1014 10:09:09.391824 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-55fd867dc-k56rw" Oct 14 10:09:09 crc kubenswrapper[4698]: I1014 10:09:09.477513 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-fdff9cb8d-nq56c"] Oct 14 10:09:09 crc kubenswrapper[4698]: I1014 10:09:09.483788 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/afaa96d5-b448-47b4-ac36-b8d4d232441b-plugin-serving-cert\") pod \"nmstate-console-plugin-6b874cbd85-5zqgv\" (UID: \"afaa96d5-b448-47b4-ac36-b8d4d232441b\") " pod="openshift-nmstate/nmstate-console-plugin-6b874cbd85-5zqgv" Oct 14 10:09:09 crc kubenswrapper[4698]: I1014 10:09:09.487610 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/afaa96d5-b448-47b4-ac36-b8d4d232441b-plugin-serving-cert\") pod \"nmstate-console-plugin-6b874cbd85-5zqgv\" (UID: \"afaa96d5-b448-47b4-ac36-b8d4d232441b\") " pod="openshift-nmstate/nmstate-console-plugin-6b874cbd85-5zqgv" Oct 14 10:09:09 crc kubenswrapper[4698]: W1014 10:09:09.487887 4698 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda1d21132_5dfd_4813_9c39_d4be39666a38.slice/crio-0f6aa3cdca3eafa3711d68690c9636784d36d8c322e5291a5f85272f5cd15d71 WatchSource:0}: Error finding container 0f6aa3cdca3eafa3711d68690c9636784d36d8c322e5291a5f85272f5cd15d71: Status 404 returned error can't find the container with id 0f6aa3cdca3eafa3711d68690c9636784d36d8c322e5291a5f85272f5cd15d71 Oct 14 10:09:09 crc kubenswrapper[4698]: I1014 10:09:09.609493 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"openshift-nmstate-webhook" Oct 14 10:09:09 crc kubenswrapper[4698]: I1014 10:09:09.620421 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-key-pair\" (UniqueName: 
\"kubernetes.io/secret/3da8a241-b3ad-480d-aff7-f571b43fb673-tls-key-pair\") pod \"nmstate-webhook-6cdbc54649-8vhjp\" (UID: \"3da8a241-b3ad-480d-aff7-f571b43fb673\") " pod="openshift-nmstate/nmstate-webhook-6cdbc54649-8vhjp" Oct 14 10:09:09 crc kubenswrapper[4698]: I1014 10:09:09.627924 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-6cdbc54649-8vhjp" Oct 14 10:09:09 crc kubenswrapper[4698]: I1014 10:09:09.648335 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-55fd867dc-k56rw"] Oct 14 10:09:09 crc kubenswrapper[4698]: W1014 10:09:09.655090 4698 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod801fe5ae_4f55_484e_ab07_240ca66bc735.slice/crio-ad01fb40c479323cda4af81dfc333164b53d47e93453e023526302e4ad760dd1 WatchSource:0}: Error finding container ad01fb40c479323cda4af81dfc333164b53d47e93453e023526302e4ad760dd1: Status 404 returned error can't find the container with id ad01fb40c479323cda4af81dfc333164b53d47e93453e023526302e4ad760dd1 Oct 14 10:09:09 crc kubenswrapper[4698]: I1014 10:09:09.737040 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-6b874cbd85-5zqgv" Oct 14 10:09:10 crc kubenswrapper[4698]: I1014 10:09:10.028096 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-6cdbc54649-8vhjp"] Oct 14 10:09:10 crc kubenswrapper[4698]: I1014 10:09:10.173990 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-6b874cbd85-5zqgv"] Oct 14 10:09:10 crc kubenswrapper[4698]: W1014 10:09:10.183878 4698 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podafaa96d5_b448_47b4_ac36_b8d4d232441b.slice/crio-e2e353f495c596ec2c8216f1ac380d826ecfb3fb823772081238256cda17e09e WatchSource:0}: Error finding container e2e353f495c596ec2c8216f1ac380d826ecfb3fb823772081238256cda17e09e: Status 404 returned error can't find the container with id e2e353f495c596ec2c8216f1ac380d826ecfb3fb823772081238256cda17e09e Oct 14 10:09:10 crc kubenswrapper[4698]: I1014 10:09:10.232972 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-55fd867dc-k56rw" event={"ID":"801fe5ae-4f55-484e-ab07-240ca66bc735","Type":"ContainerStarted","Data":"4e5fe432d0c805e706d1a3a912e7db2bde553e6df4ffb5de84bf534d89d4ab7d"} Oct 14 10:09:10 crc kubenswrapper[4698]: I1014 10:09:10.233032 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-55fd867dc-k56rw" event={"ID":"801fe5ae-4f55-484e-ab07-240ca66bc735","Type":"ContainerStarted","Data":"ad01fb40c479323cda4af81dfc333164b53d47e93453e023526302e4ad760dd1"} Oct 14 10:09:10 crc kubenswrapper[4698]: I1014 10:09:10.235055 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-6cdbc54649-8vhjp" event={"ID":"3da8a241-b3ad-480d-aff7-f571b43fb673","Type":"ContainerStarted","Data":"0d7d2e399cf1d310ec97cc6d4520ea7fd7e97ff316952793d7c5379bc050b084"} Oct 14 10:09:10 crc kubenswrapper[4698]: I1014 
10:09:10.237732 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-fdff9cb8d-nq56c" event={"ID":"a1d21132-5dfd-4813-9c39-d4be39666a38","Type":"ContainerStarted","Data":"0f6aa3cdca3eafa3711d68690c9636784d36d8c322e5291a5f85272f5cd15d71"} Oct 14 10:09:10 crc kubenswrapper[4698]: I1014 10:09:10.239403 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-6b874cbd85-5zqgv" event={"ID":"afaa96d5-b448-47b4-ac36-b8d4d232441b","Type":"ContainerStarted","Data":"e2e353f495c596ec2c8216f1ac380d826ecfb3fb823772081238256cda17e09e"} Oct 14 10:09:10 crc kubenswrapper[4698]: I1014 10:09:10.262162 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-55fd867dc-k56rw" podStartSLOduration=2.262141713 podStartE2EDuration="2.262141713s" podCreationTimestamp="2025-10-14 10:09:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 10:09:10.261058962 +0000 UTC m=+731.958358388" watchObservedRunningTime="2025-10-14 10:09:10.262141713 +0000 UTC m=+731.959441139" Oct 14 10:09:12 crc kubenswrapper[4698]: I1014 10:09:12.251833 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-tb9f6" event={"ID":"359df233-fb7a-4d84-888a-d6fa99ed8b55","Type":"ContainerStarted","Data":"4a868e06591062e95f3c14d73dfbff763989733c163b7a867f75a6b0159091ef"} Oct 14 10:09:12 crc kubenswrapper[4698]: I1014 10:09:12.252459 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-handler-tb9f6" Oct 14 10:09:12 crc kubenswrapper[4698]: I1014 10:09:12.256310 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-6cdbc54649-8vhjp" 
event={"ID":"3da8a241-b3ad-480d-aff7-f571b43fb673","Type":"ContainerStarted","Data":"c508160bbfb6458f24872cc156680d18d1d7938e13c563753f9d9fa36bd277b1"} Oct 14 10:09:12 crc kubenswrapper[4698]: I1014 10:09:12.256450 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-webhook-6cdbc54649-8vhjp" Oct 14 10:09:12 crc kubenswrapper[4698]: I1014 10:09:12.258007 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-fdff9cb8d-nq56c" event={"ID":"a1d21132-5dfd-4813-9c39-d4be39666a38","Type":"ContainerStarted","Data":"e2ed878154cceac9dc97825df0e296f12d05570af759d303c5d19142b68c3b64"} Oct 14 10:09:12 crc kubenswrapper[4698]: I1014 10:09:12.273366 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-handler-tb9f6" podStartSLOduration=1.656485291 podStartE2EDuration="4.273345574s" podCreationTimestamp="2025-10-14 10:09:08 +0000 UTC" firstStartedPulling="2025-10-14 10:09:09.082857208 +0000 UTC m=+730.780156624" lastFinishedPulling="2025-10-14 10:09:11.699717491 +0000 UTC m=+733.397016907" observedRunningTime="2025-10-14 10:09:12.270830482 +0000 UTC m=+733.968129928" watchObservedRunningTime="2025-10-14 10:09:12.273345574 +0000 UTC m=+733.970645010" Oct 14 10:09:12 crc kubenswrapper[4698]: I1014 10:09:12.293369 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-webhook-6cdbc54649-8vhjp" podStartSLOduration=2.601574844 podStartE2EDuration="4.292968567s" podCreationTimestamp="2025-10-14 10:09:08 +0000 UTC" firstStartedPulling="2025-10-14 10:09:10.034480579 +0000 UTC m=+731.731780035" lastFinishedPulling="2025-10-14 10:09:11.725874342 +0000 UTC m=+733.423173758" observedRunningTime="2025-10-14 10:09:12.286392628 +0000 UTC m=+733.983692114" watchObservedRunningTime="2025-10-14 10:09:12.292968567 +0000 UTC m=+733.990267983" Oct 14 10:09:14 crc kubenswrapper[4698]: I1014 10:09:14.273267 4698 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-6b874cbd85-5zqgv" event={"ID":"afaa96d5-b448-47b4-ac36-b8d4d232441b","Type":"ContainerStarted","Data":"be9a2de5ce885b8b7ebb6d4c92ac16d114977344738f12756d03d8682f2db819"} Oct 14 10:09:16 crc kubenswrapper[4698]: I1014 10:09:16.289867 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-fdff9cb8d-nq56c" event={"ID":"a1d21132-5dfd-4813-9c39-d4be39666a38","Type":"ContainerStarted","Data":"0da6e8d956e4d2fe5ce66b93661d8939224d419e7eb62dfd03d179d4c42ef871"} Oct 14 10:09:16 crc kubenswrapper[4698]: I1014 10:09:16.315670 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-metrics-fdff9cb8d-nq56c" podStartSLOduration=1.7911699049999998 podStartE2EDuration="8.315644095s" podCreationTimestamp="2025-10-14 10:09:08 +0000 UTC" firstStartedPulling="2025-10-14 10:09:09.492197986 +0000 UTC m=+731.189497402" lastFinishedPulling="2025-10-14 10:09:16.016672146 +0000 UTC m=+737.713971592" observedRunningTime="2025-10-14 10:09:16.314135452 +0000 UTC m=+738.011434898" watchObservedRunningTime="2025-10-14 10:09:16.315644095 +0000 UTC m=+738.012943551" Oct 14 10:09:16 crc kubenswrapper[4698]: I1014 10:09:16.325503 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-console-plugin-6b874cbd85-5zqgv" podStartSLOduration=5.284464852 podStartE2EDuration="8.325476857s" podCreationTimestamp="2025-10-14 10:09:08 +0000 UTC" firstStartedPulling="2025-10-14 10:09:10.186598515 +0000 UTC m=+731.883897941" lastFinishedPulling="2025-10-14 10:09:13.22761053 +0000 UTC m=+734.924909946" observedRunningTime="2025-10-14 10:09:14.29939303 +0000 UTC m=+735.996692466" watchObservedRunningTime="2025-10-14 10:09:16.325476857 +0000 UTC m=+738.022776313" Oct 14 10:09:19 crc kubenswrapper[4698]: I1014 10:09:19.084894 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="ready" pod="openshift-nmstate/nmstate-handler-tb9f6" Oct 14 10:09:19 crc kubenswrapper[4698]: I1014 10:09:19.392174 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-55fd867dc-k56rw" Oct 14 10:09:19 crc kubenswrapper[4698]: I1014 10:09:19.393074 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-55fd867dc-k56rw" Oct 14 10:09:19 crc kubenswrapper[4698]: I1014 10:09:19.400749 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-55fd867dc-k56rw" Oct 14 10:09:20 crc kubenswrapper[4698]: I1014 10:09:20.325616 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-55fd867dc-k56rw" Oct 14 10:09:20 crc kubenswrapper[4698]: I1014 10:09:20.440658 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-f47kf"] Oct 14 10:09:24 crc kubenswrapper[4698]: I1014 10:09:24.449136 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-c6gg9"] Oct 14 10:09:24 crc kubenswrapper[4698]: I1014 10:09:24.449863 4698 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-879f6c89f-c6gg9" podUID="67e52335-6348-488a-a36a-8971b953737b" containerName="controller-manager" containerID="cri-o://7bd8994f5dd54ce86aa00291175896739442fe60c5dc447ef4db8b66329803e4" gracePeriod=30 Oct 14 10:09:24 crc kubenswrapper[4698]: I1014 10:09:24.547693 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-lbk6k"] Oct 14 10:09:24 crc kubenswrapper[4698]: I1014 10:09:24.547927 4698 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-lbk6k" 
podUID="d746febc-7247-498c-86b1-8cb4640cbccc" containerName="route-controller-manager" containerID="cri-o://313f17ca104537269f10049ad72f0650b99b8f57676480653278000c496be49f" gracePeriod=30 Oct 14 10:09:24 crc kubenswrapper[4698]: I1014 10:09:24.882378 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-c6gg9" Oct 14 10:09:24 crc kubenswrapper[4698]: I1014 10:09:24.976579 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-lbk6k" Oct 14 10:09:25 crc kubenswrapper[4698]: I1014 10:09:25.005182 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/67e52335-6348-488a-a36a-8971b953737b-client-ca\") pod \"67e52335-6348-488a-a36a-8971b953737b\" (UID: \"67e52335-6348-488a-a36a-8971b953737b\") " Oct 14 10:09:25 crc kubenswrapper[4698]: I1014 10:09:25.005239 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/67e52335-6348-488a-a36a-8971b953737b-serving-cert\") pod \"67e52335-6348-488a-a36a-8971b953737b\" (UID: \"67e52335-6348-488a-a36a-8971b953737b\") " Oct 14 10:09:25 crc kubenswrapper[4698]: I1014 10:09:25.005302 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lqxvq\" (UniqueName: \"kubernetes.io/projected/67e52335-6348-488a-a36a-8971b953737b-kube-api-access-lqxvq\") pod \"67e52335-6348-488a-a36a-8971b953737b\" (UID: \"67e52335-6348-488a-a36a-8971b953737b\") " Oct 14 10:09:25 crc kubenswrapper[4698]: I1014 10:09:25.005392 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/67e52335-6348-488a-a36a-8971b953737b-proxy-ca-bundles\") pod \"67e52335-6348-488a-a36a-8971b953737b\" 
(UID: \"67e52335-6348-488a-a36a-8971b953737b\") " Oct 14 10:09:25 crc kubenswrapper[4698]: I1014 10:09:25.005464 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/67e52335-6348-488a-a36a-8971b953737b-config\") pod \"67e52335-6348-488a-a36a-8971b953737b\" (UID: \"67e52335-6348-488a-a36a-8971b953737b\") " Oct 14 10:09:25 crc kubenswrapper[4698]: I1014 10:09:25.006136 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/67e52335-6348-488a-a36a-8971b953737b-client-ca" (OuterVolumeSpecName: "client-ca") pod "67e52335-6348-488a-a36a-8971b953737b" (UID: "67e52335-6348-488a-a36a-8971b953737b"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 10:09:25 crc kubenswrapper[4698]: I1014 10:09:25.006460 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/67e52335-6348-488a-a36a-8971b953737b-config" (OuterVolumeSpecName: "config") pod "67e52335-6348-488a-a36a-8971b953737b" (UID: "67e52335-6348-488a-a36a-8971b953737b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 10:09:25 crc kubenswrapper[4698]: I1014 10:09:25.006474 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/67e52335-6348-488a-a36a-8971b953737b-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "67e52335-6348-488a-a36a-8971b953737b" (UID: "67e52335-6348-488a-a36a-8971b953737b"). InnerVolumeSpecName "proxy-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 10:09:25 crc kubenswrapper[4698]: I1014 10:09:25.014900 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/67e52335-6348-488a-a36a-8971b953737b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "67e52335-6348-488a-a36a-8971b953737b" (UID: "67e52335-6348-488a-a36a-8971b953737b"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 10:09:25 crc kubenswrapper[4698]: I1014 10:09:25.015129 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/67e52335-6348-488a-a36a-8971b953737b-kube-api-access-lqxvq" (OuterVolumeSpecName: "kube-api-access-lqxvq") pod "67e52335-6348-488a-a36a-8971b953737b" (UID: "67e52335-6348-488a-a36a-8971b953737b"). InnerVolumeSpecName "kube-api-access-lqxvq". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 10:09:25 crc kubenswrapper[4698]: I1014 10:09:25.106466 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d746febc-7247-498c-86b1-8cb4640cbccc-serving-cert\") pod \"d746febc-7247-498c-86b1-8cb4640cbccc\" (UID: \"d746febc-7247-498c-86b1-8cb4640cbccc\") " Oct 14 10:09:25 crc kubenswrapper[4698]: I1014 10:09:25.106553 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d746febc-7247-498c-86b1-8cb4640cbccc-config\") pod \"d746febc-7247-498c-86b1-8cb4640cbccc\" (UID: \"d746febc-7247-498c-86b1-8cb4640cbccc\") " Oct 14 10:09:25 crc kubenswrapper[4698]: I1014 10:09:25.106643 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rrmtx\" (UniqueName: \"kubernetes.io/projected/d746febc-7247-498c-86b1-8cb4640cbccc-kube-api-access-rrmtx\") pod \"d746febc-7247-498c-86b1-8cb4640cbccc\" (UID: \"d746febc-7247-498c-86b1-8cb4640cbccc\") 
" Oct 14 10:09:25 crc kubenswrapper[4698]: I1014 10:09:25.106674 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d746febc-7247-498c-86b1-8cb4640cbccc-client-ca\") pod \"d746febc-7247-498c-86b1-8cb4640cbccc\" (UID: \"d746febc-7247-498c-86b1-8cb4640cbccc\") " Oct 14 10:09:25 crc kubenswrapper[4698]: I1014 10:09:25.106967 4698 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/67e52335-6348-488a-a36a-8971b953737b-client-ca\") on node \"crc\" DevicePath \"\"" Oct 14 10:09:25 crc kubenswrapper[4698]: I1014 10:09:25.106992 4698 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/67e52335-6348-488a-a36a-8971b953737b-serving-cert\") on node \"crc\" DevicePath \"\"" Oct 14 10:09:25 crc kubenswrapper[4698]: I1014 10:09:25.107007 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lqxvq\" (UniqueName: \"kubernetes.io/projected/67e52335-6348-488a-a36a-8971b953737b-kube-api-access-lqxvq\") on node \"crc\" DevicePath \"\"" Oct 14 10:09:25 crc kubenswrapper[4698]: I1014 10:09:25.107021 4698 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/67e52335-6348-488a-a36a-8971b953737b-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Oct 14 10:09:25 crc kubenswrapper[4698]: I1014 10:09:25.107033 4698 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/67e52335-6348-488a-a36a-8971b953737b-config\") on node \"crc\" DevicePath \"\"" Oct 14 10:09:25 crc kubenswrapper[4698]: I1014 10:09:25.107566 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d746febc-7247-498c-86b1-8cb4640cbccc-client-ca" (OuterVolumeSpecName: "client-ca") pod "d746febc-7247-498c-86b1-8cb4640cbccc" (UID: 
"d746febc-7247-498c-86b1-8cb4640cbccc"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 10:09:25 crc kubenswrapper[4698]: I1014 10:09:25.107851 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d746febc-7247-498c-86b1-8cb4640cbccc-config" (OuterVolumeSpecName: "config") pod "d746febc-7247-498c-86b1-8cb4640cbccc" (UID: "d746febc-7247-498c-86b1-8cb4640cbccc"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 10:09:25 crc kubenswrapper[4698]: I1014 10:09:25.110755 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d746febc-7247-498c-86b1-8cb4640cbccc-kube-api-access-rrmtx" (OuterVolumeSpecName: "kube-api-access-rrmtx") pod "d746febc-7247-498c-86b1-8cb4640cbccc" (UID: "d746febc-7247-498c-86b1-8cb4640cbccc"). InnerVolumeSpecName "kube-api-access-rrmtx". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 10:09:25 crc kubenswrapper[4698]: I1014 10:09:25.111887 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d746febc-7247-498c-86b1-8cb4640cbccc-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "d746febc-7247-498c-86b1-8cb4640cbccc" (UID: "d746febc-7247-498c-86b1-8cb4640cbccc"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 10:09:25 crc kubenswrapper[4698]: I1014 10:09:25.208321 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rrmtx\" (UniqueName: \"kubernetes.io/projected/d746febc-7247-498c-86b1-8cb4640cbccc-kube-api-access-rrmtx\") on node \"crc\" DevicePath \"\"" Oct 14 10:09:25 crc kubenswrapper[4698]: I1014 10:09:25.208375 4698 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d746febc-7247-498c-86b1-8cb4640cbccc-client-ca\") on node \"crc\" DevicePath \"\"" Oct 14 10:09:25 crc kubenswrapper[4698]: I1014 10:09:25.208389 4698 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d746febc-7247-498c-86b1-8cb4640cbccc-serving-cert\") on node \"crc\" DevicePath \"\"" Oct 14 10:09:25 crc kubenswrapper[4698]: I1014 10:09:25.208398 4698 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d746febc-7247-498c-86b1-8cb4640cbccc-config\") on node \"crc\" DevicePath \"\"" Oct 14 10:09:25 crc kubenswrapper[4698]: I1014 10:09:25.264515 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-7c46ffdd96-m552j"] Oct 14 10:09:25 crc kubenswrapper[4698]: E1014 10:09:25.264732 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="67e52335-6348-488a-a36a-8971b953737b" containerName="controller-manager" Oct 14 10:09:25 crc kubenswrapper[4698]: I1014 10:09:25.264745 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="67e52335-6348-488a-a36a-8971b953737b" containerName="controller-manager" Oct 14 10:09:25 crc kubenswrapper[4698]: E1014 10:09:25.264784 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d746febc-7247-498c-86b1-8cb4640cbccc" containerName="route-controller-manager" Oct 14 10:09:25 crc kubenswrapper[4698]: I1014 10:09:25.264806 4698 state_mem.go:107] 
"Deleted CPUSet assignment" podUID="d746febc-7247-498c-86b1-8cb4640cbccc" containerName="route-controller-manager" Oct 14 10:09:25 crc kubenswrapper[4698]: I1014 10:09:25.264920 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="67e52335-6348-488a-a36a-8971b953737b" containerName="controller-manager" Oct 14 10:09:25 crc kubenswrapper[4698]: I1014 10:09:25.264937 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="d746febc-7247-498c-86b1-8cb4640cbccc" containerName="route-controller-manager" Oct 14 10:09:25 crc kubenswrapper[4698]: I1014 10:09:25.265327 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7c46ffdd96-m552j" Oct 14 10:09:25 crc kubenswrapper[4698]: I1014 10:09:25.276809 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7db8fc5bf8-mzr8d"] Oct 14 10:09:25 crc kubenswrapper[4698]: I1014 10:09:25.278391 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7db8fc5bf8-mzr8d" Oct 14 10:09:25 crc kubenswrapper[4698]: I1014 10:09:25.314555 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7db8fc5bf8-mzr8d"] Oct 14 10:09:25 crc kubenswrapper[4698]: I1014 10:09:25.334704 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-7c46ffdd96-m552j"] Oct 14 10:09:25 crc kubenswrapper[4698]: I1014 10:09:25.355069 4698 generic.go:334] "Generic (PLEG): container finished" podID="67e52335-6348-488a-a36a-8971b953737b" containerID="7bd8994f5dd54ce86aa00291175896739442fe60c5dc447ef4db8b66329803e4" exitCode=0 Oct 14 10:09:25 crc kubenswrapper[4698]: I1014 10:09:25.355144 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-c6gg9" event={"ID":"67e52335-6348-488a-a36a-8971b953737b","Type":"ContainerDied","Data":"7bd8994f5dd54ce86aa00291175896739442fe60c5dc447ef4db8b66329803e4"} Oct 14 10:09:25 crc kubenswrapper[4698]: I1014 10:09:25.355153 4698 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-c6gg9" Oct 14 10:09:25 crc kubenswrapper[4698]: I1014 10:09:25.355171 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-c6gg9" event={"ID":"67e52335-6348-488a-a36a-8971b953737b","Type":"ContainerDied","Data":"061b540243c73d30661b5e59bb07819d7ec3f972166e42882fbe99531f2baeaa"} Oct 14 10:09:25 crc kubenswrapper[4698]: I1014 10:09:25.355190 4698 scope.go:117] "RemoveContainer" containerID="7bd8994f5dd54ce86aa00291175896739442fe60c5dc447ef4db8b66329803e4" Oct 14 10:09:25 crc kubenswrapper[4698]: I1014 10:09:25.358199 4698 generic.go:334] "Generic (PLEG): container finished" podID="d746febc-7247-498c-86b1-8cb4640cbccc" containerID="313f17ca104537269f10049ad72f0650b99b8f57676480653278000c496be49f" exitCode=0 Oct 14 10:09:25 crc kubenswrapper[4698]: I1014 10:09:25.358249 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-lbk6k" event={"ID":"d746febc-7247-498c-86b1-8cb4640cbccc","Type":"ContainerDied","Data":"313f17ca104537269f10049ad72f0650b99b8f57676480653278000c496be49f"} Oct 14 10:09:25 crc kubenswrapper[4698]: I1014 10:09:25.358504 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-lbk6k" event={"ID":"d746febc-7247-498c-86b1-8cb4640cbccc","Type":"ContainerDied","Data":"0d1d7ce638171e21b5d2145ad95ed9bada1b340e3e1ab98a48601124634734bd"} Oct 14 10:09:25 crc kubenswrapper[4698]: I1014 10:09:25.358270 4698 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-lbk6k" Oct 14 10:09:25 crc kubenswrapper[4698]: I1014 10:09:25.371367 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-c6gg9"] Oct 14 10:09:25 crc kubenswrapper[4698]: I1014 10:09:25.375426 4698 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-c6gg9"] Oct 14 10:09:25 crc kubenswrapper[4698]: I1014 10:09:25.378925 4698 scope.go:117] "RemoveContainer" containerID="7bd8994f5dd54ce86aa00291175896739442fe60c5dc447ef4db8b66329803e4" Oct 14 10:09:25 crc kubenswrapper[4698]: E1014 10:09:25.381125 4698 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7bd8994f5dd54ce86aa00291175896739442fe60c5dc447ef4db8b66329803e4\": container with ID starting with 7bd8994f5dd54ce86aa00291175896739442fe60c5dc447ef4db8b66329803e4 not found: ID does not exist" containerID="7bd8994f5dd54ce86aa00291175896739442fe60c5dc447ef4db8b66329803e4" Oct 14 10:09:25 crc kubenswrapper[4698]: I1014 10:09:25.381189 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7bd8994f5dd54ce86aa00291175896739442fe60c5dc447ef4db8b66329803e4"} err="failed to get container status \"7bd8994f5dd54ce86aa00291175896739442fe60c5dc447ef4db8b66329803e4\": rpc error: code = NotFound desc = could not find container \"7bd8994f5dd54ce86aa00291175896739442fe60c5dc447ef4db8b66329803e4\": container with ID starting with 7bd8994f5dd54ce86aa00291175896739442fe60c5dc447ef4db8b66329803e4 not found: ID does not exist" Oct 14 10:09:25 crc kubenswrapper[4698]: I1014 10:09:25.381217 4698 scope.go:117] "RemoveContainer" containerID="313f17ca104537269f10049ad72f0650b99b8f57676480653278000c496be49f" Oct 14 10:09:25 crc kubenswrapper[4698]: I1014 10:09:25.392543 4698 kubelet.go:2437] "SyncLoop 
DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-lbk6k"] Oct 14 10:09:25 crc kubenswrapper[4698]: I1014 10:09:25.396992 4698 scope.go:117] "RemoveContainer" containerID="313f17ca104537269f10049ad72f0650b99b8f57676480653278000c496be49f" Oct 14 10:09:25 crc kubenswrapper[4698]: E1014 10:09:25.397478 4698 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"313f17ca104537269f10049ad72f0650b99b8f57676480653278000c496be49f\": container with ID starting with 313f17ca104537269f10049ad72f0650b99b8f57676480653278000c496be49f not found: ID does not exist" containerID="313f17ca104537269f10049ad72f0650b99b8f57676480653278000c496be49f" Oct 14 10:09:25 crc kubenswrapper[4698]: I1014 10:09:25.397513 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"313f17ca104537269f10049ad72f0650b99b8f57676480653278000c496be49f"} err="failed to get container status \"313f17ca104537269f10049ad72f0650b99b8f57676480653278000c496be49f\": rpc error: code = NotFound desc = could not find container \"313f17ca104537269f10049ad72f0650b99b8f57676480653278000c496be49f\": container with ID starting with 313f17ca104537269f10049ad72f0650b99b8f57676480653278000c496be49f not found: ID does not exist" Oct 14 10:09:25 crc kubenswrapper[4698]: I1014 10:09:25.400472 4698 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-lbk6k"] Oct 14 10:09:25 crc kubenswrapper[4698]: I1014 10:09:25.414659 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/83cb743b-1714-46a7-87a1-d8114937e07f-client-ca\") pod \"route-controller-manager-7db8fc5bf8-mzr8d\" (UID: \"83cb743b-1714-46a7-87a1-d8114937e07f\") " pod="openshift-route-controller-manager/route-controller-manager-7db8fc5bf8-mzr8d" Oct 
14 10:09:25 crc kubenswrapper[4698]: I1014 10:09:25.414736 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/83cb743b-1714-46a7-87a1-d8114937e07f-serving-cert\") pod \"route-controller-manager-7db8fc5bf8-mzr8d\" (UID: \"83cb743b-1714-46a7-87a1-d8114937e07f\") " pod="openshift-route-controller-manager/route-controller-manager-7db8fc5bf8-mzr8d" Oct 14 10:09:25 crc kubenswrapper[4698]: I1014 10:09:25.414852 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e36fafc2-364a-409c-bd84-a2ec93a678b9-client-ca\") pod \"controller-manager-7c46ffdd96-m552j\" (UID: \"e36fafc2-364a-409c-bd84-a2ec93a678b9\") " pod="openshift-controller-manager/controller-manager-7c46ffdd96-m552j" Oct 14 10:09:25 crc kubenswrapper[4698]: I1014 10:09:25.414898 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e36fafc2-364a-409c-bd84-a2ec93a678b9-proxy-ca-bundles\") pod \"controller-manager-7c46ffdd96-m552j\" (UID: \"e36fafc2-364a-409c-bd84-a2ec93a678b9\") " pod="openshift-controller-manager/controller-manager-7c46ffdd96-m552j" Oct 14 10:09:25 crc kubenswrapper[4698]: I1014 10:09:25.414981 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e36fafc2-364a-409c-bd84-a2ec93a678b9-config\") pod \"controller-manager-7c46ffdd96-m552j\" (UID: \"e36fafc2-364a-409c-bd84-a2ec93a678b9\") " pod="openshift-controller-manager/controller-manager-7c46ffdd96-m552j" Oct 14 10:09:25 crc kubenswrapper[4698]: I1014 10:09:25.415046 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-429z8\" (UniqueName: 
\"kubernetes.io/projected/83cb743b-1714-46a7-87a1-d8114937e07f-kube-api-access-429z8\") pod \"route-controller-manager-7db8fc5bf8-mzr8d\" (UID: \"83cb743b-1714-46a7-87a1-d8114937e07f\") " pod="openshift-route-controller-manager/route-controller-manager-7db8fc5bf8-mzr8d" Oct 14 10:09:25 crc kubenswrapper[4698]: I1014 10:09:25.415075 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/83cb743b-1714-46a7-87a1-d8114937e07f-config\") pod \"route-controller-manager-7db8fc5bf8-mzr8d\" (UID: \"83cb743b-1714-46a7-87a1-d8114937e07f\") " pod="openshift-route-controller-manager/route-controller-manager-7db8fc5bf8-mzr8d" Oct 14 10:09:25 crc kubenswrapper[4698]: I1014 10:09:25.415098 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e36fafc2-364a-409c-bd84-a2ec93a678b9-serving-cert\") pod \"controller-manager-7c46ffdd96-m552j\" (UID: \"e36fafc2-364a-409c-bd84-a2ec93a678b9\") " pod="openshift-controller-manager/controller-manager-7c46ffdd96-m552j" Oct 14 10:09:25 crc kubenswrapper[4698]: I1014 10:09:25.415168 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rsp6r\" (UniqueName: \"kubernetes.io/projected/e36fafc2-364a-409c-bd84-a2ec93a678b9-kube-api-access-rsp6r\") pod \"controller-manager-7c46ffdd96-m552j\" (UID: \"e36fafc2-364a-409c-bd84-a2ec93a678b9\") " pod="openshift-controller-manager/controller-manager-7c46ffdd96-m552j" Oct 14 10:09:25 crc kubenswrapper[4698]: I1014 10:09:25.516748 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e36fafc2-364a-409c-bd84-a2ec93a678b9-client-ca\") pod \"controller-manager-7c46ffdd96-m552j\" (UID: \"e36fafc2-364a-409c-bd84-a2ec93a678b9\") " 
pod="openshift-controller-manager/controller-manager-7c46ffdd96-m552j" Oct 14 10:09:25 crc kubenswrapper[4698]: I1014 10:09:25.516889 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e36fafc2-364a-409c-bd84-a2ec93a678b9-proxy-ca-bundles\") pod \"controller-manager-7c46ffdd96-m552j\" (UID: \"e36fafc2-364a-409c-bd84-a2ec93a678b9\") " pod="openshift-controller-manager/controller-manager-7c46ffdd96-m552j" Oct 14 10:09:25 crc kubenswrapper[4698]: I1014 10:09:25.516959 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e36fafc2-364a-409c-bd84-a2ec93a678b9-config\") pod \"controller-manager-7c46ffdd96-m552j\" (UID: \"e36fafc2-364a-409c-bd84-a2ec93a678b9\") " pod="openshift-controller-manager/controller-manager-7c46ffdd96-m552j" Oct 14 10:09:25 crc kubenswrapper[4698]: I1014 10:09:25.517013 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-429z8\" (UniqueName: \"kubernetes.io/projected/83cb743b-1714-46a7-87a1-d8114937e07f-kube-api-access-429z8\") pod \"route-controller-manager-7db8fc5bf8-mzr8d\" (UID: \"83cb743b-1714-46a7-87a1-d8114937e07f\") " pod="openshift-route-controller-manager/route-controller-manager-7db8fc5bf8-mzr8d" Oct 14 10:09:25 crc kubenswrapper[4698]: I1014 10:09:25.517049 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/83cb743b-1714-46a7-87a1-d8114937e07f-config\") pod \"route-controller-manager-7db8fc5bf8-mzr8d\" (UID: \"83cb743b-1714-46a7-87a1-d8114937e07f\") " pod="openshift-route-controller-manager/route-controller-manager-7db8fc5bf8-mzr8d" Oct 14 10:09:25 crc kubenswrapper[4698]: I1014 10:09:25.517079 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/e36fafc2-364a-409c-bd84-a2ec93a678b9-serving-cert\") pod \"controller-manager-7c46ffdd96-m552j\" (UID: \"e36fafc2-364a-409c-bd84-a2ec93a678b9\") " pod="openshift-controller-manager/controller-manager-7c46ffdd96-m552j" Oct 14 10:09:25 crc kubenswrapper[4698]: I1014 10:09:25.517127 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rsp6r\" (UniqueName: \"kubernetes.io/projected/e36fafc2-364a-409c-bd84-a2ec93a678b9-kube-api-access-rsp6r\") pod \"controller-manager-7c46ffdd96-m552j\" (UID: \"e36fafc2-364a-409c-bd84-a2ec93a678b9\") " pod="openshift-controller-manager/controller-manager-7c46ffdd96-m552j" Oct 14 10:09:25 crc kubenswrapper[4698]: I1014 10:09:25.517189 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/83cb743b-1714-46a7-87a1-d8114937e07f-client-ca\") pod \"route-controller-manager-7db8fc5bf8-mzr8d\" (UID: \"83cb743b-1714-46a7-87a1-d8114937e07f\") " pod="openshift-route-controller-manager/route-controller-manager-7db8fc5bf8-mzr8d" Oct 14 10:09:25 crc kubenswrapper[4698]: I1014 10:09:25.517236 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/83cb743b-1714-46a7-87a1-d8114937e07f-serving-cert\") pod \"route-controller-manager-7db8fc5bf8-mzr8d\" (UID: \"83cb743b-1714-46a7-87a1-d8114937e07f\") " pod="openshift-route-controller-manager/route-controller-manager-7db8fc5bf8-mzr8d" Oct 14 10:09:25 crc kubenswrapper[4698]: I1014 10:09:25.518718 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e36fafc2-364a-409c-bd84-a2ec93a678b9-proxy-ca-bundles\") pod \"controller-manager-7c46ffdd96-m552j\" (UID: \"e36fafc2-364a-409c-bd84-a2ec93a678b9\") " pod="openshift-controller-manager/controller-manager-7c46ffdd96-m552j" Oct 14 10:09:25 crc kubenswrapper[4698]: 
I1014 10:09:25.519017 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/83cb743b-1714-46a7-87a1-d8114937e07f-client-ca\") pod \"route-controller-manager-7db8fc5bf8-mzr8d\" (UID: \"83cb743b-1714-46a7-87a1-d8114937e07f\") " pod="openshift-route-controller-manager/route-controller-manager-7db8fc5bf8-mzr8d" Oct 14 10:09:25 crc kubenswrapper[4698]: I1014 10:09:25.519130 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/83cb743b-1714-46a7-87a1-d8114937e07f-config\") pod \"route-controller-manager-7db8fc5bf8-mzr8d\" (UID: \"83cb743b-1714-46a7-87a1-d8114937e07f\") " pod="openshift-route-controller-manager/route-controller-manager-7db8fc5bf8-mzr8d" Oct 14 10:09:25 crc kubenswrapper[4698]: I1014 10:09:25.519696 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e36fafc2-364a-409c-bd84-a2ec93a678b9-client-ca\") pod \"controller-manager-7c46ffdd96-m552j\" (UID: \"e36fafc2-364a-409c-bd84-a2ec93a678b9\") " pod="openshift-controller-manager/controller-manager-7c46ffdd96-m552j" Oct 14 10:09:25 crc kubenswrapper[4698]: I1014 10:09:25.520019 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e36fafc2-364a-409c-bd84-a2ec93a678b9-config\") pod \"controller-manager-7c46ffdd96-m552j\" (UID: \"e36fafc2-364a-409c-bd84-a2ec93a678b9\") " pod="openshift-controller-manager/controller-manager-7c46ffdd96-m552j" Oct 14 10:09:25 crc kubenswrapper[4698]: I1014 10:09:25.524121 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/83cb743b-1714-46a7-87a1-d8114937e07f-serving-cert\") pod \"route-controller-manager-7db8fc5bf8-mzr8d\" (UID: \"83cb743b-1714-46a7-87a1-d8114937e07f\") " 
pod="openshift-route-controller-manager/route-controller-manager-7db8fc5bf8-mzr8d" Oct 14 10:09:25 crc kubenswrapper[4698]: I1014 10:09:25.524596 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e36fafc2-364a-409c-bd84-a2ec93a678b9-serving-cert\") pod \"controller-manager-7c46ffdd96-m552j\" (UID: \"e36fafc2-364a-409c-bd84-a2ec93a678b9\") " pod="openshift-controller-manager/controller-manager-7c46ffdd96-m552j" Oct 14 10:09:25 crc kubenswrapper[4698]: I1014 10:09:25.543939 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rsp6r\" (UniqueName: \"kubernetes.io/projected/e36fafc2-364a-409c-bd84-a2ec93a678b9-kube-api-access-rsp6r\") pod \"controller-manager-7c46ffdd96-m552j\" (UID: \"e36fafc2-364a-409c-bd84-a2ec93a678b9\") " pod="openshift-controller-manager/controller-manager-7c46ffdd96-m552j" Oct 14 10:09:25 crc kubenswrapper[4698]: I1014 10:09:25.547330 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-429z8\" (UniqueName: \"kubernetes.io/projected/83cb743b-1714-46a7-87a1-d8114937e07f-kube-api-access-429z8\") pod \"route-controller-manager-7db8fc5bf8-mzr8d\" (UID: \"83cb743b-1714-46a7-87a1-d8114937e07f\") " pod="openshift-route-controller-manager/route-controller-manager-7db8fc5bf8-mzr8d" Oct 14 10:09:25 crc kubenswrapper[4698]: I1014 10:09:25.579835 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7c46ffdd96-m552j" Oct 14 10:09:25 crc kubenswrapper[4698]: I1014 10:09:25.591674 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7db8fc5bf8-mzr8d" Oct 14 10:09:25 crc kubenswrapper[4698]: I1014 10:09:25.846042 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-7c46ffdd96-m552j"] Oct 14 10:09:25 crc kubenswrapper[4698]: W1014 10:09:25.863371 4698 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode36fafc2_364a_409c_bd84_a2ec93a678b9.slice/crio-24d9e58e3bb72651765409a4d9aeb2ccef25d676248c7c5c8509fd261f17ebd4 WatchSource:0}: Error finding container 24d9e58e3bb72651765409a4d9aeb2ccef25d676248c7c5c8509fd261f17ebd4: Status 404 returned error can't find the container with id 24d9e58e3bb72651765409a4d9aeb2ccef25d676248c7c5c8509fd261f17ebd4 Oct 14 10:09:25 crc kubenswrapper[4698]: I1014 10:09:25.905625 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7db8fc5bf8-mzr8d"] Oct 14 10:09:26 crc kubenswrapper[4698]: I1014 10:09:26.369812 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7c46ffdd96-m552j" event={"ID":"e36fafc2-364a-409c-bd84-a2ec93a678b9","Type":"ContainerStarted","Data":"c9dffab84b9e150687dd74a189a943f2ad073b14660e25403925b8757ffea206"} Oct 14 10:09:26 crc kubenswrapper[4698]: I1014 10:09:26.369876 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7c46ffdd96-m552j" event={"ID":"e36fafc2-364a-409c-bd84-a2ec93a678b9","Type":"ContainerStarted","Data":"24d9e58e3bb72651765409a4d9aeb2ccef25d676248c7c5c8509fd261f17ebd4"} Oct 14 10:09:26 crc kubenswrapper[4698]: I1014 10:09:26.369904 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-7c46ffdd96-m552j" Oct 14 10:09:26 crc kubenswrapper[4698]: I1014 10:09:26.373010 4698 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7db8fc5bf8-mzr8d" event={"ID":"83cb743b-1714-46a7-87a1-d8114937e07f","Type":"ContainerStarted","Data":"7329eb1f572829719ad8b50ac075ae97400d1f773638d7cb823328ade26d03e1"} Oct 14 10:09:26 crc kubenswrapper[4698]: I1014 10:09:26.373056 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7db8fc5bf8-mzr8d" event={"ID":"83cb743b-1714-46a7-87a1-d8114937e07f","Type":"ContainerStarted","Data":"cc2cc34ae3ecfb4686aaf186ada6d22f05859a486a987ff27f9356dacd33bf48"} Oct 14 10:09:26 crc kubenswrapper[4698]: I1014 10:09:26.373839 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-7db8fc5bf8-mzr8d" Oct 14 10:09:26 crc kubenswrapper[4698]: I1014 10:09:26.381279 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-7db8fc5bf8-mzr8d" Oct 14 10:09:26 crc kubenswrapper[4698]: I1014 10:09:26.383248 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-7c46ffdd96-m552j" Oct 14 10:09:26 crc kubenswrapper[4698]: I1014 10:09:26.425889 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-7c46ffdd96-m552j" podStartSLOduration=1.425867585 podStartE2EDuration="1.425867585s" podCreationTimestamp="2025-10-14 10:09:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 10:09:26.391012314 +0000 UTC m=+748.088311740" watchObservedRunningTime="2025-10-14 10:09:26.425867585 +0000 UTC m=+748.123167001" Oct 14 10:09:26 crc kubenswrapper[4698]: I1014 10:09:26.439271 4698 pod_startup_latency_tracker.go:104] "Observed pod 
startup duration" pod="openshift-route-controller-manager/route-controller-manager-7db8fc5bf8-mzr8d" podStartSLOduration=1.439249929 podStartE2EDuration="1.439249929s" podCreationTimestamp="2025-10-14 10:09:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 10:09:26.438016363 +0000 UTC m=+748.135315789" watchObservedRunningTime="2025-10-14 10:09:26.439249929 +0000 UTC m=+748.136549375" Oct 14 10:09:27 crc kubenswrapper[4698]: I1014 10:09:27.023753 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="67e52335-6348-488a-a36a-8971b953737b" path="/var/lib/kubelet/pods/67e52335-6348-488a-a36a-8971b953737b/volumes" Oct 14 10:09:27 crc kubenswrapper[4698]: I1014 10:09:27.024907 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d746febc-7247-498c-86b1-8cb4640cbccc" path="/var/lib/kubelet/pods/d746febc-7247-498c-86b1-8cb4640cbccc/volumes" Oct 14 10:09:29 crc kubenswrapper[4698]: I1014 10:09:29.636815 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-webhook-6cdbc54649-8vhjp" Oct 14 10:09:30 crc kubenswrapper[4698]: I1014 10:09:30.608660 4698 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Oct 14 10:09:45 crc kubenswrapper[4698]: I1014 10:09:45.303502 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2ml8md"] Oct 14 10:09:45 crc kubenswrapper[4698]: I1014 10:09:45.306016 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2ml8md" Oct 14 10:09:45 crc kubenswrapper[4698]: I1014 10:09:45.308697 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Oct 14 10:09:45 crc kubenswrapper[4698]: I1014 10:09:45.326710 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2ml8md"] Oct 14 10:09:45 crc kubenswrapper[4698]: I1014 10:09:45.428107 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/fec6695b-3ca9-4ae5-83f8-23cf2289cb14-bundle\") pod \"8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2ml8md\" (UID: \"fec6695b-3ca9-4ae5-83f8-23cf2289cb14\") " pod="openshift-marketplace/8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2ml8md" Oct 14 10:09:45 crc kubenswrapper[4698]: I1014 10:09:45.428220 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/fec6695b-3ca9-4ae5-83f8-23cf2289cb14-util\") pod \"8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2ml8md\" (UID: \"fec6695b-3ca9-4ae5-83f8-23cf2289cb14\") " pod="openshift-marketplace/8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2ml8md" Oct 14 10:09:45 crc kubenswrapper[4698]: I1014 10:09:45.428296 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7lfvg\" (UniqueName: \"kubernetes.io/projected/fec6695b-3ca9-4ae5-83f8-23cf2289cb14-kube-api-access-7lfvg\") pod \"8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2ml8md\" (UID: \"fec6695b-3ca9-4ae5-83f8-23cf2289cb14\") " pod="openshift-marketplace/8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2ml8md" Oct 14 10:09:45 crc kubenswrapper[4698]: 
I1014 10:09:45.503689 4698 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-f9d7485db-f47kf" podUID="abe6a35d-8cd2-4749-b9cf-8d11f6169470" containerName="console" containerID="cri-o://1d0b4bddb1b33c273883cb204c8a2334846f95248f79d824da37449983e6897e" gracePeriod=15 Oct 14 10:09:45 crc kubenswrapper[4698]: I1014 10:09:45.529934 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/fec6695b-3ca9-4ae5-83f8-23cf2289cb14-bundle\") pod \"8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2ml8md\" (UID: \"fec6695b-3ca9-4ae5-83f8-23cf2289cb14\") " pod="openshift-marketplace/8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2ml8md" Oct 14 10:09:45 crc kubenswrapper[4698]: I1014 10:09:45.530029 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/fec6695b-3ca9-4ae5-83f8-23cf2289cb14-util\") pod \"8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2ml8md\" (UID: \"fec6695b-3ca9-4ae5-83f8-23cf2289cb14\") " pod="openshift-marketplace/8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2ml8md" Oct 14 10:09:45 crc kubenswrapper[4698]: I1014 10:09:45.530077 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7lfvg\" (UniqueName: \"kubernetes.io/projected/fec6695b-3ca9-4ae5-83f8-23cf2289cb14-kube-api-access-7lfvg\") pod \"8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2ml8md\" (UID: \"fec6695b-3ca9-4ae5-83f8-23cf2289cb14\") " pod="openshift-marketplace/8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2ml8md" Oct 14 10:09:45 crc kubenswrapper[4698]: I1014 10:09:45.530890 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/fec6695b-3ca9-4ae5-83f8-23cf2289cb14-bundle\") pod 
\"8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2ml8md\" (UID: \"fec6695b-3ca9-4ae5-83f8-23cf2289cb14\") " pod="openshift-marketplace/8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2ml8md" Oct 14 10:09:45 crc kubenswrapper[4698]: I1014 10:09:45.530962 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/fec6695b-3ca9-4ae5-83f8-23cf2289cb14-util\") pod \"8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2ml8md\" (UID: \"fec6695b-3ca9-4ae5-83f8-23cf2289cb14\") " pod="openshift-marketplace/8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2ml8md" Oct 14 10:09:45 crc kubenswrapper[4698]: I1014 10:09:45.566577 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7lfvg\" (UniqueName: \"kubernetes.io/projected/fec6695b-3ca9-4ae5-83f8-23cf2289cb14-kube-api-access-7lfvg\") pod \"8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2ml8md\" (UID: \"fec6695b-3ca9-4ae5-83f8-23cf2289cb14\") " pod="openshift-marketplace/8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2ml8md" Oct 14 10:09:45 crc kubenswrapper[4698]: I1014 10:09:45.628287 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2ml8md" Oct 14 10:09:45 crc kubenswrapper[4698]: I1014 10:09:45.980728 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-f47kf_abe6a35d-8cd2-4749-b9cf-8d11f6169470/console/0.log" Oct 14 10:09:45 crc kubenswrapper[4698]: I1014 10:09:45.980799 4698 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-f47kf" Oct 14 10:09:46 crc kubenswrapper[4698]: I1014 10:09:46.115364 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2ml8md"] Oct 14 10:09:46 crc kubenswrapper[4698]: W1014 10:09:46.121087 4698 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfec6695b_3ca9_4ae5_83f8_23cf2289cb14.slice/crio-bd96465218bcf89762333ba80aa56470c5741035373c82f70876cc2a15c18367 WatchSource:0}: Error finding container bd96465218bcf89762333ba80aa56470c5741035373c82f70876cc2a15c18367: Status 404 returned error can't find the container with id bd96465218bcf89762333ba80aa56470c5741035373c82f70876cc2a15c18367 Oct 14 10:09:46 crc kubenswrapper[4698]: I1014 10:09:46.139266 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/abe6a35d-8cd2-4749-b9cf-8d11f6169470-console-config\") pod \"abe6a35d-8cd2-4749-b9cf-8d11f6169470\" (UID: \"abe6a35d-8cd2-4749-b9cf-8d11f6169470\") " Oct 14 10:09:46 crc kubenswrapper[4698]: I1014 10:09:46.139320 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/abe6a35d-8cd2-4749-b9cf-8d11f6169470-console-oauth-config\") pod \"abe6a35d-8cd2-4749-b9cf-8d11f6169470\" (UID: \"abe6a35d-8cd2-4749-b9cf-8d11f6169470\") " Oct 14 10:09:46 crc kubenswrapper[4698]: I1014 10:09:46.139376 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7r7rb\" (UniqueName: \"kubernetes.io/projected/abe6a35d-8cd2-4749-b9cf-8d11f6169470-kube-api-access-7r7rb\") pod \"abe6a35d-8cd2-4749-b9cf-8d11f6169470\" (UID: \"abe6a35d-8cd2-4749-b9cf-8d11f6169470\") " Oct 14 10:09:46 crc kubenswrapper[4698]: I1014 10:09:46.140196 4698 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/abe6a35d-8cd2-4749-b9cf-8d11f6169470-console-config" (OuterVolumeSpecName: "console-config") pod "abe6a35d-8cd2-4749-b9cf-8d11f6169470" (UID: "abe6a35d-8cd2-4749-b9cf-8d11f6169470"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 10:09:46 crc kubenswrapper[4698]: I1014 10:09:46.140454 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/abe6a35d-8cd2-4749-b9cf-8d11f6169470-trusted-ca-bundle\") pod \"abe6a35d-8cd2-4749-b9cf-8d11f6169470\" (UID: \"abe6a35d-8cd2-4749-b9cf-8d11f6169470\") " Oct 14 10:09:46 crc kubenswrapper[4698]: I1014 10:09:46.140554 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/abe6a35d-8cd2-4749-b9cf-8d11f6169470-console-serving-cert\") pod \"abe6a35d-8cd2-4749-b9cf-8d11f6169470\" (UID: \"abe6a35d-8cd2-4749-b9cf-8d11f6169470\") " Oct 14 10:09:46 crc kubenswrapper[4698]: I1014 10:09:46.140584 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/abe6a35d-8cd2-4749-b9cf-8d11f6169470-service-ca\") pod \"abe6a35d-8cd2-4749-b9cf-8d11f6169470\" (UID: \"abe6a35d-8cd2-4749-b9cf-8d11f6169470\") " Oct 14 10:09:46 crc kubenswrapper[4698]: I1014 10:09:46.140612 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/abe6a35d-8cd2-4749-b9cf-8d11f6169470-oauth-serving-cert\") pod \"abe6a35d-8cd2-4749-b9cf-8d11f6169470\" (UID: \"abe6a35d-8cd2-4749-b9cf-8d11f6169470\") " Oct 14 10:09:46 crc kubenswrapper[4698]: I1014 10:09:46.140964 4698 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: 
\"kubernetes.io/configmap/abe6a35d-8cd2-4749-b9cf-8d11f6169470-console-config\") on node \"crc\" DevicePath \"\"" Oct 14 10:09:46 crc kubenswrapper[4698]: I1014 10:09:46.141072 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/abe6a35d-8cd2-4749-b9cf-8d11f6169470-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "abe6a35d-8cd2-4749-b9cf-8d11f6169470" (UID: "abe6a35d-8cd2-4749-b9cf-8d11f6169470"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 10:09:46 crc kubenswrapper[4698]: I1014 10:09:46.141985 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/abe6a35d-8cd2-4749-b9cf-8d11f6169470-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "abe6a35d-8cd2-4749-b9cf-8d11f6169470" (UID: "abe6a35d-8cd2-4749-b9cf-8d11f6169470"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 10:09:46 crc kubenswrapper[4698]: I1014 10:09:46.143018 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/abe6a35d-8cd2-4749-b9cf-8d11f6169470-service-ca" (OuterVolumeSpecName: "service-ca") pod "abe6a35d-8cd2-4749-b9cf-8d11f6169470" (UID: "abe6a35d-8cd2-4749-b9cf-8d11f6169470"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 10:09:46 crc kubenswrapper[4698]: I1014 10:09:46.145232 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/abe6a35d-8cd2-4749-b9cf-8d11f6169470-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "abe6a35d-8cd2-4749-b9cf-8d11f6169470" (UID: "abe6a35d-8cd2-4749-b9cf-8d11f6169470"). InnerVolumeSpecName "console-oauth-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 10:09:46 crc kubenswrapper[4698]: I1014 10:09:46.145604 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/abe6a35d-8cd2-4749-b9cf-8d11f6169470-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "abe6a35d-8cd2-4749-b9cf-8d11f6169470" (UID: "abe6a35d-8cd2-4749-b9cf-8d11f6169470"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 10:09:46 crc kubenswrapper[4698]: I1014 10:09:46.147546 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/abe6a35d-8cd2-4749-b9cf-8d11f6169470-kube-api-access-7r7rb" (OuterVolumeSpecName: "kube-api-access-7r7rb") pod "abe6a35d-8cd2-4749-b9cf-8d11f6169470" (UID: "abe6a35d-8cd2-4749-b9cf-8d11f6169470"). InnerVolumeSpecName "kube-api-access-7r7rb". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 10:09:46 crc kubenswrapper[4698]: I1014 10:09:46.242250 4698 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/abe6a35d-8cd2-4749-b9cf-8d11f6169470-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 14 10:09:46 crc kubenswrapper[4698]: I1014 10:09:46.242284 4698 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/abe6a35d-8cd2-4749-b9cf-8d11f6169470-console-serving-cert\") on node \"crc\" DevicePath \"\"" Oct 14 10:09:46 crc kubenswrapper[4698]: I1014 10:09:46.242296 4698 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/abe6a35d-8cd2-4749-b9cf-8d11f6169470-service-ca\") on node \"crc\" DevicePath \"\"" Oct 14 10:09:46 crc kubenswrapper[4698]: I1014 10:09:46.242319 4698 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: 
\"kubernetes.io/configmap/abe6a35d-8cd2-4749-b9cf-8d11f6169470-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Oct 14 10:09:46 crc kubenswrapper[4698]: I1014 10:09:46.242327 4698 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/abe6a35d-8cd2-4749-b9cf-8d11f6169470-console-oauth-config\") on node \"crc\" DevicePath \"\"" Oct 14 10:09:46 crc kubenswrapper[4698]: I1014 10:09:46.242337 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7r7rb\" (UniqueName: \"kubernetes.io/projected/abe6a35d-8cd2-4749-b9cf-8d11f6169470-kube-api-access-7r7rb\") on node \"crc\" DevicePath \"\"" Oct 14 10:09:46 crc kubenswrapper[4698]: I1014 10:09:46.511598 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-f47kf_abe6a35d-8cd2-4749-b9cf-8d11f6169470/console/0.log" Oct 14 10:09:46 crc kubenswrapper[4698]: I1014 10:09:46.511691 4698 generic.go:334] "Generic (PLEG): container finished" podID="abe6a35d-8cd2-4749-b9cf-8d11f6169470" containerID="1d0b4bddb1b33c273883cb204c8a2334846f95248f79d824da37449983e6897e" exitCode=2 Oct 14 10:09:46 crc kubenswrapper[4698]: I1014 10:09:46.511752 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-f47kf" event={"ID":"abe6a35d-8cd2-4749-b9cf-8d11f6169470","Type":"ContainerDied","Data":"1d0b4bddb1b33c273883cb204c8a2334846f95248f79d824da37449983e6897e"} Oct 14 10:09:46 crc kubenswrapper[4698]: I1014 10:09:46.511803 4698 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-f47kf" Oct 14 10:09:46 crc kubenswrapper[4698]: I1014 10:09:46.511841 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-f47kf" event={"ID":"abe6a35d-8cd2-4749-b9cf-8d11f6169470","Type":"ContainerDied","Data":"e2f8894e500e1913b997dda19bafa5f56d8dba598ca411c8f7235a7930cfee29"} Oct 14 10:09:46 crc kubenswrapper[4698]: I1014 10:09:46.511873 4698 scope.go:117] "RemoveContainer" containerID="1d0b4bddb1b33c273883cb204c8a2334846f95248f79d824da37449983e6897e" Oct 14 10:09:46 crc kubenswrapper[4698]: I1014 10:09:46.516710 4698 generic.go:334] "Generic (PLEG): container finished" podID="fec6695b-3ca9-4ae5-83f8-23cf2289cb14" containerID="221574813ac547866e6458e1a3a333d6e68ee2378790a5f44871494f999de8a9" exitCode=0 Oct 14 10:09:46 crc kubenswrapper[4698]: I1014 10:09:46.517052 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2ml8md" event={"ID":"fec6695b-3ca9-4ae5-83f8-23cf2289cb14","Type":"ContainerDied","Data":"221574813ac547866e6458e1a3a333d6e68ee2378790a5f44871494f999de8a9"} Oct 14 10:09:46 crc kubenswrapper[4698]: I1014 10:09:46.517163 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2ml8md" event={"ID":"fec6695b-3ca9-4ae5-83f8-23cf2289cb14","Type":"ContainerStarted","Data":"bd96465218bcf89762333ba80aa56470c5741035373c82f70876cc2a15c18367"} Oct 14 10:09:46 crc kubenswrapper[4698]: I1014 10:09:46.541162 4698 scope.go:117] "RemoveContainer" containerID="1d0b4bddb1b33c273883cb204c8a2334846f95248f79d824da37449983e6897e" Oct 14 10:09:46 crc kubenswrapper[4698]: E1014 10:09:46.545653 4698 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1d0b4bddb1b33c273883cb204c8a2334846f95248f79d824da37449983e6897e\": container with ID 
starting with 1d0b4bddb1b33c273883cb204c8a2334846f95248f79d824da37449983e6897e not found: ID does not exist" containerID="1d0b4bddb1b33c273883cb204c8a2334846f95248f79d824da37449983e6897e" Oct 14 10:09:46 crc kubenswrapper[4698]: I1014 10:09:46.545756 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1d0b4bddb1b33c273883cb204c8a2334846f95248f79d824da37449983e6897e"} err="failed to get container status \"1d0b4bddb1b33c273883cb204c8a2334846f95248f79d824da37449983e6897e\": rpc error: code = NotFound desc = could not find container \"1d0b4bddb1b33c273883cb204c8a2334846f95248f79d824da37449983e6897e\": container with ID starting with 1d0b4bddb1b33c273883cb204c8a2334846f95248f79d824da37449983e6897e not found: ID does not exist" Oct 14 10:09:46 crc kubenswrapper[4698]: I1014 10:09:46.564606 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-f47kf"] Oct 14 10:09:46 crc kubenswrapper[4698]: I1014 10:09:46.568650 4698 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-f9d7485db-f47kf"] Oct 14 10:09:47 crc kubenswrapper[4698]: I1014 10:09:47.028219 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="abe6a35d-8cd2-4749-b9cf-8d11f6169470" path="/var/lib/kubelet/pods/abe6a35d-8cd2-4749-b9cf-8d11f6169470/volumes" Oct 14 10:09:48 crc kubenswrapper[4698]: I1014 10:09:48.540869 4698 generic.go:334] "Generic (PLEG): container finished" podID="fec6695b-3ca9-4ae5-83f8-23cf2289cb14" containerID="a7ae5c412efea26ac84da451cf5ece6fb6f0ed823a7d341b9bc4d5a303308273" exitCode=0 Oct 14 10:09:48 crc kubenswrapper[4698]: I1014 10:09:48.541093 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2ml8md" event={"ID":"fec6695b-3ca9-4ae5-83f8-23cf2289cb14","Type":"ContainerDied","Data":"a7ae5c412efea26ac84da451cf5ece6fb6f0ed823a7d341b9bc4d5a303308273"} Oct 14 10:09:48 crc 
kubenswrapper[4698]: I1014 10:09:48.644345 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-xzg99"] Oct 14 10:09:48 crc kubenswrapper[4698]: E1014 10:09:48.644653 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="abe6a35d-8cd2-4749-b9cf-8d11f6169470" containerName="console" Oct 14 10:09:48 crc kubenswrapper[4698]: I1014 10:09:48.644671 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="abe6a35d-8cd2-4749-b9cf-8d11f6169470" containerName="console" Oct 14 10:09:48 crc kubenswrapper[4698]: I1014 10:09:48.644906 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="abe6a35d-8cd2-4749-b9cf-8d11f6169470" containerName="console" Oct 14 10:09:48 crc kubenswrapper[4698]: I1014 10:09:48.646242 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-xzg99" Oct 14 10:09:48 crc kubenswrapper[4698]: I1014 10:09:48.675382 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-xzg99"] Oct 14 10:09:48 crc kubenswrapper[4698]: I1014 10:09:48.781830 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/48bd7a4f-246a-4220-b711-7f96af3893d0-utilities\") pod \"redhat-operators-xzg99\" (UID: \"48bd7a4f-246a-4220-b711-7f96af3893d0\") " pod="openshift-marketplace/redhat-operators-xzg99" Oct 14 10:09:48 crc kubenswrapper[4698]: I1014 10:09:48.781995 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pp5l4\" (UniqueName: \"kubernetes.io/projected/48bd7a4f-246a-4220-b711-7f96af3893d0-kube-api-access-pp5l4\") pod \"redhat-operators-xzg99\" (UID: \"48bd7a4f-246a-4220-b711-7f96af3893d0\") " pod="openshift-marketplace/redhat-operators-xzg99" Oct 14 10:09:48 crc kubenswrapper[4698]: I1014 10:09:48.782038 4698 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/48bd7a4f-246a-4220-b711-7f96af3893d0-catalog-content\") pod \"redhat-operators-xzg99\" (UID: \"48bd7a4f-246a-4220-b711-7f96af3893d0\") " pod="openshift-marketplace/redhat-operators-xzg99" Oct 14 10:09:48 crc kubenswrapper[4698]: I1014 10:09:48.883220 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/48bd7a4f-246a-4220-b711-7f96af3893d0-utilities\") pod \"redhat-operators-xzg99\" (UID: \"48bd7a4f-246a-4220-b711-7f96af3893d0\") " pod="openshift-marketplace/redhat-operators-xzg99" Oct 14 10:09:48 crc kubenswrapper[4698]: I1014 10:09:48.883369 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pp5l4\" (UniqueName: \"kubernetes.io/projected/48bd7a4f-246a-4220-b711-7f96af3893d0-kube-api-access-pp5l4\") pod \"redhat-operators-xzg99\" (UID: \"48bd7a4f-246a-4220-b711-7f96af3893d0\") " pod="openshift-marketplace/redhat-operators-xzg99" Oct 14 10:09:48 crc kubenswrapper[4698]: I1014 10:09:48.883409 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/48bd7a4f-246a-4220-b711-7f96af3893d0-catalog-content\") pod \"redhat-operators-xzg99\" (UID: \"48bd7a4f-246a-4220-b711-7f96af3893d0\") " pod="openshift-marketplace/redhat-operators-xzg99" Oct 14 10:09:48 crc kubenswrapper[4698]: I1014 10:09:48.883855 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/48bd7a4f-246a-4220-b711-7f96af3893d0-utilities\") pod \"redhat-operators-xzg99\" (UID: \"48bd7a4f-246a-4220-b711-7f96af3893d0\") " pod="openshift-marketplace/redhat-operators-xzg99" Oct 14 10:09:48 crc kubenswrapper[4698]: I1014 10:09:48.884168 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/48bd7a4f-246a-4220-b711-7f96af3893d0-catalog-content\") pod \"redhat-operators-xzg99\" (UID: \"48bd7a4f-246a-4220-b711-7f96af3893d0\") " pod="openshift-marketplace/redhat-operators-xzg99" Oct 14 10:09:48 crc kubenswrapper[4698]: I1014 10:09:48.909427 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pp5l4\" (UniqueName: \"kubernetes.io/projected/48bd7a4f-246a-4220-b711-7f96af3893d0-kube-api-access-pp5l4\") pod \"redhat-operators-xzg99\" (UID: \"48bd7a4f-246a-4220-b711-7f96af3893d0\") " pod="openshift-marketplace/redhat-operators-xzg99" Oct 14 10:09:48 crc kubenswrapper[4698]: I1014 10:09:48.995576 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-xzg99" Oct 14 10:09:49 crc kubenswrapper[4698]: I1014 10:09:49.432265 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-xzg99"] Oct 14 10:09:49 crc kubenswrapper[4698]: W1014 10:09:49.444348 4698 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod48bd7a4f_246a_4220_b711_7f96af3893d0.slice/crio-050f47988d819bc232dc7cd86f1b10131c94af4c56dc0707d86969a307e50e33 WatchSource:0}: Error finding container 050f47988d819bc232dc7cd86f1b10131c94af4c56dc0707d86969a307e50e33: Status 404 returned error can't find the container with id 050f47988d819bc232dc7cd86f1b10131c94af4c56dc0707d86969a307e50e33 Oct 14 10:09:49 crc kubenswrapper[4698]: I1014 10:09:49.549006 4698 generic.go:334] "Generic (PLEG): container finished" podID="fec6695b-3ca9-4ae5-83f8-23cf2289cb14" containerID="2ff1e2ac23a25e4d48fd3010f7b97ad13d5be33d89e2638364cea9519d6d2e45" exitCode=0 Oct 14 10:09:49 crc kubenswrapper[4698]: I1014 10:09:49.549109 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2ml8md" event={"ID":"fec6695b-3ca9-4ae5-83f8-23cf2289cb14","Type":"ContainerDied","Data":"2ff1e2ac23a25e4d48fd3010f7b97ad13d5be33d89e2638364cea9519d6d2e45"} Oct 14 10:09:49 crc kubenswrapper[4698]: I1014 10:09:49.550502 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xzg99" event={"ID":"48bd7a4f-246a-4220-b711-7f96af3893d0","Type":"ContainerStarted","Data":"050f47988d819bc232dc7cd86f1b10131c94af4c56dc0707d86969a307e50e33"} Oct 14 10:09:50 crc kubenswrapper[4698]: I1014 10:09:50.562302 4698 generic.go:334] "Generic (PLEG): container finished" podID="48bd7a4f-246a-4220-b711-7f96af3893d0" containerID="3e113d002fee21df9e6306d8f6b75f87084380338fdca67ee1e200bf5df00907" exitCode=0 Oct 14 10:09:50 crc kubenswrapper[4698]: I1014 10:09:50.562459 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xzg99" event={"ID":"48bd7a4f-246a-4220-b711-7f96af3893d0","Type":"ContainerDied","Data":"3e113d002fee21df9e6306d8f6b75f87084380338fdca67ee1e200bf5df00907"} Oct 14 10:09:50 crc kubenswrapper[4698]: I1014 10:09:50.959970 4698 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2ml8md" Oct 14 10:09:51 crc kubenswrapper[4698]: I1014 10:09:51.116508 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/fec6695b-3ca9-4ae5-83f8-23cf2289cb14-bundle\") pod \"fec6695b-3ca9-4ae5-83f8-23cf2289cb14\" (UID: \"fec6695b-3ca9-4ae5-83f8-23cf2289cb14\") " Oct 14 10:09:51 crc kubenswrapper[4698]: I1014 10:09:51.116998 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7lfvg\" (UniqueName: \"kubernetes.io/projected/fec6695b-3ca9-4ae5-83f8-23cf2289cb14-kube-api-access-7lfvg\") pod \"fec6695b-3ca9-4ae5-83f8-23cf2289cb14\" (UID: \"fec6695b-3ca9-4ae5-83f8-23cf2289cb14\") " Oct 14 10:09:51 crc kubenswrapper[4698]: I1014 10:09:51.117183 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/fec6695b-3ca9-4ae5-83f8-23cf2289cb14-util\") pod \"fec6695b-3ca9-4ae5-83f8-23cf2289cb14\" (UID: \"fec6695b-3ca9-4ae5-83f8-23cf2289cb14\") " Oct 14 10:09:51 crc kubenswrapper[4698]: I1014 10:09:51.118058 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fec6695b-3ca9-4ae5-83f8-23cf2289cb14-bundle" (OuterVolumeSpecName: "bundle") pod "fec6695b-3ca9-4ae5-83f8-23cf2289cb14" (UID: "fec6695b-3ca9-4ae5-83f8-23cf2289cb14"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 10:09:51 crc kubenswrapper[4698]: I1014 10:09:51.125717 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fec6695b-3ca9-4ae5-83f8-23cf2289cb14-kube-api-access-7lfvg" (OuterVolumeSpecName: "kube-api-access-7lfvg") pod "fec6695b-3ca9-4ae5-83f8-23cf2289cb14" (UID: "fec6695b-3ca9-4ae5-83f8-23cf2289cb14"). InnerVolumeSpecName "kube-api-access-7lfvg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 10:09:51 crc kubenswrapper[4698]: I1014 10:09:51.147756 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fec6695b-3ca9-4ae5-83f8-23cf2289cb14-util" (OuterVolumeSpecName: "util") pod "fec6695b-3ca9-4ae5-83f8-23cf2289cb14" (UID: "fec6695b-3ca9-4ae5-83f8-23cf2289cb14"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 10:09:51 crc kubenswrapper[4698]: I1014 10:09:51.219755 4698 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/fec6695b-3ca9-4ae5-83f8-23cf2289cb14-util\") on node \"crc\" DevicePath \"\"" Oct 14 10:09:51 crc kubenswrapper[4698]: I1014 10:09:51.219919 4698 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/fec6695b-3ca9-4ae5-83f8-23cf2289cb14-bundle\") on node \"crc\" DevicePath \"\"" Oct 14 10:09:51 crc kubenswrapper[4698]: I1014 10:09:51.219942 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7lfvg\" (UniqueName: \"kubernetes.io/projected/fec6695b-3ca9-4ae5-83f8-23cf2289cb14-kube-api-access-7lfvg\") on node \"crc\" DevicePath \"\"" Oct 14 10:09:51 crc kubenswrapper[4698]: I1014 10:09:51.576049 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xzg99" event={"ID":"48bd7a4f-246a-4220-b711-7f96af3893d0","Type":"ContainerStarted","Data":"a98f07192c9674835d0c6e907acb7d37973bd1f250acc808f33fba4b99af7440"} Oct 14 10:09:51 crc kubenswrapper[4698]: I1014 10:09:51.579815 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2ml8md" event={"ID":"fec6695b-3ca9-4ae5-83f8-23cf2289cb14","Type":"ContainerDied","Data":"bd96465218bcf89762333ba80aa56470c5741035373c82f70876cc2a15c18367"} Oct 14 10:09:51 crc kubenswrapper[4698]: I1014 10:09:51.579950 4698 
pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bd96465218bcf89762333ba80aa56470c5741035373c82f70876cc2a15c18367" Oct 14 10:09:51 crc kubenswrapper[4698]: I1014 10:09:51.580048 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2ml8md" Oct 14 10:09:52 crc kubenswrapper[4698]: I1014 10:09:52.586662 4698 generic.go:334] "Generic (PLEG): container finished" podID="48bd7a4f-246a-4220-b711-7f96af3893d0" containerID="a98f07192c9674835d0c6e907acb7d37973bd1f250acc808f33fba4b99af7440" exitCode=0 Oct 14 10:09:52 crc kubenswrapper[4698]: I1014 10:09:52.586721 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xzg99" event={"ID":"48bd7a4f-246a-4220-b711-7f96af3893d0","Type":"ContainerDied","Data":"a98f07192c9674835d0c6e907acb7d37973bd1f250acc808f33fba4b99af7440"} Oct 14 10:09:53 crc kubenswrapper[4698]: I1014 10:09:53.596301 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xzg99" event={"ID":"48bd7a4f-246a-4220-b711-7f96af3893d0","Type":"ContainerStarted","Data":"24b26f7ef8cba32a72fc57bde8fff6c05030201977908bf4e3c32f433cd1f439"} Oct 14 10:09:53 crc kubenswrapper[4698]: I1014 10:09:53.622191 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-xzg99" podStartSLOduration=3.156345286 podStartE2EDuration="5.622160533s" podCreationTimestamp="2025-10-14 10:09:48 +0000 UTC" firstStartedPulling="2025-10-14 10:09:50.564569142 +0000 UTC m=+772.261868588" lastFinishedPulling="2025-10-14 10:09:53.030384409 +0000 UTC m=+774.727683835" observedRunningTime="2025-10-14 10:09:53.614706719 +0000 UTC m=+775.312006185" watchObservedRunningTime="2025-10-14 10:09:53.622160533 +0000 UTC m=+775.319459989" Oct 14 10:09:53 crc kubenswrapper[4698]: I1014 10:09:53.908742 4698 patch_prober.go:28] 
interesting pod/machine-config-daemon-lp4sk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 14 10:09:53 crc kubenswrapper[4698]: I1014 10:09:53.909056 4698 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-lp4sk" podUID="c359a8fc-1e2f-49af-8da2-719d52bd969a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 14 10:09:58 crc kubenswrapper[4698]: I1014 10:09:58.996135 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-xzg99" Oct 14 10:09:58 crc kubenswrapper[4698]: I1014 10:09:58.996909 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-xzg99" Oct 14 10:09:59 crc kubenswrapper[4698]: I1014 10:09:59.087663 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-xzg99" Oct 14 10:09:59 crc kubenswrapper[4698]: I1014 10:09:59.676514 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-xzg99" Oct 14 10:09:59 crc kubenswrapper[4698]: I1014 10:09:59.981643 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-controller-manager-7556747f48-jxr6w"] Oct 14 10:09:59 crc kubenswrapper[4698]: E1014 10:09:59.981935 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fec6695b-3ca9-4ae5-83f8-23cf2289cb14" containerName="pull" Oct 14 10:09:59 crc kubenswrapper[4698]: I1014 10:09:59.981952 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="fec6695b-3ca9-4ae5-83f8-23cf2289cb14" containerName="pull" Oct 14 10:09:59 crc 
kubenswrapper[4698]: E1014 10:09:59.981967 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fec6695b-3ca9-4ae5-83f8-23cf2289cb14" containerName="extract" Oct 14 10:09:59 crc kubenswrapper[4698]: I1014 10:09:59.981975 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="fec6695b-3ca9-4ae5-83f8-23cf2289cb14" containerName="extract" Oct 14 10:09:59 crc kubenswrapper[4698]: E1014 10:09:59.981992 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fec6695b-3ca9-4ae5-83f8-23cf2289cb14" containerName="util" Oct 14 10:09:59 crc kubenswrapper[4698]: I1014 10:09:59.982000 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="fec6695b-3ca9-4ae5-83f8-23cf2289cb14" containerName="util" Oct 14 10:09:59 crc kubenswrapper[4698]: I1014 10:09:59.982143 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="fec6695b-3ca9-4ae5-83f8-23cf2289cb14" containerName="extract" Oct 14 10:09:59 crc kubenswrapper[4698]: I1014 10:09:59.982624 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-7556747f48-jxr6w" Oct 14 10:09:59 crc kubenswrapper[4698]: I1014 10:09:59.984954 4698 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-controller-manager-service-cert" Oct 14 10:09:59 crc kubenswrapper[4698]: I1014 10:09:59.985559 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"openshift-service-ca.crt" Oct 14 10:09:59 crc kubenswrapper[4698]: I1014 10:09:59.985802 4698 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-cert" Oct 14 10:09:59 crc kubenswrapper[4698]: I1014 10:09:59.986135 4698 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"manager-account-dockercfg-bq4zj" Oct 14 10:09:59 crc kubenswrapper[4698]: I1014 10:09:59.990167 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"kube-root-ca.crt" Oct 14 10:10:00 crc kubenswrapper[4698]: I1014 10:10:00.001515 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-7556747f48-jxr6w"] Oct 14 10:10:00 crc kubenswrapper[4698]: I1014 10:10:00.149434 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/7f8afa35-0e83-439b-80cb-31f3da9293de-apiservice-cert\") pod \"metallb-operator-controller-manager-7556747f48-jxr6w\" (UID: \"7f8afa35-0e83-439b-80cb-31f3da9293de\") " pod="metallb-system/metallb-operator-controller-manager-7556747f48-jxr6w" Oct 14 10:10:00 crc kubenswrapper[4698]: I1014 10:10:00.149501 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/7f8afa35-0e83-439b-80cb-31f3da9293de-webhook-cert\") pod \"metallb-operator-controller-manager-7556747f48-jxr6w\" (UID: 
\"7f8afa35-0e83-439b-80cb-31f3da9293de\") " pod="metallb-system/metallb-operator-controller-manager-7556747f48-jxr6w" Oct 14 10:10:00 crc kubenswrapper[4698]: I1014 10:10:00.149522 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qvbxs\" (UniqueName: \"kubernetes.io/projected/7f8afa35-0e83-439b-80cb-31f3da9293de-kube-api-access-qvbxs\") pod \"metallb-operator-controller-manager-7556747f48-jxr6w\" (UID: \"7f8afa35-0e83-439b-80cb-31f3da9293de\") " pod="metallb-system/metallb-operator-controller-manager-7556747f48-jxr6w" Oct 14 10:10:00 crc kubenswrapper[4698]: I1014 10:10:00.251325 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/7f8afa35-0e83-439b-80cb-31f3da9293de-apiservice-cert\") pod \"metallb-operator-controller-manager-7556747f48-jxr6w\" (UID: \"7f8afa35-0e83-439b-80cb-31f3da9293de\") " pod="metallb-system/metallb-operator-controller-manager-7556747f48-jxr6w" Oct 14 10:10:00 crc kubenswrapper[4698]: I1014 10:10:00.251379 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/7f8afa35-0e83-439b-80cb-31f3da9293de-webhook-cert\") pod \"metallb-operator-controller-manager-7556747f48-jxr6w\" (UID: \"7f8afa35-0e83-439b-80cb-31f3da9293de\") " pod="metallb-system/metallb-operator-controller-manager-7556747f48-jxr6w" Oct 14 10:10:00 crc kubenswrapper[4698]: I1014 10:10:00.251414 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qvbxs\" (UniqueName: \"kubernetes.io/projected/7f8afa35-0e83-439b-80cb-31f3da9293de-kube-api-access-qvbxs\") pod \"metallb-operator-controller-manager-7556747f48-jxr6w\" (UID: \"7f8afa35-0e83-439b-80cb-31f3da9293de\") " pod="metallb-system/metallb-operator-controller-manager-7556747f48-jxr6w" Oct 14 10:10:00 crc kubenswrapper[4698]: I1014 10:10:00.259493 4698 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/7f8afa35-0e83-439b-80cb-31f3da9293de-webhook-cert\") pod \"metallb-operator-controller-manager-7556747f48-jxr6w\" (UID: \"7f8afa35-0e83-439b-80cb-31f3da9293de\") " pod="metallb-system/metallb-operator-controller-manager-7556747f48-jxr6w" Oct 14 10:10:00 crc kubenswrapper[4698]: I1014 10:10:00.260551 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/7f8afa35-0e83-439b-80cb-31f3da9293de-apiservice-cert\") pod \"metallb-operator-controller-manager-7556747f48-jxr6w\" (UID: \"7f8afa35-0e83-439b-80cb-31f3da9293de\") " pod="metallb-system/metallb-operator-controller-manager-7556747f48-jxr6w" Oct 14 10:10:00 crc kubenswrapper[4698]: I1014 10:10:00.272121 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qvbxs\" (UniqueName: \"kubernetes.io/projected/7f8afa35-0e83-439b-80cb-31f3da9293de-kube-api-access-qvbxs\") pod \"metallb-operator-controller-manager-7556747f48-jxr6w\" (UID: \"7f8afa35-0e83-439b-80cb-31f3da9293de\") " pod="metallb-system/metallb-operator-controller-manager-7556747f48-jxr6w" Oct 14 10:10:00 crc kubenswrapper[4698]: I1014 10:10:00.298315 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-7556747f48-jxr6w" Oct 14 10:10:00 crc kubenswrapper[4698]: I1014 10:10:00.436682 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-webhook-server-cd79cbbb8-dcbnr"] Oct 14 10:10:00 crc kubenswrapper[4698]: I1014 10:10:00.437364 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-cd79cbbb8-dcbnr" Oct 14 10:10:00 crc kubenswrapper[4698]: I1014 10:10:00.439127 4698 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-dockercfg-7ms42" Oct 14 10:10:00 crc kubenswrapper[4698]: I1014 10:10:00.446506 4698 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-service-cert" Oct 14 10:10:00 crc kubenswrapper[4698]: I1014 10:10:00.448904 4698 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Oct 14 10:10:00 crc kubenswrapper[4698]: I1014 10:10:00.461072 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-cd79cbbb8-dcbnr"] Oct 14 10:10:00 crc kubenswrapper[4698]: I1014 10:10:00.555209 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6z8lq\" (UniqueName: \"kubernetes.io/projected/2cc70ba0-d097-4987-b877-fc209e27f275-kube-api-access-6z8lq\") pod \"metallb-operator-webhook-server-cd79cbbb8-dcbnr\" (UID: \"2cc70ba0-d097-4987-b877-fc209e27f275\") " pod="metallb-system/metallb-operator-webhook-server-cd79cbbb8-dcbnr" Oct 14 10:10:00 crc kubenswrapper[4698]: I1014 10:10:00.555269 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/2cc70ba0-d097-4987-b877-fc209e27f275-apiservice-cert\") pod \"metallb-operator-webhook-server-cd79cbbb8-dcbnr\" (UID: \"2cc70ba0-d097-4987-b877-fc209e27f275\") " pod="metallb-system/metallb-operator-webhook-server-cd79cbbb8-dcbnr" Oct 14 10:10:00 crc kubenswrapper[4698]: I1014 10:10:00.555342 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: 
\"kubernetes.io/secret/2cc70ba0-d097-4987-b877-fc209e27f275-webhook-cert\") pod \"metallb-operator-webhook-server-cd79cbbb8-dcbnr\" (UID: \"2cc70ba0-d097-4987-b877-fc209e27f275\") " pod="metallb-system/metallb-operator-webhook-server-cd79cbbb8-dcbnr" Oct 14 10:10:00 crc kubenswrapper[4698]: I1014 10:10:00.656274 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/2cc70ba0-d097-4987-b877-fc209e27f275-webhook-cert\") pod \"metallb-operator-webhook-server-cd79cbbb8-dcbnr\" (UID: \"2cc70ba0-d097-4987-b877-fc209e27f275\") " pod="metallb-system/metallb-operator-webhook-server-cd79cbbb8-dcbnr" Oct 14 10:10:00 crc kubenswrapper[4698]: I1014 10:10:00.656624 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6z8lq\" (UniqueName: \"kubernetes.io/projected/2cc70ba0-d097-4987-b877-fc209e27f275-kube-api-access-6z8lq\") pod \"metallb-operator-webhook-server-cd79cbbb8-dcbnr\" (UID: \"2cc70ba0-d097-4987-b877-fc209e27f275\") " pod="metallb-system/metallb-operator-webhook-server-cd79cbbb8-dcbnr" Oct 14 10:10:00 crc kubenswrapper[4698]: I1014 10:10:00.656656 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/2cc70ba0-d097-4987-b877-fc209e27f275-apiservice-cert\") pod \"metallb-operator-webhook-server-cd79cbbb8-dcbnr\" (UID: \"2cc70ba0-d097-4987-b877-fc209e27f275\") " pod="metallb-system/metallb-operator-webhook-server-cd79cbbb8-dcbnr" Oct 14 10:10:00 crc kubenswrapper[4698]: I1014 10:10:00.660220 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/2cc70ba0-d097-4987-b877-fc209e27f275-webhook-cert\") pod \"metallb-operator-webhook-server-cd79cbbb8-dcbnr\" (UID: \"2cc70ba0-d097-4987-b877-fc209e27f275\") " pod="metallb-system/metallb-operator-webhook-server-cd79cbbb8-dcbnr" Oct 14 10:10:00 crc 
kubenswrapper[4698]: I1014 10:10:00.665997 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/2cc70ba0-d097-4987-b877-fc209e27f275-apiservice-cert\") pod \"metallb-operator-webhook-server-cd79cbbb8-dcbnr\" (UID: \"2cc70ba0-d097-4987-b877-fc209e27f275\") " pod="metallb-system/metallb-operator-webhook-server-cd79cbbb8-dcbnr" Oct 14 10:10:00 crc kubenswrapper[4698]: I1014 10:10:00.680486 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6z8lq\" (UniqueName: \"kubernetes.io/projected/2cc70ba0-d097-4987-b877-fc209e27f275-kube-api-access-6z8lq\") pod \"metallb-operator-webhook-server-cd79cbbb8-dcbnr\" (UID: \"2cc70ba0-d097-4987-b877-fc209e27f275\") " pod="metallb-system/metallb-operator-webhook-server-cd79cbbb8-dcbnr" Oct 14 10:10:00 crc kubenswrapper[4698]: I1014 10:10:00.740743 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-7556747f48-jxr6w"] Oct 14 10:10:00 crc kubenswrapper[4698]: W1014 10:10:00.752642 4698 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7f8afa35_0e83_439b_80cb_31f3da9293de.slice/crio-0ea94f3084da91e7a7c7146c55e82b8b8b2939f0f4f7932bc7be8844af72bbb0 WatchSource:0}: Error finding container 0ea94f3084da91e7a7c7146c55e82b8b8b2939f0f4f7932bc7be8844af72bbb0: Status 404 returned error can't find the container with id 0ea94f3084da91e7a7c7146c55e82b8b8b2939f0f4f7932bc7be8844af72bbb0 Oct 14 10:10:00 crc kubenswrapper[4698]: I1014 10:10:00.765332 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-cd79cbbb8-dcbnr" Oct 14 10:10:01 crc kubenswrapper[4698]: W1014 10:10:01.258420 4698 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2cc70ba0_d097_4987_b877_fc209e27f275.slice/crio-0ea86e93aa4bb5ccd91e89ea8c3595655cb65cad61545919aa51cec5739352ea WatchSource:0}: Error finding container 0ea86e93aa4bb5ccd91e89ea8c3595655cb65cad61545919aa51cec5739352ea: Status 404 returned error can't find the container with id 0ea86e93aa4bb5ccd91e89ea8c3595655cb65cad61545919aa51cec5739352ea Oct 14 10:10:01 crc kubenswrapper[4698]: I1014 10:10:01.259896 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-cd79cbbb8-dcbnr"] Oct 14 10:10:01 crc kubenswrapper[4698]: I1014 10:10:01.640135 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-xzg99"] Oct 14 10:10:01 crc kubenswrapper[4698]: I1014 10:10:01.646145 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-cd79cbbb8-dcbnr" event={"ID":"2cc70ba0-d097-4987-b877-fc209e27f275","Type":"ContainerStarted","Data":"0ea86e93aa4bb5ccd91e89ea8c3595655cb65cad61545919aa51cec5739352ea"} Oct 14 10:10:01 crc kubenswrapper[4698]: I1014 10:10:01.647739 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-7556747f48-jxr6w" event={"ID":"7f8afa35-0e83-439b-80cb-31f3da9293de","Type":"ContainerStarted","Data":"0ea94f3084da91e7a7c7146c55e82b8b8b2939f0f4f7932bc7be8844af72bbb0"} Oct 14 10:10:01 crc kubenswrapper[4698]: I1014 10:10:01.647838 4698 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-xzg99" podUID="48bd7a4f-246a-4220-b711-7f96af3893d0" containerName="registry-server" 
containerID="cri-o://24b26f7ef8cba32a72fc57bde8fff6c05030201977908bf4e3c32f433cd1f439" gracePeriod=2 Oct 14 10:10:02 crc kubenswrapper[4698]: I1014 10:10:02.242538 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-xzg99" Oct 14 10:10:02 crc kubenswrapper[4698]: I1014 10:10:02.415131 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/48bd7a4f-246a-4220-b711-7f96af3893d0-catalog-content\") pod \"48bd7a4f-246a-4220-b711-7f96af3893d0\" (UID: \"48bd7a4f-246a-4220-b711-7f96af3893d0\") " Oct 14 10:10:02 crc kubenswrapper[4698]: I1014 10:10:02.415203 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/48bd7a4f-246a-4220-b711-7f96af3893d0-utilities\") pod \"48bd7a4f-246a-4220-b711-7f96af3893d0\" (UID: \"48bd7a4f-246a-4220-b711-7f96af3893d0\") " Oct 14 10:10:02 crc kubenswrapper[4698]: I1014 10:10:02.415298 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pp5l4\" (UniqueName: \"kubernetes.io/projected/48bd7a4f-246a-4220-b711-7f96af3893d0-kube-api-access-pp5l4\") pod \"48bd7a4f-246a-4220-b711-7f96af3893d0\" (UID: \"48bd7a4f-246a-4220-b711-7f96af3893d0\") " Oct 14 10:10:02 crc kubenswrapper[4698]: I1014 10:10:02.417322 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/48bd7a4f-246a-4220-b711-7f96af3893d0-utilities" (OuterVolumeSpecName: "utilities") pod "48bd7a4f-246a-4220-b711-7f96af3893d0" (UID: "48bd7a4f-246a-4220-b711-7f96af3893d0"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 10:10:02 crc kubenswrapper[4698]: I1014 10:10:02.422203 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/48bd7a4f-246a-4220-b711-7f96af3893d0-kube-api-access-pp5l4" (OuterVolumeSpecName: "kube-api-access-pp5l4") pod "48bd7a4f-246a-4220-b711-7f96af3893d0" (UID: "48bd7a4f-246a-4220-b711-7f96af3893d0"). InnerVolumeSpecName "kube-api-access-pp5l4". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 10:10:02 crc kubenswrapper[4698]: I1014 10:10:02.517435 4698 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/48bd7a4f-246a-4220-b711-7f96af3893d0-utilities\") on node \"crc\" DevicePath \"\"" Oct 14 10:10:02 crc kubenswrapper[4698]: I1014 10:10:02.517518 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pp5l4\" (UniqueName: \"kubernetes.io/projected/48bd7a4f-246a-4220-b711-7f96af3893d0-kube-api-access-pp5l4\") on node \"crc\" DevicePath \"\"" Oct 14 10:10:02 crc kubenswrapper[4698]: I1014 10:10:02.655979 4698 generic.go:334] "Generic (PLEG): container finished" podID="48bd7a4f-246a-4220-b711-7f96af3893d0" containerID="24b26f7ef8cba32a72fc57bde8fff6c05030201977908bf4e3c32f433cd1f439" exitCode=0 Oct 14 10:10:02 crc kubenswrapper[4698]: I1014 10:10:02.656014 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xzg99" event={"ID":"48bd7a4f-246a-4220-b711-7f96af3893d0","Type":"ContainerDied","Data":"24b26f7ef8cba32a72fc57bde8fff6c05030201977908bf4e3c32f433cd1f439"} Oct 14 10:10:02 crc kubenswrapper[4698]: I1014 10:10:02.656039 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xzg99" event={"ID":"48bd7a4f-246a-4220-b711-7f96af3893d0","Type":"ContainerDied","Data":"050f47988d819bc232dc7cd86f1b10131c94af4c56dc0707d86969a307e50e33"} Oct 14 10:10:02 crc kubenswrapper[4698]: I1014 
10:10:02.656057 4698 scope.go:117] "RemoveContainer" containerID="24b26f7ef8cba32a72fc57bde8fff6c05030201977908bf4e3c32f433cd1f439" Oct 14 10:10:02 crc kubenswrapper[4698]: I1014 10:10:02.656101 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-xzg99" Oct 14 10:10:02 crc kubenswrapper[4698]: I1014 10:10:02.681420 4698 scope.go:117] "RemoveContainer" containerID="a98f07192c9674835d0c6e907acb7d37973bd1f250acc808f33fba4b99af7440" Oct 14 10:10:02 crc kubenswrapper[4698]: I1014 10:10:02.700596 4698 scope.go:117] "RemoveContainer" containerID="3e113d002fee21df9e6306d8f6b75f87084380338fdca67ee1e200bf5df00907" Oct 14 10:10:02 crc kubenswrapper[4698]: I1014 10:10:02.753452 4698 scope.go:117] "RemoveContainer" containerID="24b26f7ef8cba32a72fc57bde8fff6c05030201977908bf4e3c32f433cd1f439" Oct 14 10:10:02 crc kubenswrapper[4698]: E1014 10:10:02.753901 4698 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"24b26f7ef8cba32a72fc57bde8fff6c05030201977908bf4e3c32f433cd1f439\": container with ID starting with 24b26f7ef8cba32a72fc57bde8fff6c05030201977908bf4e3c32f433cd1f439 not found: ID does not exist" containerID="24b26f7ef8cba32a72fc57bde8fff6c05030201977908bf4e3c32f433cd1f439" Oct 14 10:10:02 crc kubenswrapper[4698]: I1014 10:10:02.753952 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"24b26f7ef8cba32a72fc57bde8fff6c05030201977908bf4e3c32f433cd1f439"} err="failed to get container status \"24b26f7ef8cba32a72fc57bde8fff6c05030201977908bf4e3c32f433cd1f439\": rpc error: code = NotFound desc = could not find container \"24b26f7ef8cba32a72fc57bde8fff6c05030201977908bf4e3c32f433cd1f439\": container with ID starting with 24b26f7ef8cba32a72fc57bde8fff6c05030201977908bf4e3c32f433cd1f439 not found: ID does not exist" Oct 14 10:10:02 crc kubenswrapper[4698]: I1014 10:10:02.753983 4698 
scope.go:117] "RemoveContainer" containerID="a98f07192c9674835d0c6e907acb7d37973bd1f250acc808f33fba4b99af7440" Oct 14 10:10:02 crc kubenswrapper[4698]: E1014 10:10:02.754279 4698 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a98f07192c9674835d0c6e907acb7d37973bd1f250acc808f33fba4b99af7440\": container with ID starting with a98f07192c9674835d0c6e907acb7d37973bd1f250acc808f33fba4b99af7440 not found: ID does not exist" containerID="a98f07192c9674835d0c6e907acb7d37973bd1f250acc808f33fba4b99af7440" Oct 14 10:10:02 crc kubenswrapper[4698]: I1014 10:10:02.754317 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a98f07192c9674835d0c6e907acb7d37973bd1f250acc808f33fba4b99af7440"} err="failed to get container status \"a98f07192c9674835d0c6e907acb7d37973bd1f250acc808f33fba4b99af7440\": rpc error: code = NotFound desc = could not find container \"a98f07192c9674835d0c6e907acb7d37973bd1f250acc808f33fba4b99af7440\": container with ID starting with a98f07192c9674835d0c6e907acb7d37973bd1f250acc808f33fba4b99af7440 not found: ID does not exist" Oct 14 10:10:02 crc kubenswrapper[4698]: I1014 10:10:02.754341 4698 scope.go:117] "RemoveContainer" containerID="3e113d002fee21df9e6306d8f6b75f87084380338fdca67ee1e200bf5df00907" Oct 14 10:10:02 crc kubenswrapper[4698]: E1014 10:10:02.754617 4698 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3e113d002fee21df9e6306d8f6b75f87084380338fdca67ee1e200bf5df00907\": container with ID starting with 3e113d002fee21df9e6306d8f6b75f87084380338fdca67ee1e200bf5df00907 not found: ID does not exist" containerID="3e113d002fee21df9e6306d8f6b75f87084380338fdca67ee1e200bf5df00907" Oct 14 10:10:02 crc kubenswrapper[4698]: I1014 10:10:02.754643 4698 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"3e113d002fee21df9e6306d8f6b75f87084380338fdca67ee1e200bf5df00907"} err="failed to get container status \"3e113d002fee21df9e6306d8f6b75f87084380338fdca67ee1e200bf5df00907\": rpc error: code = NotFound desc = could not find container \"3e113d002fee21df9e6306d8f6b75f87084380338fdca67ee1e200bf5df00907\": container with ID starting with 3e113d002fee21df9e6306d8f6b75f87084380338fdca67ee1e200bf5df00907 not found: ID does not exist" Oct 14 10:10:02 crc kubenswrapper[4698]: I1014 10:10:02.792134 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/48bd7a4f-246a-4220-b711-7f96af3893d0-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "48bd7a4f-246a-4220-b711-7f96af3893d0" (UID: "48bd7a4f-246a-4220-b711-7f96af3893d0"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 10:10:02 crc kubenswrapper[4698]: I1014 10:10:02.837610 4698 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/48bd7a4f-246a-4220-b711-7f96af3893d0-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 14 10:10:02 crc kubenswrapper[4698]: I1014 10:10:02.990317 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-xzg99"] Oct 14 10:10:02 crc kubenswrapper[4698]: I1014 10:10:02.995235 4698 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-xzg99"] Oct 14 10:10:03 crc kubenswrapper[4698]: I1014 10:10:03.024693 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="48bd7a4f-246a-4220-b711-7f96af3893d0" path="/var/lib/kubelet/pods/48bd7a4f-246a-4220-b711-7f96af3893d0/volumes" Oct 14 10:10:06 crc kubenswrapper[4698]: I1014 10:10:06.694428 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-7556747f48-jxr6w" 
event={"ID":"7f8afa35-0e83-439b-80cb-31f3da9293de","Type":"ContainerStarted","Data":"542895ad4c972a5366f4787827352b4923ab597775fe2fbcf8e0b56214cadeda"} Oct 14 10:10:06 crc kubenswrapper[4698]: I1014 10:10:06.695047 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-controller-manager-7556747f48-jxr6w" Oct 14 10:10:06 crc kubenswrapper[4698]: I1014 10:10:06.695665 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-cd79cbbb8-dcbnr" event={"ID":"2cc70ba0-d097-4987-b877-fc209e27f275","Type":"ContainerStarted","Data":"4e665319351e0e3b408ddfb37eb5ecdfa0535d09d1fffe599a6c69f35b5f11dd"} Oct 14 10:10:06 crc kubenswrapper[4698]: I1014 10:10:06.696004 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-webhook-server-cd79cbbb8-dcbnr" Oct 14 10:10:06 crc kubenswrapper[4698]: I1014 10:10:06.716132 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-controller-manager-7556747f48-jxr6w" podStartSLOduration=2.393716809 podStartE2EDuration="7.716109779s" podCreationTimestamp="2025-10-14 10:09:59 +0000 UTC" firstStartedPulling="2025-10-14 10:10:00.758531751 +0000 UTC m=+782.455831167" lastFinishedPulling="2025-10-14 10:10:06.080924721 +0000 UTC m=+787.778224137" observedRunningTime="2025-10-14 10:10:06.712506827 +0000 UTC m=+788.409806263" watchObservedRunningTime="2025-10-14 10:10:06.716109779 +0000 UTC m=+788.413409195" Oct 14 10:10:06 crc kubenswrapper[4698]: I1014 10:10:06.737151 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-webhook-server-cd79cbbb8-dcbnr" podStartSLOduration=1.8942289749999999 podStartE2EDuration="6.737133716s" podCreationTimestamp="2025-10-14 10:10:00 +0000 UTC" firstStartedPulling="2025-10-14 10:10:01.261291391 +0000 UTC m=+782.958590807" lastFinishedPulling="2025-10-14 
10:10:06.104196132 +0000 UTC m=+787.801495548" observedRunningTime="2025-10-14 10:10:06.735680355 +0000 UTC m=+788.432979791" watchObservedRunningTime="2025-10-14 10:10:06.737133716 +0000 UTC m=+788.434433132" Oct 14 10:10:18 crc kubenswrapper[4698]: I1014 10:10:18.559250 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-qsdpf"] Oct 14 10:10:18 crc kubenswrapper[4698]: E1014 10:10:18.559979 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="48bd7a4f-246a-4220-b711-7f96af3893d0" containerName="extract-utilities" Oct 14 10:10:18 crc kubenswrapper[4698]: I1014 10:10:18.559994 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="48bd7a4f-246a-4220-b711-7f96af3893d0" containerName="extract-utilities" Oct 14 10:10:18 crc kubenswrapper[4698]: E1014 10:10:18.560032 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="48bd7a4f-246a-4220-b711-7f96af3893d0" containerName="extract-content" Oct 14 10:10:18 crc kubenswrapper[4698]: I1014 10:10:18.560040 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="48bd7a4f-246a-4220-b711-7f96af3893d0" containerName="extract-content" Oct 14 10:10:18 crc kubenswrapper[4698]: E1014 10:10:18.560067 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="48bd7a4f-246a-4220-b711-7f96af3893d0" containerName="registry-server" Oct 14 10:10:18 crc kubenswrapper[4698]: I1014 10:10:18.560076 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="48bd7a4f-246a-4220-b711-7f96af3893d0" containerName="registry-server" Oct 14 10:10:18 crc kubenswrapper[4698]: I1014 10:10:18.560203 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="48bd7a4f-246a-4220-b711-7f96af3893d0" containerName="registry-server" Oct 14 10:10:18 crc kubenswrapper[4698]: I1014 10:10:18.561153 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-qsdpf" Oct 14 10:10:18 crc kubenswrapper[4698]: I1014 10:10:18.577458 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-qsdpf"] Oct 14 10:10:18 crc kubenswrapper[4698]: I1014 10:10:18.677537 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f9ffce3f-2c13-4142-8063-5c60b86adeb1-catalog-content\") pod \"redhat-marketplace-qsdpf\" (UID: \"f9ffce3f-2c13-4142-8063-5c60b86adeb1\") " pod="openshift-marketplace/redhat-marketplace-qsdpf" Oct 14 10:10:18 crc kubenswrapper[4698]: I1014 10:10:18.677630 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f9ffce3f-2c13-4142-8063-5c60b86adeb1-utilities\") pod \"redhat-marketplace-qsdpf\" (UID: \"f9ffce3f-2c13-4142-8063-5c60b86adeb1\") " pod="openshift-marketplace/redhat-marketplace-qsdpf" Oct 14 10:10:18 crc kubenswrapper[4698]: I1014 10:10:18.677684 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8mtvw\" (UniqueName: \"kubernetes.io/projected/f9ffce3f-2c13-4142-8063-5c60b86adeb1-kube-api-access-8mtvw\") pod \"redhat-marketplace-qsdpf\" (UID: \"f9ffce3f-2c13-4142-8063-5c60b86adeb1\") " pod="openshift-marketplace/redhat-marketplace-qsdpf" Oct 14 10:10:18 crc kubenswrapper[4698]: I1014 10:10:18.779337 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f9ffce3f-2c13-4142-8063-5c60b86adeb1-catalog-content\") pod \"redhat-marketplace-qsdpf\" (UID: \"f9ffce3f-2c13-4142-8063-5c60b86adeb1\") " pod="openshift-marketplace/redhat-marketplace-qsdpf" Oct 14 10:10:18 crc kubenswrapper[4698]: I1014 10:10:18.779434 4698 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f9ffce3f-2c13-4142-8063-5c60b86adeb1-utilities\") pod \"redhat-marketplace-qsdpf\" (UID: \"f9ffce3f-2c13-4142-8063-5c60b86adeb1\") " pod="openshift-marketplace/redhat-marketplace-qsdpf" Oct 14 10:10:18 crc kubenswrapper[4698]: I1014 10:10:18.779558 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8mtvw\" (UniqueName: \"kubernetes.io/projected/f9ffce3f-2c13-4142-8063-5c60b86adeb1-kube-api-access-8mtvw\") pod \"redhat-marketplace-qsdpf\" (UID: \"f9ffce3f-2c13-4142-8063-5c60b86adeb1\") " pod="openshift-marketplace/redhat-marketplace-qsdpf" Oct 14 10:10:18 crc kubenswrapper[4698]: I1014 10:10:18.780178 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f9ffce3f-2c13-4142-8063-5c60b86adeb1-utilities\") pod \"redhat-marketplace-qsdpf\" (UID: \"f9ffce3f-2c13-4142-8063-5c60b86adeb1\") " pod="openshift-marketplace/redhat-marketplace-qsdpf" Oct 14 10:10:18 crc kubenswrapper[4698]: I1014 10:10:18.780195 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f9ffce3f-2c13-4142-8063-5c60b86adeb1-catalog-content\") pod \"redhat-marketplace-qsdpf\" (UID: \"f9ffce3f-2c13-4142-8063-5c60b86adeb1\") " pod="openshift-marketplace/redhat-marketplace-qsdpf" Oct 14 10:10:18 crc kubenswrapper[4698]: I1014 10:10:18.801429 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8mtvw\" (UniqueName: \"kubernetes.io/projected/f9ffce3f-2c13-4142-8063-5c60b86adeb1-kube-api-access-8mtvw\") pod \"redhat-marketplace-qsdpf\" (UID: \"f9ffce3f-2c13-4142-8063-5c60b86adeb1\") " pod="openshift-marketplace/redhat-marketplace-qsdpf" Oct 14 10:10:18 crc kubenswrapper[4698]: I1014 10:10:18.879421 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-qsdpf" Oct 14 10:10:19 crc kubenswrapper[4698]: I1014 10:10:19.291321 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-qsdpf"] Oct 14 10:10:19 crc kubenswrapper[4698]: I1014 10:10:19.790408 4698 generic.go:334] "Generic (PLEG): container finished" podID="f9ffce3f-2c13-4142-8063-5c60b86adeb1" containerID="495a2df4ac86a823f939e74efd6913693b4ddff56be94ac31502c73757532440" exitCode=0 Oct 14 10:10:19 crc kubenswrapper[4698]: I1014 10:10:19.790502 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qsdpf" event={"ID":"f9ffce3f-2c13-4142-8063-5c60b86adeb1","Type":"ContainerDied","Data":"495a2df4ac86a823f939e74efd6913693b4ddff56be94ac31502c73757532440"} Oct 14 10:10:19 crc kubenswrapper[4698]: I1014 10:10:19.790571 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qsdpf" event={"ID":"f9ffce3f-2c13-4142-8063-5c60b86adeb1","Type":"ContainerStarted","Data":"fd368bdc9f7ef00519913e047b9177f2ed629469dbcb0539d1499111fb2212f4"} Oct 14 10:10:20 crc kubenswrapper[4698]: I1014 10:10:20.771579 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-webhook-server-cd79cbbb8-dcbnr" Oct 14 10:10:20 crc kubenswrapper[4698]: I1014 10:10:20.804502 4698 generic.go:334] "Generic (PLEG): container finished" podID="f9ffce3f-2c13-4142-8063-5c60b86adeb1" containerID="66b361a492e50fe5137e6fbe01d0f77658c4d1f6edc5643eb30afa2c8d8ceb40" exitCode=0 Oct 14 10:10:20 crc kubenswrapper[4698]: I1014 10:10:20.804548 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qsdpf" event={"ID":"f9ffce3f-2c13-4142-8063-5c60b86adeb1","Type":"ContainerDied","Data":"66b361a492e50fe5137e6fbe01d0f77658c4d1f6edc5643eb30afa2c8d8ceb40"} Oct 14 10:10:21 crc kubenswrapper[4698]: I1014 10:10:21.811850 4698 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qsdpf" event={"ID":"f9ffce3f-2c13-4142-8063-5c60b86adeb1","Type":"ContainerStarted","Data":"1176a78eb1aefbc21783798976afb899571871fa6ca76a76425b9d9a0ee2b12d"} Oct 14 10:10:21 crc kubenswrapper[4698]: I1014 10:10:21.830812 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-qsdpf" podStartSLOduration=2.156115078 podStartE2EDuration="3.830795422s" podCreationTimestamp="2025-10-14 10:10:18 +0000 UTC" firstStartedPulling="2025-10-14 10:10:19.792319252 +0000 UTC m=+801.489618668" lastFinishedPulling="2025-10-14 10:10:21.466999596 +0000 UTC m=+803.164299012" observedRunningTime="2025-10-14 10:10:21.829389512 +0000 UTC m=+803.526688938" watchObservedRunningTime="2025-10-14 10:10:21.830795422 +0000 UTC m=+803.528094848" Oct 14 10:10:23 crc kubenswrapper[4698]: I1014 10:10:23.908288 4698 patch_prober.go:28] interesting pod/machine-config-daemon-lp4sk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 14 10:10:23 crc kubenswrapper[4698]: I1014 10:10:23.908417 4698 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-lp4sk" podUID="c359a8fc-1e2f-49af-8da2-719d52bd969a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 14 10:10:25 crc kubenswrapper[4698]: I1014 10:10:25.959939 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-qt2tw"] Oct 14 10:10:25 crc kubenswrapper[4698]: I1014 10:10:25.962233 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-qt2tw" Oct 14 10:10:25 crc kubenswrapper[4698]: I1014 10:10:25.976856 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-qt2tw"] Oct 14 10:10:26 crc kubenswrapper[4698]: I1014 10:10:26.084140 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cbv7f\" (UniqueName: \"kubernetes.io/projected/fc4b245c-9153-4628-8669-1963711fb65a-kube-api-access-cbv7f\") pod \"certified-operators-qt2tw\" (UID: \"fc4b245c-9153-4628-8669-1963711fb65a\") " pod="openshift-marketplace/certified-operators-qt2tw" Oct 14 10:10:26 crc kubenswrapper[4698]: I1014 10:10:26.084316 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fc4b245c-9153-4628-8669-1963711fb65a-utilities\") pod \"certified-operators-qt2tw\" (UID: \"fc4b245c-9153-4628-8669-1963711fb65a\") " pod="openshift-marketplace/certified-operators-qt2tw" Oct 14 10:10:26 crc kubenswrapper[4698]: I1014 10:10:26.084388 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fc4b245c-9153-4628-8669-1963711fb65a-catalog-content\") pod \"certified-operators-qt2tw\" (UID: \"fc4b245c-9153-4628-8669-1963711fb65a\") " pod="openshift-marketplace/certified-operators-qt2tw" Oct 14 10:10:26 crc kubenswrapper[4698]: I1014 10:10:26.186210 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fc4b245c-9153-4628-8669-1963711fb65a-catalog-content\") pod \"certified-operators-qt2tw\" (UID: \"fc4b245c-9153-4628-8669-1963711fb65a\") " pod="openshift-marketplace/certified-operators-qt2tw" Oct 14 10:10:26 crc kubenswrapper[4698]: I1014 10:10:26.186399 4698 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-cbv7f\" (UniqueName: \"kubernetes.io/projected/fc4b245c-9153-4628-8669-1963711fb65a-kube-api-access-cbv7f\") pod \"certified-operators-qt2tw\" (UID: \"fc4b245c-9153-4628-8669-1963711fb65a\") " pod="openshift-marketplace/certified-operators-qt2tw" Oct 14 10:10:26 crc kubenswrapper[4698]: I1014 10:10:26.186517 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fc4b245c-9153-4628-8669-1963711fb65a-utilities\") pod \"certified-operators-qt2tw\" (UID: \"fc4b245c-9153-4628-8669-1963711fb65a\") " pod="openshift-marketplace/certified-operators-qt2tw" Oct 14 10:10:26 crc kubenswrapper[4698]: I1014 10:10:26.187046 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fc4b245c-9153-4628-8669-1963711fb65a-catalog-content\") pod \"certified-operators-qt2tw\" (UID: \"fc4b245c-9153-4628-8669-1963711fb65a\") " pod="openshift-marketplace/certified-operators-qt2tw" Oct 14 10:10:26 crc kubenswrapper[4698]: I1014 10:10:26.187264 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fc4b245c-9153-4628-8669-1963711fb65a-utilities\") pod \"certified-operators-qt2tw\" (UID: \"fc4b245c-9153-4628-8669-1963711fb65a\") " pod="openshift-marketplace/certified-operators-qt2tw" Oct 14 10:10:26 crc kubenswrapper[4698]: I1014 10:10:26.214957 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cbv7f\" (UniqueName: \"kubernetes.io/projected/fc4b245c-9153-4628-8669-1963711fb65a-kube-api-access-cbv7f\") pod \"certified-operators-qt2tw\" (UID: \"fc4b245c-9153-4628-8669-1963711fb65a\") " pod="openshift-marketplace/certified-operators-qt2tw" Oct 14 10:10:26 crc kubenswrapper[4698]: I1014 10:10:26.293086 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-qt2tw" Oct 14 10:10:26 crc kubenswrapper[4698]: I1014 10:10:26.768098 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-qt2tw"] Oct 14 10:10:26 crc kubenswrapper[4698]: W1014 10:10:26.776800 4698 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfc4b245c_9153_4628_8669_1963711fb65a.slice/crio-ee176fbac0fff8933de4f748c393886a06b70165b637c4631142ca820a51cfa4 WatchSource:0}: Error finding container ee176fbac0fff8933de4f748c393886a06b70165b637c4631142ca820a51cfa4: Status 404 returned error can't find the container with id ee176fbac0fff8933de4f748c393886a06b70165b637c4631142ca820a51cfa4 Oct 14 10:10:26 crc kubenswrapper[4698]: I1014 10:10:26.848753 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qt2tw" event={"ID":"fc4b245c-9153-4628-8669-1963711fb65a","Type":"ContainerStarted","Data":"ee176fbac0fff8933de4f748c393886a06b70165b637c4631142ca820a51cfa4"} Oct 14 10:10:27 crc kubenswrapper[4698]: I1014 10:10:27.858144 4698 generic.go:334] "Generic (PLEG): container finished" podID="fc4b245c-9153-4628-8669-1963711fb65a" containerID="7bdf53a3a5f91ae9f848a0688b99e85d6d13c3f23355496157abcbeb29fe11e5" exitCode=0 Oct 14 10:10:27 crc kubenswrapper[4698]: I1014 10:10:27.858188 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qt2tw" event={"ID":"fc4b245c-9153-4628-8669-1963711fb65a","Type":"ContainerDied","Data":"7bdf53a3a5f91ae9f848a0688b99e85d6d13c3f23355496157abcbeb29fe11e5"} Oct 14 10:10:28 crc kubenswrapper[4698]: I1014 10:10:28.866353 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qt2tw" 
event={"ID":"fc4b245c-9153-4628-8669-1963711fb65a","Type":"ContainerStarted","Data":"4e7e433a55eb3d2a954ebf9efb7b5eb48d5b394d9423517c02ec6c35cd504947"} Oct 14 10:10:28 crc kubenswrapper[4698]: I1014 10:10:28.880201 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-qsdpf" Oct 14 10:10:28 crc kubenswrapper[4698]: I1014 10:10:28.880254 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-qsdpf" Oct 14 10:10:28 crc kubenswrapper[4698]: I1014 10:10:28.929074 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-qsdpf" Oct 14 10:10:29 crc kubenswrapper[4698]: I1014 10:10:29.875472 4698 generic.go:334] "Generic (PLEG): container finished" podID="fc4b245c-9153-4628-8669-1963711fb65a" containerID="4e7e433a55eb3d2a954ebf9efb7b5eb48d5b394d9423517c02ec6c35cd504947" exitCode=0 Oct 14 10:10:29 crc kubenswrapper[4698]: I1014 10:10:29.875606 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qt2tw" event={"ID":"fc4b245c-9153-4628-8669-1963711fb65a","Type":"ContainerDied","Data":"4e7e433a55eb3d2a954ebf9efb7b5eb48d5b394d9423517c02ec6c35cd504947"} Oct 14 10:10:29 crc kubenswrapper[4698]: I1014 10:10:29.945930 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-qsdpf" Oct 14 10:10:30 crc kubenswrapper[4698]: I1014 10:10:30.887149 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qt2tw" event={"ID":"fc4b245c-9153-4628-8669-1963711fb65a","Type":"ContainerStarted","Data":"a99b32136a36fd9e624b5d36ede01f2348c0c97c42db0918460d750684c6eee7"} Oct 14 10:10:30 crc kubenswrapper[4698]: I1014 10:10:30.932332 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-qt2tw" 
podStartSLOduration=3.185743289 podStartE2EDuration="5.932310448s" podCreationTimestamp="2025-10-14 10:10:25 +0000 UTC" firstStartedPulling="2025-10-14 10:10:27.860908389 +0000 UTC m=+809.558207815" lastFinishedPulling="2025-10-14 10:10:30.607475518 +0000 UTC m=+812.304774974" observedRunningTime="2025-10-14 10:10:30.925589168 +0000 UTC m=+812.622888654" watchObservedRunningTime="2025-10-14 10:10:30.932310448 +0000 UTC m=+812.629609884" Oct 14 10:10:32 crc kubenswrapper[4698]: I1014 10:10:32.548248 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-qsdpf"] Oct 14 10:10:32 crc kubenswrapper[4698]: I1014 10:10:32.549542 4698 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-qsdpf" podUID="f9ffce3f-2c13-4142-8063-5c60b86adeb1" containerName="registry-server" containerID="cri-o://1176a78eb1aefbc21783798976afb899571871fa6ca76a76425b9d9a0ee2b12d" gracePeriod=2 Oct 14 10:10:32 crc kubenswrapper[4698]: I1014 10:10:32.904453 4698 generic.go:334] "Generic (PLEG): container finished" podID="f9ffce3f-2c13-4142-8063-5c60b86adeb1" containerID="1176a78eb1aefbc21783798976afb899571871fa6ca76a76425b9d9a0ee2b12d" exitCode=0 Oct 14 10:10:32 crc kubenswrapper[4698]: I1014 10:10:32.904738 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qsdpf" event={"ID":"f9ffce3f-2c13-4142-8063-5c60b86adeb1","Type":"ContainerDied","Data":"1176a78eb1aefbc21783798976afb899571871fa6ca76a76425b9d9a0ee2b12d"} Oct 14 10:10:32 crc kubenswrapper[4698]: I1014 10:10:32.956868 4698 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-qsdpf" Oct 14 10:10:32 crc kubenswrapper[4698]: I1014 10:10:32.990977 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8mtvw\" (UniqueName: \"kubernetes.io/projected/f9ffce3f-2c13-4142-8063-5c60b86adeb1-kube-api-access-8mtvw\") pod \"f9ffce3f-2c13-4142-8063-5c60b86adeb1\" (UID: \"f9ffce3f-2c13-4142-8063-5c60b86adeb1\") " Oct 14 10:10:32 crc kubenswrapper[4698]: I1014 10:10:32.991030 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f9ffce3f-2c13-4142-8063-5c60b86adeb1-utilities\") pod \"f9ffce3f-2c13-4142-8063-5c60b86adeb1\" (UID: \"f9ffce3f-2c13-4142-8063-5c60b86adeb1\") " Oct 14 10:10:32 crc kubenswrapper[4698]: I1014 10:10:32.991169 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f9ffce3f-2c13-4142-8063-5c60b86adeb1-catalog-content\") pod \"f9ffce3f-2c13-4142-8063-5c60b86adeb1\" (UID: \"f9ffce3f-2c13-4142-8063-5c60b86adeb1\") " Oct 14 10:10:32 crc kubenswrapper[4698]: I1014 10:10:32.992042 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f9ffce3f-2c13-4142-8063-5c60b86adeb1-utilities" (OuterVolumeSpecName: "utilities") pod "f9ffce3f-2c13-4142-8063-5c60b86adeb1" (UID: "f9ffce3f-2c13-4142-8063-5c60b86adeb1"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 10:10:32 crc kubenswrapper[4698]: I1014 10:10:32.992195 4698 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f9ffce3f-2c13-4142-8063-5c60b86adeb1-utilities\") on node \"crc\" DevicePath \"\"" Oct 14 10:10:32 crc kubenswrapper[4698]: I1014 10:10:32.997504 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f9ffce3f-2c13-4142-8063-5c60b86adeb1-kube-api-access-8mtvw" (OuterVolumeSpecName: "kube-api-access-8mtvw") pod "f9ffce3f-2c13-4142-8063-5c60b86adeb1" (UID: "f9ffce3f-2c13-4142-8063-5c60b86adeb1"). InnerVolumeSpecName "kube-api-access-8mtvw". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 10:10:33 crc kubenswrapper[4698]: I1014 10:10:33.006759 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f9ffce3f-2c13-4142-8063-5c60b86adeb1-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f9ffce3f-2c13-4142-8063-5c60b86adeb1" (UID: "f9ffce3f-2c13-4142-8063-5c60b86adeb1"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 10:10:33 crc kubenswrapper[4698]: I1014 10:10:33.095150 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8mtvw\" (UniqueName: \"kubernetes.io/projected/f9ffce3f-2c13-4142-8063-5c60b86adeb1-kube-api-access-8mtvw\") on node \"crc\" DevicePath \"\"" Oct 14 10:10:33 crc kubenswrapper[4698]: I1014 10:10:33.095183 4698 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f9ffce3f-2c13-4142-8063-5c60b86adeb1-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 14 10:10:33 crc kubenswrapper[4698]: I1014 10:10:33.913613 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qsdpf" event={"ID":"f9ffce3f-2c13-4142-8063-5c60b86adeb1","Type":"ContainerDied","Data":"fd368bdc9f7ef00519913e047b9177f2ed629469dbcb0539d1499111fb2212f4"} Oct 14 10:10:33 crc kubenswrapper[4698]: I1014 10:10:33.913665 4698 scope.go:117] "RemoveContainer" containerID="1176a78eb1aefbc21783798976afb899571871fa6ca76a76425b9d9a0ee2b12d" Oct 14 10:10:33 crc kubenswrapper[4698]: I1014 10:10:33.913724 4698 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-qsdpf" Oct 14 10:10:33 crc kubenswrapper[4698]: I1014 10:10:33.936330 4698 scope.go:117] "RemoveContainer" containerID="66b361a492e50fe5137e6fbe01d0f77658c4d1f6edc5643eb30afa2c8d8ceb40" Oct 14 10:10:33 crc kubenswrapper[4698]: I1014 10:10:33.948844 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-qsdpf"] Oct 14 10:10:33 crc kubenswrapper[4698]: I1014 10:10:33.953390 4698 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-qsdpf"] Oct 14 10:10:33 crc kubenswrapper[4698]: I1014 10:10:33.967101 4698 scope.go:117] "RemoveContainer" containerID="495a2df4ac86a823f939e74efd6913693b4ddff56be94ac31502c73757532440" Oct 14 10:10:35 crc kubenswrapper[4698]: I1014 10:10:35.028863 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f9ffce3f-2c13-4142-8063-5c60b86adeb1" path="/var/lib/kubelet/pods/f9ffce3f-2c13-4142-8063-5c60b86adeb1/volumes" Oct 14 10:10:36 crc kubenswrapper[4698]: I1014 10:10:36.181586 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-jtjrp"] Oct 14 10:10:36 crc kubenswrapper[4698]: E1014 10:10:36.183477 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f9ffce3f-2c13-4142-8063-5c60b86adeb1" containerName="extract-utilities" Oct 14 10:10:36 crc kubenswrapper[4698]: I1014 10:10:36.183519 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="f9ffce3f-2c13-4142-8063-5c60b86adeb1" containerName="extract-utilities" Oct 14 10:10:36 crc kubenswrapper[4698]: E1014 10:10:36.183558 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f9ffce3f-2c13-4142-8063-5c60b86adeb1" containerName="extract-content" Oct 14 10:10:36 crc kubenswrapper[4698]: I1014 10:10:36.183571 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="f9ffce3f-2c13-4142-8063-5c60b86adeb1" containerName="extract-content" 
Oct 14 10:10:36 crc kubenswrapper[4698]: E1014 10:10:36.183599 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f9ffce3f-2c13-4142-8063-5c60b86adeb1" containerName="registry-server" Oct 14 10:10:36 crc kubenswrapper[4698]: I1014 10:10:36.183617 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="f9ffce3f-2c13-4142-8063-5c60b86adeb1" containerName="registry-server" Oct 14 10:10:36 crc kubenswrapper[4698]: I1014 10:10:36.184101 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="f9ffce3f-2c13-4142-8063-5c60b86adeb1" containerName="registry-server" Oct 14 10:10:36 crc kubenswrapper[4698]: I1014 10:10:36.191842 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-jtjrp" Oct 14 10:10:36 crc kubenswrapper[4698]: I1014 10:10:36.207601 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-jtjrp"] Oct 14 10:10:36 crc kubenswrapper[4698]: I1014 10:10:36.294012 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-qt2tw" Oct 14 10:10:36 crc kubenswrapper[4698]: I1014 10:10:36.294069 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-qt2tw" Oct 14 10:10:36 crc kubenswrapper[4698]: I1014 10:10:36.336776 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bc658535-dcf2-49f1-8646-b4cd9eb01b17-catalog-content\") pod \"community-operators-jtjrp\" (UID: \"bc658535-dcf2-49f1-8646-b4cd9eb01b17\") " pod="openshift-marketplace/community-operators-jtjrp" Oct 14 10:10:36 crc kubenswrapper[4698]: I1014 10:10:36.337192 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/bc658535-dcf2-49f1-8646-b4cd9eb01b17-utilities\") pod \"community-operators-jtjrp\" (UID: \"bc658535-dcf2-49f1-8646-b4cd9eb01b17\") " pod="openshift-marketplace/community-operators-jtjrp" Oct 14 10:10:36 crc kubenswrapper[4698]: I1014 10:10:36.337281 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6tbcb\" (UniqueName: \"kubernetes.io/projected/bc658535-dcf2-49f1-8646-b4cd9eb01b17-kube-api-access-6tbcb\") pod \"community-operators-jtjrp\" (UID: \"bc658535-dcf2-49f1-8646-b4cd9eb01b17\") " pod="openshift-marketplace/community-operators-jtjrp" Oct 14 10:10:36 crc kubenswrapper[4698]: I1014 10:10:36.353926 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-qt2tw" Oct 14 10:10:36 crc kubenswrapper[4698]: I1014 10:10:36.438913 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6tbcb\" (UniqueName: \"kubernetes.io/projected/bc658535-dcf2-49f1-8646-b4cd9eb01b17-kube-api-access-6tbcb\") pod \"community-operators-jtjrp\" (UID: \"bc658535-dcf2-49f1-8646-b4cd9eb01b17\") " pod="openshift-marketplace/community-operators-jtjrp" Oct 14 10:10:36 crc kubenswrapper[4698]: I1014 10:10:36.439038 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bc658535-dcf2-49f1-8646-b4cd9eb01b17-catalog-content\") pod \"community-operators-jtjrp\" (UID: \"bc658535-dcf2-49f1-8646-b4cd9eb01b17\") " pod="openshift-marketplace/community-operators-jtjrp" Oct 14 10:10:36 crc kubenswrapper[4698]: I1014 10:10:36.439074 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bc658535-dcf2-49f1-8646-b4cd9eb01b17-utilities\") pod \"community-operators-jtjrp\" (UID: \"bc658535-dcf2-49f1-8646-b4cd9eb01b17\") " 
pod="openshift-marketplace/community-operators-jtjrp" Oct 14 10:10:36 crc kubenswrapper[4698]: I1014 10:10:36.439998 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bc658535-dcf2-49f1-8646-b4cd9eb01b17-utilities\") pod \"community-operators-jtjrp\" (UID: \"bc658535-dcf2-49f1-8646-b4cd9eb01b17\") " pod="openshift-marketplace/community-operators-jtjrp" Oct 14 10:10:36 crc kubenswrapper[4698]: I1014 10:10:36.440322 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bc658535-dcf2-49f1-8646-b4cd9eb01b17-catalog-content\") pod \"community-operators-jtjrp\" (UID: \"bc658535-dcf2-49f1-8646-b4cd9eb01b17\") " pod="openshift-marketplace/community-operators-jtjrp" Oct 14 10:10:36 crc kubenswrapper[4698]: I1014 10:10:36.460909 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6tbcb\" (UniqueName: \"kubernetes.io/projected/bc658535-dcf2-49f1-8646-b4cd9eb01b17-kube-api-access-6tbcb\") pod \"community-operators-jtjrp\" (UID: \"bc658535-dcf2-49f1-8646-b4cd9eb01b17\") " pod="openshift-marketplace/community-operators-jtjrp" Oct 14 10:10:36 crc kubenswrapper[4698]: I1014 10:10:36.527901 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-jtjrp" Oct 14 10:10:36 crc kubenswrapper[4698]: I1014 10:10:36.992189 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-jtjrp"] Oct 14 10:10:36 crc kubenswrapper[4698]: I1014 10:10:36.996853 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-qt2tw" Oct 14 10:10:37 crc kubenswrapper[4698]: I1014 10:10:37.945230 4698 generic.go:334] "Generic (PLEG): container finished" podID="bc658535-dcf2-49f1-8646-b4cd9eb01b17" containerID="a0e74f1775cbdae15431b7a296ceba98068a7f71919820fa521d595ab58eaa43" exitCode=0 Oct 14 10:10:37 crc kubenswrapper[4698]: I1014 10:10:37.945344 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jtjrp" event={"ID":"bc658535-dcf2-49f1-8646-b4cd9eb01b17","Type":"ContainerDied","Data":"a0e74f1775cbdae15431b7a296ceba98068a7f71919820fa521d595ab58eaa43"} Oct 14 10:10:37 crc kubenswrapper[4698]: I1014 10:10:37.945902 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jtjrp" event={"ID":"bc658535-dcf2-49f1-8646-b4cd9eb01b17","Type":"ContainerStarted","Data":"12990a0755adaa7e4d7afeb3504b30bedc90e17cdcb3e0752231d4a62139c4e1"} Oct 14 10:10:39 crc kubenswrapper[4698]: I1014 10:10:39.744431 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-qt2tw"] Oct 14 10:10:39 crc kubenswrapper[4698]: I1014 10:10:39.745201 4698 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-qt2tw" podUID="fc4b245c-9153-4628-8669-1963711fb65a" containerName="registry-server" containerID="cri-o://a99b32136a36fd9e624b5d36ede01f2348c0c97c42db0918460d750684c6eee7" gracePeriod=2 Oct 14 10:10:39 crc kubenswrapper[4698]: I1014 10:10:39.966924 4698 generic.go:334] "Generic (PLEG): container 
finished" podID="fc4b245c-9153-4628-8669-1963711fb65a" containerID="a99b32136a36fd9e624b5d36ede01f2348c0c97c42db0918460d750684c6eee7" exitCode=0 Oct 14 10:10:39 crc kubenswrapper[4698]: I1014 10:10:39.966985 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qt2tw" event={"ID":"fc4b245c-9153-4628-8669-1963711fb65a","Type":"ContainerDied","Data":"a99b32136a36fd9e624b5d36ede01f2348c0c97c42db0918460d750684c6eee7"} Oct 14 10:10:39 crc kubenswrapper[4698]: I1014 10:10:39.969342 4698 generic.go:334] "Generic (PLEG): container finished" podID="bc658535-dcf2-49f1-8646-b4cd9eb01b17" containerID="5b88b95b1bd7602068156602a4a1c4c2b24b1e1c2b729cfd1561c8ea417cccc1" exitCode=0 Oct 14 10:10:39 crc kubenswrapper[4698]: I1014 10:10:39.969377 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jtjrp" event={"ID":"bc658535-dcf2-49f1-8646-b4cd9eb01b17","Type":"ContainerDied","Data":"5b88b95b1bd7602068156602a4a1c4c2b24b1e1c2b729cfd1561c8ea417cccc1"} Oct 14 10:10:40 crc kubenswrapper[4698]: I1014 10:10:40.253929 4698 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-qt2tw" Oct 14 10:10:40 crc kubenswrapper[4698]: I1014 10:10:40.301488 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-controller-manager-7556747f48-jxr6w" Oct 14 10:10:40 crc kubenswrapper[4698]: I1014 10:10:40.398221 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fc4b245c-9153-4628-8669-1963711fb65a-utilities\") pod \"fc4b245c-9153-4628-8669-1963711fb65a\" (UID: \"fc4b245c-9153-4628-8669-1963711fb65a\") " Oct 14 10:10:40 crc kubenswrapper[4698]: I1014 10:10:40.398284 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fc4b245c-9153-4628-8669-1963711fb65a-catalog-content\") pod \"fc4b245c-9153-4628-8669-1963711fb65a\" (UID: \"fc4b245c-9153-4628-8669-1963711fb65a\") " Oct 14 10:10:40 crc kubenswrapper[4698]: I1014 10:10:40.398400 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cbv7f\" (UniqueName: \"kubernetes.io/projected/fc4b245c-9153-4628-8669-1963711fb65a-kube-api-access-cbv7f\") pod \"fc4b245c-9153-4628-8669-1963711fb65a\" (UID: \"fc4b245c-9153-4628-8669-1963711fb65a\") " Oct 14 10:10:40 crc kubenswrapper[4698]: I1014 10:10:40.398907 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fc4b245c-9153-4628-8669-1963711fb65a-utilities" (OuterVolumeSpecName: "utilities") pod "fc4b245c-9153-4628-8669-1963711fb65a" (UID: "fc4b245c-9153-4628-8669-1963711fb65a"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 10:10:40 crc kubenswrapper[4698]: I1014 10:10:40.410849 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fc4b245c-9153-4628-8669-1963711fb65a-kube-api-access-cbv7f" (OuterVolumeSpecName: "kube-api-access-cbv7f") pod "fc4b245c-9153-4628-8669-1963711fb65a" (UID: "fc4b245c-9153-4628-8669-1963711fb65a"). InnerVolumeSpecName "kube-api-access-cbv7f". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 10:10:40 crc kubenswrapper[4698]: I1014 10:10:40.479065 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fc4b245c-9153-4628-8669-1963711fb65a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "fc4b245c-9153-4628-8669-1963711fb65a" (UID: "fc4b245c-9153-4628-8669-1963711fb65a"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 10:10:40 crc kubenswrapper[4698]: I1014 10:10:40.500594 4698 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fc4b245c-9153-4628-8669-1963711fb65a-utilities\") on node \"crc\" DevicePath \"\"" Oct 14 10:10:40 crc kubenswrapper[4698]: I1014 10:10:40.500748 4698 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fc4b245c-9153-4628-8669-1963711fb65a-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 14 10:10:40 crc kubenswrapper[4698]: I1014 10:10:40.500813 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cbv7f\" (UniqueName: \"kubernetes.io/projected/fc4b245c-9153-4628-8669-1963711fb65a-kube-api-access-cbv7f\") on node \"crc\" DevicePath \"\"" Oct 14 10:10:40 crc kubenswrapper[4698]: I1014 10:10:40.984173 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qt2tw" 
event={"ID":"fc4b245c-9153-4628-8669-1963711fb65a","Type":"ContainerDied","Data":"ee176fbac0fff8933de4f748c393886a06b70165b637c4631142ca820a51cfa4"} Oct 14 10:10:40 crc kubenswrapper[4698]: I1014 10:10:40.984258 4698 scope.go:117] "RemoveContainer" containerID="a99b32136a36fd9e624b5d36ede01f2348c0c97c42db0918460d750684c6eee7" Oct 14 10:10:40 crc kubenswrapper[4698]: I1014 10:10:40.985013 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-qt2tw" Oct 14 10:10:41 crc kubenswrapper[4698]: I1014 10:10:41.011319 4698 scope.go:117] "RemoveContainer" containerID="4e7e433a55eb3d2a954ebf9efb7b5eb48d5b394d9423517c02ec6c35cd504947" Oct 14 10:10:41 crc kubenswrapper[4698]: I1014 10:10:41.052283 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-qt2tw"] Oct 14 10:10:41 crc kubenswrapper[4698]: I1014 10:10:41.055068 4698 scope.go:117] "RemoveContainer" containerID="7bdf53a3a5f91ae9f848a0688b99e85d6d13c3f23355496157abcbeb29fe11e5" Oct 14 10:10:41 crc kubenswrapper[4698]: I1014 10:10:41.064882 4698 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-qt2tw"] Oct 14 10:10:41 crc kubenswrapper[4698]: I1014 10:10:41.216820 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-wpbs6"] Oct 14 10:10:41 crc kubenswrapper[4698]: E1014 10:10:41.217348 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fc4b245c-9153-4628-8669-1963711fb65a" containerName="extract-content" Oct 14 10:10:41 crc kubenswrapper[4698]: I1014 10:10:41.217370 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="fc4b245c-9153-4628-8669-1963711fb65a" containerName="extract-content" Oct 14 10:10:41 crc kubenswrapper[4698]: E1014 10:10:41.217390 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fc4b245c-9153-4628-8669-1963711fb65a" containerName="registry-server" Oct 14 10:10:41 
crc kubenswrapper[4698]: I1014 10:10:41.217399 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="fc4b245c-9153-4628-8669-1963711fb65a" containerName="registry-server" Oct 14 10:10:41 crc kubenswrapper[4698]: E1014 10:10:41.217445 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fc4b245c-9153-4628-8669-1963711fb65a" containerName="extract-utilities" Oct 14 10:10:41 crc kubenswrapper[4698]: I1014 10:10:41.217454 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="fc4b245c-9153-4628-8669-1963711fb65a" containerName="extract-utilities" Oct 14 10:10:41 crc kubenswrapper[4698]: I1014 10:10:41.217599 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="fc4b245c-9153-4628-8669-1963711fb65a" containerName="registry-server" Oct 14 10:10:41 crc kubenswrapper[4698]: I1014 10:10:41.219999 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-wpbs6" Oct 14 10:10:41 crc kubenswrapper[4698]: I1014 10:10:41.220024 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-webhook-server-64bf5d555-wqqdm"] Oct 14 10:10:41 crc kubenswrapper[4698]: I1014 10:10:41.221191 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-64bf5d555-wqqdm" Oct 14 10:10:41 crc kubenswrapper[4698]: I1014 10:10:41.224173 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"frr-startup" Oct 14 10:10:41 crc kubenswrapper[4698]: I1014 10:10:41.224427 4698 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-certs-secret" Oct 14 10:10:41 crc kubenswrapper[4698]: I1014 10:10:41.224554 4698 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-webhook-server-cert" Oct 14 10:10:41 crc kubenswrapper[4698]: I1014 10:10:41.224691 4698 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-daemon-dockercfg-z5ccz" Oct 14 10:10:41 crc kubenswrapper[4698]: I1014 10:10:41.239842 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-64bf5d555-wqqdm"] Oct 14 10:10:41 crc kubenswrapper[4698]: I1014 10:10:41.322686 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/speaker-847mc"] Oct 14 10:10:41 crc kubenswrapper[4698]: I1014 10:10:41.323752 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/speaker-847mc" Oct 14 10:10:41 crc kubenswrapper[4698]: I1014 10:10:41.325508 4698 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-memberlist" Oct 14 10:10:41 crc kubenswrapper[4698]: I1014 10:10:41.325590 4698 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-dockercfg-q2tmk" Oct 14 10:10:41 crc kubenswrapper[4698]: I1014 10:10:41.327520 4698 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-certs-secret" Oct 14 10:10:41 crc kubenswrapper[4698]: I1014 10:10:41.327624 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"metallb-excludel2" Oct 14 10:10:41 crc kubenswrapper[4698]: I1014 10:10:41.339374 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/controller-68d546b9d8-4rhbc"] Oct 14 10:10:41 crc kubenswrapper[4698]: I1014 10:10:41.340507 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/controller-68d546b9d8-4rhbc" Oct 14 10:10:41 crc kubenswrapper[4698]: I1014 10:10:41.342581 4698 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-certs-secret" Oct 14 10:10:41 crc kubenswrapper[4698]: I1014 10:10:41.353053 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-68d546b9d8-4rhbc"] Oct 14 10:10:41 crc kubenswrapper[4698]: I1014 10:10:41.414116 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-255xp\" (UniqueName: \"kubernetes.io/projected/db7dd36b-e7d3-4eed-b55f-cc3316be8e85-kube-api-access-255xp\") pod \"frr-k8s-webhook-server-64bf5d555-wqqdm\" (UID: \"db7dd36b-e7d3-4eed-b55f-cc3316be8e85\") " pod="metallb-system/frr-k8s-webhook-server-64bf5d555-wqqdm" Oct 14 10:10:41 crc kubenswrapper[4698]: I1014 10:10:41.414174 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/371eed8f-9f1c-4114-98c6-33c8abf3fa23-frr-conf\") pod \"frr-k8s-wpbs6\" (UID: \"371eed8f-9f1c-4114-98c6-33c8abf3fa23\") " pod="metallb-system/frr-k8s-wpbs6" Oct 14 10:10:41 crc kubenswrapper[4698]: I1014 10:10:41.414198 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/371eed8f-9f1c-4114-98c6-33c8abf3fa23-metrics\") pod \"frr-k8s-wpbs6\" (UID: \"371eed8f-9f1c-4114-98c6-33c8abf3fa23\") " pod="metallb-system/frr-k8s-wpbs6" Oct 14 10:10:41 crc kubenswrapper[4698]: I1014 10:10:41.414222 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/371eed8f-9f1c-4114-98c6-33c8abf3fa23-frr-startup\") pod \"frr-k8s-wpbs6\" (UID: \"371eed8f-9f1c-4114-98c6-33c8abf3fa23\") " pod="metallb-system/frr-k8s-wpbs6" Oct 14 10:10:41 crc 
kubenswrapper[4698]: I1014 10:10:41.414245 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/371eed8f-9f1c-4114-98c6-33c8abf3fa23-frr-sockets\") pod \"frr-k8s-wpbs6\" (UID: \"371eed8f-9f1c-4114-98c6-33c8abf3fa23\") " pod="metallb-system/frr-k8s-wpbs6" Oct 14 10:10:41 crc kubenswrapper[4698]: I1014 10:10:41.414259 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/371eed8f-9f1c-4114-98c6-33c8abf3fa23-reloader\") pod \"frr-k8s-wpbs6\" (UID: \"371eed8f-9f1c-4114-98c6-33c8abf3fa23\") " pod="metallb-system/frr-k8s-wpbs6" Oct 14 10:10:41 crc kubenswrapper[4698]: I1014 10:10:41.414291 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zr8xr\" (UniqueName: \"kubernetes.io/projected/371eed8f-9f1c-4114-98c6-33c8abf3fa23-kube-api-access-zr8xr\") pod \"frr-k8s-wpbs6\" (UID: \"371eed8f-9f1c-4114-98c6-33c8abf3fa23\") " pod="metallb-system/frr-k8s-wpbs6" Oct 14 10:10:41 crc kubenswrapper[4698]: I1014 10:10:41.414310 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/db7dd36b-e7d3-4eed-b55f-cc3316be8e85-cert\") pod \"frr-k8s-webhook-server-64bf5d555-wqqdm\" (UID: \"db7dd36b-e7d3-4eed-b55f-cc3316be8e85\") " pod="metallb-system/frr-k8s-webhook-server-64bf5d555-wqqdm" Oct 14 10:10:41 crc kubenswrapper[4698]: I1014 10:10:41.414324 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/371eed8f-9f1c-4114-98c6-33c8abf3fa23-metrics-certs\") pod \"frr-k8s-wpbs6\" (UID: \"371eed8f-9f1c-4114-98c6-33c8abf3fa23\") " pod="metallb-system/frr-k8s-wpbs6" Oct 14 10:10:41 crc kubenswrapper[4698]: I1014 10:10:41.515755 4698 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kv6mx\" (UniqueName: \"kubernetes.io/projected/7ab10af0-2cb8-4ff4-bb4c-a186a319ce37-kube-api-access-kv6mx\") pod \"speaker-847mc\" (UID: \"7ab10af0-2cb8-4ff4-bb4c-a186a319ce37\") " pod="metallb-system/speaker-847mc" Oct 14 10:10:41 crc kubenswrapper[4698]: I1014 10:10:41.515850 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zr8xr\" (UniqueName: \"kubernetes.io/projected/371eed8f-9f1c-4114-98c6-33c8abf3fa23-kube-api-access-zr8xr\") pod \"frr-k8s-wpbs6\" (UID: \"371eed8f-9f1c-4114-98c6-33c8abf3fa23\") " pod="metallb-system/frr-k8s-wpbs6" Oct 14 10:10:41 crc kubenswrapper[4698]: I1014 10:10:41.515889 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/e165bb03-3546-4ae5-8c3c-5605cae81371-cert\") pod \"controller-68d546b9d8-4rhbc\" (UID: \"e165bb03-3546-4ae5-8c3c-5605cae81371\") " pod="metallb-system/controller-68d546b9d8-4rhbc" Oct 14 10:10:41 crc kubenswrapper[4698]: I1014 10:10:41.515914 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/db7dd36b-e7d3-4eed-b55f-cc3316be8e85-cert\") pod \"frr-k8s-webhook-server-64bf5d555-wqqdm\" (UID: \"db7dd36b-e7d3-4eed-b55f-cc3316be8e85\") " pod="metallb-system/frr-k8s-webhook-server-64bf5d555-wqqdm" Oct 14 10:10:41 crc kubenswrapper[4698]: I1014 10:10:41.515942 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/371eed8f-9f1c-4114-98c6-33c8abf3fa23-metrics-certs\") pod \"frr-k8s-wpbs6\" (UID: \"371eed8f-9f1c-4114-98c6-33c8abf3fa23\") " pod="metallb-system/frr-k8s-wpbs6" Oct 14 10:10:41 crc kubenswrapper[4698]: I1014 10:10:41.516154 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"kube-api-access-w27dk\" (UniqueName: \"kubernetes.io/projected/e165bb03-3546-4ae5-8c3c-5605cae81371-kube-api-access-w27dk\") pod \"controller-68d546b9d8-4rhbc\" (UID: \"e165bb03-3546-4ae5-8c3c-5605cae81371\") " pod="metallb-system/controller-68d546b9d8-4rhbc" Oct 14 10:10:41 crc kubenswrapper[4698]: I1014 10:10:41.516357 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-255xp\" (UniqueName: \"kubernetes.io/projected/db7dd36b-e7d3-4eed-b55f-cc3316be8e85-kube-api-access-255xp\") pod \"frr-k8s-webhook-server-64bf5d555-wqqdm\" (UID: \"db7dd36b-e7d3-4eed-b55f-cc3316be8e85\") " pod="metallb-system/frr-k8s-webhook-server-64bf5d555-wqqdm" Oct 14 10:10:41 crc kubenswrapper[4698]: I1014 10:10:41.516441 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/371eed8f-9f1c-4114-98c6-33c8abf3fa23-frr-conf\") pod \"frr-k8s-wpbs6\" (UID: \"371eed8f-9f1c-4114-98c6-33c8abf3fa23\") " pod="metallb-system/frr-k8s-wpbs6" Oct 14 10:10:41 crc kubenswrapper[4698]: I1014 10:10:41.516475 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/371eed8f-9f1c-4114-98c6-33c8abf3fa23-metrics\") pod \"frr-k8s-wpbs6\" (UID: \"371eed8f-9f1c-4114-98c6-33c8abf3fa23\") " pod="metallb-system/frr-k8s-wpbs6" Oct 14 10:10:41 crc kubenswrapper[4698]: I1014 10:10:41.516506 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/7ab10af0-2cb8-4ff4-bb4c-a186a319ce37-memberlist\") pod \"speaker-847mc\" (UID: \"7ab10af0-2cb8-4ff4-bb4c-a186a319ce37\") " pod="metallb-system/speaker-847mc" Oct 14 10:10:41 crc kubenswrapper[4698]: I1014 10:10:41.516579 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-startup\" (UniqueName: 
\"kubernetes.io/configmap/371eed8f-9f1c-4114-98c6-33c8abf3fa23-frr-startup\") pod \"frr-k8s-wpbs6\" (UID: \"371eed8f-9f1c-4114-98c6-33c8abf3fa23\") " pod="metallb-system/frr-k8s-wpbs6" Oct 14 10:10:41 crc kubenswrapper[4698]: I1014 10:10:41.516609 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/7ab10af0-2cb8-4ff4-bb4c-a186a319ce37-metrics-certs\") pod \"speaker-847mc\" (UID: \"7ab10af0-2cb8-4ff4-bb4c-a186a319ce37\") " pod="metallb-system/speaker-847mc" Oct 14 10:10:41 crc kubenswrapper[4698]: I1014 10:10:41.516648 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/371eed8f-9f1c-4114-98c6-33c8abf3fa23-frr-sockets\") pod \"frr-k8s-wpbs6\" (UID: \"371eed8f-9f1c-4114-98c6-33c8abf3fa23\") " pod="metallb-system/frr-k8s-wpbs6" Oct 14 10:10:41 crc kubenswrapper[4698]: I1014 10:10:41.516830 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/371eed8f-9f1c-4114-98c6-33c8abf3fa23-reloader\") pod \"frr-k8s-wpbs6\" (UID: \"371eed8f-9f1c-4114-98c6-33c8abf3fa23\") " pod="metallb-system/frr-k8s-wpbs6" Oct 14 10:10:41 crc kubenswrapper[4698]: I1014 10:10:41.516934 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e165bb03-3546-4ae5-8c3c-5605cae81371-metrics-certs\") pod \"controller-68d546b9d8-4rhbc\" (UID: \"e165bb03-3546-4ae5-8c3c-5605cae81371\") " pod="metallb-system/controller-68d546b9d8-4rhbc" Oct 14 10:10:41 crc kubenswrapper[4698]: I1014 10:10:41.516959 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/7ab10af0-2cb8-4ff4-bb4c-a186a319ce37-metallb-excludel2\") pod \"speaker-847mc\" (UID: 
\"7ab10af0-2cb8-4ff4-bb4c-a186a319ce37\") " pod="metallb-system/speaker-847mc" Oct 14 10:10:41 crc kubenswrapper[4698]: I1014 10:10:41.517020 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/371eed8f-9f1c-4114-98c6-33c8abf3fa23-metrics\") pod \"frr-k8s-wpbs6\" (UID: \"371eed8f-9f1c-4114-98c6-33c8abf3fa23\") " pod="metallb-system/frr-k8s-wpbs6" Oct 14 10:10:41 crc kubenswrapper[4698]: I1014 10:10:41.517032 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/371eed8f-9f1c-4114-98c6-33c8abf3fa23-frr-conf\") pod \"frr-k8s-wpbs6\" (UID: \"371eed8f-9f1c-4114-98c6-33c8abf3fa23\") " pod="metallb-system/frr-k8s-wpbs6" Oct 14 10:10:41 crc kubenswrapper[4698]: I1014 10:10:41.517149 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/371eed8f-9f1c-4114-98c6-33c8abf3fa23-frr-sockets\") pod \"frr-k8s-wpbs6\" (UID: \"371eed8f-9f1c-4114-98c6-33c8abf3fa23\") " pod="metallb-system/frr-k8s-wpbs6" Oct 14 10:10:41 crc kubenswrapper[4698]: I1014 10:10:41.517265 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/371eed8f-9f1c-4114-98c6-33c8abf3fa23-reloader\") pod \"frr-k8s-wpbs6\" (UID: \"371eed8f-9f1c-4114-98c6-33c8abf3fa23\") " pod="metallb-system/frr-k8s-wpbs6" Oct 14 10:10:41 crc kubenswrapper[4698]: I1014 10:10:41.517795 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/371eed8f-9f1c-4114-98c6-33c8abf3fa23-frr-startup\") pod \"frr-k8s-wpbs6\" (UID: \"371eed8f-9f1c-4114-98c6-33c8abf3fa23\") " pod="metallb-system/frr-k8s-wpbs6" Oct 14 10:10:41 crc kubenswrapper[4698]: I1014 10:10:41.520297 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: 
\"kubernetes.io/secret/371eed8f-9f1c-4114-98c6-33c8abf3fa23-metrics-certs\") pod \"frr-k8s-wpbs6\" (UID: \"371eed8f-9f1c-4114-98c6-33c8abf3fa23\") " pod="metallb-system/frr-k8s-wpbs6" Oct 14 10:10:41 crc kubenswrapper[4698]: I1014 10:10:41.539013 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/db7dd36b-e7d3-4eed-b55f-cc3316be8e85-cert\") pod \"frr-k8s-webhook-server-64bf5d555-wqqdm\" (UID: \"db7dd36b-e7d3-4eed-b55f-cc3316be8e85\") " pod="metallb-system/frr-k8s-webhook-server-64bf5d555-wqqdm" Oct 14 10:10:41 crc kubenswrapper[4698]: I1014 10:10:41.542532 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zr8xr\" (UniqueName: \"kubernetes.io/projected/371eed8f-9f1c-4114-98c6-33c8abf3fa23-kube-api-access-zr8xr\") pod \"frr-k8s-wpbs6\" (UID: \"371eed8f-9f1c-4114-98c6-33c8abf3fa23\") " pod="metallb-system/frr-k8s-wpbs6" Oct 14 10:10:41 crc kubenswrapper[4698]: I1014 10:10:41.551549 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-255xp\" (UniqueName: \"kubernetes.io/projected/db7dd36b-e7d3-4eed-b55f-cc3316be8e85-kube-api-access-255xp\") pod \"frr-k8s-webhook-server-64bf5d555-wqqdm\" (UID: \"db7dd36b-e7d3-4eed-b55f-cc3316be8e85\") " pod="metallb-system/frr-k8s-webhook-server-64bf5d555-wqqdm" Oct 14 10:10:41 crc kubenswrapper[4698]: I1014 10:10:41.559701 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-wpbs6" Oct 14 10:10:41 crc kubenswrapper[4698]: I1014 10:10:41.571545 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-64bf5d555-wqqdm" Oct 14 10:10:41 crc kubenswrapper[4698]: I1014 10:10:41.617885 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e165bb03-3546-4ae5-8c3c-5605cae81371-metrics-certs\") pod \"controller-68d546b9d8-4rhbc\" (UID: \"e165bb03-3546-4ae5-8c3c-5605cae81371\") " pod="metallb-system/controller-68d546b9d8-4rhbc" Oct 14 10:10:41 crc kubenswrapper[4698]: I1014 10:10:41.617926 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/7ab10af0-2cb8-4ff4-bb4c-a186a319ce37-metallb-excludel2\") pod \"speaker-847mc\" (UID: \"7ab10af0-2cb8-4ff4-bb4c-a186a319ce37\") " pod="metallb-system/speaker-847mc" Oct 14 10:10:41 crc kubenswrapper[4698]: I1014 10:10:41.617948 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kv6mx\" (UniqueName: \"kubernetes.io/projected/7ab10af0-2cb8-4ff4-bb4c-a186a319ce37-kube-api-access-kv6mx\") pod \"speaker-847mc\" (UID: \"7ab10af0-2cb8-4ff4-bb4c-a186a319ce37\") " pod="metallb-system/speaker-847mc" Oct 14 10:10:41 crc kubenswrapper[4698]: I1014 10:10:41.617967 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/e165bb03-3546-4ae5-8c3c-5605cae81371-cert\") pod \"controller-68d546b9d8-4rhbc\" (UID: \"e165bb03-3546-4ae5-8c3c-5605cae81371\") " pod="metallb-system/controller-68d546b9d8-4rhbc" Oct 14 10:10:41 crc kubenswrapper[4698]: I1014 10:10:41.617996 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w27dk\" (UniqueName: \"kubernetes.io/projected/e165bb03-3546-4ae5-8c3c-5605cae81371-kube-api-access-w27dk\") pod \"controller-68d546b9d8-4rhbc\" (UID: \"e165bb03-3546-4ae5-8c3c-5605cae81371\") " pod="metallb-system/controller-68d546b9d8-4rhbc" Oct 
14 10:10:41 crc kubenswrapper[4698]: I1014 10:10:41.618037 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/7ab10af0-2cb8-4ff4-bb4c-a186a319ce37-memberlist\") pod \"speaker-847mc\" (UID: \"7ab10af0-2cb8-4ff4-bb4c-a186a319ce37\") " pod="metallb-system/speaker-847mc" Oct 14 10:10:41 crc kubenswrapper[4698]: I1014 10:10:41.618061 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/7ab10af0-2cb8-4ff4-bb4c-a186a319ce37-metrics-certs\") pod \"speaker-847mc\" (UID: \"7ab10af0-2cb8-4ff4-bb4c-a186a319ce37\") " pod="metallb-system/speaker-847mc" Oct 14 10:10:41 crc kubenswrapper[4698]: E1014 10:10:41.618464 4698 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Oct 14 10:10:41 crc kubenswrapper[4698]: E1014 10:10:41.618557 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7ab10af0-2cb8-4ff4-bb4c-a186a319ce37-memberlist podName:7ab10af0-2cb8-4ff4-bb4c-a186a319ce37 nodeName:}" failed. No retries permitted until 2025-10-14 10:10:42.118534994 +0000 UTC m=+823.815834410 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/7ab10af0-2cb8-4ff4-bb4c-a186a319ce37-memberlist") pod "speaker-847mc" (UID: "7ab10af0-2cb8-4ff4-bb4c-a186a319ce37") : secret "metallb-memberlist" not found Oct 14 10:10:41 crc kubenswrapper[4698]: I1014 10:10:41.619128 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/7ab10af0-2cb8-4ff4-bb4c-a186a319ce37-metallb-excludel2\") pod \"speaker-847mc\" (UID: \"7ab10af0-2cb8-4ff4-bb4c-a186a319ce37\") " pod="metallb-system/speaker-847mc" Oct 14 10:10:41 crc kubenswrapper[4698]: I1014 10:10:41.623873 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/e165bb03-3546-4ae5-8c3c-5605cae81371-cert\") pod \"controller-68d546b9d8-4rhbc\" (UID: \"e165bb03-3546-4ae5-8c3c-5605cae81371\") " pod="metallb-system/controller-68d546b9d8-4rhbc" Oct 14 10:10:41 crc kubenswrapper[4698]: I1014 10:10:41.626691 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/7ab10af0-2cb8-4ff4-bb4c-a186a319ce37-metrics-certs\") pod \"speaker-847mc\" (UID: \"7ab10af0-2cb8-4ff4-bb4c-a186a319ce37\") " pod="metallb-system/speaker-847mc" Oct 14 10:10:41 crc kubenswrapper[4698]: I1014 10:10:41.635068 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e165bb03-3546-4ae5-8c3c-5605cae81371-metrics-certs\") pod \"controller-68d546b9d8-4rhbc\" (UID: \"e165bb03-3546-4ae5-8c3c-5605cae81371\") " pod="metallb-system/controller-68d546b9d8-4rhbc" Oct 14 10:10:41 crc kubenswrapper[4698]: I1014 10:10:41.643193 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kv6mx\" (UniqueName: \"kubernetes.io/projected/7ab10af0-2cb8-4ff4-bb4c-a186a319ce37-kube-api-access-kv6mx\") pod \"speaker-847mc\" (UID: 
\"7ab10af0-2cb8-4ff4-bb4c-a186a319ce37\") " pod="metallb-system/speaker-847mc" Oct 14 10:10:41 crc kubenswrapper[4698]: I1014 10:10:41.648933 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w27dk\" (UniqueName: \"kubernetes.io/projected/e165bb03-3546-4ae5-8c3c-5605cae81371-kube-api-access-w27dk\") pod \"controller-68d546b9d8-4rhbc\" (UID: \"e165bb03-3546-4ae5-8c3c-5605cae81371\") " pod="metallb-system/controller-68d546b9d8-4rhbc" Oct 14 10:10:41 crc kubenswrapper[4698]: I1014 10:10:41.653819 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/controller-68d546b9d8-4rhbc" Oct 14 10:10:41 crc kubenswrapper[4698]: I1014 10:10:41.991483 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-wpbs6" event={"ID":"371eed8f-9f1c-4114-98c6-33c8abf3fa23","Type":"ContainerStarted","Data":"236388eb83c349454c873711153be525a17f0449bc3540abada3f3607a3210ac"} Oct 14 10:10:41 crc kubenswrapper[4698]: I1014 10:10:41.998288 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-64bf5d555-wqqdm"] Oct 14 10:10:41 crc kubenswrapper[4698]: I1014 10:10:41.998908 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jtjrp" event={"ID":"bc658535-dcf2-49f1-8646-b4cd9eb01b17","Type":"ContainerStarted","Data":"7cf82e4bade9b986f431d4c28eab8ef7403ca487db0a0c37d6846d88c11c0af9"} Oct 14 10:10:42 crc kubenswrapper[4698]: I1014 10:10:42.015877 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-jtjrp" podStartSLOduration=2.909033689 podStartE2EDuration="6.015838801s" podCreationTimestamp="2025-10-14 10:10:36 +0000 UTC" firstStartedPulling="2025-10-14 10:10:37.948432323 +0000 UTC m=+819.645731779" lastFinishedPulling="2025-10-14 10:10:41.055237455 +0000 UTC m=+822.752536891" observedRunningTime="2025-10-14 10:10:42.015018818 +0000 
UTC m=+823.712318254" watchObservedRunningTime="2025-10-14 10:10:42.015838801 +0000 UTC m=+823.713138217" Oct 14 10:10:42 crc kubenswrapper[4698]: I1014 10:10:42.124891 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/7ab10af0-2cb8-4ff4-bb4c-a186a319ce37-memberlist\") pod \"speaker-847mc\" (UID: \"7ab10af0-2cb8-4ff4-bb4c-a186a319ce37\") " pod="metallb-system/speaker-847mc" Oct 14 10:10:42 crc kubenswrapper[4698]: E1014 10:10:42.125852 4698 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Oct 14 10:10:42 crc kubenswrapper[4698]: E1014 10:10:42.125923 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7ab10af0-2cb8-4ff4-bb4c-a186a319ce37-memberlist podName:7ab10af0-2cb8-4ff4-bb4c-a186a319ce37 nodeName:}" failed. No retries permitted until 2025-10-14 10:10:43.125903324 +0000 UTC m=+824.823202850 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/7ab10af0-2cb8-4ff4-bb4c-a186a319ce37-memberlist") pod "speaker-847mc" (UID: "7ab10af0-2cb8-4ff4-bb4c-a186a319ce37") : secret "metallb-memberlist" not found Oct 14 10:10:42 crc kubenswrapper[4698]: I1014 10:10:42.142129 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-68d546b9d8-4rhbc"] Oct 14 10:10:42 crc kubenswrapper[4698]: W1014 10:10:42.145156 4698 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode165bb03_3546_4ae5_8c3c_5605cae81371.slice/crio-ca631fa8a4dfadbf5564278bdc1cf76d31087224c52053cc94963b5e66fab81b WatchSource:0}: Error finding container ca631fa8a4dfadbf5564278bdc1cf76d31087224c52053cc94963b5e66fab81b: Status 404 returned error can't find the container with id ca631fa8a4dfadbf5564278bdc1cf76d31087224c52053cc94963b5e66fab81b Oct 14 10:10:43 crc kubenswrapper[4698]: I1014 10:10:43.003755 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-64bf5d555-wqqdm" event={"ID":"db7dd36b-e7d3-4eed-b55f-cc3316be8e85","Type":"ContainerStarted","Data":"5c1edb1ff6db70f1dbc60afeaeccdae4bb8681b8b447b6aed34f3394b88d98ee"} Oct 14 10:10:43 crc kubenswrapper[4698]: I1014 10:10:43.006149 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-68d546b9d8-4rhbc" event={"ID":"e165bb03-3546-4ae5-8c3c-5605cae81371","Type":"ContainerStarted","Data":"f0ca86499985bc5bccf059f3d2234129433a4848ff934079c9fa0dba8afbd7c6"} Oct 14 10:10:43 crc kubenswrapper[4698]: I1014 10:10:43.006224 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-68d546b9d8-4rhbc" event={"ID":"e165bb03-3546-4ae5-8c3c-5605cae81371","Type":"ContainerStarted","Data":"80901cd935da44b132c05974791a43afc4403dcdd075c1dc1c36052999b9fc68"} Oct 14 10:10:43 crc kubenswrapper[4698]: I1014 10:10:43.006244 4698 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-68d546b9d8-4rhbc" event={"ID":"e165bb03-3546-4ae5-8c3c-5605cae81371","Type":"ContainerStarted","Data":"ca631fa8a4dfadbf5564278bdc1cf76d31087224c52053cc94963b5e66fab81b"} Oct 14 10:10:43 crc kubenswrapper[4698]: I1014 10:10:43.006427 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/controller-68d546b9d8-4rhbc" Oct 14 10:10:43 crc kubenswrapper[4698]: I1014 10:10:43.025952 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fc4b245c-9153-4628-8669-1963711fb65a" path="/var/lib/kubelet/pods/fc4b245c-9153-4628-8669-1963711fb65a/volumes" Oct 14 10:10:43 crc kubenswrapper[4698]: I1014 10:10:43.141866 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/7ab10af0-2cb8-4ff4-bb4c-a186a319ce37-memberlist\") pod \"speaker-847mc\" (UID: \"7ab10af0-2cb8-4ff4-bb4c-a186a319ce37\") " pod="metallb-system/speaker-847mc" Oct 14 10:10:43 crc kubenswrapper[4698]: I1014 10:10:43.149193 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/7ab10af0-2cb8-4ff4-bb4c-a186a319ce37-memberlist\") pod \"speaker-847mc\" (UID: \"7ab10af0-2cb8-4ff4-bb4c-a186a319ce37\") " pod="metallb-system/speaker-847mc" Oct 14 10:10:43 crc kubenswrapper[4698]: I1014 10:10:43.440342 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/speaker-847mc" Oct 14 10:10:43 crc kubenswrapper[4698]: W1014 10:10:43.467849 4698 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7ab10af0_2cb8_4ff4_bb4c_a186a319ce37.slice/crio-3d2393214125a28e24eae1dcd58e82adca6ec86528e96f2f11910df43ec250c9 WatchSource:0}: Error finding container 3d2393214125a28e24eae1dcd58e82adca6ec86528e96f2f11910df43ec250c9: Status 404 returned error can't find the container with id 3d2393214125a28e24eae1dcd58e82adca6ec86528e96f2f11910df43ec250c9 Oct 14 10:10:44 crc kubenswrapper[4698]: I1014 10:10:44.013206 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-847mc" event={"ID":"7ab10af0-2cb8-4ff4-bb4c-a186a319ce37","Type":"ContainerStarted","Data":"85b27aa341c61b333a1c92f0214e696882e20ef2c6515d63a6643353c8986adc"} Oct 14 10:10:44 crc kubenswrapper[4698]: I1014 10:10:44.013703 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-847mc" event={"ID":"7ab10af0-2cb8-4ff4-bb4c-a186a319ce37","Type":"ContainerStarted","Data":"53a9da344d671e67a0b9fb75ef08f1894972a43bf9ef443e1a69fe89fb7906ba"} Oct 14 10:10:44 crc kubenswrapper[4698]: I1014 10:10:44.013721 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-847mc" event={"ID":"7ab10af0-2cb8-4ff4-bb4c-a186a319ce37","Type":"ContainerStarted","Data":"3d2393214125a28e24eae1dcd58e82adca6ec86528e96f2f11910df43ec250c9"} Oct 14 10:10:44 crc kubenswrapper[4698]: I1014 10:10:44.013962 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/speaker-847mc" Oct 14 10:10:44 crc kubenswrapper[4698]: I1014 10:10:44.029669 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/speaker-847mc" podStartSLOduration=3.029630321 podStartE2EDuration="3.029630321s" podCreationTimestamp="2025-10-14 10:10:41 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 10:10:44.027239973 +0000 UTC m=+825.724539399" watchObservedRunningTime="2025-10-14 10:10:44.029630321 +0000 UTC m=+825.726929757" Oct 14 10:10:44 crc kubenswrapper[4698]: I1014 10:10:44.032929 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/controller-68d546b9d8-4rhbc" podStartSLOduration=3.032920724 podStartE2EDuration="3.032920724s" podCreationTimestamp="2025-10-14 10:10:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 10:10:43.031244372 +0000 UTC m=+824.728543788" watchObservedRunningTime="2025-10-14 10:10:44.032920724 +0000 UTC m=+825.730220130" Oct 14 10:10:46 crc kubenswrapper[4698]: I1014 10:10:46.528636 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-jtjrp" Oct 14 10:10:46 crc kubenswrapper[4698]: I1014 10:10:46.529084 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-jtjrp" Oct 14 10:10:46 crc kubenswrapper[4698]: I1014 10:10:46.581187 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-jtjrp" Oct 14 10:10:47 crc kubenswrapper[4698]: I1014 10:10:47.109969 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-jtjrp" Oct 14 10:10:48 crc kubenswrapper[4698]: I1014 10:10:48.945453 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-jtjrp"] Oct 14 10:10:49 crc kubenswrapper[4698]: I1014 10:10:49.081302 4698 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-jtjrp" 
podUID="bc658535-dcf2-49f1-8646-b4cd9eb01b17" containerName="registry-server" containerID="cri-o://7cf82e4bade9b986f431d4c28eab8ef7403ca487db0a0c37d6846d88c11c0af9" gracePeriod=2 Oct 14 10:10:49 crc kubenswrapper[4698]: I1014 10:10:49.876685 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-jtjrp" Oct 14 10:10:49 crc kubenswrapper[4698]: I1014 10:10:49.952041 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bc658535-dcf2-49f1-8646-b4cd9eb01b17-utilities\") pod \"bc658535-dcf2-49f1-8646-b4cd9eb01b17\" (UID: \"bc658535-dcf2-49f1-8646-b4cd9eb01b17\") " Oct 14 10:10:49 crc kubenswrapper[4698]: I1014 10:10:49.952113 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6tbcb\" (UniqueName: \"kubernetes.io/projected/bc658535-dcf2-49f1-8646-b4cd9eb01b17-kube-api-access-6tbcb\") pod \"bc658535-dcf2-49f1-8646-b4cd9eb01b17\" (UID: \"bc658535-dcf2-49f1-8646-b4cd9eb01b17\") " Oct 14 10:10:49 crc kubenswrapper[4698]: I1014 10:10:49.952182 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bc658535-dcf2-49f1-8646-b4cd9eb01b17-catalog-content\") pod \"bc658535-dcf2-49f1-8646-b4cd9eb01b17\" (UID: \"bc658535-dcf2-49f1-8646-b4cd9eb01b17\") " Oct 14 10:10:49 crc kubenswrapper[4698]: I1014 10:10:49.954617 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc658535-dcf2-49f1-8646-b4cd9eb01b17-utilities" (OuterVolumeSpecName: "utilities") pod "bc658535-dcf2-49f1-8646-b4cd9eb01b17" (UID: "bc658535-dcf2-49f1-8646-b4cd9eb01b17"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 10:10:49 crc kubenswrapper[4698]: I1014 10:10:49.962547 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc658535-dcf2-49f1-8646-b4cd9eb01b17-kube-api-access-6tbcb" (OuterVolumeSpecName: "kube-api-access-6tbcb") pod "bc658535-dcf2-49f1-8646-b4cd9eb01b17" (UID: "bc658535-dcf2-49f1-8646-b4cd9eb01b17"). InnerVolumeSpecName "kube-api-access-6tbcb". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 10:10:50 crc kubenswrapper[4698]: I1014 10:10:50.020319 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc658535-dcf2-49f1-8646-b4cd9eb01b17-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "bc658535-dcf2-49f1-8646-b4cd9eb01b17" (UID: "bc658535-dcf2-49f1-8646-b4cd9eb01b17"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 10:10:50 crc kubenswrapper[4698]: I1014 10:10:50.054513 4698 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bc658535-dcf2-49f1-8646-b4cd9eb01b17-utilities\") on node \"crc\" DevicePath \"\"" Oct 14 10:10:50 crc kubenswrapper[4698]: I1014 10:10:50.054558 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6tbcb\" (UniqueName: \"kubernetes.io/projected/bc658535-dcf2-49f1-8646-b4cd9eb01b17-kube-api-access-6tbcb\") on node \"crc\" DevicePath \"\"" Oct 14 10:10:50 crc kubenswrapper[4698]: I1014 10:10:50.054580 4698 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bc658535-dcf2-49f1-8646-b4cd9eb01b17-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 14 10:10:50 crc kubenswrapper[4698]: I1014 10:10:50.091309 4698 generic.go:334] "Generic (PLEG): container finished" podID="371eed8f-9f1c-4114-98c6-33c8abf3fa23" 
containerID="637cf72089a86de2619f2c046f9336774a8d7bc9c6acaa18d25230ad4be20128" exitCode=0 Oct 14 10:10:50 crc kubenswrapper[4698]: I1014 10:10:50.091393 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-wpbs6" event={"ID":"371eed8f-9f1c-4114-98c6-33c8abf3fa23","Type":"ContainerDied","Data":"637cf72089a86de2619f2c046f9336774a8d7bc9c6acaa18d25230ad4be20128"} Oct 14 10:10:50 crc kubenswrapper[4698]: I1014 10:10:50.097630 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-64bf5d555-wqqdm" event={"ID":"db7dd36b-e7d3-4eed-b55f-cc3316be8e85","Type":"ContainerStarted","Data":"cbf756c5828eb78eada697ee6faf9bfa9769054a3b385627812b1f0ed2859e77"} Oct 14 10:10:50 crc kubenswrapper[4698]: I1014 10:10:50.097859 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-webhook-server-64bf5d555-wqqdm" Oct 14 10:10:50 crc kubenswrapper[4698]: I1014 10:10:50.102745 4698 generic.go:334] "Generic (PLEG): container finished" podID="bc658535-dcf2-49f1-8646-b4cd9eb01b17" containerID="7cf82e4bade9b986f431d4c28eab8ef7403ca487db0a0c37d6846d88c11c0af9" exitCode=0 Oct 14 10:10:50 crc kubenswrapper[4698]: I1014 10:10:50.102874 4698 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-jtjrp" Oct 14 10:10:50 crc kubenswrapper[4698]: I1014 10:10:50.102908 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jtjrp" event={"ID":"bc658535-dcf2-49f1-8646-b4cd9eb01b17","Type":"ContainerDied","Data":"7cf82e4bade9b986f431d4c28eab8ef7403ca487db0a0c37d6846d88c11c0af9"} Oct 14 10:10:50 crc kubenswrapper[4698]: I1014 10:10:50.103484 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jtjrp" event={"ID":"bc658535-dcf2-49f1-8646-b4cd9eb01b17","Type":"ContainerDied","Data":"12990a0755adaa7e4d7afeb3504b30bedc90e17cdcb3e0752231d4a62139c4e1"} Oct 14 10:10:50 crc kubenswrapper[4698]: I1014 10:10:50.103531 4698 scope.go:117] "RemoveContainer" containerID="7cf82e4bade9b986f431d4c28eab8ef7403ca487db0a0c37d6846d88c11c0af9" Oct 14 10:10:50 crc kubenswrapper[4698]: I1014 10:10:50.149103 4698 scope.go:117] "RemoveContainer" containerID="5b88b95b1bd7602068156602a4a1c4c2b24b1e1c2b729cfd1561c8ea417cccc1" Oct 14 10:10:50 crc kubenswrapper[4698]: I1014 10:10:50.190225 4698 scope.go:117] "RemoveContainer" containerID="a0e74f1775cbdae15431b7a296ceba98068a7f71919820fa521d595ab58eaa43" Oct 14 10:10:50 crc kubenswrapper[4698]: I1014 10:10:50.206429 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-webhook-server-64bf5d555-wqqdm" podStartSLOduration=1.554380267 podStartE2EDuration="9.206391601s" podCreationTimestamp="2025-10-14 10:10:41 +0000 UTC" firstStartedPulling="2025-10-14 10:10:42.010700625 +0000 UTC m=+823.708000041" lastFinishedPulling="2025-10-14 10:10:49.662711949 +0000 UTC m=+831.360011375" observedRunningTime="2025-10-14 10:10:50.167470916 +0000 UTC m=+831.864770352" watchObservedRunningTime="2025-10-14 10:10:50.206391601 +0000 UTC m=+831.903691017" Oct 14 10:10:50 crc kubenswrapper[4698]: I1014 10:10:50.206824 4698 kubelet.go:2437] "SyncLoop DELETE" 
source="api" pods=["openshift-marketplace/community-operators-jtjrp"] Oct 14 10:10:50 crc kubenswrapper[4698]: I1014 10:10:50.211586 4698 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-jtjrp"] Oct 14 10:10:50 crc kubenswrapper[4698]: I1014 10:10:50.220086 4698 scope.go:117] "RemoveContainer" containerID="7cf82e4bade9b986f431d4c28eab8ef7403ca487db0a0c37d6846d88c11c0af9" Oct 14 10:10:50 crc kubenswrapper[4698]: E1014 10:10:50.220883 4698 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7cf82e4bade9b986f431d4c28eab8ef7403ca487db0a0c37d6846d88c11c0af9\": container with ID starting with 7cf82e4bade9b986f431d4c28eab8ef7403ca487db0a0c37d6846d88c11c0af9 not found: ID does not exist" containerID="7cf82e4bade9b986f431d4c28eab8ef7403ca487db0a0c37d6846d88c11c0af9" Oct 14 10:10:50 crc kubenswrapper[4698]: I1014 10:10:50.220940 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7cf82e4bade9b986f431d4c28eab8ef7403ca487db0a0c37d6846d88c11c0af9"} err="failed to get container status \"7cf82e4bade9b986f431d4c28eab8ef7403ca487db0a0c37d6846d88c11c0af9\": rpc error: code = NotFound desc = could not find container \"7cf82e4bade9b986f431d4c28eab8ef7403ca487db0a0c37d6846d88c11c0af9\": container with ID starting with 7cf82e4bade9b986f431d4c28eab8ef7403ca487db0a0c37d6846d88c11c0af9 not found: ID does not exist" Oct 14 10:10:50 crc kubenswrapper[4698]: I1014 10:10:50.220982 4698 scope.go:117] "RemoveContainer" containerID="5b88b95b1bd7602068156602a4a1c4c2b24b1e1c2b729cfd1561c8ea417cccc1" Oct 14 10:10:50 crc kubenswrapper[4698]: E1014 10:10:50.221576 4698 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5b88b95b1bd7602068156602a4a1c4c2b24b1e1c2b729cfd1561c8ea417cccc1\": container with ID starting with 
5b88b95b1bd7602068156602a4a1c4c2b24b1e1c2b729cfd1561c8ea417cccc1 not found: ID does not exist" containerID="5b88b95b1bd7602068156602a4a1c4c2b24b1e1c2b729cfd1561c8ea417cccc1" Oct 14 10:10:50 crc kubenswrapper[4698]: I1014 10:10:50.221599 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5b88b95b1bd7602068156602a4a1c4c2b24b1e1c2b729cfd1561c8ea417cccc1"} err="failed to get container status \"5b88b95b1bd7602068156602a4a1c4c2b24b1e1c2b729cfd1561c8ea417cccc1\": rpc error: code = NotFound desc = could not find container \"5b88b95b1bd7602068156602a4a1c4c2b24b1e1c2b729cfd1561c8ea417cccc1\": container with ID starting with 5b88b95b1bd7602068156602a4a1c4c2b24b1e1c2b729cfd1561c8ea417cccc1 not found: ID does not exist" Oct 14 10:10:50 crc kubenswrapper[4698]: I1014 10:10:50.221612 4698 scope.go:117] "RemoveContainer" containerID="a0e74f1775cbdae15431b7a296ceba98068a7f71919820fa521d595ab58eaa43" Oct 14 10:10:50 crc kubenswrapper[4698]: E1014 10:10:50.221969 4698 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a0e74f1775cbdae15431b7a296ceba98068a7f71919820fa521d595ab58eaa43\": container with ID starting with a0e74f1775cbdae15431b7a296ceba98068a7f71919820fa521d595ab58eaa43 not found: ID does not exist" containerID="a0e74f1775cbdae15431b7a296ceba98068a7f71919820fa521d595ab58eaa43" Oct 14 10:10:50 crc kubenswrapper[4698]: I1014 10:10:50.221997 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a0e74f1775cbdae15431b7a296ceba98068a7f71919820fa521d595ab58eaa43"} err="failed to get container status \"a0e74f1775cbdae15431b7a296ceba98068a7f71919820fa521d595ab58eaa43\": rpc error: code = NotFound desc = could not find container \"a0e74f1775cbdae15431b7a296ceba98068a7f71919820fa521d595ab58eaa43\": container with ID starting with a0e74f1775cbdae15431b7a296ceba98068a7f71919820fa521d595ab58eaa43 not found: ID does not 
exist" Oct 14 10:10:50 crc kubenswrapper[4698]: E1014 10:10:50.271896 4698 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbc658535_dcf2_49f1_8646_b4cd9eb01b17.slice\": RecentStats: unable to find data in memory cache]" Oct 14 10:10:51 crc kubenswrapper[4698]: I1014 10:10:51.025951 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc658535-dcf2-49f1-8646-b4cd9eb01b17" path="/var/lib/kubelet/pods/bc658535-dcf2-49f1-8646-b4cd9eb01b17/volumes" Oct 14 10:10:51 crc kubenswrapper[4698]: I1014 10:10:51.111628 4698 generic.go:334] "Generic (PLEG): container finished" podID="371eed8f-9f1c-4114-98c6-33c8abf3fa23" containerID="bac09ed0de48ca91c00e466e26b9be264275973e865839d907e76c2d8c1ca587" exitCode=0 Oct 14 10:10:51 crc kubenswrapper[4698]: I1014 10:10:51.111703 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-wpbs6" event={"ID":"371eed8f-9f1c-4114-98c6-33c8abf3fa23","Type":"ContainerDied","Data":"bac09ed0de48ca91c00e466e26b9be264275973e865839d907e76c2d8c1ca587"} Oct 14 10:10:52 crc kubenswrapper[4698]: I1014 10:10:52.122432 4698 generic.go:334] "Generic (PLEG): container finished" podID="371eed8f-9f1c-4114-98c6-33c8abf3fa23" containerID="f12cf85e436f0d988b15f71659d3fbfc15f8eac0118a43d66f8da41644cace6f" exitCode=0 Oct 14 10:10:52 crc kubenswrapper[4698]: I1014 10:10:52.122536 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-wpbs6" event={"ID":"371eed8f-9f1c-4114-98c6-33c8abf3fa23","Type":"ContainerDied","Data":"f12cf85e436f0d988b15f71659d3fbfc15f8eac0118a43d66f8da41644cace6f"} Oct 14 10:10:53 crc kubenswrapper[4698]: I1014 10:10:53.145629 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-wpbs6" 
event={"ID":"371eed8f-9f1c-4114-98c6-33c8abf3fa23","Type":"ContainerStarted","Data":"5753839b2adb7db3c28fb95fd76305c8e9d86863fb7269f322515571e4d1f00a"} Oct 14 10:10:53 crc kubenswrapper[4698]: I1014 10:10:53.145701 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-wpbs6" event={"ID":"371eed8f-9f1c-4114-98c6-33c8abf3fa23","Type":"ContainerStarted","Data":"b85fd05818b52f2a3641311fc6f1b7e63c7d076981c544c13ae7dcae80d35275"} Oct 14 10:10:53 crc kubenswrapper[4698]: I1014 10:10:53.145726 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-wpbs6" event={"ID":"371eed8f-9f1c-4114-98c6-33c8abf3fa23","Type":"ContainerStarted","Data":"930ef33ccf449ccc2f50f1ea56f929797b6bcb0d605fce03c89e0673e76cda9a"} Oct 14 10:10:53 crc kubenswrapper[4698]: I1014 10:10:53.145749 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-wpbs6" event={"ID":"371eed8f-9f1c-4114-98c6-33c8abf3fa23","Type":"ContainerStarted","Data":"fbc4803d3665917dc42a5e768c75f06cb49f095846dd9cbd1071d168a65926e8"} Oct 14 10:10:53 crc kubenswrapper[4698]: I1014 10:10:53.145811 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-wpbs6" event={"ID":"371eed8f-9f1c-4114-98c6-33c8abf3fa23","Type":"ContainerStarted","Data":"0c82a8fbf8d505cf1a54e766b9816877838740054f799c7f80e5cb63cf8dd8da"} Oct 14 10:10:53 crc kubenswrapper[4698]: I1014 10:10:53.446689 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/speaker-847mc" Oct 14 10:10:53 crc kubenswrapper[4698]: I1014 10:10:53.908244 4698 patch_prober.go:28] interesting pod/machine-config-daemon-lp4sk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 14 10:10:53 crc kubenswrapper[4698]: I1014 10:10:53.908345 4698 prober.go:107] "Probe failed" 
probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-lp4sk" podUID="c359a8fc-1e2f-49af-8da2-719d52bd969a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 14 10:10:53 crc kubenswrapper[4698]: I1014 10:10:53.908426 4698 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-lp4sk" Oct 14 10:10:53 crc kubenswrapper[4698]: I1014 10:10:53.909496 4698 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"026bd43a3644ff6f93d5e8e267ea83431aafa74f0511660ce40aba31e77b93d7"} pod="openshift-machine-config-operator/machine-config-daemon-lp4sk" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Oct 14 10:10:53 crc kubenswrapper[4698]: I1014 10:10:53.909601 4698 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-lp4sk" podUID="c359a8fc-1e2f-49af-8da2-719d52bd969a" containerName="machine-config-daemon" containerID="cri-o://026bd43a3644ff6f93d5e8e267ea83431aafa74f0511660ce40aba31e77b93d7" gracePeriod=600 Oct 14 10:10:54 crc kubenswrapper[4698]: I1014 10:10:54.175263 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-wpbs6" event={"ID":"371eed8f-9f1c-4114-98c6-33c8abf3fa23","Type":"ContainerStarted","Data":"de10f18af65ca10450233c4abc30d2a357852f4ca02cfa443c682298ae95499c"} Oct 14 10:10:54 crc kubenswrapper[4698]: I1014 10:10:54.176351 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-wpbs6" Oct 14 10:10:54 crc kubenswrapper[4698]: I1014 10:10:54.180014 4698 generic.go:334] "Generic (PLEG): container finished" podID="c359a8fc-1e2f-49af-8da2-719d52bd969a" 
containerID="026bd43a3644ff6f93d5e8e267ea83431aafa74f0511660ce40aba31e77b93d7" exitCode=0 Oct 14 10:10:54 crc kubenswrapper[4698]: I1014 10:10:54.180128 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-lp4sk" event={"ID":"c359a8fc-1e2f-49af-8da2-719d52bd969a","Type":"ContainerDied","Data":"026bd43a3644ff6f93d5e8e267ea83431aafa74f0511660ce40aba31e77b93d7"} Oct 14 10:10:54 crc kubenswrapper[4698]: I1014 10:10:54.180225 4698 scope.go:117] "RemoveContainer" containerID="7a202e01825f368630a72ec8a287e248e2293fb7679cffb1159219e4901ff7f5" Oct 14 10:10:54 crc kubenswrapper[4698]: I1014 10:10:54.210333 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-wpbs6" podStartSLOduration=5.216122312 podStartE2EDuration="13.210309577s" podCreationTimestamp="2025-10-14 10:10:41 +0000 UTC" firstStartedPulling="2025-10-14 10:10:41.695398766 +0000 UTC m=+823.392698182" lastFinishedPulling="2025-10-14 10:10:49.689586031 +0000 UTC m=+831.386885447" observedRunningTime="2025-10-14 10:10:54.201779205 +0000 UTC m=+835.899078631" watchObservedRunningTime="2025-10-14 10:10:54.210309577 +0000 UTC m=+835.907608993" Oct 14 10:10:55 crc kubenswrapper[4698]: I1014 10:10:55.189531 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-lp4sk" event={"ID":"c359a8fc-1e2f-49af-8da2-719d52bd969a","Type":"ContainerStarted","Data":"7096d53cbfbfab54f87b9b6c9da1611d27bf89715408c9583f5d8cbefe8b54b2"} Oct 14 10:10:56 crc kubenswrapper[4698]: I1014 10:10:56.418857 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-2ft8j"] Oct 14 10:10:56 crc kubenswrapper[4698]: E1014 10:10:56.419588 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bc658535-dcf2-49f1-8646-b4cd9eb01b17" containerName="extract-content" Oct 14 10:10:56 crc kubenswrapper[4698]: I1014 10:10:56.419608 
4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="bc658535-dcf2-49f1-8646-b4cd9eb01b17" containerName="extract-content" Oct 14 10:10:56 crc kubenswrapper[4698]: E1014 10:10:56.419637 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bc658535-dcf2-49f1-8646-b4cd9eb01b17" containerName="registry-server" Oct 14 10:10:56 crc kubenswrapper[4698]: I1014 10:10:56.419650 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="bc658535-dcf2-49f1-8646-b4cd9eb01b17" containerName="registry-server" Oct 14 10:10:56 crc kubenswrapper[4698]: E1014 10:10:56.419679 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bc658535-dcf2-49f1-8646-b4cd9eb01b17" containerName="extract-utilities" Oct 14 10:10:56 crc kubenswrapper[4698]: I1014 10:10:56.419692 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="bc658535-dcf2-49f1-8646-b4cd9eb01b17" containerName="extract-utilities" Oct 14 10:10:56 crc kubenswrapper[4698]: I1014 10:10:56.420267 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="bc658535-dcf2-49f1-8646-b4cd9eb01b17" containerName="registry-server" Oct 14 10:10:56 crc kubenswrapper[4698]: I1014 10:10:56.420911 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-2ft8j" Oct 14 10:10:56 crc kubenswrapper[4698]: I1014 10:10:56.423823 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"openshift-service-ca.crt" Oct 14 10:10:56 crc kubenswrapper[4698]: I1014 10:10:56.424185 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"kube-root-ca.crt" Oct 14 10:10:56 crc kubenswrapper[4698]: I1014 10:10:56.424367 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-index-dockercfg-jrj7t" Oct 14 10:10:56 crc kubenswrapper[4698]: I1014 10:10:56.434946 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-2ft8j"] Oct 14 10:10:56 crc kubenswrapper[4698]: I1014 10:10:56.555092 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8b6sq\" (UniqueName: \"kubernetes.io/projected/59cb9ede-d3f7-45e6-8a92-9a954ced7bea-kube-api-access-8b6sq\") pod \"openstack-operator-index-2ft8j\" (UID: \"59cb9ede-d3f7-45e6-8a92-9a954ced7bea\") " pod="openstack-operators/openstack-operator-index-2ft8j" Oct 14 10:10:56 crc kubenswrapper[4698]: I1014 10:10:56.560711 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="metallb-system/frr-k8s-wpbs6" Oct 14 10:10:56 crc kubenswrapper[4698]: I1014 10:10:56.596708 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="metallb-system/frr-k8s-wpbs6" Oct 14 10:10:56 crc kubenswrapper[4698]: I1014 10:10:56.655905 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8b6sq\" (UniqueName: \"kubernetes.io/projected/59cb9ede-d3f7-45e6-8a92-9a954ced7bea-kube-api-access-8b6sq\") pod \"openstack-operator-index-2ft8j\" (UID: \"59cb9ede-d3f7-45e6-8a92-9a954ced7bea\") " 
pod="openstack-operators/openstack-operator-index-2ft8j" Oct 14 10:10:56 crc kubenswrapper[4698]: I1014 10:10:56.678613 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8b6sq\" (UniqueName: \"kubernetes.io/projected/59cb9ede-d3f7-45e6-8a92-9a954ced7bea-kube-api-access-8b6sq\") pod \"openstack-operator-index-2ft8j\" (UID: \"59cb9ede-d3f7-45e6-8a92-9a954ced7bea\") " pod="openstack-operators/openstack-operator-index-2ft8j" Oct 14 10:10:56 crc kubenswrapper[4698]: I1014 10:10:56.750261 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-2ft8j" Oct 14 10:10:57 crc kubenswrapper[4698]: I1014 10:10:57.211822 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-2ft8j"] Oct 14 10:10:57 crc kubenswrapper[4698]: W1014 10:10:57.227207 4698 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod59cb9ede_d3f7_45e6_8a92_9a954ced7bea.slice/crio-ba78530a1105dd9201408dcfd526478364fef8330e866d1254f232635ad95190 WatchSource:0}: Error finding container ba78530a1105dd9201408dcfd526478364fef8330e866d1254f232635ad95190: Status 404 returned error can't find the container with id ba78530a1105dd9201408dcfd526478364fef8330e866d1254f232635ad95190 Oct 14 10:10:58 crc kubenswrapper[4698]: I1014 10:10:58.212726 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-2ft8j" event={"ID":"59cb9ede-d3f7-45e6-8a92-9a954ced7bea","Type":"ContainerStarted","Data":"ba78530a1105dd9201408dcfd526478364fef8330e866d1254f232635ad95190"} Oct 14 10:10:59 crc kubenswrapper[4698]: I1014 10:10:59.746171 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-2ft8j"] Oct 14 10:11:00 crc kubenswrapper[4698]: I1014 10:11:00.355477 4698 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack-operators/openstack-operator-index-2wch9"] Oct 14 10:11:00 crc kubenswrapper[4698]: I1014 10:11:00.356573 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-2wch9" Oct 14 10:11:00 crc kubenswrapper[4698]: I1014 10:11:00.375991 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-2wch9"] Oct 14 10:11:00 crc kubenswrapper[4698]: I1014 10:11:00.515415 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-msmtr\" (UniqueName: \"kubernetes.io/projected/9e25898b-e095-4f25-be09-70befbd919b5-kube-api-access-msmtr\") pod \"openstack-operator-index-2wch9\" (UID: \"9e25898b-e095-4f25-be09-70befbd919b5\") " pod="openstack-operators/openstack-operator-index-2wch9" Oct 14 10:11:00 crc kubenswrapper[4698]: I1014 10:11:00.617066 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-msmtr\" (UniqueName: \"kubernetes.io/projected/9e25898b-e095-4f25-be09-70befbd919b5-kube-api-access-msmtr\") pod \"openstack-operator-index-2wch9\" (UID: \"9e25898b-e095-4f25-be09-70befbd919b5\") " pod="openstack-operators/openstack-operator-index-2wch9" Oct 14 10:11:00 crc kubenswrapper[4698]: I1014 10:11:00.663485 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-msmtr\" (UniqueName: \"kubernetes.io/projected/9e25898b-e095-4f25-be09-70befbd919b5-kube-api-access-msmtr\") pod \"openstack-operator-index-2wch9\" (UID: \"9e25898b-e095-4f25-be09-70befbd919b5\") " pod="openstack-operators/openstack-operator-index-2wch9" Oct 14 10:11:00 crc kubenswrapper[4698]: I1014 10:11:00.689452 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-2wch9" Oct 14 10:11:01 crc kubenswrapper[4698]: I1014 10:11:01.237608 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-2ft8j" event={"ID":"59cb9ede-d3f7-45e6-8a92-9a954ced7bea","Type":"ContainerStarted","Data":"284aec73561dbc8e2c721662bb83202d48b2c7fec119921e4ecf7ca1c1ce227b"} Oct 14 10:11:01 crc kubenswrapper[4698]: I1014 10:11:01.237925 4698 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/openstack-operator-index-2ft8j" podUID="59cb9ede-d3f7-45e6-8a92-9a954ced7bea" containerName="registry-server" containerID="cri-o://284aec73561dbc8e2c721662bb83202d48b2c7fec119921e4ecf7ca1c1ce227b" gracePeriod=2 Oct 14 10:11:01 crc kubenswrapper[4698]: I1014 10:11:01.258654 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-2ft8j" podStartSLOduration=1.622440527 podStartE2EDuration="5.258628956s" podCreationTimestamp="2025-10-14 10:10:56 +0000 UTC" firstStartedPulling="2025-10-14 10:10:57.230273145 +0000 UTC m=+838.927572571" lastFinishedPulling="2025-10-14 10:11:00.866461534 +0000 UTC m=+842.563761000" observedRunningTime="2025-10-14 10:11:01.255803495 +0000 UTC m=+842.953102971" watchObservedRunningTime="2025-10-14 10:11:01.258628956 +0000 UTC m=+842.955928412" Oct 14 10:11:01 crc kubenswrapper[4698]: I1014 10:11:01.300751 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-2wch9"] Oct 14 10:11:01 crc kubenswrapper[4698]: W1014 10:11:01.355520 4698 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9e25898b_e095_4f25_be09_70befbd919b5.slice/crio-f9b3010c670f5214626f708b9b6782aec50a5089a8e398061a436fb712be3c96 WatchSource:0}: Error finding container 
f9b3010c670f5214626f708b9b6782aec50a5089a8e398061a436fb712be3c96: Status 404 returned error can't find the container with id f9b3010c670f5214626f708b9b6782aec50a5089a8e398061a436fb712be3c96 Oct 14 10:11:01 crc kubenswrapper[4698]: I1014 10:11:01.576933 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-webhook-server-64bf5d555-wqqdm" Oct 14 10:11:01 crc kubenswrapper[4698]: I1014 10:11:01.659086 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/controller-68d546b9d8-4rhbc" Oct 14 10:11:01 crc kubenswrapper[4698]: I1014 10:11:01.728646 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-2ft8j" Oct 14 10:11:01 crc kubenswrapper[4698]: I1014 10:11:01.845879 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8b6sq\" (UniqueName: \"kubernetes.io/projected/59cb9ede-d3f7-45e6-8a92-9a954ced7bea-kube-api-access-8b6sq\") pod \"59cb9ede-d3f7-45e6-8a92-9a954ced7bea\" (UID: \"59cb9ede-d3f7-45e6-8a92-9a954ced7bea\") " Oct 14 10:11:01 crc kubenswrapper[4698]: I1014 10:11:01.852384 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/59cb9ede-d3f7-45e6-8a92-9a954ced7bea-kube-api-access-8b6sq" (OuterVolumeSpecName: "kube-api-access-8b6sq") pod "59cb9ede-d3f7-45e6-8a92-9a954ced7bea" (UID: "59cb9ede-d3f7-45e6-8a92-9a954ced7bea"). InnerVolumeSpecName "kube-api-access-8b6sq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 10:11:01 crc kubenswrapper[4698]: I1014 10:11:01.948552 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8b6sq\" (UniqueName: \"kubernetes.io/projected/59cb9ede-d3f7-45e6-8a92-9a954ced7bea-kube-api-access-8b6sq\") on node \"crc\" DevicePath \"\"" Oct 14 10:11:02 crc kubenswrapper[4698]: I1014 10:11:02.250813 4698 generic.go:334] "Generic (PLEG): container finished" podID="59cb9ede-d3f7-45e6-8a92-9a954ced7bea" containerID="284aec73561dbc8e2c721662bb83202d48b2c7fec119921e4ecf7ca1c1ce227b" exitCode=0 Oct 14 10:11:02 crc kubenswrapper[4698]: I1014 10:11:02.250945 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-2ft8j" event={"ID":"59cb9ede-d3f7-45e6-8a92-9a954ced7bea","Type":"ContainerDied","Data":"284aec73561dbc8e2c721662bb83202d48b2c7fec119921e4ecf7ca1c1ce227b"} Oct 14 10:11:02 crc kubenswrapper[4698]: I1014 10:11:02.251045 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-2ft8j" event={"ID":"59cb9ede-d3f7-45e6-8a92-9a954ced7bea","Type":"ContainerDied","Data":"ba78530a1105dd9201408dcfd526478364fef8330e866d1254f232635ad95190"} Oct 14 10:11:02 crc kubenswrapper[4698]: I1014 10:11:02.251056 4698 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-2ft8j" Oct 14 10:11:02 crc kubenswrapper[4698]: I1014 10:11:02.251086 4698 scope.go:117] "RemoveContainer" containerID="284aec73561dbc8e2c721662bb83202d48b2c7fec119921e4ecf7ca1c1ce227b" Oct 14 10:11:02 crc kubenswrapper[4698]: I1014 10:11:02.253648 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-2wch9" event={"ID":"9e25898b-e095-4f25-be09-70befbd919b5","Type":"ContainerStarted","Data":"b50c9dd863588fefe789f1c713d17709c50afa2b684fa971ca85e913cfc73eb9"} Oct 14 10:11:02 crc kubenswrapper[4698]: I1014 10:11:02.254507 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-2wch9" event={"ID":"9e25898b-e095-4f25-be09-70befbd919b5","Type":"ContainerStarted","Data":"f9b3010c670f5214626f708b9b6782aec50a5089a8e398061a436fb712be3c96"} Oct 14 10:11:02 crc kubenswrapper[4698]: I1014 10:11:02.281533 4698 scope.go:117] "RemoveContainer" containerID="284aec73561dbc8e2c721662bb83202d48b2c7fec119921e4ecf7ca1c1ce227b" Oct 14 10:11:02 crc kubenswrapper[4698]: E1014 10:11:02.282452 4698 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"284aec73561dbc8e2c721662bb83202d48b2c7fec119921e4ecf7ca1c1ce227b\": container with ID starting with 284aec73561dbc8e2c721662bb83202d48b2c7fec119921e4ecf7ca1c1ce227b not found: ID does not exist" containerID="284aec73561dbc8e2c721662bb83202d48b2c7fec119921e4ecf7ca1c1ce227b" Oct 14 10:11:02 crc kubenswrapper[4698]: I1014 10:11:02.282503 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"284aec73561dbc8e2c721662bb83202d48b2c7fec119921e4ecf7ca1c1ce227b"} err="failed to get container status \"284aec73561dbc8e2c721662bb83202d48b2c7fec119921e4ecf7ca1c1ce227b\": rpc error: code = NotFound desc = could not find container 
\"284aec73561dbc8e2c721662bb83202d48b2c7fec119921e4ecf7ca1c1ce227b\": container with ID starting with 284aec73561dbc8e2c721662bb83202d48b2c7fec119921e4ecf7ca1c1ce227b not found: ID does not exist" Oct 14 10:11:02 crc kubenswrapper[4698]: I1014 10:11:02.283792 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-2wch9" podStartSLOduration=2.224294446 podStartE2EDuration="2.283743563s" podCreationTimestamp="2025-10-14 10:11:00 +0000 UTC" firstStartedPulling="2025-10-14 10:11:01.362844794 +0000 UTC m=+843.060144240" lastFinishedPulling="2025-10-14 10:11:01.422293941 +0000 UTC m=+843.119593357" observedRunningTime="2025-10-14 10:11:02.281278213 +0000 UTC m=+843.978577659" watchObservedRunningTime="2025-10-14 10:11:02.283743563 +0000 UTC m=+843.981043019" Oct 14 10:11:02 crc kubenswrapper[4698]: I1014 10:11:02.303567 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-2ft8j"] Oct 14 10:11:02 crc kubenswrapper[4698]: I1014 10:11:02.311308 4698 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/openstack-operator-index-2ft8j"] Oct 14 10:11:03 crc kubenswrapper[4698]: I1014 10:11:03.030295 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="59cb9ede-d3f7-45e6-8a92-9a954ced7bea" path="/var/lib/kubelet/pods/59cb9ede-d3f7-45e6-8a92-9a954ced7bea/volumes" Oct 14 10:11:10 crc kubenswrapper[4698]: I1014 10:11:10.690509 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/openstack-operator-index-2wch9" Oct 14 10:11:10 crc kubenswrapper[4698]: I1014 10:11:10.691005 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-index-2wch9" Oct 14 10:11:10 crc kubenswrapper[4698]: I1014 10:11:10.729522 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openstack-operators/openstack-operator-index-2wch9" Oct 14 10:11:11 crc kubenswrapper[4698]: I1014 10:11:11.365221 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-index-2wch9" Oct 14 10:11:11 crc kubenswrapper[4698]: I1014 10:11:11.563717 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-wpbs6" Oct 14 10:11:13 crc kubenswrapper[4698]: I1014 10:11:13.002512 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/29dc144f0e8a966729881e968c4b77154efe1ade540adc10ce92055bf2z82rv"] Oct 14 10:11:13 crc kubenswrapper[4698]: E1014 10:11:13.002882 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="59cb9ede-d3f7-45e6-8a92-9a954ced7bea" containerName="registry-server" Oct 14 10:11:13 crc kubenswrapper[4698]: I1014 10:11:13.002902 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="59cb9ede-d3f7-45e6-8a92-9a954ced7bea" containerName="registry-server" Oct 14 10:11:13 crc kubenswrapper[4698]: I1014 10:11:13.003103 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="59cb9ede-d3f7-45e6-8a92-9a954ced7bea" containerName="registry-server" Oct 14 10:11:13 crc kubenswrapper[4698]: I1014 10:11:13.004480 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/29dc144f0e8a966729881e968c4b77154efe1ade540adc10ce92055bf2z82rv" Oct 14 10:11:13 crc kubenswrapper[4698]: I1014 10:11:13.010250 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"default-dockercfg-vdntm" Oct 14 10:11:13 crc kubenswrapper[4698]: I1014 10:11:13.015432 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/29dc144f0e8a966729881e968c4b77154efe1ade540adc10ce92055bf2z82rv"] Oct 14 10:11:13 crc kubenswrapper[4698]: I1014 10:11:13.131071 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/fc2cb835-38cd-43b4-bf06-15d3ccc7ed5a-util\") pod \"29dc144f0e8a966729881e968c4b77154efe1ade540adc10ce92055bf2z82rv\" (UID: \"fc2cb835-38cd-43b4-bf06-15d3ccc7ed5a\") " pod="openstack-operators/29dc144f0e8a966729881e968c4b77154efe1ade540adc10ce92055bf2z82rv" Oct 14 10:11:13 crc kubenswrapper[4698]: I1014 10:11:13.131402 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dzqch\" (UniqueName: \"kubernetes.io/projected/fc2cb835-38cd-43b4-bf06-15d3ccc7ed5a-kube-api-access-dzqch\") pod \"29dc144f0e8a966729881e968c4b77154efe1ade540adc10ce92055bf2z82rv\" (UID: \"fc2cb835-38cd-43b4-bf06-15d3ccc7ed5a\") " pod="openstack-operators/29dc144f0e8a966729881e968c4b77154efe1ade540adc10ce92055bf2z82rv" Oct 14 10:11:13 crc kubenswrapper[4698]: I1014 10:11:13.131507 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/fc2cb835-38cd-43b4-bf06-15d3ccc7ed5a-bundle\") pod \"29dc144f0e8a966729881e968c4b77154efe1ade540adc10ce92055bf2z82rv\" (UID: \"fc2cb835-38cd-43b4-bf06-15d3ccc7ed5a\") " pod="openstack-operators/29dc144f0e8a966729881e968c4b77154efe1ade540adc10ce92055bf2z82rv" Oct 14 10:11:13 crc kubenswrapper[4698]: I1014 
10:11:13.232922 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/fc2cb835-38cd-43b4-bf06-15d3ccc7ed5a-util\") pod \"29dc144f0e8a966729881e968c4b77154efe1ade540adc10ce92055bf2z82rv\" (UID: \"fc2cb835-38cd-43b4-bf06-15d3ccc7ed5a\") " pod="openstack-operators/29dc144f0e8a966729881e968c4b77154efe1ade540adc10ce92055bf2z82rv" Oct 14 10:11:13 crc kubenswrapper[4698]: I1014 10:11:13.232977 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dzqch\" (UniqueName: \"kubernetes.io/projected/fc2cb835-38cd-43b4-bf06-15d3ccc7ed5a-kube-api-access-dzqch\") pod \"29dc144f0e8a966729881e968c4b77154efe1ade540adc10ce92055bf2z82rv\" (UID: \"fc2cb835-38cd-43b4-bf06-15d3ccc7ed5a\") " pod="openstack-operators/29dc144f0e8a966729881e968c4b77154efe1ade540adc10ce92055bf2z82rv" Oct 14 10:11:13 crc kubenswrapper[4698]: I1014 10:11:13.233078 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/fc2cb835-38cd-43b4-bf06-15d3ccc7ed5a-bundle\") pod \"29dc144f0e8a966729881e968c4b77154efe1ade540adc10ce92055bf2z82rv\" (UID: \"fc2cb835-38cd-43b4-bf06-15d3ccc7ed5a\") " pod="openstack-operators/29dc144f0e8a966729881e968c4b77154efe1ade540adc10ce92055bf2z82rv" Oct 14 10:11:13 crc kubenswrapper[4698]: I1014 10:11:13.233723 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/fc2cb835-38cd-43b4-bf06-15d3ccc7ed5a-bundle\") pod \"29dc144f0e8a966729881e968c4b77154efe1ade540adc10ce92055bf2z82rv\" (UID: \"fc2cb835-38cd-43b4-bf06-15d3ccc7ed5a\") " pod="openstack-operators/29dc144f0e8a966729881e968c4b77154efe1ade540adc10ce92055bf2z82rv" Oct 14 10:11:13 crc kubenswrapper[4698]: I1014 10:11:13.233949 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: 
\"kubernetes.io/empty-dir/fc2cb835-38cd-43b4-bf06-15d3ccc7ed5a-util\") pod \"29dc144f0e8a966729881e968c4b77154efe1ade540adc10ce92055bf2z82rv\" (UID: \"fc2cb835-38cd-43b4-bf06-15d3ccc7ed5a\") " pod="openstack-operators/29dc144f0e8a966729881e968c4b77154efe1ade540adc10ce92055bf2z82rv" Oct 14 10:11:13 crc kubenswrapper[4698]: I1014 10:11:13.263339 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dzqch\" (UniqueName: \"kubernetes.io/projected/fc2cb835-38cd-43b4-bf06-15d3ccc7ed5a-kube-api-access-dzqch\") pod \"29dc144f0e8a966729881e968c4b77154efe1ade540adc10ce92055bf2z82rv\" (UID: \"fc2cb835-38cd-43b4-bf06-15d3ccc7ed5a\") " pod="openstack-operators/29dc144f0e8a966729881e968c4b77154efe1ade540adc10ce92055bf2z82rv" Oct 14 10:11:13 crc kubenswrapper[4698]: I1014 10:11:13.394601 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/29dc144f0e8a966729881e968c4b77154efe1ade540adc10ce92055bf2z82rv" Oct 14 10:11:13 crc kubenswrapper[4698]: I1014 10:11:13.929373 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/29dc144f0e8a966729881e968c4b77154efe1ade540adc10ce92055bf2z82rv"] Oct 14 10:11:13 crc kubenswrapper[4698]: W1014 10:11:13.938665 4698 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfc2cb835_38cd_43b4_bf06_15d3ccc7ed5a.slice/crio-ca9ce01cf81091e0926bf28591cea14b005799279634d6c5f5b4c651bb8f4476 WatchSource:0}: Error finding container ca9ce01cf81091e0926bf28591cea14b005799279634d6c5f5b4c651bb8f4476: Status 404 returned error can't find the container with id ca9ce01cf81091e0926bf28591cea14b005799279634d6c5f5b4c651bb8f4476 Oct 14 10:11:14 crc kubenswrapper[4698]: I1014 10:11:14.353462 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/29dc144f0e8a966729881e968c4b77154efe1ade540adc10ce92055bf2z82rv" 
event={"ID":"fc2cb835-38cd-43b4-bf06-15d3ccc7ed5a","Type":"ContainerDied","Data":"eaa68dec37dabf628aa667021ef5746e8d60d2c0b263c75210725fa1c31326f2"} Oct 14 10:11:14 crc kubenswrapper[4698]: I1014 10:11:14.354069 4698 generic.go:334] "Generic (PLEG): container finished" podID="fc2cb835-38cd-43b4-bf06-15d3ccc7ed5a" containerID="eaa68dec37dabf628aa667021ef5746e8d60d2c0b263c75210725fa1c31326f2" exitCode=0 Oct 14 10:11:14 crc kubenswrapper[4698]: I1014 10:11:14.354116 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/29dc144f0e8a966729881e968c4b77154efe1ade540adc10ce92055bf2z82rv" event={"ID":"fc2cb835-38cd-43b4-bf06-15d3ccc7ed5a","Type":"ContainerStarted","Data":"ca9ce01cf81091e0926bf28591cea14b005799279634d6c5f5b4c651bb8f4476"} Oct 14 10:11:15 crc kubenswrapper[4698]: I1014 10:11:15.365292 4698 generic.go:334] "Generic (PLEG): container finished" podID="fc2cb835-38cd-43b4-bf06-15d3ccc7ed5a" containerID="a12d6bf31670fed7e2340440e6581d24d241edb3145f27cdc65383f83d9e43b6" exitCode=0 Oct 14 10:11:15 crc kubenswrapper[4698]: I1014 10:11:15.365354 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/29dc144f0e8a966729881e968c4b77154efe1ade540adc10ce92055bf2z82rv" event={"ID":"fc2cb835-38cd-43b4-bf06-15d3ccc7ed5a","Type":"ContainerDied","Data":"a12d6bf31670fed7e2340440e6581d24d241edb3145f27cdc65383f83d9e43b6"} Oct 14 10:11:16 crc kubenswrapper[4698]: I1014 10:11:16.378788 4698 generic.go:334] "Generic (PLEG): container finished" podID="fc2cb835-38cd-43b4-bf06-15d3ccc7ed5a" containerID="7100dc101becd08cbf29a91c8c0154ecea0357cca66f6adbb0be5f994b1520ce" exitCode=0 Oct 14 10:11:16 crc kubenswrapper[4698]: I1014 10:11:16.378851 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/29dc144f0e8a966729881e968c4b77154efe1ade540adc10ce92055bf2z82rv" event={"ID":"fc2cb835-38cd-43b4-bf06-15d3ccc7ed5a","Type":"ContainerDied","Data":"7100dc101becd08cbf29a91c8c0154ecea0357cca66f6adbb0be5f994b1520ce"} Oct 14 
10:11:17 crc kubenswrapper[4698]: I1014 10:11:17.832332 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/29dc144f0e8a966729881e968c4b77154efe1ade540adc10ce92055bf2z82rv" Oct 14 10:11:17 crc kubenswrapper[4698]: I1014 10:11:17.953964 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dzqch\" (UniqueName: \"kubernetes.io/projected/fc2cb835-38cd-43b4-bf06-15d3ccc7ed5a-kube-api-access-dzqch\") pod \"fc2cb835-38cd-43b4-bf06-15d3ccc7ed5a\" (UID: \"fc2cb835-38cd-43b4-bf06-15d3ccc7ed5a\") " Oct 14 10:11:17 crc kubenswrapper[4698]: I1014 10:11:17.954033 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/fc2cb835-38cd-43b4-bf06-15d3ccc7ed5a-bundle\") pod \"fc2cb835-38cd-43b4-bf06-15d3ccc7ed5a\" (UID: \"fc2cb835-38cd-43b4-bf06-15d3ccc7ed5a\") " Oct 14 10:11:17 crc kubenswrapper[4698]: I1014 10:11:17.954078 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/fc2cb835-38cd-43b4-bf06-15d3ccc7ed5a-util\") pod \"fc2cb835-38cd-43b4-bf06-15d3ccc7ed5a\" (UID: \"fc2cb835-38cd-43b4-bf06-15d3ccc7ed5a\") " Oct 14 10:11:17 crc kubenswrapper[4698]: I1014 10:11:17.955441 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fc2cb835-38cd-43b4-bf06-15d3ccc7ed5a-bundle" (OuterVolumeSpecName: "bundle") pod "fc2cb835-38cd-43b4-bf06-15d3ccc7ed5a" (UID: "fc2cb835-38cd-43b4-bf06-15d3ccc7ed5a"). InnerVolumeSpecName "bundle". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 10:11:17 crc kubenswrapper[4698]: I1014 10:11:17.962352 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fc2cb835-38cd-43b4-bf06-15d3ccc7ed5a-kube-api-access-dzqch" (OuterVolumeSpecName: "kube-api-access-dzqch") pod "fc2cb835-38cd-43b4-bf06-15d3ccc7ed5a" (UID: "fc2cb835-38cd-43b4-bf06-15d3ccc7ed5a"). InnerVolumeSpecName "kube-api-access-dzqch". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 10:11:17 crc kubenswrapper[4698]: I1014 10:11:17.968545 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fc2cb835-38cd-43b4-bf06-15d3ccc7ed5a-util" (OuterVolumeSpecName: "util") pod "fc2cb835-38cd-43b4-bf06-15d3ccc7ed5a" (UID: "fc2cb835-38cd-43b4-bf06-15d3ccc7ed5a"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 10:11:18 crc kubenswrapper[4698]: I1014 10:11:18.055877 4698 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/fc2cb835-38cd-43b4-bf06-15d3ccc7ed5a-util\") on node \"crc\" DevicePath \"\"" Oct 14 10:11:18 crc kubenswrapper[4698]: I1014 10:11:18.055934 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dzqch\" (UniqueName: \"kubernetes.io/projected/fc2cb835-38cd-43b4-bf06-15d3ccc7ed5a-kube-api-access-dzqch\") on node \"crc\" DevicePath \"\"" Oct 14 10:11:18 crc kubenswrapper[4698]: I1014 10:11:18.055947 4698 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/fc2cb835-38cd-43b4-bf06-15d3ccc7ed5a-bundle\") on node \"crc\" DevicePath \"\"" Oct 14 10:11:18 crc kubenswrapper[4698]: I1014 10:11:18.397253 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/29dc144f0e8a966729881e968c4b77154efe1ade540adc10ce92055bf2z82rv" 
event={"ID":"fc2cb835-38cd-43b4-bf06-15d3ccc7ed5a","Type":"ContainerDied","Data":"ca9ce01cf81091e0926bf28591cea14b005799279634d6c5f5b4c651bb8f4476"} Oct 14 10:11:18 crc kubenswrapper[4698]: I1014 10:11:18.397294 4698 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ca9ce01cf81091e0926bf28591cea14b005799279634d6c5f5b4c651bb8f4476" Oct 14 10:11:18 crc kubenswrapper[4698]: I1014 10:11:18.397351 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/29dc144f0e8a966729881e968c4b77154efe1ade540adc10ce92055bf2z82rv" Oct 14 10:11:21 crc kubenswrapper[4698]: I1014 10:11:21.666141 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-operator-7fc68b75ff-gn564"] Oct 14 10:11:21 crc kubenswrapper[4698]: E1014 10:11:21.666909 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fc2cb835-38cd-43b4-bf06-15d3ccc7ed5a" containerName="pull" Oct 14 10:11:21 crc kubenswrapper[4698]: I1014 10:11:21.666922 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="fc2cb835-38cd-43b4-bf06-15d3ccc7ed5a" containerName="pull" Oct 14 10:11:21 crc kubenswrapper[4698]: E1014 10:11:21.666947 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fc2cb835-38cd-43b4-bf06-15d3ccc7ed5a" containerName="util" Oct 14 10:11:21 crc kubenswrapper[4698]: I1014 10:11:21.666953 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="fc2cb835-38cd-43b4-bf06-15d3ccc7ed5a" containerName="util" Oct 14 10:11:21 crc kubenswrapper[4698]: E1014 10:11:21.666961 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fc2cb835-38cd-43b4-bf06-15d3ccc7ed5a" containerName="extract" Oct 14 10:11:21 crc kubenswrapper[4698]: I1014 10:11:21.666967 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="fc2cb835-38cd-43b4-bf06-15d3ccc7ed5a" containerName="extract" Oct 14 10:11:21 crc kubenswrapper[4698]: I1014 10:11:21.667072 4698 
memory_manager.go:354] "RemoveStaleState removing state" podUID="fc2cb835-38cd-43b4-bf06-15d3ccc7ed5a" containerName="extract" Oct 14 10:11:21 crc kubenswrapper[4698]: I1014 10:11:21.667688 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-operator-7fc68b75ff-gn564" Oct 14 10:11:21 crc kubenswrapper[4698]: I1014 10:11:21.670706 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-operator-dockercfg-xlhfr" Oct 14 10:11:21 crc kubenswrapper[4698]: I1014 10:11:21.758444 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-operator-7fc68b75ff-gn564"] Oct 14 10:11:21 crc kubenswrapper[4698]: I1014 10:11:21.818390 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9j5jp\" (UniqueName: \"kubernetes.io/projected/8e9ffe4b-a420-4553-b9d1-90fbc4ed2fb1-kube-api-access-9j5jp\") pod \"openstack-operator-controller-operator-7fc68b75ff-gn564\" (UID: \"8e9ffe4b-a420-4553-b9d1-90fbc4ed2fb1\") " pod="openstack-operators/openstack-operator-controller-operator-7fc68b75ff-gn564" Oct 14 10:11:21 crc kubenswrapper[4698]: I1014 10:11:21.919860 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9j5jp\" (UniqueName: \"kubernetes.io/projected/8e9ffe4b-a420-4553-b9d1-90fbc4ed2fb1-kube-api-access-9j5jp\") pod \"openstack-operator-controller-operator-7fc68b75ff-gn564\" (UID: \"8e9ffe4b-a420-4553-b9d1-90fbc4ed2fb1\") " pod="openstack-operators/openstack-operator-controller-operator-7fc68b75ff-gn564" Oct 14 10:11:21 crc kubenswrapper[4698]: I1014 10:11:21.938161 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9j5jp\" (UniqueName: \"kubernetes.io/projected/8e9ffe4b-a420-4553-b9d1-90fbc4ed2fb1-kube-api-access-9j5jp\") pod 
\"openstack-operator-controller-operator-7fc68b75ff-gn564\" (UID: \"8e9ffe4b-a420-4553-b9d1-90fbc4ed2fb1\") " pod="openstack-operators/openstack-operator-controller-operator-7fc68b75ff-gn564" Oct 14 10:11:21 crc kubenswrapper[4698]: I1014 10:11:21.986185 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-operator-7fc68b75ff-gn564" Oct 14 10:11:22 crc kubenswrapper[4698]: I1014 10:11:22.430893 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-operator-7fc68b75ff-gn564"] Oct 14 10:11:22 crc kubenswrapper[4698]: W1014 10:11:22.437187 4698 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8e9ffe4b_a420_4553_b9d1_90fbc4ed2fb1.slice/crio-f51902c678fc8c3bf58a1d7d2825513a260659121dad0fa1b18b83b26f8ddc63 WatchSource:0}: Error finding container f51902c678fc8c3bf58a1d7d2825513a260659121dad0fa1b18b83b26f8ddc63: Status 404 returned error can't find the container with id f51902c678fc8c3bf58a1d7d2825513a260659121dad0fa1b18b83b26f8ddc63 Oct 14 10:11:23 crc kubenswrapper[4698]: I1014 10:11:23.428963 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-operator-7fc68b75ff-gn564" event={"ID":"8e9ffe4b-a420-4553-b9d1-90fbc4ed2fb1","Type":"ContainerStarted","Data":"f51902c678fc8c3bf58a1d7d2825513a260659121dad0fa1b18b83b26f8ddc63"} Oct 14 10:11:26 crc kubenswrapper[4698]: I1014 10:11:26.522135 4698 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Oct 14 10:11:27 crc kubenswrapper[4698]: I1014 10:11:27.482916 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-operator-7fc68b75ff-gn564" 
event={"ID":"8e9ffe4b-a420-4553-b9d1-90fbc4ed2fb1","Type":"ContainerStarted","Data":"62417f51fa03a19e8c32ea59bc497f3d2517f54d4ab351575766d8e3e650ce1a"} Oct 14 10:11:29 crc kubenswrapper[4698]: I1014 10:11:29.496982 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-operator-7fc68b75ff-gn564" event={"ID":"8e9ffe4b-a420-4553-b9d1-90fbc4ed2fb1","Type":"ContainerStarted","Data":"326f3330e78de34fe98a04ea783c74cb80ac169b1bd939e0f77c68ebd9ca4aea"} Oct 14 10:11:29 crc kubenswrapper[4698]: I1014 10:11:29.497335 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-operator-7fc68b75ff-gn564" Oct 14 10:11:31 crc kubenswrapper[4698]: I1014 10:11:31.989877 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-operator-7fc68b75ff-gn564" Oct 14 10:11:32 crc kubenswrapper[4698]: I1014 10:11:32.043631 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-operator-7fc68b75ff-gn564" podStartSLOduration=4.445673539 podStartE2EDuration="11.043605053s" podCreationTimestamp="2025-10-14 10:11:21 +0000 UTC" firstStartedPulling="2025-10-14 10:11:22.439350937 +0000 UTC m=+864.136650353" lastFinishedPulling="2025-10-14 10:11:29.037282441 +0000 UTC m=+870.734581867" observedRunningTime="2025-10-14 10:11:29.538110977 +0000 UTC m=+871.235410403" watchObservedRunningTime="2025-10-14 10:11:32.043605053 +0000 UTC m=+873.740904509" Oct 14 10:11:49 crc kubenswrapper[4698]: I1014 10:11:49.124317 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/barbican-operator-controller-manager-64f84fcdbb-5wlw6"] Oct 14 10:11:49 crc kubenswrapper[4698]: I1014 10:11:49.126419 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-64f84fcdbb-5wlw6" Oct 14 10:11:49 crc kubenswrapper[4698]: I1014 10:11:49.129558 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/cinder-operator-controller-manager-59cdc64769-nh6c5"] Oct 14 10:11:49 crc kubenswrapper[4698]: I1014 10:11:49.130917 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-59cdc64769-nh6c5" Oct 14 10:11:49 crc kubenswrapper[4698]: I1014 10:11:49.132885 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"barbican-operator-controller-manager-dockercfg-s8dwq" Oct 14 10:11:49 crc kubenswrapper[4698]: I1014 10:11:49.138022 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-59cdc64769-nh6c5"] Oct 14 10:11:49 crc kubenswrapper[4698]: I1014 10:11:49.143127 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-64f84fcdbb-5wlw6"] Oct 14 10:11:49 crc kubenswrapper[4698]: I1014 10:11:49.149604 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"cinder-operator-controller-manager-dockercfg-2dgh4" Oct 14 10:11:49 crc kubenswrapper[4698]: I1014 10:11:49.153960 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/designate-operator-controller-manager-687df44cdb-nh4nc"] Oct 14 10:11:49 crc kubenswrapper[4698]: I1014 10:11:49.155195 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-687df44cdb-nh4nc" Oct 14 10:11:49 crc kubenswrapper[4698]: I1014 10:11:49.160475 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"designate-operator-controller-manager-dockercfg-s4mm8" Oct 14 10:11:49 crc kubenswrapper[4698]: I1014 10:11:49.164849 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-687df44cdb-nh4nc"] Oct 14 10:11:49 crc kubenswrapper[4698]: I1014 10:11:49.186988 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/glance-operator-controller-manager-7bb46cd7d-f6jr7"] Oct 14 10:11:49 crc kubenswrapper[4698]: I1014 10:11:49.188405 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-7bb46cd7d-f6jr7" Oct 14 10:11:49 crc kubenswrapper[4698]: I1014 10:11:49.192701 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"glance-operator-controller-manager-dockercfg-vklrf" Oct 14 10:11:49 crc kubenswrapper[4698]: I1014 10:11:49.219858 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-7bb46cd7d-f6jr7"] Oct 14 10:11:49 crc kubenswrapper[4698]: I1014 10:11:49.224860 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dcc4c\" (UniqueName: \"kubernetes.io/projected/6d1a4e09-e83d-4634-ae32-b37666d65f61-kube-api-access-dcc4c\") pod \"designate-operator-controller-manager-687df44cdb-nh4nc\" (UID: \"6d1a4e09-e83d-4634-ae32-b37666d65f61\") " pod="openstack-operators/designate-operator-controller-manager-687df44cdb-nh4nc" Oct 14 10:11:49 crc kubenswrapper[4698]: I1014 10:11:49.224907 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jcczh\" 
(UniqueName: \"kubernetes.io/projected/24d6e9c5-aad5-4856-a7b7-20e04553c864-kube-api-access-jcczh\") pod \"barbican-operator-controller-manager-64f84fcdbb-5wlw6\" (UID: \"24d6e9c5-aad5-4856-a7b7-20e04553c864\") " pod="openstack-operators/barbican-operator-controller-manager-64f84fcdbb-5wlw6" Oct 14 10:11:49 crc kubenswrapper[4698]: I1014 10:11:49.224932 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dcbfw\" (UniqueName: \"kubernetes.io/projected/b310d6c3-527e-4a58-bc98-edcd7731b9e3-kube-api-access-dcbfw\") pod \"glance-operator-controller-manager-7bb46cd7d-f6jr7\" (UID: \"b310d6c3-527e-4a58-bc98-edcd7731b9e3\") " pod="openstack-operators/glance-operator-controller-manager-7bb46cd7d-f6jr7" Oct 14 10:11:49 crc kubenswrapper[4698]: I1014 10:11:49.225010 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zrm59\" (UniqueName: \"kubernetes.io/projected/f91fec87-379e-4c52-9d03-b56841232184-kube-api-access-zrm59\") pod \"cinder-operator-controller-manager-59cdc64769-nh6c5\" (UID: \"f91fec87-379e-4c52-9d03-b56841232184\") " pod="openstack-operators/cinder-operator-controller-manager-59cdc64769-nh6c5" Oct 14 10:11:49 crc kubenswrapper[4698]: I1014 10:11:49.225847 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/heat-operator-controller-manager-6d9967f8dd-52h2t"] Oct 14 10:11:49 crc kubenswrapper[4698]: I1014 10:11:49.226988 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-6d9967f8dd-52h2t" Oct 14 10:11:49 crc kubenswrapper[4698]: I1014 10:11:49.234188 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"heat-operator-controller-manager-dockercfg-rwmx2" Oct 14 10:11:49 crc kubenswrapper[4698]: I1014 10:11:49.236590 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/horizon-operator-controller-manager-6d74794d9b-nq8vb"] Oct 14 10:11:49 crc kubenswrapper[4698]: I1014 10:11:49.238112 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-6d74794d9b-nq8vb" Oct 14 10:11:49 crc kubenswrapper[4698]: I1014 10:11:49.243101 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-6d74794d9b-nq8vb"] Oct 14 10:11:49 crc kubenswrapper[4698]: I1014 10:11:49.249754 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"horizon-operator-controller-manager-dockercfg-p9spm" Oct 14 10:11:49 crc kubenswrapper[4698]: I1014 10:11:49.253898 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/infra-operator-controller-manager-585fc5b659-2jvxv"] Oct 14 10:11:49 crc kubenswrapper[4698]: I1014 10:11:49.255710 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-585fc5b659-2jvxv" Oct 14 10:11:49 crc kubenswrapper[4698]: I1014 10:11:49.259045 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-webhook-server-cert" Oct 14 10:11:49 crc kubenswrapper[4698]: I1014 10:11:49.263945 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-s9bfk" Oct 14 10:11:49 crc kubenswrapper[4698]: I1014 10:11:49.271119 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-6d9967f8dd-52h2t"] Oct 14 10:11:49 crc kubenswrapper[4698]: I1014 10:11:49.280850 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-585fc5b659-2jvxv"] Oct 14 10:11:49 crc kubenswrapper[4698]: I1014 10:11:49.286623 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ironic-operator-controller-manager-74cb5cbc49-d9qkb"] Oct 14 10:11:49 crc kubenswrapper[4698]: I1014 10:11:49.288097 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-74cb5cbc49-d9qkb" Oct 14 10:11:49 crc kubenswrapper[4698]: I1014 10:11:49.291108 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ironic-operator-controller-manager-dockercfg-crj9s" Oct 14 10:11:49 crc kubenswrapper[4698]: I1014 10:11:49.297353 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/keystone-operator-controller-manager-ddb98f99b-ks5tw"] Oct 14 10:11:49 crc kubenswrapper[4698]: I1014 10:11:49.298877 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-ddb98f99b-ks5tw" Oct 14 10:11:49 crc kubenswrapper[4698]: I1014 10:11:49.304529 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"keystone-operator-controller-manager-dockercfg-7995x" Oct 14 10:11:49 crc kubenswrapper[4698]: I1014 10:11:49.305697 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-74cb5cbc49-d9qkb"] Oct 14 10:11:49 crc kubenswrapper[4698]: I1014 10:11:49.319060 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-ddb98f99b-ks5tw"] Oct 14 10:11:49 crc kubenswrapper[4698]: I1014 10:11:49.326573 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7xt57\" (UniqueName: \"kubernetes.io/projected/2547a997-b2ba-4300-92ed-09ccc57499c7-kube-api-access-7xt57\") pod \"ironic-operator-controller-manager-74cb5cbc49-d9qkb\" (UID: \"2547a997-b2ba-4300-92ed-09ccc57499c7\") " pod="openstack-operators/ironic-operator-controller-manager-74cb5cbc49-d9qkb" Oct 14 10:11:49 crc kubenswrapper[4698]: I1014 10:11:49.326668 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zrm59\" (UniqueName: \"kubernetes.io/projected/f91fec87-379e-4c52-9d03-b56841232184-kube-api-access-zrm59\") pod \"cinder-operator-controller-manager-59cdc64769-nh6c5\" (UID: \"f91fec87-379e-4c52-9d03-b56841232184\") " pod="openstack-operators/cinder-operator-controller-manager-59cdc64769-nh6c5" Oct 14 10:11:49 crc kubenswrapper[4698]: I1014 10:11:49.326709 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t5qnc\" (UniqueName: \"kubernetes.io/projected/3482400d-0e9f-4dc5-883f-36313dc33944-kube-api-access-t5qnc\") pod \"horizon-operator-controller-manager-6d74794d9b-nq8vb\" (UID: 
\"3482400d-0e9f-4dc5-883f-36313dc33944\") " pod="openstack-operators/horizon-operator-controller-manager-6d74794d9b-nq8vb" Oct 14 10:11:49 crc kubenswrapper[4698]: I1014 10:11:49.326741 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8flpc\" (UniqueName: \"kubernetes.io/projected/004d5489-901d-4fd3-9fc3-ae0016255950-kube-api-access-8flpc\") pod \"heat-operator-controller-manager-6d9967f8dd-52h2t\" (UID: \"004d5489-901d-4fd3-9fc3-ae0016255950\") " pod="openstack-operators/heat-operator-controller-manager-6d9967f8dd-52h2t" Oct 14 10:11:49 crc kubenswrapper[4698]: I1014 10:11:49.326781 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fmpcs\" (UniqueName: \"kubernetes.io/projected/4ea0ebfe-fbe9-428c-baf6-565e4dbb9044-kube-api-access-fmpcs\") pod \"infra-operator-controller-manager-585fc5b659-2jvxv\" (UID: \"4ea0ebfe-fbe9-428c-baf6-565e4dbb9044\") " pod="openstack-operators/infra-operator-controller-manager-585fc5b659-2jvxv" Oct 14 10:11:49 crc kubenswrapper[4698]: I1014 10:11:49.326821 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/4ea0ebfe-fbe9-428c-baf6-565e4dbb9044-cert\") pod \"infra-operator-controller-manager-585fc5b659-2jvxv\" (UID: \"4ea0ebfe-fbe9-428c-baf6-565e4dbb9044\") " pod="openstack-operators/infra-operator-controller-manager-585fc5b659-2jvxv" Oct 14 10:11:49 crc kubenswrapper[4698]: I1014 10:11:49.326867 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dcc4c\" (UniqueName: \"kubernetes.io/projected/6d1a4e09-e83d-4634-ae32-b37666d65f61-kube-api-access-dcc4c\") pod \"designate-operator-controller-manager-687df44cdb-nh4nc\" (UID: \"6d1a4e09-e83d-4634-ae32-b37666d65f61\") " pod="openstack-operators/designate-operator-controller-manager-687df44cdb-nh4nc" Oct 14 10:11:49 crc 
kubenswrapper[4698]: I1014 10:11:49.326890 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jcczh\" (UniqueName: \"kubernetes.io/projected/24d6e9c5-aad5-4856-a7b7-20e04553c864-kube-api-access-jcczh\") pod \"barbican-operator-controller-manager-64f84fcdbb-5wlw6\" (UID: \"24d6e9c5-aad5-4856-a7b7-20e04553c864\") " pod="openstack-operators/barbican-operator-controller-manager-64f84fcdbb-5wlw6" Oct 14 10:11:49 crc kubenswrapper[4698]: I1014 10:11:49.326916 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dcbfw\" (UniqueName: \"kubernetes.io/projected/b310d6c3-527e-4a58-bc98-edcd7731b9e3-kube-api-access-dcbfw\") pod \"glance-operator-controller-manager-7bb46cd7d-f6jr7\" (UID: \"b310d6c3-527e-4a58-bc98-edcd7731b9e3\") " pod="openstack-operators/glance-operator-controller-manager-7bb46cd7d-f6jr7" Oct 14 10:11:49 crc kubenswrapper[4698]: I1014 10:11:49.337377 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/manila-operator-controller-manager-59578bc799-nb7fk"] Oct 14 10:11:49 crc kubenswrapper[4698]: I1014 10:11:49.338814 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-59578bc799-nb7fk" Oct 14 10:11:49 crc kubenswrapper[4698]: I1014 10:11:49.343801 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"manila-operator-controller-manager-dockercfg-lbr57" Oct 14 10:11:49 crc kubenswrapper[4698]: I1014 10:11:49.368599 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-5777b4f897-nds58"] Oct 14 10:11:49 crc kubenswrapper[4698]: I1014 10:11:49.369715 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-5777b4f897-nds58" Oct 14 10:11:49 crc kubenswrapper[4698]: I1014 10:11:49.370863 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-59578bc799-nb7fk"] Oct 14 10:11:49 crc kubenswrapper[4698]: I1014 10:11:49.372508 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dcc4c\" (UniqueName: \"kubernetes.io/projected/6d1a4e09-e83d-4634-ae32-b37666d65f61-kube-api-access-dcc4c\") pod \"designate-operator-controller-manager-687df44cdb-nh4nc\" (UID: \"6d1a4e09-e83d-4634-ae32-b37666d65f61\") " pod="openstack-operators/designate-operator-controller-manager-687df44cdb-nh4nc" Oct 14 10:11:49 crc kubenswrapper[4698]: I1014 10:11:49.376077 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"mariadb-operator-controller-manager-dockercfg-dzbnh" Oct 14 10:11:49 crc kubenswrapper[4698]: I1014 10:11:49.376333 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zrm59\" (UniqueName: \"kubernetes.io/projected/f91fec87-379e-4c52-9d03-b56841232184-kube-api-access-zrm59\") pod \"cinder-operator-controller-manager-59cdc64769-nh6c5\" (UID: \"f91fec87-379e-4c52-9d03-b56841232184\") " pod="openstack-operators/cinder-operator-controller-manager-59cdc64769-nh6c5" Oct 14 10:11:49 crc kubenswrapper[4698]: I1014 10:11:49.378276 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jcczh\" (UniqueName: \"kubernetes.io/projected/24d6e9c5-aad5-4856-a7b7-20e04553c864-kube-api-access-jcczh\") pod \"barbican-operator-controller-manager-64f84fcdbb-5wlw6\" (UID: \"24d6e9c5-aad5-4856-a7b7-20e04553c864\") " pod="openstack-operators/barbican-operator-controller-manager-64f84fcdbb-5wlw6" Oct 14 10:11:49 crc kubenswrapper[4698]: I1014 10:11:49.383913 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack-operators/mariadb-operator-controller-manager-5777b4f897-nds58"] Oct 14 10:11:49 crc kubenswrapper[4698]: I1014 10:11:49.384695 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dcbfw\" (UniqueName: \"kubernetes.io/projected/b310d6c3-527e-4a58-bc98-edcd7731b9e3-kube-api-access-dcbfw\") pod \"glance-operator-controller-manager-7bb46cd7d-f6jr7\" (UID: \"b310d6c3-527e-4a58-bc98-edcd7731b9e3\") " pod="openstack-operators/glance-operator-controller-manager-7bb46cd7d-f6jr7" Oct 14 10:11:49 crc kubenswrapper[4698]: I1014 10:11:49.409838 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/neutron-operator-controller-manager-797d478b46-kmzfd"] Oct 14 10:11:49 crc kubenswrapper[4698]: I1014 10:11:49.411370 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-797d478b46-kmzfd" Oct 14 10:11:49 crc kubenswrapper[4698]: I1014 10:11:49.414216 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"neutron-operator-controller-manager-dockercfg-mll7j" Oct 14 10:11:49 crc kubenswrapper[4698]: I1014 10:11:49.421228 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-797d478b46-kmzfd"] Oct 14 10:11:49 crc kubenswrapper[4698]: I1014 10:11:49.428647 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v94qq\" (UniqueName: \"kubernetes.io/projected/f55ae8f2-2a7c-4158-b125-2121c37fc874-kube-api-access-v94qq\") pod \"manila-operator-controller-manager-59578bc799-nb7fk\" (UID: \"f55ae8f2-2a7c-4158-b125-2121c37fc874\") " pod="openstack-operators/manila-operator-controller-manager-59578bc799-nb7fk" Oct 14 10:11:49 crc kubenswrapper[4698]: I1014 10:11:49.428741 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t5qnc\" 
(UniqueName: \"kubernetes.io/projected/3482400d-0e9f-4dc5-883f-36313dc33944-kube-api-access-t5qnc\") pod \"horizon-operator-controller-manager-6d74794d9b-nq8vb\" (UID: \"3482400d-0e9f-4dc5-883f-36313dc33944\") " pod="openstack-operators/horizon-operator-controller-manager-6d74794d9b-nq8vb" Oct 14 10:11:49 crc kubenswrapper[4698]: I1014 10:11:49.428824 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wkfcb\" (UniqueName: \"kubernetes.io/projected/d6a101ad-e350-4964-a786-91072a6776e8-kube-api-access-wkfcb\") pod \"keystone-operator-controller-manager-ddb98f99b-ks5tw\" (UID: \"d6a101ad-e350-4964-a786-91072a6776e8\") " pod="openstack-operators/keystone-operator-controller-manager-ddb98f99b-ks5tw" Oct 14 10:11:49 crc kubenswrapper[4698]: I1014 10:11:49.428881 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qbv2c\" (UniqueName: \"kubernetes.io/projected/cca3d0fd-d9aa-428f-95f2-14238b7cf627-kube-api-access-qbv2c\") pod \"mariadb-operator-controller-manager-5777b4f897-nds58\" (UID: \"cca3d0fd-d9aa-428f-95f2-14238b7cf627\") " pod="openstack-operators/mariadb-operator-controller-manager-5777b4f897-nds58" Oct 14 10:11:49 crc kubenswrapper[4698]: I1014 10:11:49.428909 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8flpc\" (UniqueName: \"kubernetes.io/projected/004d5489-901d-4fd3-9fc3-ae0016255950-kube-api-access-8flpc\") pod \"heat-operator-controller-manager-6d9967f8dd-52h2t\" (UID: \"004d5489-901d-4fd3-9fc3-ae0016255950\") " pod="openstack-operators/heat-operator-controller-manager-6d9967f8dd-52h2t" Oct 14 10:11:49 crc kubenswrapper[4698]: I1014 10:11:49.428965 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fmpcs\" (UniqueName: \"kubernetes.io/projected/4ea0ebfe-fbe9-428c-baf6-565e4dbb9044-kube-api-access-fmpcs\") pod 
\"infra-operator-controller-manager-585fc5b659-2jvxv\" (UID: \"4ea0ebfe-fbe9-428c-baf6-565e4dbb9044\") " pod="openstack-operators/infra-operator-controller-manager-585fc5b659-2jvxv" Oct 14 10:11:49 crc kubenswrapper[4698]: I1014 10:11:49.429010 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/4ea0ebfe-fbe9-428c-baf6-565e4dbb9044-cert\") pod \"infra-operator-controller-manager-585fc5b659-2jvxv\" (UID: \"4ea0ebfe-fbe9-428c-baf6-565e4dbb9044\") " pod="openstack-operators/infra-operator-controller-manager-585fc5b659-2jvxv" Oct 14 10:11:49 crc kubenswrapper[4698]: I1014 10:11:49.429132 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7xt57\" (UniqueName: \"kubernetes.io/projected/2547a997-b2ba-4300-92ed-09ccc57499c7-kube-api-access-7xt57\") pod \"ironic-operator-controller-manager-74cb5cbc49-d9qkb\" (UID: \"2547a997-b2ba-4300-92ed-09ccc57499c7\") " pod="openstack-operators/ironic-operator-controller-manager-74cb5cbc49-d9qkb" Oct 14 10:11:49 crc kubenswrapper[4698]: E1014 10:11:49.430004 4698 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Oct 14 10:11:49 crc kubenswrapper[4698]: E1014 10:11:49.430057 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4ea0ebfe-fbe9-428c-baf6-565e4dbb9044-cert podName:4ea0ebfe-fbe9-428c-baf6-565e4dbb9044 nodeName:}" failed. No retries permitted until 2025-10-14 10:11:49.930035096 +0000 UTC m=+891.627334512 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/4ea0ebfe-fbe9-428c-baf6-565e4dbb9044-cert") pod "infra-operator-controller-manager-585fc5b659-2jvxv" (UID: "4ea0ebfe-fbe9-428c-baf6-565e4dbb9044") : secret "infra-operator-webhook-server-cert" not found Oct 14 10:11:49 crc kubenswrapper[4698]: I1014 10:11:49.438957 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/nova-operator-controller-manager-57bb74c7bf-c49pm"] Oct 14 10:11:49 crc kubenswrapper[4698]: I1014 10:11:49.440142 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-57bb74c7bf-c49pm" Oct 14 10:11:49 crc kubenswrapper[4698]: I1014 10:11:49.447060 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"nova-operator-controller-manager-dockercfg-x57s5" Oct 14 10:11:49 crc kubenswrapper[4698]: I1014 10:11:49.447954 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/octavia-operator-controller-manager-6d7c7ddf95-zfmdv"] Oct 14 10:11:49 crc kubenswrapper[4698]: I1014 10:11:49.450475 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-64f84fcdbb-5wlw6" Oct 14 10:11:49 crc kubenswrapper[4698]: I1014 10:11:49.453380 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-6d7c7ddf95-zfmdv" Oct 14 10:11:49 crc kubenswrapper[4698]: I1014 10:11:49.454305 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-57bb74c7bf-c49pm"] Oct 14 10:11:49 crc kubenswrapper[4698]: I1014 10:11:49.459105 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-6d7c7ddf95-zfmdv"] Oct 14 10:11:49 crc kubenswrapper[4698]: I1014 10:11:49.471505 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-687df44cdb-nh4nc" Oct 14 10:11:49 crc kubenswrapper[4698]: I1014 10:11:49.479922 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7xt57\" (UniqueName: \"kubernetes.io/projected/2547a997-b2ba-4300-92ed-09ccc57499c7-kube-api-access-7xt57\") pod \"ironic-operator-controller-manager-74cb5cbc49-d9qkb\" (UID: \"2547a997-b2ba-4300-92ed-09ccc57499c7\") " pod="openstack-operators/ironic-operator-controller-manager-74cb5cbc49-d9qkb" Oct 14 10:11:49 crc kubenswrapper[4698]: I1014 10:11:49.480139 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-59cdc64769-nh6c5" Oct 14 10:11:49 crc kubenswrapper[4698]: I1014 10:11:49.487976 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"octavia-operator-controller-manager-dockercfg-7blgg" Oct 14 10:11:49 crc kubenswrapper[4698]: I1014 10:11:49.490759 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t5qnc\" (UniqueName: \"kubernetes.io/projected/3482400d-0e9f-4dc5-883f-36313dc33944-kube-api-access-t5qnc\") pod \"horizon-operator-controller-manager-6d74794d9b-nq8vb\" (UID: \"3482400d-0e9f-4dc5-883f-36313dc33944\") " pod="openstack-operators/horizon-operator-controller-manager-6d74794d9b-nq8vb" Oct 14 10:11:49 crc kubenswrapper[4698]: I1014 10:11:49.507314 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8flpc\" (UniqueName: \"kubernetes.io/projected/004d5489-901d-4fd3-9fc3-ae0016255950-kube-api-access-8flpc\") pod \"heat-operator-controller-manager-6d9967f8dd-52h2t\" (UID: \"004d5489-901d-4fd3-9fc3-ae0016255950\") " pod="openstack-operators/heat-operator-controller-manager-6d9967f8dd-52h2t" Oct 14 10:11:49 crc kubenswrapper[4698]: I1014 10:11:49.520730 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-7bb46cd7d-f6jr7" Oct 14 10:11:49 crc kubenswrapper[4698]: I1014 10:11:49.538576 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fwx4b\" (UniqueName: \"kubernetes.io/projected/442ecb91-0479-42a8-94ba-5be7d8cea79f-kube-api-access-fwx4b\") pod \"octavia-operator-controller-manager-6d7c7ddf95-zfmdv\" (UID: \"442ecb91-0479-42a8-94ba-5be7d8cea79f\") " pod="openstack-operators/octavia-operator-controller-manager-6d7c7ddf95-zfmdv" Oct 14 10:11:49 crc kubenswrapper[4698]: I1014 10:11:49.538919 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qt5st\" (UniqueName: \"kubernetes.io/projected/84ef40a7-48ce-4d65-9e34-5ac4e4f0b0b7-kube-api-access-qt5st\") pod \"nova-operator-controller-manager-57bb74c7bf-c49pm\" (UID: \"84ef40a7-48ce-4d65-9e34-5ac4e4f0b0b7\") " pod="openstack-operators/nova-operator-controller-manager-57bb74c7bf-c49pm" Oct 14 10:11:49 crc kubenswrapper[4698]: I1014 10:11:49.539038 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v94qq\" (UniqueName: \"kubernetes.io/projected/f55ae8f2-2a7c-4158-b125-2121c37fc874-kube-api-access-v94qq\") pod \"manila-operator-controller-manager-59578bc799-nb7fk\" (UID: \"f55ae8f2-2a7c-4158-b125-2121c37fc874\") " pod="openstack-operators/manila-operator-controller-manager-59578bc799-nb7fk" Oct 14 10:11:49 crc kubenswrapper[4698]: I1014 10:11:49.539097 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f2dj5\" (UniqueName: \"kubernetes.io/projected/0660342f-b230-41a7-a2f8-44cd75696095-kube-api-access-f2dj5\") pod \"neutron-operator-controller-manager-797d478b46-kmzfd\" (UID: \"0660342f-b230-41a7-a2f8-44cd75696095\") " pod="openstack-operators/neutron-operator-controller-manager-797d478b46-kmzfd" Oct 
14 10:11:49 crc kubenswrapper[4698]: I1014 10:11:49.539132 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wkfcb\" (UniqueName: \"kubernetes.io/projected/d6a101ad-e350-4964-a786-91072a6776e8-kube-api-access-wkfcb\") pod \"keystone-operator-controller-manager-ddb98f99b-ks5tw\" (UID: \"d6a101ad-e350-4964-a786-91072a6776e8\") " pod="openstack-operators/keystone-operator-controller-manager-ddb98f99b-ks5tw" Oct 14 10:11:49 crc kubenswrapper[4698]: I1014 10:11:49.539163 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qbv2c\" (UniqueName: \"kubernetes.io/projected/cca3d0fd-d9aa-428f-95f2-14238b7cf627-kube-api-access-qbv2c\") pod \"mariadb-operator-controller-manager-5777b4f897-nds58\" (UID: \"cca3d0fd-d9aa-428f-95f2-14238b7cf627\") " pod="openstack-operators/mariadb-operator-controller-manager-5777b4f897-nds58" Oct 14 10:11:49 crc kubenswrapper[4698]: I1014 10:11:49.548501 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fmpcs\" (UniqueName: \"kubernetes.io/projected/4ea0ebfe-fbe9-428c-baf6-565e4dbb9044-kube-api-access-fmpcs\") pod \"infra-operator-controller-manager-585fc5b659-2jvxv\" (UID: \"4ea0ebfe-fbe9-428c-baf6-565e4dbb9044\") " pod="openstack-operators/infra-operator-controller-manager-585fc5b659-2jvxv" Oct 14 10:11:49 crc kubenswrapper[4698]: I1014 10:11:49.556628 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-6d9967f8dd-52h2t" Oct 14 10:11:49 crc kubenswrapper[4698]: I1014 10:11:49.565291 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v94qq\" (UniqueName: \"kubernetes.io/projected/f55ae8f2-2a7c-4158-b125-2121c37fc874-kube-api-access-v94qq\") pod \"manila-operator-controller-manager-59578bc799-nb7fk\" (UID: \"f55ae8f2-2a7c-4158-b125-2121c37fc874\") " pod="openstack-operators/manila-operator-controller-manager-59578bc799-nb7fk" Oct 14 10:11:49 crc kubenswrapper[4698]: I1014 10:11:49.565734 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qbv2c\" (UniqueName: \"kubernetes.io/projected/cca3d0fd-d9aa-428f-95f2-14238b7cf627-kube-api-access-qbv2c\") pod \"mariadb-operator-controller-manager-5777b4f897-nds58\" (UID: \"cca3d0fd-d9aa-428f-95f2-14238b7cf627\") " pod="openstack-operators/mariadb-operator-controller-manager-5777b4f897-nds58" Oct 14 10:11:49 crc kubenswrapper[4698]: I1014 10:11:49.566862 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-6d74794d9b-nq8vb" Oct 14 10:11:49 crc kubenswrapper[4698]: I1014 10:11:49.572820 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wkfcb\" (UniqueName: \"kubernetes.io/projected/d6a101ad-e350-4964-a786-91072a6776e8-kube-api-access-wkfcb\") pod \"keystone-operator-controller-manager-ddb98f99b-ks5tw\" (UID: \"d6a101ad-e350-4964-a786-91072a6776e8\") " pod="openstack-operators/keystone-operator-controller-manager-ddb98f99b-ks5tw" Oct 14 10:11:49 crc kubenswrapper[4698]: I1014 10:11:49.578300 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-5777b4f897-nds58" Oct 14 10:11:49 crc kubenswrapper[4698]: I1014 10:11:49.613465 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-74cb5cbc49-d9qkb" Oct 14 10:11:49 crc kubenswrapper[4698]: I1014 10:11:49.634818 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-ddb98f99b-ks5tw" Oct 14 10:11:49 crc kubenswrapper[4698]: I1014 10:11:49.640458 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f2dj5\" (UniqueName: \"kubernetes.io/projected/0660342f-b230-41a7-a2f8-44cd75696095-kube-api-access-f2dj5\") pod \"neutron-operator-controller-manager-797d478b46-kmzfd\" (UID: \"0660342f-b230-41a7-a2f8-44cd75696095\") " pod="openstack-operators/neutron-operator-controller-manager-797d478b46-kmzfd" Oct 14 10:11:49 crc kubenswrapper[4698]: I1014 10:11:49.640565 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fwx4b\" (UniqueName: \"kubernetes.io/projected/442ecb91-0479-42a8-94ba-5be7d8cea79f-kube-api-access-fwx4b\") pod \"octavia-operator-controller-manager-6d7c7ddf95-zfmdv\" (UID: \"442ecb91-0479-42a8-94ba-5be7d8cea79f\") " pod="openstack-operators/octavia-operator-controller-manager-6d7c7ddf95-zfmdv" Oct 14 10:11:49 crc kubenswrapper[4698]: I1014 10:11:49.640619 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qt5st\" (UniqueName: \"kubernetes.io/projected/84ef40a7-48ce-4d65-9e34-5ac4e4f0b0b7-kube-api-access-qt5st\") pod \"nova-operator-controller-manager-57bb74c7bf-c49pm\" (UID: \"84ef40a7-48ce-4d65-9e34-5ac4e4f0b0b7\") " pod="openstack-operators/nova-operator-controller-manager-57bb74c7bf-c49pm" Oct 14 10:11:49 crc kubenswrapper[4698]: I1014 10:11:49.649005 4698 
kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-6cc7fb757dblfbc"] Oct 14 10:11:49 crc kubenswrapper[4698]: I1014 10:11:49.650661 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6cc7fb757dblfbc" Oct 14 10:11:49 crc kubenswrapper[4698]: I1014 10:11:49.656664 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-webhook-server-cert" Oct 14 10:11:49 crc kubenswrapper[4698]: I1014 10:11:49.657930 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-h22q5" Oct 14 10:11:49 crc kubenswrapper[4698]: I1014 10:11:49.662575 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fwx4b\" (UniqueName: \"kubernetes.io/projected/442ecb91-0479-42a8-94ba-5be7d8cea79f-kube-api-access-fwx4b\") pod \"octavia-operator-controller-manager-6d7c7ddf95-zfmdv\" (UID: \"442ecb91-0479-42a8-94ba-5be7d8cea79f\") " pod="openstack-operators/octavia-operator-controller-manager-6d7c7ddf95-zfmdv" Oct 14 10:11:49 crc kubenswrapper[4698]: I1014 10:11:49.664985 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f2dj5\" (UniqueName: \"kubernetes.io/projected/0660342f-b230-41a7-a2f8-44cd75696095-kube-api-access-f2dj5\") pod \"neutron-operator-controller-manager-797d478b46-kmzfd\" (UID: \"0660342f-b230-41a7-a2f8-44cd75696095\") " pod="openstack-operators/neutron-operator-controller-manager-797d478b46-kmzfd" Oct 14 10:11:49 crc kubenswrapper[4698]: I1014 10:11:49.677012 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ovn-operator-controller-manager-869cc7797f-xpv9w"] Oct 14 10:11:49 crc kubenswrapper[4698]: I1014 10:11:49.678604 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-869cc7797f-xpv9w" Oct 14 10:11:49 crc kubenswrapper[4698]: I1014 10:11:49.682878 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qt5st\" (UniqueName: \"kubernetes.io/projected/84ef40a7-48ce-4d65-9e34-5ac4e4f0b0b7-kube-api-access-qt5st\") pod \"nova-operator-controller-manager-57bb74c7bf-c49pm\" (UID: \"84ef40a7-48ce-4d65-9e34-5ac4e4f0b0b7\") " pod="openstack-operators/nova-operator-controller-manager-57bb74c7bf-c49pm" Oct 14 10:11:49 crc kubenswrapper[4698]: I1014 10:11:49.688632 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/placement-operator-controller-manager-664664cb68-wdrpw"] Oct 14 10:11:49 crc kubenswrapper[4698]: I1014 10:11:49.691896 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-664664cb68-wdrpw" Oct 14 10:11:49 crc kubenswrapper[4698]: I1014 10:11:49.698924 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ovn-operator-controller-manager-dockercfg-579tp" Oct 14 10:11:49 crc kubenswrapper[4698]: I1014 10:11:49.699593 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"placement-operator-controller-manager-dockercfg-dwlk6" Oct 14 10:11:49 crc kubenswrapper[4698]: I1014 10:11:49.703031 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-869cc7797f-xpv9w"] Oct 14 10:11:49 crc kubenswrapper[4698]: I1014 10:11:49.716959 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-6cc7fb757dblfbc"] Oct 14 10:11:49 crc kubenswrapper[4698]: I1014 10:11:49.735390 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/swift-operator-controller-manager-5f4d5dfdc6-7p5xj"] Oct 14 10:11:49 crc 
kubenswrapper[4698]: I1014 10:11:49.736933 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-5f4d5dfdc6-7p5xj" Oct 14 10:11:49 crc kubenswrapper[4698]: I1014 10:11:49.742014 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2t67z\" (UniqueName: \"kubernetes.io/projected/3e3e37b3-e0ed-479a-9124-aa6c814a1030-kube-api-access-2t67z\") pod \"ovn-operator-controller-manager-869cc7797f-xpv9w\" (UID: \"3e3e37b3-e0ed-479a-9124-aa6c814a1030\") " pod="openstack-operators/ovn-operator-controller-manager-869cc7797f-xpv9w" Oct 14 10:11:49 crc kubenswrapper[4698]: I1014 10:11:49.742068 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/2486fbf6-b25f-4bc3-932d-5ade782da654-cert\") pod \"openstack-baremetal-operator-controller-manager-6cc7fb757dblfbc\" (UID: \"2486fbf6-b25f-4bc3-932d-5ade782da654\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6cc7fb757dblfbc" Oct 14 10:11:49 crc kubenswrapper[4698]: I1014 10:11:49.742133 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7clgr\" (UniqueName: \"kubernetes.io/projected/4bdceb7a-7a1f-4c0b-a70d-787a610f1d3a-kube-api-access-7clgr\") pod \"placement-operator-controller-manager-664664cb68-wdrpw\" (UID: \"4bdceb7a-7a1f-4c0b-a70d-787a610f1d3a\") " pod="openstack-operators/placement-operator-controller-manager-664664cb68-wdrpw" Oct 14 10:11:49 crc kubenswrapper[4698]: I1014 10:11:49.742189 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4qpl9\" (UniqueName: \"kubernetes.io/projected/2486fbf6-b25f-4bc3-932d-5ade782da654-kube-api-access-4qpl9\") pod \"openstack-baremetal-operator-controller-manager-6cc7fb757dblfbc\" (UID: 
\"2486fbf6-b25f-4bc3-932d-5ade782da654\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6cc7fb757dblfbc" Oct 14 10:11:49 crc kubenswrapper[4698]: I1014 10:11:49.751391 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"swift-operator-controller-manager-dockercfg-7skrn" Oct 14 10:11:49 crc kubenswrapper[4698]: I1014 10:11:49.768788 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-578874c84d-xbvhq"] Oct 14 10:11:49 crc kubenswrapper[4698]: I1014 10:11:49.772003 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-578874c84d-xbvhq" Oct 14 10:11:49 crc kubenswrapper[4698]: I1014 10:11:49.776228 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"telemetry-operator-controller-manager-dockercfg-b4rxj" Oct 14 10:11:49 crc kubenswrapper[4698]: I1014 10:11:49.789509 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-664664cb68-wdrpw"] Oct 14 10:11:49 crc kubenswrapper[4698]: I1014 10:11:49.795878 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-5f4d5dfdc6-7p5xj"] Oct 14 10:11:49 crc kubenswrapper[4698]: I1014 10:11:49.821050 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-578874c84d-xbvhq"] Oct 14 10:11:49 crc kubenswrapper[4698]: I1014 10:11:49.845582 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qjxfr\" (UniqueName: \"kubernetes.io/projected/e93508a8-6ee5-4950-8cea-7c3599b7e1ec-kube-api-access-qjxfr\") pod \"telemetry-operator-controller-manager-578874c84d-xbvhq\" (UID: \"e93508a8-6ee5-4950-8cea-7c3599b7e1ec\") " 
pod="openstack-operators/telemetry-operator-controller-manager-578874c84d-xbvhq" Oct 14 10:11:49 crc kubenswrapper[4698]: I1014 10:11:49.845694 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/2486fbf6-b25f-4bc3-932d-5ade782da654-cert\") pod \"openstack-baremetal-operator-controller-manager-6cc7fb757dblfbc\" (UID: \"2486fbf6-b25f-4bc3-932d-5ade782da654\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6cc7fb757dblfbc" Oct 14 10:11:49 crc kubenswrapper[4698]: I1014 10:11:49.845844 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6f4ml\" (UniqueName: \"kubernetes.io/projected/28b97988-e327-4c7a-aab5-5985bf4a675d-kube-api-access-6f4ml\") pod \"swift-operator-controller-manager-5f4d5dfdc6-7p5xj\" (UID: \"28b97988-e327-4c7a-aab5-5985bf4a675d\") " pod="openstack-operators/swift-operator-controller-manager-5f4d5dfdc6-7p5xj" Oct 14 10:11:49 crc kubenswrapper[4698]: I1014 10:11:49.845869 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7clgr\" (UniqueName: \"kubernetes.io/projected/4bdceb7a-7a1f-4c0b-a70d-787a610f1d3a-kube-api-access-7clgr\") pod \"placement-operator-controller-manager-664664cb68-wdrpw\" (UID: \"4bdceb7a-7a1f-4c0b-a70d-787a610f1d3a\") " pod="openstack-operators/placement-operator-controller-manager-664664cb68-wdrpw" Oct 14 10:11:49 crc kubenswrapper[4698]: I1014 10:11:49.846027 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4qpl9\" (UniqueName: \"kubernetes.io/projected/2486fbf6-b25f-4bc3-932d-5ade782da654-kube-api-access-4qpl9\") pod \"openstack-baremetal-operator-controller-manager-6cc7fb757dblfbc\" (UID: \"2486fbf6-b25f-4bc3-932d-5ade782da654\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6cc7fb757dblfbc" Oct 14 10:11:49 crc kubenswrapper[4698]: E1014 
10:11:49.846990 4698 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Oct 14 10:11:49 crc kubenswrapper[4698]: E1014 10:11:49.847097 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2486fbf6-b25f-4bc3-932d-5ade782da654-cert podName:2486fbf6-b25f-4bc3-932d-5ade782da654 nodeName:}" failed. No retries permitted until 2025-10-14 10:11:50.347071623 +0000 UTC m=+892.044371039 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/2486fbf6-b25f-4bc3-932d-5ade782da654-cert") pod "openstack-baremetal-operator-controller-manager-6cc7fb757dblfbc" (UID: "2486fbf6-b25f-4bc3-932d-5ade782da654") : secret "openstack-baremetal-operator-webhook-server-cert" not found Oct 14 10:11:49 crc kubenswrapper[4698]: I1014 10:11:49.847011 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2t67z\" (UniqueName: \"kubernetes.io/projected/3e3e37b3-e0ed-479a-9124-aa6c814a1030-kube-api-access-2t67z\") pod \"ovn-operator-controller-manager-869cc7797f-xpv9w\" (UID: \"3e3e37b3-e0ed-479a-9124-aa6c814a1030\") " pod="openstack-operators/ovn-operator-controller-manager-869cc7797f-xpv9w" Oct 14 10:11:49 crc kubenswrapper[4698]: I1014 10:11:49.851282 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/test-operator-controller-manager-ffcdd6c94-g5fmn"] Oct 14 10:11:49 crc kubenswrapper[4698]: I1014 10:11:49.852844 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/test-operator-controller-manager-ffcdd6c94-g5fmn" Oct 14 10:11:49 crc kubenswrapper[4698]: I1014 10:11:49.857514 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"test-operator-controller-manager-dockercfg-v5286" Oct 14 10:11:49 crc kubenswrapper[4698]: I1014 10:11:49.858252 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-59578bc799-nb7fk" Oct 14 10:11:49 crc kubenswrapper[4698]: I1014 10:11:49.876899 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-ffcdd6c94-g5fmn"] Oct 14 10:11:49 crc kubenswrapper[4698]: I1014 10:11:49.877428 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7clgr\" (UniqueName: \"kubernetes.io/projected/4bdceb7a-7a1f-4c0b-a70d-787a610f1d3a-kube-api-access-7clgr\") pod \"placement-operator-controller-manager-664664cb68-wdrpw\" (UID: \"4bdceb7a-7a1f-4c0b-a70d-787a610f1d3a\") " pod="openstack-operators/placement-operator-controller-manager-664664cb68-wdrpw" Oct 14 10:11:49 crc kubenswrapper[4698]: I1014 10:11:49.900599 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4qpl9\" (UniqueName: \"kubernetes.io/projected/2486fbf6-b25f-4bc3-932d-5ade782da654-kube-api-access-4qpl9\") pod \"openstack-baremetal-operator-controller-manager-6cc7fb757dblfbc\" (UID: \"2486fbf6-b25f-4bc3-932d-5ade782da654\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6cc7fb757dblfbc" Oct 14 10:11:49 crc kubenswrapper[4698]: I1014 10:11:49.903900 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2t67z\" (UniqueName: \"kubernetes.io/projected/3e3e37b3-e0ed-479a-9124-aa6c814a1030-kube-api-access-2t67z\") pod \"ovn-operator-controller-manager-869cc7797f-xpv9w\" (UID: 
\"3e3e37b3-e0ed-479a-9124-aa6c814a1030\") " pod="openstack-operators/ovn-operator-controller-manager-869cc7797f-xpv9w" Oct 14 10:11:49 crc kubenswrapper[4698]: I1014 10:11:49.904291 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-797d478b46-kmzfd" Oct 14 10:11:49 crc kubenswrapper[4698]: I1014 10:11:49.917941 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/watcher-operator-controller-manager-646675d848-n4xkf"] Oct 14 10:11:49 crc kubenswrapper[4698]: I1014 10:11:49.919319 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-646675d848-n4xkf" Oct 14 10:11:49 crc kubenswrapper[4698]: I1014 10:11:49.923511 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"watcher-operator-controller-manager-dockercfg-c5pb8" Oct 14 10:11:49 crc kubenswrapper[4698]: I1014 10:11:49.923574 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-646675d848-n4xkf"] Oct 14 10:11:49 crc kubenswrapper[4698]: I1014 10:11:49.926502 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-57bb74c7bf-c49pm" Oct 14 10:11:49 crc kubenswrapper[4698]: I1014 10:11:49.936209 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-6d7c7ddf95-zfmdv" Oct 14 10:11:49 crc kubenswrapper[4698]: I1014 10:11:49.948494 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-manager-768cc76f8b-7jr79"] Oct 14 10:11:49 crc kubenswrapper[4698]: I1014 10:11:49.948525 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/4ea0ebfe-fbe9-428c-baf6-565e4dbb9044-cert\") pod \"infra-operator-controller-manager-585fc5b659-2jvxv\" (UID: \"4ea0ebfe-fbe9-428c-baf6-565e4dbb9044\") " pod="openstack-operators/infra-operator-controller-manager-585fc5b659-2jvxv" Oct 14 10:11:49 crc kubenswrapper[4698]: I1014 10:11:49.948570 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6f4ml\" (UniqueName: \"kubernetes.io/projected/28b97988-e327-4c7a-aab5-5985bf4a675d-kube-api-access-6f4ml\") pod \"swift-operator-controller-manager-5f4d5dfdc6-7p5xj\" (UID: \"28b97988-e327-4c7a-aab5-5985bf4a675d\") " pod="openstack-operators/swift-operator-controller-manager-5f4d5dfdc6-7p5xj" Oct 14 10:11:49 crc kubenswrapper[4698]: I1014 10:11:49.948633 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qjxfr\" (UniqueName: \"kubernetes.io/projected/e93508a8-6ee5-4950-8cea-7c3599b7e1ec-kube-api-access-qjxfr\") pod \"telemetry-operator-controller-manager-578874c84d-xbvhq\" (UID: \"e93508a8-6ee5-4950-8cea-7c3599b7e1ec\") " pod="openstack-operators/telemetry-operator-controller-manager-578874c84d-xbvhq" Oct 14 10:11:49 crc kubenswrapper[4698]: I1014 10:11:49.948708 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9wl2b\" (UniqueName: \"kubernetes.io/projected/5b887a4b-1049-4b80-8613-89ef2f446df4-kube-api-access-9wl2b\") pod \"watcher-operator-controller-manager-646675d848-n4xkf\" (UID: 
\"5b887a4b-1049-4b80-8613-89ef2f446df4\") " pod="openstack-operators/watcher-operator-controller-manager-646675d848-n4xkf" Oct 14 10:11:49 crc kubenswrapper[4698]: I1014 10:11:49.948745 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bctvv\" (UniqueName: \"kubernetes.io/projected/a969812c-8490-4e43-ab00-73c8254c5b21-kube-api-access-bctvv\") pod \"test-operator-controller-manager-ffcdd6c94-g5fmn\" (UID: \"a969812c-8490-4e43-ab00-73c8254c5b21\") " pod="openstack-operators/test-operator-controller-manager-ffcdd6c94-g5fmn" Oct 14 10:11:49 crc kubenswrapper[4698]: I1014 10:11:49.949588 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-768cc76f8b-7jr79" Oct 14 10:11:49 crc kubenswrapper[4698]: I1014 10:11:49.957268 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"webhook-server-cert" Oct 14 10:11:49 crc kubenswrapper[4698]: I1014 10:11:49.958725 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-c7pd9" Oct 14 10:11:49 crc kubenswrapper[4698]: I1014 10:11:49.958888 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-768cc76f8b-7jr79"] Oct 14 10:11:49 crc kubenswrapper[4698]: I1014 10:11:49.973825 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/4ea0ebfe-fbe9-428c-baf6-565e4dbb9044-cert\") pod \"infra-operator-controller-manager-585fc5b659-2jvxv\" (UID: \"4ea0ebfe-fbe9-428c-baf6-565e4dbb9044\") " pod="openstack-operators/infra-operator-controller-manager-585fc5b659-2jvxv" Oct 14 10:11:49 crc kubenswrapper[4698]: I1014 10:11:49.979351 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qjxfr\" (UniqueName: 
\"kubernetes.io/projected/e93508a8-6ee5-4950-8cea-7c3599b7e1ec-kube-api-access-qjxfr\") pod \"telemetry-operator-controller-manager-578874c84d-xbvhq\" (UID: \"e93508a8-6ee5-4950-8cea-7c3599b7e1ec\") " pod="openstack-operators/telemetry-operator-controller-manager-578874c84d-xbvhq" Oct 14 10:11:49 crc kubenswrapper[4698]: I1014 10:11:49.986998 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6f4ml\" (UniqueName: \"kubernetes.io/projected/28b97988-e327-4c7a-aab5-5985bf4a675d-kube-api-access-6f4ml\") pod \"swift-operator-controller-manager-5f4d5dfdc6-7p5xj\" (UID: \"28b97988-e327-4c7a-aab5-5985bf4a675d\") " pod="openstack-operators/swift-operator-controller-manager-5f4d5dfdc6-7p5xj" Oct 14 10:11:50 crc kubenswrapper[4698]: I1014 10:11:50.010902 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-5f97d8c699-zctmt"] Oct 14 10:11:50 crc kubenswrapper[4698]: I1014 10:11:50.012092 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-5f97d8c699-zctmt" Oct 14 10:11:50 crc kubenswrapper[4698]: I1014 10:11:50.016713 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"rabbitmq-cluster-operator-controller-manager-dockercfg-zm6ff" Oct 14 10:11:50 crc kubenswrapper[4698]: I1014 10:11:50.055851 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-869cc7797f-xpv9w" Oct 14 10:11:50 crc kubenswrapper[4698]: I1014 10:11:50.056995 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mv8pc\" (UniqueName: \"kubernetes.io/projected/052a38cb-bdfa-46de-ab53-e81b2f014b1d-kube-api-access-mv8pc\") pod \"rabbitmq-cluster-operator-manager-5f97d8c699-zctmt\" (UID: \"052a38cb-bdfa-46de-ab53-e81b2f014b1d\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-5f97d8c699-zctmt" Oct 14 10:11:50 crc kubenswrapper[4698]: I1014 10:11:50.057213 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jxnfx\" (UniqueName: \"kubernetes.io/projected/ecf62cd7-15b2-4bcc-aadd-1c982c7149e7-kube-api-access-jxnfx\") pod \"openstack-operator-controller-manager-768cc76f8b-7jr79\" (UID: \"ecf62cd7-15b2-4bcc-aadd-1c982c7149e7\") " pod="openstack-operators/openstack-operator-controller-manager-768cc76f8b-7jr79" Oct 14 10:11:50 crc kubenswrapper[4698]: I1014 10:11:50.059836 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ecf62cd7-15b2-4bcc-aadd-1c982c7149e7-cert\") pod \"openstack-operator-controller-manager-768cc76f8b-7jr79\" (UID: \"ecf62cd7-15b2-4bcc-aadd-1c982c7149e7\") " pod="openstack-operators/openstack-operator-controller-manager-768cc76f8b-7jr79" Oct 14 10:11:50 crc kubenswrapper[4698]: I1014 10:11:50.059947 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9wl2b\" (UniqueName: \"kubernetes.io/projected/5b887a4b-1049-4b80-8613-89ef2f446df4-kube-api-access-9wl2b\") pod \"watcher-operator-controller-manager-646675d848-n4xkf\" (UID: \"5b887a4b-1049-4b80-8613-89ef2f446df4\") " pod="openstack-operators/watcher-operator-controller-manager-646675d848-n4xkf" Oct 14 10:11:50 crc 
kubenswrapper[4698]: I1014 10:11:50.060042 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bctvv\" (UniqueName: \"kubernetes.io/projected/a969812c-8490-4e43-ab00-73c8254c5b21-kube-api-access-bctvv\") pod \"test-operator-controller-manager-ffcdd6c94-g5fmn\" (UID: \"a969812c-8490-4e43-ab00-73c8254c5b21\") " pod="openstack-operators/test-operator-controller-manager-ffcdd6c94-g5fmn" Oct 14 10:11:50 crc kubenswrapper[4698]: I1014 10:11:50.077417 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-664664cb68-wdrpw" Oct 14 10:11:50 crc kubenswrapper[4698]: I1014 10:11:50.090672 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-5f97d8c699-zctmt"] Oct 14 10:11:50 crc kubenswrapper[4698]: I1014 10:11:50.093278 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9wl2b\" (UniqueName: \"kubernetes.io/projected/5b887a4b-1049-4b80-8613-89ef2f446df4-kube-api-access-9wl2b\") pod \"watcher-operator-controller-manager-646675d848-n4xkf\" (UID: \"5b887a4b-1049-4b80-8613-89ef2f446df4\") " pod="openstack-operators/watcher-operator-controller-manager-646675d848-n4xkf" Oct 14 10:11:50 crc kubenswrapper[4698]: I1014 10:11:50.093478 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-5f4d5dfdc6-7p5xj" Oct 14 10:11:50 crc kubenswrapper[4698]: I1014 10:11:50.120907 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bctvv\" (UniqueName: \"kubernetes.io/projected/a969812c-8490-4e43-ab00-73c8254c5b21-kube-api-access-bctvv\") pod \"test-operator-controller-manager-ffcdd6c94-g5fmn\" (UID: \"a969812c-8490-4e43-ab00-73c8254c5b21\") " pod="openstack-operators/test-operator-controller-manager-ffcdd6c94-g5fmn" Oct 14 10:11:50 crc kubenswrapper[4698]: I1014 10:11:50.174152 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mv8pc\" (UniqueName: \"kubernetes.io/projected/052a38cb-bdfa-46de-ab53-e81b2f014b1d-kube-api-access-mv8pc\") pod \"rabbitmq-cluster-operator-manager-5f97d8c699-zctmt\" (UID: \"052a38cb-bdfa-46de-ab53-e81b2f014b1d\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-5f97d8c699-zctmt" Oct 14 10:11:50 crc kubenswrapper[4698]: I1014 10:11:50.174285 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jxnfx\" (UniqueName: \"kubernetes.io/projected/ecf62cd7-15b2-4bcc-aadd-1c982c7149e7-kube-api-access-jxnfx\") pod \"openstack-operator-controller-manager-768cc76f8b-7jr79\" (UID: \"ecf62cd7-15b2-4bcc-aadd-1c982c7149e7\") " pod="openstack-operators/openstack-operator-controller-manager-768cc76f8b-7jr79" Oct 14 10:11:50 crc kubenswrapper[4698]: I1014 10:11:50.174323 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ecf62cd7-15b2-4bcc-aadd-1c982c7149e7-cert\") pod \"openstack-operator-controller-manager-768cc76f8b-7jr79\" (UID: \"ecf62cd7-15b2-4bcc-aadd-1c982c7149e7\") " pod="openstack-operators/openstack-operator-controller-manager-768cc76f8b-7jr79" Oct 14 10:11:50 crc kubenswrapper[4698]: E1014 10:11:50.174572 4698 secret.go:188] Couldn't get secret 
openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Oct 14 10:11:50 crc kubenswrapper[4698]: E1014 10:11:50.174642 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ecf62cd7-15b2-4bcc-aadd-1c982c7149e7-cert podName:ecf62cd7-15b2-4bcc-aadd-1c982c7149e7 nodeName:}" failed. No retries permitted until 2025-10-14 10:11:50.674620239 +0000 UTC m=+892.371919655 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/ecf62cd7-15b2-4bcc-aadd-1c982c7149e7-cert") pod "openstack-operator-controller-manager-768cc76f8b-7jr79" (UID: "ecf62cd7-15b2-4bcc-aadd-1c982c7149e7") : secret "webhook-server-cert" not found Oct 14 10:11:50 crc kubenswrapper[4698]: I1014 10:11:50.180730 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-585fc5b659-2jvxv" Oct 14 10:11:50 crc kubenswrapper[4698]: I1014 10:11:50.180960 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-578874c84d-xbvhq" Oct 14 10:11:50 crc kubenswrapper[4698]: I1014 10:11:50.202149 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mv8pc\" (UniqueName: \"kubernetes.io/projected/052a38cb-bdfa-46de-ab53-e81b2f014b1d-kube-api-access-mv8pc\") pod \"rabbitmq-cluster-operator-manager-5f97d8c699-zctmt\" (UID: \"052a38cb-bdfa-46de-ab53-e81b2f014b1d\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-5f97d8c699-zctmt" Oct 14 10:11:50 crc kubenswrapper[4698]: I1014 10:11:50.216025 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jxnfx\" (UniqueName: \"kubernetes.io/projected/ecf62cd7-15b2-4bcc-aadd-1c982c7149e7-kube-api-access-jxnfx\") pod \"openstack-operator-controller-manager-768cc76f8b-7jr79\" (UID: \"ecf62cd7-15b2-4bcc-aadd-1c982c7149e7\") " pod="openstack-operators/openstack-operator-controller-manager-768cc76f8b-7jr79" Oct 14 10:11:50 crc kubenswrapper[4698]: I1014 10:11:50.250513 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-ffcdd6c94-g5fmn" Oct 14 10:11:50 crc kubenswrapper[4698]: I1014 10:11:50.276749 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-646675d848-n4xkf" Oct 14 10:11:50 crc kubenswrapper[4698]: I1014 10:11:50.378986 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/2486fbf6-b25f-4bc3-932d-5ade782da654-cert\") pod \"openstack-baremetal-operator-controller-manager-6cc7fb757dblfbc\" (UID: \"2486fbf6-b25f-4bc3-932d-5ade782da654\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6cc7fb757dblfbc" Oct 14 10:11:50 crc kubenswrapper[4698]: I1014 10:11:50.383435 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-64f84fcdbb-5wlw6"] Oct 14 10:11:50 crc kubenswrapper[4698]: I1014 10:11:50.391007 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/2486fbf6-b25f-4bc3-932d-5ade782da654-cert\") pod \"openstack-baremetal-operator-controller-manager-6cc7fb757dblfbc\" (UID: \"2486fbf6-b25f-4bc3-932d-5ade782da654\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6cc7fb757dblfbc" Oct 14 10:11:50 crc kubenswrapper[4698]: I1014 10:11:50.456603 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-5f97d8c699-zctmt" Oct 14 10:11:50 crc kubenswrapper[4698]: I1014 10:11:50.494182 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-59cdc64769-nh6c5"] Oct 14 10:11:50 crc kubenswrapper[4698]: I1014 10:11:50.586236 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6cc7fb757dblfbc" Oct 14 10:11:50 crc kubenswrapper[4698]: I1014 10:11:50.652431 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-59cdc64769-nh6c5" event={"ID":"f91fec87-379e-4c52-9d03-b56841232184","Type":"ContainerStarted","Data":"fcd875b0912cbe458c3df0a7cd6a9b25223513580f3fbe3a1d3207ac0e051ecd"} Oct 14 10:11:50 crc kubenswrapper[4698]: I1014 10:11:50.660160 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-64f84fcdbb-5wlw6" event={"ID":"24d6e9c5-aad5-4856-a7b7-20e04553c864","Type":"ContainerStarted","Data":"7c7cb1a6829556018ebe50b76f8dd573f1e4389091c3f28fa59dd35681382498"} Oct 14 10:11:50 crc kubenswrapper[4698]: I1014 10:11:50.667077 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-7bb46cd7d-f6jr7"] Oct 14 10:11:50 crc kubenswrapper[4698]: I1014 10:11:50.671798 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-687df44cdb-nh4nc"] Oct 14 10:11:50 crc kubenswrapper[4698]: I1014 10:11:50.690100 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ecf62cd7-15b2-4bcc-aadd-1c982c7149e7-cert\") pod \"openstack-operator-controller-manager-768cc76f8b-7jr79\" (UID: \"ecf62cd7-15b2-4bcc-aadd-1c982c7149e7\") " pod="openstack-operators/openstack-operator-controller-manager-768cc76f8b-7jr79" Oct 14 10:11:50 crc kubenswrapper[4698]: I1014 10:11:50.699737 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ecf62cd7-15b2-4bcc-aadd-1c982c7149e7-cert\") pod \"openstack-operator-controller-manager-768cc76f8b-7jr79\" (UID: \"ecf62cd7-15b2-4bcc-aadd-1c982c7149e7\") " 
pod="openstack-operators/openstack-operator-controller-manager-768cc76f8b-7jr79" Oct 14 10:11:50 crc kubenswrapper[4698]: I1014 10:11:50.824533 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-6d9967f8dd-52h2t"] Oct 14 10:11:50 crc kubenswrapper[4698]: W1014 10:11:50.834647 4698 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod004d5489_901d_4fd3_9fc3_ae0016255950.slice/crio-9cd2645c030d5492e6c8b7c7772c7da6e05a42062c86749912d7111373add83a WatchSource:0}: Error finding container 9cd2645c030d5492e6c8b7c7772c7da6e05a42062c86749912d7111373add83a: Status 404 returned error can't find the container with id 9cd2645c030d5492e6c8b7c7772c7da6e05a42062c86749912d7111373add83a Oct 14 10:11:50 crc kubenswrapper[4698]: I1014 10:11:50.899383 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-768cc76f8b-7jr79" Oct 14 10:11:50 crc kubenswrapper[4698]: I1014 10:11:50.958630 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-6d7c7ddf95-zfmdv"] Oct 14 10:11:50 crc kubenswrapper[4698]: W1014 10:11:50.970070 4698 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod442ecb91_0479_42a8_94ba_5be7d8cea79f.slice/crio-64ff0afdcc7c0a17c4613c70fb1020bc00fc0eeaf6501b5d78c2859c4c70dd1a WatchSource:0}: Error finding container 64ff0afdcc7c0a17c4613c70fb1020bc00fc0eeaf6501b5d78c2859c4c70dd1a: Status 404 returned error can't find the container with id 64ff0afdcc7c0a17c4613c70fb1020bc00fc0eeaf6501b5d78c2859c4c70dd1a Oct 14 10:11:51 crc kubenswrapper[4698]: I1014 10:11:51.133228 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-74cb5cbc49-d9qkb"] Oct 14 10:11:51 crc 
kubenswrapper[4698]: I1014 10:11:51.139671 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-6d74794d9b-nq8vb"] Oct 14 10:11:51 crc kubenswrapper[4698]: I1014 10:11:51.186679 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-797d478b46-kmzfd"] Oct 14 10:11:51 crc kubenswrapper[4698]: I1014 10:11:51.193112 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-ddb98f99b-ks5tw"] Oct 14 10:11:51 crc kubenswrapper[4698]: I1014 10:11:51.224653 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-59578bc799-nb7fk"] Oct 14 10:11:51 crc kubenswrapper[4698]: I1014 10:11:51.247756 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-5777b4f897-nds58"] Oct 14 10:11:51 crc kubenswrapper[4698]: I1014 10:11:51.256545 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-57bb74c7bf-c49pm"] Oct 14 10:11:51 crc kubenswrapper[4698]: I1014 10:11:51.257836 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-664664cb68-wdrpw"] Oct 14 10:11:51 crc kubenswrapper[4698]: W1014 10:11:51.264502 4698 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcca3d0fd_d9aa_428f_95f2_14238b7cf627.slice/crio-7567a4cc2b2b0b8662ae3773ce860ebd51800c1c259b37ba60a57d886720aa9c WatchSource:0}: Error finding container 7567a4cc2b2b0b8662ae3773ce860ebd51800c1c259b37ba60a57d886720aa9c: Status 404 returned error can't find the container with id 7567a4cc2b2b0b8662ae3773ce860ebd51800c1c259b37ba60a57d886720aa9c Oct 14 10:11:51 crc kubenswrapper[4698]: I1014 10:11:51.265600 4698 kubelet.go:2428] "SyncLoop 
UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-5f4d5dfdc6-7p5xj"] Oct 14 10:11:51 crc kubenswrapper[4698]: I1014 10:11:51.272935 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-869cc7797f-xpv9w"] Oct 14 10:11:51 crc kubenswrapper[4698]: W1014 10:11:51.281026 4698 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3e3e37b3_e0ed_479a_9124_aa6c814a1030.slice/crio-26e0542cae805609dbcf26c62e48e131f2b8a3e1e1fd422215ee883980033d47 WatchSource:0}: Error finding container 26e0542cae805609dbcf26c62e48e131f2b8a3e1e1fd422215ee883980033d47: Status 404 returned error can't find the container with id 26e0542cae805609dbcf26c62e48e131f2b8a3e1e1fd422215ee883980033d47 Oct 14 10:11:51 crc kubenswrapper[4698]: W1014 10:11:51.282180 4698 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4bdceb7a_7a1f_4c0b_a70d_787a610f1d3a.slice/crio-83cdf3dadf45f928c214f98606573102c60c0abfd7fe77f20eb0ad9830ac0298 WatchSource:0}: Error finding container 83cdf3dadf45f928c214f98606573102c60c0abfd7fe77f20eb0ad9830ac0298: Status 404 returned error can't find the container with id 83cdf3dadf45f928c214f98606573102c60c0abfd7fe77f20eb0ad9830ac0298 Oct 14 10:11:51 crc kubenswrapper[4698]: I1014 10:11:51.282702 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-585fc5b659-2jvxv"] Oct 14 10:11:51 crc kubenswrapper[4698]: W1014 10:11:51.283231 4698 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod28b97988_e327_4c7a_aab5_5985bf4a675d.slice/crio-720850f3a5b0f44157c868a4a99dce12dc2f51a1706ea7a51f9715ed24483013 WatchSource:0}: Error finding container 720850f3a5b0f44157c868a4a99dce12dc2f51a1706ea7a51f9715ed24483013: Status 404 
returned error can't find the container with id 720850f3a5b0f44157c868a4a99dce12dc2f51a1706ea7a51f9715ed24483013 Oct 14 10:11:51 crc kubenswrapper[4698]: E1014 10:11:51.287173 4698 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/placement-operator@sha256:d33c1f507e1f5b9a4bf226ad98917e92101ac66b36e19d35cbe04ae7014f6bff,Command:[/manager],Args:[--health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080 --leader-elect],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-7clgr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod placement-operator-controller-manager-664664cb68-wdrpw_openstack-operators(4bdceb7a-7a1f-4c0b-a70d-787a610f1d3a): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Oct 14 10:11:51 crc kubenswrapper[4698]: E1014 10:11:51.288872 4698 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/ovn-operator@sha256:315e558023b41ac1aa215082096995a03810c5b42910a33b00427ffcac9c6a14,Command:[/manager],Args:[--health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080 --leader-elect],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-2t67z,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovn-operator-controller-manager-869cc7797f-xpv9w_openstack-operators(3e3e37b3-e0ed-479a-9124-aa6c814a1030): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Oct 14 10:11:51 crc kubenswrapper[4698]: E1014 10:11:51.288949 4698 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/swift-operator@sha256:4b4a17fe08ce00e375afaaec6a28835f5c1784f03d11c4558376ac04130f3a9e,Command:[/manager],Args:[--health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080 --leader-elect],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-6f4ml,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
swift-operator-controller-manager-5f4d5dfdc6-7p5xj_openstack-operators(28b97988-e327-4c7a-aab5-5985bf4a675d): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Oct 14 10:11:51 crc kubenswrapper[4698]: I1014 10:11:51.311911 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-768cc76f8b-7jr79"] Oct 14 10:11:51 crc kubenswrapper[4698]: W1014 10:11:51.333958 4698 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podecf62cd7_15b2_4bcc_aadd_1c982c7149e7.slice/crio-216087c0639a42571f55cb0546a76152b1caf066872fd0a0581af6e8f2009f9d WatchSource:0}: Error finding container 216087c0639a42571f55cb0546a76152b1caf066872fd0a0581af6e8f2009f9d: Status 404 returned error can't find the container with id 216087c0639a42571f55cb0546a76152b1caf066872fd0a0581af6e8f2009f9d Oct 14 10:11:51 crc kubenswrapper[4698]: I1014 10:11:51.426989 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-ffcdd6c94-g5fmn"] Oct 14 10:11:51 crc kubenswrapper[4698]: I1014 10:11:51.450381 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-646675d848-n4xkf"] Oct 14 10:11:51 crc kubenswrapper[4698]: E1014 10:11:51.496932 4698 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/test-operator@sha256:7e584b1c430441c8b6591dadeff32e065de8a185ad37ef90d2e08d37e59aab4a,Command:[/manager],Args:[--health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080 
--leader-elect],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-bctvv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
test-operator-controller-manager-ffcdd6c94-g5fmn_openstack-operators(a969812c-8490-4e43-ab00-73c8254c5b21): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Oct 14 10:11:51 crc kubenswrapper[4698]: E1014 10:11:51.497370 4698 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:operator,Image:quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2,Command:[/manager],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:9782,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OPERATOR_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{200 -3} {} 200m DecimalSI},memory: {{524288000 0} {} 500Mi BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-mv8pc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cluster-operator-manager-5f97d8c699-zctmt_openstack-operators(052a38cb-bdfa-46de-ab53-e81b2f014b1d): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Oct 14 10:11:51 crc kubenswrapper[4698]: I1014 10:11:51.502798 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-5f97d8c699-zctmt"] Oct 14 10:11:51 crc kubenswrapper[4698]: E1014 10:11:51.502874 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-5f97d8c699-zctmt" podUID="052a38cb-bdfa-46de-ab53-e81b2f014b1d" Oct 14 10:11:51 crc kubenswrapper[4698]: I1014 10:11:51.524204 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-578874c84d-xbvhq"] Oct 14 10:11:51 crc kubenswrapper[4698]: I1014 10:11:51.529350 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-6cc7fb757dblfbc"] 
Oct 14 10:11:51 crc kubenswrapper[4698]: E1014 10:11:51.531234 4698 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/telemetry-operator@sha256:abe978f8da75223de5043cca50278ad4e28c8dd309883f502fe1e7a9998733b0,Command:[/manager],Args:[--health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080 --leader-elect],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-qjxfr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod telemetry-operator-controller-manager-578874c84d-xbvhq_openstack-operators(e93508a8-6ee5-4950-8cea-7c3599b7e1ec): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Oct 14 10:11:51 crc kubenswrapper[4698]: E1014 10:11:51.533040 4698 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/openstack-baremetal-operator@sha256:a17fc270857869fd1efe5020b2a1cb8c2abbd838f08de88f3a6a59e8754ec351,Command:[/manager],Args:[--health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080 
--leader-elect],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:true,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_AGENT_IMAGE_URL_DEFAULT,Value:quay.io/openstack-k8s-operators/openstack-baremetal-operator-agent:latest,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_ANSIBLEEE_IMAGE_URL_DEFAULT,Value:quay.io/openstack-k8s-operators/openstack-ansibleee-runner:latest,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_AODH_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-aodh-api:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_AODH_EVALUATOR_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-aodh-evaluator:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_AODH_LISTENER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-aodh-listener:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_AODH_NOTIFIER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-aodh-notifier:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_APACHE_IMAGE_URL_DEFAULT,Value:registry.redhat.io/ubi9/httpd-24:latest,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_BARBICAN_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_BARBICAN_KEYSTONE_LISTENER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-barbican-keystone-listener:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_BARBICAN_WORKER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-barbican-worker:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CEILOMETER_CENTRAL_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ceilometer-central:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CEILOMETER_COMPUTE_IMAGE_URL_D
EFAULT,Value:quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CEILOMETER_IPMI_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CEILOMETER_MYSQLD_EXPORTER_IMAGE_URL_DEFAULT,Value:quay.io/prometheus/mysqld-exporter:v0.15.1,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CEILOMETER_NOTIFICATION_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ceilometer-notification:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CEILOMETER_SGCORE_IMAGE_URL_DEFAULT,Value:quay.io/openstack-k8s-operators/sg-core:latest,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CINDER_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CINDER_BACKUP_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-cinder-backup:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CINDER_SCHEDULER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-cinder-scheduler:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CINDER_VOLUME_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-cinder-volume:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_DESIGNATE_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-designate-api:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_DESIGNATE_BACKENDBIND9_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-designate-backend-bind9:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_DESIGNATE_CENTRAL_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-designate-central:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_DESIGNATE_MDNS_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-designate-mdns:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_I
MAGE_DESIGNATE_PRODUCER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-designate-producer:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_DESIGNATE_UNBOUND_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-unbound:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_DESIGNATE_WORKER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-designate-worker:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_FRR_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-frr:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_ISCSID_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-iscsid:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_KEPLER_IMAGE_URL_DEFAULT,Value:quay.io/sustainable_computing_io/kepler:release-0.7.12,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_LOGROTATE_CROND_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-cron:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_MULTIPATHD_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-multipathd:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_NEUTRON_DHCP_AGENT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_NEUTRON_METADATA_AGENT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_NEUTRON_OVN_AGENT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-neutron-ovn-agent:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_NEUTRON_SRIOV_AGENT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-neutron-sriov-agent:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_NODE_EXPORTER_IMAGE_URL_DEFAULT,Value:quay.io/prometheus/node-exporter:v1.5.0,ValueFrom:nil,}
,EnvVar{Name:RELATED_IMAGE_EDPM_OVN_BGP_AGENT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ovn-bgp-agent:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_PODMAN_EXPORTER_IMAGE_URL_DEFAULT,Value:quay.io/navidys/prometheus-podman-exporter:v1.10.1,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_GLANCE_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-glance-api:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_HEAT_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-heat-api:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_HEAT_CFNAPI_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-heat-api-cfn:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_HEAT_ENGINE_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-heat-engine:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_HORIZON_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-horizon:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_INFRA_MEMCACHED_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-memcached:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_INFRA_REDIS_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-redis:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_IRONIC_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ironic-api:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_IRONIC_CONDUCTOR_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ironic-conductor:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_IRONIC_INSPECTOR_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ironic-inspector:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_IRONIC_NEUTRON_AGENT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ironic-neutron-agent:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMA
GE_IRONIC_PXE_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ironic-pxe:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_IRONIC_PYTHON_AGENT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/ironic-python-agent:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_KEYSTONE_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-keystone:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_KSM_IMAGE_URL_DEFAULT,Value:registry.k8s.io/kube-state-metrics/kube-state-metrics:v2.15.0,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_MANILA_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-manila-api:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_MANILA_SCHEDULER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-manila-scheduler:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_MANILA_SHARE_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-manila-share:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_MARIADB_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-mariadb:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_NET_UTILS_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-netutils:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_NEUTRON_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_NOVA_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-nova-api:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_NOVA_COMPUTE_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_NOVA_CONDUCTOR_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-nova-conductor:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_NOVA_NOVNC_IMAGE_URL_DEFAULT,Value:quay.io/podifi
ed-antelope-centos9/openstack-nova-novncproxy:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_NOVA_SCHEDULER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-nova-scheduler:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OCTAVIA_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-octavia-api:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OCTAVIA_HEALTHMANAGER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-octavia-health-manager:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OCTAVIA_HOUSEKEEPING_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-octavia-housekeeping:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OCTAVIA_RSYSLOG_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-rsyslog:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OCTAVIA_WORKER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-octavia-worker:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OPENSTACK_CLIENT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-openstackclient:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OPENSTACK_MUST_GATHER_DEFAULT,Value:quay.io/openstack-k8s-operators/openstack-must-gather:latest,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OPENSTACK_NETWORK_EXPORTER_IMAGE_URL_DEFAULT,Value:quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OS_CONTAINER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/edpm-hardened-uefi:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OVN_CONTROLLER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OVN_CONTROLLER_OVS_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ovn-base:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OVN_NB_DB
CLUSTER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ovn-nb-db-server:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OVN_NORTHD_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ovn-northd:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OVN_SB_DBCLUSTER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ovn-sb-db-server:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_PLACEMENT_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-placement-api:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_RABBITMQ_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_SWIFT_ACCOUNT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-swift-account:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_SWIFT_CONTAINER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-swift-container:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_SWIFT_OBJECT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-swift-object:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_SWIFT_PROXY_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-swift-proxy-server:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_TEST_TEMPEST_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_WATCHER_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-master-centos9/openstack-watcher-api:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_WATCHER_APPLIER_IMAGE_URL_DEFAULT,Value:quay.io/podified-master-centos9/openstack-watcher-applier:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_WATCHER_DECISION_ENGINE_IMAGE_URL_DEFAULT,Value:quay.io/podified-master-centos9/openstack-watcher-decision-engine:current-podified,ValueFrom:nil,},},Resour
ces:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:cert,ReadOnly:true,MountPath:/tmp/k8s-webhook-server/serving-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4qpl9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod openstack-baremetal-operator-controller-manager-6cc7fb757dblfbc_openstack-operators(2486fbf6-b25f-4bc3-932d-5ade782da654): ErrImagePull: pull QPS exceeded" 
logger="UnhandledError" Oct 14 10:11:51 crc kubenswrapper[4698]: I1014 10:11:51.674622 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6cc7fb757dblfbc" event={"ID":"2486fbf6-b25f-4bc3-932d-5ade782da654","Type":"ContainerStarted","Data":"322b7d3fcd115e1f51b699d3368cd84c6bb4f60b014cdc3b827764e7c34a3ed2"} Oct 14 10:11:51 crc kubenswrapper[4698]: I1014 10:11:51.676396 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-6d9967f8dd-52h2t" event={"ID":"004d5489-901d-4fd3-9fc3-ae0016255950","Type":"ContainerStarted","Data":"9cd2645c030d5492e6c8b7c7772c7da6e05a42062c86749912d7111373add83a"} Oct 14 10:11:51 crc kubenswrapper[4698]: I1014 10:11:51.677935 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-ffcdd6c94-g5fmn" event={"ID":"a969812c-8490-4e43-ab00-73c8254c5b21","Type":"ContainerStarted","Data":"7bd3c0bdbff595830451013a7eb863036d62b3db8bd100fd3e4c4467667fb7fb"} Oct 14 10:11:51 crc kubenswrapper[4698]: I1014 10:11:51.679494 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-59578bc799-nb7fk" event={"ID":"f55ae8f2-2a7c-4158-b125-2121c37fc874","Type":"ContainerStarted","Data":"7a8767ec762964177ba9da052a5ac04abde3b204697ff3a03bfb85b07fe40dd7"} Oct 14 10:11:51 crc kubenswrapper[4698]: I1014 10:11:51.680619 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-57bb74c7bf-c49pm" event={"ID":"84ef40a7-48ce-4d65-9e34-5ac4e4f0b0b7","Type":"ContainerStarted","Data":"916c10e27e89d1b3c028da96f7181421893b04c1163d927254e8e272dabeb12f"} Oct 14 10:11:51 crc kubenswrapper[4698]: I1014 10:11:51.681821 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-6d7c7ddf95-zfmdv" 
event={"ID":"442ecb91-0479-42a8-94ba-5be7d8cea79f","Type":"ContainerStarted","Data":"64ff0afdcc7c0a17c4613c70fb1020bc00fc0eeaf6501b5d78c2859c4c70dd1a"} Oct 14 10:11:51 crc kubenswrapper[4698]: I1014 10:11:51.689264 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-646675d848-n4xkf" event={"ID":"5b887a4b-1049-4b80-8613-89ef2f446df4","Type":"ContainerStarted","Data":"0c6a1bae9056fe8be669826a1b9b8746bc0d8a8b80227b0b4045645e3d534ef1"} Oct 14 10:11:51 crc kubenswrapper[4698]: I1014 10:11:51.694229 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-869cc7797f-xpv9w" event={"ID":"3e3e37b3-e0ed-479a-9124-aa6c814a1030","Type":"ContainerStarted","Data":"26e0542cae805609dbcf26c62e48e131f2b8a3e1e1fd422215ee883980033d47"} Oct 14 10:11:51 crc kubenswrapper[4698]: I1014 10:11:51.695742 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-ddb98f99b-ks5tw" event={"ID":"d6a101ad-e350-4964-a786-91072a6776e8","Type":"ContainerStarted","Data":"31448eec19960a6b1b35df1f66632dac9fbdd2e6daa5b86408e04fa277021c48"} Oct 14 10:11:51 crc kubenswrapper[4698]: I1014 10:11:51.696762 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-5f4d5dfdc6-7p5xj" event={"ID":"28b97988-e327-4c7a-aab5-5985bf4a675d","Type":"ContainerStarted","Data":"720850f3a5b0f44157c868a4a99dce12dc2f51a1706ea7a51f9715ed24483013"} Oct 14 10:11:51 crc kubenswrapper[4698]: I1014 10:11:51.700085 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-687df44cdb-nh4nc" event={"ID":"6d1a4e09-e83d-4634-ae32-b37666d65f61","Type":"ContainerStarted","Data":"ffb2e55e89dda1c64c9745db8c02ee50682054e6d77f047b5b520cb9423ec704"} Oct 14 10:11:51 crc kubenswrapper[4698]: I1014 10:11:51.701587 4698 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-5f97d8c699-zctmt" event={"ID":"052a38cb-bdfa-46de-ab53-e81b2f014b1d","Type":"ContainerStarted","Data":"d6f1e3404d1b0720ec89da9fd879d5a719d2da1983863d550ad3a517c8aac7e1"} Oct 14 10:11:51 crc kubenswrapper[4698]: I1014 10:11:51.704137 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-74cb5cbc49-d9qkb" event={"ID":"2547a997-b2ba-4300-92ed-09ccc57499c7","Type":"ContainerStarted","Data":"4ad3a874c2ad926b1fba345550c03b938ef4a4cfda58219659c78a6f4062b00c"} Oct 14 10:11:51 crc kubenswrapper[4698]: E1014 10:11:51.704883 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-5f97d8c699-zctmt" podUID="052a38cb-bdfa-46de-ab53-e81b2f014b1d" Oct 14 10:11:51 crc kubenswrapper[4698]: I1014 10:11:51.709406 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-797d478b46-kmzfd" event={"ID":"0660342f-b230-41a7-a2f8-44cd75696095","Type":"ContainerStarted","Data":"d956d0526a8b42c87aefcd1ce87554852e63c9cbc6f46a626a1a8265df433eae"} Oct 14 10:11:51 crc kubenswrapper[4698]: I1014 10:11:51.713932 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-768cc76f8b-7jr79" event={"ID":"ecf62cd7-15b2-4bcc-aadd-1c982c7149e7","Type":"ContainerStarted","Data":"216087c0639a42571f55cb0546a76152b1caf066872fd0a0581af6e8f2009f9d"} Oct 14 10:11:51 crc kubenswrapper[4698]: I1014 10:11:51.716036 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-578874c84d-xbvhq" 
event={"ID":"e93508a8-6ee5-4950-8cea-7c3599b7e1ec","Type":"ContainerStarted","Data":"298f4bd8eb5874cdb5beeaa03e4a4f8075e294b5abed35198a1beed6a6abe57b"} Oct 14 10:11:51 crc kubenswrapper[4698]: I1014 10:11:51.717634 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-6d74794d9b-nq8vb" event={"ID":"3482400d-0e9f-4dc5-883f-36313dc33944","Type":"ContainerStarted","Data":"76bab37bb24049197abf305bd565f8da769ea5df9d246f3101d5348b15af452d"} Oct 14 10:11:51 crc kubenswrapper[4698]: I1014 10:11:51.731247 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-585fc5b659-2jvxv" event={"ID":"4ea0ebfe-fbe9-428c-baf6-565e4dbb9044","Type":"ContainerStarted","Data":"23dec8c4cdd21378d54954fe07c5ccfcab872cfa25dd61d2342469d63786f8e2"} Oct 14 10:11:51 crc kubenswrapper[4698]: E1014 10:11:51.736877 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/placement-operator-controller-manager-664664cb68-wdrpw" podUID="4bdceb7a-7a1f-4c0b-a70d-787a610f1d3a" Oct 14 10:11:51 crc kubenswrapper[4698]: I1014 10:11:51.738289 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-664664cb68-wdrpw" event={"ID":"4bdceb7a-7a1f-4c0b-a70d-787a610f1d3a","Type":"ContainerStarted","Data":"83cdf3dadf45f928c214f98606573102c60c0abfd7fe77f20eb0ad9830ac0298"} Oct 14 10:11:51 crc kubenswrapper[4698]: I1014 10:11:51.743473 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-7bb46cd7d-f6jr7" event={"ID":"b310d6c3-527e-4a58-bc98-edcd7731b9e3","Type":"ContainerStarted","Data":"6c27b78a131f7476f5fd7bb9a6c55a540fee9193ccb405436705fdcd7f72654f"} Oct 14 10:11:51 crc kubenswrapper[4698]: E1014 10:11:51.744656 4698 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/test-operator-controller-manager-ffcdd6c94-g5fmn" podUID="a969812c-8490-4e43-ab00-73c8254c5b21" Oct 14 10:11:51 crc kubenswrapper[4698]: E1014 10:11:51.747011 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/ovn-operator-controller-manager-869cc7797f-xpv9w" podUID="3e3e37b3-e0ed-479a-9124-aa6c814a1030" Oct 14 10:11:51 crc kubenswrapper[4698]: I1014 10:11:51.748870 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-5777b4f897-nds58" event={"ID":"cca3d0fd-d9aa-428f-95f2-14238b7cf627","Type":"ContainerStarted","Data":"7567a4cc2b2b0b8662ae3773ce860ebd51800c1c259b37ba60a57d886720aa9c"} Oct 14 10:11:51 crc kubenswrapper[4698]: E1014 10:11:51.757898 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/telemetry-operator-controller-manager-578874c84d-xbvhq" podUID="e93508a8-6ee5-4950-8cea-7c3599b7e1ec" Oct 14 10:11:51 crc kubenswrapper[4698]: E1014 10:11:51.784242 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6cc7fb757dblfbc" podUID="2486fbf6-b25f-4bc3-932d-5ade782da654" Oct 14 10:11:51 crc kubenswrapper[4698]: E1014 10:11:51.828931 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/swift-operator-controller-manager-5f4d5dfdc6-7p5xj" podUID="28b97988-e327-4c7a-aab5-5985bf4a675d" Oct 14 10:11:52 crc kubenswrapper[4698]: I1014 10:11:52.767396 4698 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-869cc7797f-xpv9w" event={"ID":"3e3e37b3-e0ed-479a-9124-aa6c814a1030","Type":"ContainerStarted","Data":"5900a7d3140f53cc0a3c2cfa55fbe1ff0727ea9e27aa4a2a4b62f71706582a65"} Oct 14 10:11:52 crc kubenswrapper[4698]: I1014 10:11:52.770334 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-664664cb68-wdrpw" event={"ID":"4bdceb7a-7a1f-4c0b-a70d-787a610f1d3a","Type":"ContainerStarted","Data":"0f78de20860aa252013d3dfd00d941980d8d8eb2b7953207854bb8abff4f3129"} Oct 14 10:11:52 crc kubenswrapper[4698]: E1014 10:11:52.770891 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/ovn-operator@sha256:315e558023b41ac1aa215082096995a03810c5b42910a33b00427ffcac9c6a14\\\"\"" pod="openstack-operators/ovn-operator-controller-manager-869cc7797f-xpv9w" podUID="3e3e37b3-e0ed-479a-9124-aa6c814a1030" Oct 14 10:11:52 crc kubenswrapper[4698]: I1014 10:11:52.779604 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6cc7fb757dblfbc" event={"ID":"2486fbf6-b25f-4bc3-932d-5ade782da654","Type":"ContainerStarted","Data":"64d0e85d1f824b701c2ee815e05de0f03eee2820eae81ff4edd772c6e01872ec"} Oct 14 10:11:52 crc kubenswrapper[4698]: E1014 10:11:52.781349 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/placement-operator@sha256:d33c1f507e1f5b9a4bf226ad98917e92101ac66b36e19d35cbe04ae7014f6bff\\\"\"" pod="openstack-operators/placement-operator-controller-manager-664664cb68-wdrpw" podUID="4bdceb7a-7a1f-4c0b-a70d-787a610f1d3a" Oct 14 10:11:52 crc kubenswrapper[4698]: I1014 10:11:52.782008 4698 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-5f4d5dfdc6-7p5xj" event={"ID":"28b97988-e327-4c7a-aab5-5985bf4a675d","Type":"ContainerStarted","Data":"d9a022b0e93d8a2f954b14ee1c6f09d53c5ee86ceb6fc6d8cea49f434d400df1"} Oct 14 10:11:52 crc kubenswrapper[4698]: E1014 10:11:52.783860 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/swift-operator@sha256:4b4a17fe08ce00e375afaaec6a28835f5c1784f03d11c4558376ac04130f3a9e\\\"\"" pod="openstack-operators/swift-operator-controller-manager-5f4d5dfdc6-7p5xj" podUID="28b97988-e327-4c7a-aab5-5985bf4a675d" Oct 14 10:11:52 crc kubenswrapper[4698]: E1014 10:11:52.789987 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/openstack-baremetal-operator@sha256:a17fc270857869fd1efe5020b2a1cb8c2abbd838f08de88f3a6a59e8754ec351\\\"\"" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6cc7fb757dblfbc" podUID="2486fbf6-b25f-4bc3-932d-5ade782da654" Oct 14 10:11:52 crc kubenswrapper[4698]: I1014 10:11:52.802356 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-768cc76f8b-7jr79" event={"ID":"ecf62cd7-15b2-4bcc-aadd-1c982c7149e7","Type":"ContainerStarted","Data":"c45b1ceba2bfc3266ed677dd565815fd7ceee7634c32e6538beb37488a5fb8ed"} Oct 14 10:11:52 crc kubenswrapper[4698]: I1014 10:11:52.805806 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-578874c84d-xbvhq" event={"ID":"e93508a8-6ee5-4950-8cea-7c3599b7e1ec","Type":"ContainerStarted","Data":"581f08a8bf4e830dd907f7493058fd67bdd889e0fb1e8e9754dd6196943b28a1"} Oct 14 10:11:52 crc kubenswrapper[4698]: E1014 
10:11:52.822915 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/telemetry-operator@sha256:abe978f8da75223de5043cca50278ad4e28c8dd309883f502fe1e7a9998733b0\\\"\"" pod="openstack-operators/telemetry-operator-controller-manager-578874c84d-xbvhq" podUID="e93508a8-6ee5-4950-8cea-7c3599b7e1ec" Oct 14 10:11:52 crc kubenswrapper[4698]: I1014 10:11:52.831624 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-ffcdd6c94-g5fmn" event={"ID":"a969812c-8490-4e43-ab00-73c8254c5b21","Type":"ContainerStarted","Data":"70f898b67a8af0a036d880a6bc9d7180646dd32215c88d7619c3004c0527a628"} Oct 14 10:11:52 crc kubenswrapper[4698]: E1014 10:11:52.839306 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-5f97d8c699-zctmt" podUID="052a38cb-bdfa-46de-ab53-e81b2f014b1d" Oct 14 10:11:52 crc kubenswrapper[4698]: E1014 10:11:52.840069 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:7e584b1c430441c8b6591dadeff32e065de8a185ad37ef90d2e08d37e59aab4a\\\"\"" pod="openstack-operators/test-operator-controller-manager-ffcdd6c94-g5fmn" podUID="a969812c-8490-4e43-ab00-73c8254c5b21" Oct 14 10:11:53 crc kubenswrapper[4698]: I1014 10:11:53.860506 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-768cc76f8b-7jr79" 
event={"ID":"ecf62cd7-15b2-4bcc-aadd-1c982c7149e7","Type":"ContainerStarted","Data":"8fc73e32ccb1698dd70b4a269fdd8d8af8de1669c43b82331d31fb2c7bbd8e28"} Oct 14 10:11:53 crc kubenswrapper[4698]: I1014 10:11:53.860564 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-manager-768cc76f8b-7jr79" Oct 14 10:11:53 crc kubenswrapper[4698]: E1014 10:11:53.864608 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/ovn-operator@sha256:315e558023b41ac1aa215082096995a03810c5b42910a33b00427ffcac9c6a14\\\"\"" pod="openstack-operators/ovn-operator-controller-manager-869cc7797f-xpv9w" podUID="3e3e37b3-e0ed-479a-9124-aa6c814a1030" Oct 14 10:11:53 crc kubenswrapper[4698]: E1014 10:11:53.864688 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/placement-operator@sha256:d33c1f507e1f5b9a4bf226ad98917e92101ac66b36e19d35cbe04ae7014f6bff\\\"\"" pod="openstack-operators/placement-operator-controller-manager-664664cb68-wdrpw" podUID="4bdceb7a-7a1f-4c0b-a70d-787a610f1d3a" Oct 14 10:11:53 crc kubenswrapper[4698]: E1014 10:11:53.864744 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:7e584b1c430441c8b6591dadeff32e065de8a185ad37ef90d2e08d37e59aab4a\\\"\"" pod="openstack-operators/test-operator-controller-manager-ffcdd6c94-g5fmn" podUID="a969812c-8490-4e43-ab00-73c8254c5b21" Oct 14 10:11:53 crc kubenswrapper[4698]: E1014 10:11:53.864854 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/openstack-k8s-operators/telemetry-operator@sha256:abe978f8da75223de5043cca50278ad4e28c8dd309883f502fe1e7a9998733b0\\\"\"" pod="openstack-operators/telemetry-operator-controller-manager-578874c84d-xbvhq" podUID="e93508a8-6ee5-4950-8cea-7c3599b7e1ec" Oct 14 10:11:53 crc kubenswrapper[4698]: E1014 10:11:53.864905 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/openstack-baremetal-operator@sha256:a17fc270857869fd1efe5020b2a1cb8c2abbd838f08de88f3a6a59e8754ec351\\\"\"" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6cc7fb757dblfbc" podUID="2486fbf6-b25f-4bc3-932d-5ade782da654" Oct 14 10:11:53 crc kubenswrapper[4698]: E1014 10:11:53.865397 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/swift-operator@sha256:4b4a17fe08ce00e375afaaec6a28835f5c1784f03d11c4558376ac04130f3a9e\\\"\"" pod="openstack-operators/swift-operator-controller-manager-5f4d5dfdc6-7p5xj" podUID="28b97988-e327-4c7a-aab5-5985bf4a675d" Oct 14 10:11:53 crc kubenswrapper[4698]: I1014 10:11:53.918943 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-manager-768cc76f8b-7jr79" podStartSLOduration=4.918894986 podStartE2EDuration="4.918894986s" podCreationTimestamp="2025-10-14 10:11:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 10:11:53.918089263 +0000 UTC m=+895.615388699" watchObservedRunningTime="2025-10-14 10:11:53.918894986 +0000 UTC m=+895.616194402" Oct 14 10:12:00 crc kubenswrapper[4698]: I1014 10:12:00.905942 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack-operators/openstack-operator-controller-manager-768cc76f8b-7jr79" Oct 14 10:12:03 crc kubenswrapper[4698]: I1014 10:12:03.963658 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-7bb46cd7d-f6jr7" event={"ID":"b310d6c3-527e-4a58-bc98-edcd7731b9e3","Type":"ContainerStarted","Data":"159c389671e3e63cd76da990f18230c400f1b547f60b918bd162b510eae0c6ca"} Oct 14 10:12:05 crc kubenswrapper[4698]: I1014 10:12:05.044903 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-59cdc64769-nh6c5" event={"ID":"f91fec87-379e-4c52-9d03-b56841232184","Type":"ContainerStarted","Data":"0c78580676ef94f16fe33a8162f81364589959bf00e39aa0853a9e5f77685501"} Oct 14 10:12:05 crc kubenswrapper[4698]: I1014 10:12:05.070797 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-64f84fcdbb-5wlw6" event={"ID":"24d6e9c5-aad5-4856-a7b7-20e04553c864","Type":"ContainerStarted","Data":"29aa26f4fd4bcf292be700839673610cb9a437e0c818263e0db71d12de1645b1"} Oct 14 10:12:05 crc kubenswrapper[4698]: I1014 10:12:05.070947 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-64f84fcdbb-5wlw6" event={"ID":"24d6e9c5-aad5-4856-a7b7-20e04553c864","Type":"ContainerStarted","Data":"a01e3e73d01e464f9dd4be19d33a01a4630a8e8425fe955ab2a32ae092dc55b2"} Oct 14 10:12:05 crc kubenswrapper[4698]: I1014 10:12:05.071504 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/barbican-operator-controller-manager-64f84fcdbb-5wlw6" Oct 14 10:12:05 crc kubenswrapper[4698]: I1014 10:12:05.093117 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-6d7c7ddf95-zfmdv" 
event={"ID":"442ecb91-0479-42a8-94ba-5be7d8cea79f","Type":"ContainerStarted","Data":"e15b6827bfa42fca165a05f5244e77a694085a921181d7f82de36c95ed6cbbd8"} Oct 14 10:12:05 crc kubenswrapper[4698]: I1014 10:12:05.131916 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-6d9967f8dd-52h2t" event={"ID":"004d5489-901d-4fd3-9fc3-ae0016255950","Type":"ContainerStarted","Data":"05c34a9211f8a322923d9d2b8e5a8c6d90900529187a387f6e4d5c3344371cd6"} Oct 14 10:12:05 crc kubenswrapper[4698]: I1014 10:12:05.131985 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-6d9967f8dd-52h2t" event={"ID":"004d5489-901d-4fd3-9fc3-ae0016255950","Type":"ContainerStarted","Data":"c428c7121a748cdf59be8ba8ed3a9229c9c295dd41af88e9b1befea26d2459aa"} Oct 14 10:12:05 crc kubenswrapper[4698]: I1014 10:12:05.132995 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/heat-operator-controller-manager-6d9967f8dd-52h2t" Oct 14 10:12:05 crc kubenswrapper[4698]: I1014 10:12:05.145802 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/barbican-operator-controller-manager-64f84fcdbb-5wlw6" podStartSLOduration=2.941573532 podStartE2EDuration="16.145783689s" podCreationTimestamp="2025-10-14 10:11:49 +0000 UTC" firstStartedPulling="2025-10-14 10:11:50.432290833 +0000 UTC m=+892.129590249" lastFinishedPulling="2025-10-14 10:12:03.63650095 +0000 UTC m=+905.333800406" observedRunningTime="2025-10-14 10:12:05.143594717 +0000 UTC m=+906.840894143" watchObservedRunningTime="2025-10-14 10:12:05.145783689 +0000 UTC m=+906.843083095" Oct 14 10:12:05 crc kubenswrapper[4698]: I1014 10:12:05.146185 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-585fc5b659-2jvxv" 
event={"ID":"4ea0ebfe-fbe9-428c-baf6-565e4dbb9044","Type":"ContainerStarted","Data":"488dd845d3b433e0a3c6d7398c4f933c722cedf29ced43103cd99f87a0ae3b3d"} Oct 14 10:12:05 crc kubenswrapper[4698]: I1014 10:12:05.168976 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-59578bc799-nb7fk" event={"ID":"f55ae8f2-2a7c-4158-b125-2121c37fc874","Type":"ContainerStarted","Data":"31957f3c178988cfb7a357f68d876629ae70c8216b434c7d10e600790a96e161"} Oct 14 10:12:05 crc kubenswrapper[4698]: I1014 10:12:05.169025 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-59578bc799-nb7fk" event={"ID":"f55ae8f2-2a7c-4158-b125-2121c37fc874","Type":"ContainerStarted","Data":"8360aec5ed58ebd7e3e473202a75e3d3c25dede376ed311d44ffb9803211bc27"} Oct 14 10:12:05 crc kubenswrapper[4698]: I1014 10:12:05.169671 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/manila-operator-controller-manager-59578bc799-nb7fk" Oct 14 10:12:05 crc kubenswrapper[4698]: I1014 10:12:05.185941 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-57bb74c7bf-c49pm" event={"ID":"84ef40a7-48ce-4d65-9e34-5ac4e4f0b0b7","Type":"ContainerStarted","Data":"84ea1dffa6fef742c47722afec8cd30cf9453fc9c1feecff185358b65ba8cd4e"} Oct 14 10:12:05 crc kubenswrapper[4698]: I1014 10:12:05.186805 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/heat-operator-controller-manager-6d9967f8dd-52h2t" podStartSLOduration=3.468109067 podStartE2EDuration="16.186793443s" podCreationTimestamp="2025-10-14 10:11:49 +0000 UTC" firstStartedPulling="2025-10-14 10:11:50.83875637 +0000 UTC m=+892.536055786" lastFinishedPulling="2025-10-14 10:12:03.557440746 +0000 UTC m=+905.254740162" observedRunningTime="2025-10-14 10:12:05.186624318 +0000 UTC m=+906.883923734" 
watchObservedRunningTime="2025-10-14 10:12:05.186793443 +0000 UTC m=+906.884092859" Oct 14 10:12:05 crc kubenswrapper[4698]: I1014 10:12:05.205033 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-5777b4f897-nds58" event={"ID":"cca3d0fd-d9aa-428f-95f2-14238b7cf627","Type":"ContainerStarted","Data":"2e149361de054de79e7be0676d6b842ee5f78d50fe727872b87dd345e039ed89"} Oct 14 10:12:05 crc kubenswrapper[4698]: I1014 10:12:05.223172 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-797d478b46-kmzfd" event={"ID":"0660342f-b230-41a7-a2f8-44cd75696095","Type":"ContainerStarted","Data":"4404dded1dac51d881d1f2cd7f3b3df9dcee8fb7a3893f0394a3f5cf2fba8245"} Oct 14 10:12:05 crc kubenswrapper[4698]: I1014 10:12:05.224237 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/neutron-operator-controller-manager-797d478b46-kmzfd" Oct 14 10:12:05 crc kubenswrapper[4698]: I1014 10:12:05.246965 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-646675d848-n4xkf" event={"ID":"5b887a4b-1049-4b80-8613-89ef2f446df4","Type":"ContainerStarted","Data":"10c204f08902f725f39aa5b75c529dddef068cfb85137ec1933f908ce509e4a1"} Oct 14 10:12:05 crc kubenswrapper[4698]: I1014 10:12:05.247447 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/watcher-operator-controller-manager-646675d848-n4xkf" Oct 14 10:12:05 crc kubenswrapper[4698]: I1014 10:12:05.263835 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/manila-operator-controller-manager-59578bc799-nb7fk" podStartSLOduration=3.8727138009999997 podStartE2EDuration="16.263817439s" podCreationTimestamp="2025-10-14 10:11:49 +0000 UTC" firstStartedPulling="2025-10-14 10:11:51.245358421 +0000 UTC m=+892.942657837" 
lastFinishedPulling="2025-10-14 10:12:03.636462049 +0000 UTC m=+905.333761475" observedRunningTime="2025-10-14 10:12:05.259180998 +0000 UTC m=+906.956480434" watchObservedRunningTime="2025-10-14 10:12:05.263817439 +0000 UTC m=+906.961116855" Oct 14 10:12:05 crc kubenswrapper[4698]: I1014 10:12:05.264271 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-6d74794d9b-nq8vb" event={"ID":"3482400d-0e9f-4dc5-883f-36313dc33944","Type":"ContainerStarted","Data":"a40dba85f2e86f6f34ad0d651362c534f24757a5527fb0c0fd9c948829b4b6d0"} Oct 14 10:12:05 crc kubenswrapper[4698]: I1014 10:12:05.264351 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-6d74794d9b-nq8vb" event={"ID":"3482400d-0e9f-4dc5-883f-36313dc33944","Type":"ContainerStarted","Data":"2fa80ec7aaf8f5cc90195ae8b43bb7dd49142f0f69dcc58fc18d07d4345ca327"} Oct 14 10:12:05 crc kubenswrapper[4698]: I1014 10:12:05.265527 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/horizon-operator-controller-manager-6d74794d9b-nq8vb" Oct 14 10:12:05 crc kubenswrapper[4698]: I1014 10:12:05.277604 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-ddb98f99b-ks5tw" event={"ID":"d6a101ad-e350-4964-a786-91072a6776e8","Type":"ContainerStarted","Data":"05d7616114821ac4143e152c0b5d28259053ead712f2cb50ab5408903568e274"} Oct 14 10:12:05 crc kubenswrapper[4698]: I1014 10:12:05.277668 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-ddb98f99b-ks5tw" event={"ID":"d6a101ad-e350-4964-a786-91072a6776e8","Type":"ContainerStarted","Data":"e7b6935f300c94c44db211af21b6a59f5d77ef28f9749782fea07d1118f5dca2"} Oct 14 10:12:05 crc kubenswrapper[4698]: I1014 10:12:05.278609 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack-operators/keystone-operator-controller-manager-ddb98f99b-ks5tw" Oct 14 10:12:05 crc kubenswrapper[4698]: I1014 10:12:05.293116 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-7bb46cd7d-f6jr7" event={"ID":"b310d6c3-527e-4a58-bc98-edcd7731b9e3","Type":"ContainerStarted","Data":"e4f99d9855b8643a3910bc3dd38d540ee6f79fca37b4feaa05f15a4ffa193bb5"} Oct 14 10:12:05 crc kubenswrapper[4698]: I1014 10:12:05.293936 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/glance-operator-controller-manager-7bb46cd7d-f6jr7" Oct 14 10:12:05 crc kubenswrapper[4698]: I1014 10:12:05.313335 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-687df44cdb-nh4nc" event={"ID":"6d1a4e09-e83d-4634-ae32-b37666d65f61","Type":"ContainerStarted","Data":"437785420b25cf6527c15f4a532449427240d1c67a9dcb3d2c916a00b685fb36"} Oct 14 10:12:05 crc kubenswrapper[4698]: I1014 10:12:05.314196 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/designate-operator-controller-manager-687df44cdb-nh4nc" Oct 14 10:12:05 crc kubenswrapper[4698]: I1014 10:12:05.326661 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-74cb5cbc49-d9qkb" event={"ID":"2547a997-b2ba-4300-92ed-09ccc57499c7","Type":"ContainerStarted","Data":"f385b40e5e44cf81f584d1fd79613584c2246add57d65a317ec060a6e15bc450"} Oct 14 10:12:05 crc kubenswrapper[4698]: I1014 10:12:05.387324 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/neutron-operator-controller-manager-797d478b46-kmzfd" podStartSLOduration=3.998556373 podStartE2EDuration="16.387303004s" podCreationTimestamp="2025-10-14 10:11:49 +0000 UTC" firstStartedPulling="2025-10-14 10:11:51.247750909 +0000 UTC m=+892.945050325" lastFinishedPulling="2025-10-14 
10:12:03.63649754 +0000 UTC m=+905.333796956" observedRunningTime="2025-10-14 10:12:05.348268406 +0000 UTC m=+907.045567852" watchObservedRunningTime="2025-10-14 10:12:05.387303004 +0000 UTC m=+907.084602420" Oct 14 10:12:05 crc kubenswrapper[4698]: I1014 10:12:05.391074 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/watcher-operator-controller-manager-646675d848-n4xkf" podStartSLOduration=4.2376518 podStartE2EDuration="16.391057671s" podCreationTimestamp="2025-10-14 10:11:49 +0000 UTC" firstStartedPulling="2025-10-14 10:11:51.485727974 +0000 UTC m=+893.183027390" lastFinishedPulling="2025-10-14 10:12:03.639133845 +0000 UTC m=+905.336433261" observedRunningTime="2025-10-14 10:12:05.386182102 +0000 UTC m=+907.083481518" watchObservedRunningTime="2025-10-14 10:12:05.391057671 +0000 UTC m=+907.088357087" Oct 14 10:12:05 crc kubenswrapper[4698]: I1014 10:12:05.418140 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/glance-operator-controller-manager-7bb46cd7d-f6jr7" podStartSLOduration=3.482885927 podStartE2EDuration="16.418121459s" podCreationTimestamp="2025-10-14 10:11:49 +0000 UTC" firstStartedPulling="2025-10-14 10:11:50.701233887 +0000 UTC m=+892.398533303" lastFinishedPulling="2025-10-14 10:12:03.636469409 +0000 UTC m=+905.333768835" observedRunningTime="2025-10-14 10:12:05.415200516 +0000 UTC m=+907.112499932" watchObservedRunningTime="2025-10-14 10:12:05.418121459 +0000 UTC m=+907.115420885" Oct 14 10:12:05 crc kubenswrapper[4698]: I1014 10:12:05.504935 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/horizon-operator-controller-manager-6d74794d9b-nq8vb" podStartSLOduration=4.018412017 podStartE2EDuration="16.504920873s" podCreationTimestamp="2025-10-14 10:11:49 +0000 UTC" firstStartedPulling="2025-10-14 10:11:51.146045522 +0000 UTC m=+892.843344938" lastFinishedPulling="2025-10-14 10:12:03.632554348 +0000 UTC 
m=+905.329853794" observedRunningTime="2025-10-14 10:12:05.444893589 +0000 UTC m=+907.142193015" watchObservedRunningTime="2025-10-14 10:12:05.504920873 +0000 UTC m=+907.202220289" Oct 14 10:12:05 crc kubenswrapper[4698]: I1014 10:12:05.556042 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/keystone-operator-controller-manager-ddb98f99b-ks5tw" podStartSLOduration=4.176485954 podStartE2EDuration="16.556017593s" podCreationTimestamp="2025-10-14 10:11:49 +0000 UTC" firstStartedPulling="2025-10-14 10:11:51.258596937 +0000 UTC m=+892.955896353" lastFinishedPulling="2025-10-14 10:12:03.638128576 +0000 UTC m=+905.335427992" observedRunningTime="2025-10-14 10:12:05.555513019 +0000 UTC m=+907.252812435" watchObservedRunningTime="2025-10-14 10:12:05.556017593 +0000 UTC m=+907.253316999" Oct 14 10:12:05 crc kubenswrapper[4698]: I1014 10:12:05.559380 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/designate-operator-controller-manager-687df44cdb-nh4nc" podStartSLOduration=3.62040739 podStartE2EDuration="16.559365928s" podCreationTimestamp="2025-10-14 10:11:49 +0000 UTC" firstStartedPulling="2025-10-14 10:11:50.700844026 +0000 UTC m=+892.398143442" lastFinishedPulling="2025-10-14 10:12:03.639802544 +0000 UTC m=+905.337101980" observedRunningTime="2025-10-14 10:12:05.510626225 +0000 UTC m=+907.207925661" watchObservedRunningTime="2025-10-14 10:12:05.559365928 +0000 UTC m=+907.256665344" Oct 14 10:12:06 crc kubenswrapper[4698]: I1014 10:12:06.341018 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-797d478b46-kmzfd" event={"ID":"0660342f-b230-41a7-a2f8-44cd75696095","Type":"ContainerStarted","Data":"a1dc1c652c907bba7f30f91348168eedf2512ec8a65f7f985d16c088e1adb519"} Oct 14 10:12:06 crc kubenswrapper[4698]: I1014 10:12:06.345695 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/designate-operator-controller-manager-687df44cdb-nh4nc" event={"ID":"6d1a4e09-e83d-4634-ae32-b37666d65f61","Type":"ContainerStarted","Data":"6b637832ef40a330d3f896eb56bb9ebd1b4d60b3d560dcbbcbbb140cd6b0aea5"} Oct 14 10:12:06 crc kubenswrapper[4698]: I1014 10:12:06.348634 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-646675d848-n4xkf" event={"ID":"5b887a4b-1049-4b80-8613-89ef2f446df4","Type":"ContainerStarted","Data":"01b93eef4dbae134232f9d36531ab961f9f9f345c88fdb2e37ce15bfc1d088f9"} Oct 14 10:12:06 crc kubenswrapper[4698]: I1014 10:12:06.352252 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-74cb5cbc49-d9qkb" event={"ID":"2547a997-b2ba-4300-92ed-09ccc57499c7","Type":"ContainerStarted","Data":"f5e6ad267acf16a655b62e5b5e462e705353af705539d1b112e26effb925eaab"} Oct 14 10:12:06 crc kubenswrapper[4698]: I1014 10:12:06.353395 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ironic-operator-controller-manager-74cb5cbc49-d9qkb" Oct 14 10:12:06 crc kubenswrapper[4698]: I1014 10:12:06.355396 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-585fc5b659-2jvxv" event={"ID":"4ea0ebfe-fbe9-428c-baf6-565e4dbb9044","Type":"ContainerStarted","Data":"fc40faa2d2e77fe82cd6afc2f6689f6bb77d91354722a17c7a72f76026dea06c"} Oct 14 10:12:06 crc kubenswrapper[4698]: I1014 10:12:06.356026 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-585fc5b659-2jvxv" Oct 14 10:12:06 crc kubenswrapper[4698]: I1014 10:12:06.359901 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-5777b4f897-nds58" 
event={"ID":"cca3d0fd-d9aa-428f-95f2-14238b7cf627","Type":"ContainerStarted","Data":"1841f18e1840382e15795b80ca3eefda70da8a48cbe4446a3f5a3cac7d7e3a69"} Oct 14 10:12:06 crc kubenswrapper[4698]: I1014 10:12:06.360498 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/mariadb-operator-controller-manager-5777b4f897-nds58" Oct 14 10:12:06 crc kubenswrapper[4698]: I1014 10:12:06.364318 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-6d7c7ddf95-zfmdv" event={"ID":"442ecb91-0479-42a8-94ba-5be7d8cea79f","Type":"ContainerStarted","Data":"4cd3d0de67118b9ca43f0ef632fe5b9eabf874a1f6088284d80a1a954724d54d"} Oct 14 10:12:06 crc kubenswrapper[4698]: I1014 10:12:06.364444 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/octavia-operator-controller-manager-6d7c7ddf95-zfmdv" Oct 14 10:12:06 crc kubenswrapper[4698]: I1014 10:12:06.367469 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-57bb74c7bf-c49pm" event={"ID":"84ef40a7-48ce-4d65-9e34-5ac4e4f0b0b7","Type":"ContainerStarted","Data":"545655bd7bd36665ca939d438531a78f3dac7df1180e9a4f401b73740bbcc5bf"} Oct 14 10:12:06 crc kubenswrapper[4698]: I1014 10:12:06.368628 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/nova-operator-controller-manager-57bb74c7bf-c49pm" Oct 14 10:12:06 crc kubenswrapper[4698]: I1014 10:12:06.382726 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-59cdc64769-nh6c5" event={"ID":"f91fec87-379e-4c52-9d03-b56841232184","Type":"ContainerStarted","Data":"893e013101ee6d8141454f938c226281a2e77d1451be3eeb33ce89f567e2466b"} Oct 14 10:12:06 crc kubenswrapper[4698]: I1014 10:12:06.382831 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack-operators/cinder-operator-controller-manager-59cdc64769-nh6c5" Oct 14 10:12:06 crc kubenswrapper[4698]: I1014 10:12:06.389514 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ironic-operator-controller-manager-74cb5cbc49-d9qkb" podStartSLOduration=4.994286227 podStartE2EDuration="17.389495821s" podCreationTimestamp="2025-10-14 10:11:49 +0000 UTC" firstStartedPulling="2025-10-14 10:11:51.162229112 +0000 UTC m=+892.859528528" lastFinishedPulling="2025-10-14 10:12:03.557438716 +0000 UTC m=+905.254738122" observedRunningTime="2025-10-14 10:12:06.369156963 +0000 UTC m=+908.066456399" watchObservedRunningTime="2025-10-14 10:12:06.389495821 +0000 UTC m=+908.086795237" Oct 14 10:12:06 crc kubenswrapper[4698]: I1014 10:12:06.395431 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/octavia-operator-controller-manager-6d7c7ddf95-zfmdv" podStartSLOduration=4.728613005 podStartE2EDuration="17.395410838s" podCreationTimestamp="2025-10-14 10:11:49 +0000 UTC" firstStartedPulling="2025-10-14 10:11:50.972647771 +0000 UTC m=+892.669947187" lastFinishedPulling="2025-10-14 10:12:03.639445584 +0000 UTC m=+905.336745020" observedRunningTime="2025-10-14 10:12:06.390174 +0000 UTC m=+908.087473426" watchObservedRunningTime="2025-10-14 10:12:06.395410838 +0000 UTC m=+908.092710244" Oct 14 10:12:06 crc kubenswrapper[4698]: I1014 10:12:06.424290 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/infra-operator-controller-manager-585fc5b659-2jvxv" podStartSLOduration=5.06278171 podStartE2EDuration="17.424252947s" podCreationTimestamp="2025-10-14 10:11:49 +0000 UTC" firstStartedPulling="2025-10-14 10:11:51.271080971 +0000 UTC m=+892.968380387" lastFinishedPulling="2025-10-14 10:12:03.632552198 +0000 UTC m=+905.329851624" observedRunningTime="2025-10-14 10:12:06.41943405 +0000 UTC m=+908.116733476" watchObservedRunningTime="2025-10-14 
10:12:06.424252947 +0000 UTC m=+908.121552363" Oct 14 10:12:06 crc kubenswrapper[4698]: I1014 10:12:06.446556 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/mariadb-operator-controller-manager-5777b4f897-nds58" podStartSLOduration=5.075183473 podStartE2EDuration="17.44653916s" podCreationTimestamp="2025-10-14 10:11:49 +0000 UTC" firstStartedPulling="2025-10-14 10:11:51.265878604 +0000 UTC m=+892.963178020" lastFinishedPulling="2025-10-14 10:12:03.637234291 +0000 UTC m=+905.334533707" observedRunningTime="2025-10-14 10:12:06.444364048 +0000 UTC m=+908.141663484" watchObservedRunningTime="2025-10-14 10:12:06.44653916 +0000 UTC m=+908.143838566" Oct 14 10:12:06 crc kubenswrapper[4698]: I1014 10:12:06.472505 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/nova-operator-controller-manager-57bb74c7bf-c49pm" podStartSLOduration=5.060690102 podStartE2EDuration="17.472469496s" podCreationTimestamp="2025-10-14 10:11:49 +0000 UTC" firstStartedPulling="2025-10-14 10:11:51.256816417 +0000 UTC m=+892.954115833" lastFinishedPulling="2025-10-14 10:12:03.668595811 +0000 UTC m=+905.365895227" observedRunningTime="2025-10-14 10:12:06.465352964 +0000 UTC m=+908.162652390" watchObservedRunningTime="2025-10-14 10:12:06.472469496 +0000 UTC m=+908.169768912" Oct 14 10:12:06 crc kubenswrapper[4698]: I1014 10:12:06.498470 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/cinder-operator-controller-manager-59cdc64769-nh6c5" podStartSLOduration=4.431697948 podStartE2EDuration="17.498445273s" podCreationTimestamp="2025-10-14 10:11:49 +0000 UTC" firstStartedPulling="2025-10-14 10:11:50.575675933 +0000 UTC m=+892.272975349" lastFinishedPulling="2025-10-14 10:12:03.642423248 +0000 UTC m=+905.339722674" observedRunningTime="2025-10-14 10:12:06.493161393 +0000 UTC m=+908.190460809" watchObservedRunningTime="2025-10-14 10:12:06.498445273 +0000 UTC 
m=+908.195744689" Oct 14 10:12:09 crc kubenswrapper[4698]: I1014 10:12:09.454523 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/barbican-operator-controller-manager-64f84fcdbb-5wlw6" Oct 14 10:12:09 crc kubenswrapper[4698]: I1014 10:12:09.494720 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/cinder-operator-controller-manager-59cdc64769-nh6c5" Oct 14 10:12:09 crc kubenswrapper[4698]: I1014 10:12:09.497464 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/designate-operator-controller-manager-687df44cdb-nh4nc" Oct 14 10:12:09 crc kubenswrapper[4698]: I1014 10:12:09.534184 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/glance-operator-controller-manager-7bb46cd7d-f6jr7" Oct 14 10:12:09 crc kubenswrapper[4698]: I1014 10:12:09.569277 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/heat-operator-controller-manager-6d9967f8dd-52h2t" Oct 14 10:12:09 crc kubenswrapper[4698]: I1014 10:12:09.575315 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/horizon-operator-controller-manager-6d74794d9b-nq8vb" Oct 14 10:12:09 crc kubenswrapper[4698]: I1014 10:12:09.584251 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/mariadb-operator-controller-manager-5777b4f897-nds58" Oct 14 10:12:09 crc kubenswrapper[4698]: I1014 10:12:09.620284 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ironic-operator-controller-manager-74cb5cbc49-d9qkb" Oct 14 10:12:09 crc kubenswrapper[4698]: I1014 10:12:09.643251 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/keystone-operator-controller-manager-ddb98f99b-ks5tw" Oct 14 10:12:09 crc kubenswrapper[4698]: 
I1014 10:12:09.864354 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/manila-operator-controller-manager-59578bc799-nb7fk" Oct 14 10:12:09 crc kubenswrapper[4698]: I1014 10:12:09.908309 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/neutron-operator-controller-manager-797d478b46-kmzfd" Oct 14 10:12:09 crc kubenswrapper[4698]: I1014 10:12:09.932941 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/nova-operator-controller-manager-57bb74c7bf-c49pm" Oct 14 10:12:09 crc kubenswrapper[4698]: I1014 10:12:09.942338 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/octavia-operator-controller-manager-6d7c7ddf95-zfmdv" Oct 14 10:12:10 crc kubenswrapper[4698]: I1014 10:12:10.193634 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/infra-operator-controller-manager-585fc5b659-2jvxv" Oct 14 10:12:10 crc kubenswrapper[4698]: I1014 10:12:10.283739 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/watcher-operator-controller-manager-646675d848-n4xkf" Oct 14 10:12:10 crc kubenswrapper[4698]: I1014 10:12:10.423579 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-ffcdd6c94-g5fmn" event={"ID":"a969812c-8490-4e43-ab00-73c8254c5b21","Type":"ContainerStarted","Data":"985122a8eb16054459edc1c51215de7f7cb0ca73a7c952f584cd824bad77f78b"} Oct 14 10:12:10 crc kubenswrapper[4698]: I1014 10:12:10.423826 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/test-operator-controller-manager-ffcdd6c94-g5fmn" Oct 14 10:12:10 crc kubenswrapper[4698]: I1014 10:12:10.430480 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-869cc7797f-xpv9w" 
event={"ID":"3e3e37b3-e0ed-479a-9124-aa6c814a1030","Type":"ContainerStarted","Data":"2354f3749a966bb8956602684b8804e338c8f7e90aebfc44d9c4991b476e9c58"} Oct 14 10:12:10 crc kubenswrapper[4698]: I1014 10:12:10.430674 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ovn-operator-controller-manager-869cc7797f-xpv9w" Oct 14 10:12:10 crc kubenswrapper[4698]: I1014 10:12:10.434233 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-664664cb68-wdrpw" event={"ID":"4bdceb7a-7a1f-4c0b-a70d-787a610f1d3a","Type":"ContainerStarted","Data":"167cac963fb8975d62b7b2974d8c29fd1ce48625e450e579b15cfb708e10da6e"} Oct 14 10:12:10 crc kubenswrapper[4698]: I1014 10:12:10.434814 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/placement-operator-controller-manager-664664cb68-wdrpw" Oct 14 10:12:10 crc kubenswrapper[4698]: I1014 10:12:10.437483 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-5f4d5dfdc6-7p5xj" event={"ID":"28b97988-e327-4c7a-aab5-5985bf4a675d","Type":"ContainerStarted","Data":"8b29d6ec8ca92ce2216abc903a75465b123dbdfa03d71a94b2425923804e1300"} Oct 14 10:12:10 crc kubenswrapper[4698]: I1014 10:12:10.437847 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/swift-operator-controller-manager-5f4d5dfdc6-7p5xj" Oct 14 10:12:10 crc kubenswrapper[4698]: I1014 10:12:10.447642 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/test-operator-controller-manager-ffcdd6c94-g5fmn" podStartSLOduration=3.7141106 podStartE2EDuration="21.447622252s" podCreationTimestamp="2025-10-14 10:11:49 +0000 UTC" firstStartedPulling="2025-10-14 10:11:51.496722916 +0000 UTC m=+893.194022332" lastFinishedPulling="2025-10-14 10:12:09.230234568 +0000 UTC m=+910.927533984" observedRunningTime="2025-10-14 
10:12:10.446634864 +0000 UTC m=+912.143934280" watchObservedRunningTime="2025-10-14 10:12:10.447622252 +0000 UTC m=+912.144921678" Oct 14 10:12:10 crc kubenswrapper[4698]: I1014 10:12:10.465848 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/placement-operator-controller-manager-664664cb68-wdrpw" podStartSLOduration=3.519434343 podStartE2EDuration="21.465830169s" podCreationTimestamp="2025-10-14 10:11:49 +0000 UTC" firstStartedPulling="2025-10-14 10:11:51.286995473 +0000 UTC m=+892.984294889" lastFinishedPulling="2025-10-14 10:12:09.233391299 +0000 UTC m=+910.930690715" observedRunningTime="2025-10-14 10:12:10.465800508 +0000 UTC m=+912.163099934" watchObservedRunningTime="2025-10-14 10:12:10.465830169 +0000 UTC m=+912.163129585" Oct 14 10:12:10 crc kubenswrapper[4698]: I1014 10:12:10.489507 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ovn-operator-controller-manager-869cc7797f-xpv9w" podStartSLOduration=3.549966876 podStartE2EDuration="21.489489664s" podCreationTimestamp="2025-10-14 10:11:49 +0000 UTC" firstStartedPulling="2025-10-14 10:11:51.288710202 +0000 UTC m=+892.986009618" lastFinishedPulling="2025-10-14 10:12:09.22823299 +0000 UTC m=+910.925532406" observedRunningTime="2025-10-14 10:12:10.486062724 +0000 UTC m=+912.183362140" watchObservedRunningTime="2025-10-14 10:12:10.489489664 +0000 UTC m=+912.186789080" Oct 14 10:12:10 crc kubenswrapper[4698]: I1014 10:12:10.509123 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/swift-operator-controller-manager-5f4d5dfdc6-7p5xj" podStartSLOduration=3.549806381 podStartE2EDuration="21.509104151s" podCreationTimestamp="2025-10-14 10:11:49 +0000 UTC" firstStartedPulling="2025-10-14 10:11:51.288741603 +0000 UTC m=+892.986041019" lastFinishedPulling="2025-10-14 10:12:09.248039373 +0000 UTC m=+910.945338789" observedRunningTime="2025-10-14 10:12:10.505862597 +0000 UTC 
m=+912.203162013" watchObservedRunningTime="2025-10-14 10:12:10.509104151 +0000 UTC m=+912.206403567" Oct 14 10:12:13 crc kubenswrapper[4698]: I1014 10:12:13.462949 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6cc7fb757dblfbc" event={"ID":"2486fbf6-b25f-4bc3-932d-5ade782da654","Type":"ContainerStarted","Data":"3f8ed5c5f52c7e24c1971a114b0c16602cb0849ce1cfe8830c708c626b99ef87"} Oct 14 10:12:13 crc kubenswrapper[4698]: I1014 10:12:13.463690 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6cc7fb757dblfbc" Oct 14 10:12:13 crc kubenswrapper[4698]: I1014 10:12:13.465606 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-5f97d8c699-zctmt" event={"ID":"052a38cb-bdfa-46de-ab53-e81b2f014b1d","Type":"ContainerStarted","Data":"47b61ea8501f5e0cee2d3251eb8359000d381ef9ee514cce3bfac565d318822d"} Oct 14 10:12:13 crc kubenswrapper[4698]: I1014 10:12:13.468532 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-578874c84d-xbvhq" event={"ID":"e93508a8-6ee5-4950-8cea-7c3599b7e1ec","Type":"ContainerStarted","Data":"1d71b4ebef3657c648a5995bdbfcf272aafc5e37d90bba61f0628cb0b749ef01"} Oct 14 10:12:13 crc kubenswrapper[4698]: I1014 10:12:13.468861 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/telemetry-operator-controller-manager-578874c84d-xbvhq" Oct 14 10:12:13 crc kubenswrapper[4698]: I1014 10:12:13.501656 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6cc7fb757dblfbc" podStartSLOduration=3.5511105450000002 podStartE2EDuration="24.501633918s" podCreationTimestamp="2025-10-14 10:11:49 +0000 UTC" firstStartedPulling="2025-10-14 10:11:51.531123342 
+0000 UTC m=+893.228422748" lastFinishedPulling="2025-10-14 10:12:12.481646695 +0000 UTC m=+914.178946121" observedRunningTime="2025-10-14 10:12:13.495455079 +0000 UTC m=+915.192754495" watchObservedRunningTime="2025-10-14 10:12:13.501633918 +0000 UTC m=+915.198933334" Oct 14 10:12:13 crc kubenswrapper[4698]: I1014 10:12:13.538409 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/telemetry-operator-controller-manager-578874c84d-xbvhq" podStartSLOduration=3.582936646 podStartE2EDuration="24.538391611s" podCreationTimestamp="2025-10-14 10:11:49 +0000 UTC" firstStartedPulling="2025-10-14 10:11:51.53102706 +0000 UTC m=+893.228326476" lastFinishedPulling="2025-10-14 10:12:12.486482025 +0000 UTC m=+914.183781441" observedRunningTime="2025-10-14 10:12:13.522485351 +0000 UTC m=+915.219784787" watchObservedRunningTime="2025-10-14 10:12:13.538391611 +0000 UTC m=+915.235691027" Oct 14 10:12:13 crc kubenswrapper[4698]: I1014 10:12:13.541015 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/rabbitmq-cluster-operator-manager-5f97d8c699-zctmt" podStartSLOduration=3.537931683 podStartE2EDuration="24.541006287s" podCreationTimestamp="2025-10-14 10:11:49 +0000 UTC" firstStartedPulling="2025-10-14 10:11:51.49720706 +0000 UTC m=+893.194506476" lastFinishedPulling="2025-10-14 10:12:12.500281664 +0000 UTC m=+914.197581080" observedRunningTime="2025-10-14 10:12:13.538386531 +0000 UTC m=+915.235685967" watchObservedRunningTime="2025-10-14 10:12:13.541006287 +0000 UTC m=+915.238305703" Oct 14 10:12:20 crc kubenswrapper[4698]: I1014 10:12:20.059875 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ovn-operator-controller-manager-869cc7797f-xpv9w" Oct 14 10:12:20 crc kubenswrapper[4698]: I1014 10:12:20.085404 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack-operators/placement-operator-controller-manager-664664cb68-wdrpw" Oct 14 10:12:20 crc kubenswrapper[4698]: I1014 10:12:20.099739 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/swift-operator-controller-manager-5f4d5dfdc6-7p5xj" Oct 14 10:12:20 crc kubenswrapper[4698]: I1014 10:12:20.186943 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/telemetry-operator-controller-manager-578874c84d-xbvhq" Oct 14 10:12:20 crc kubenswrapper[4698]: I1014 10:12:20.255464 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/test-operator-controller-manager-ffcdd6c94-g5fmn" Oct 14 10:12:20 crc kubenswrapper[4698]: I1014 10:12:20.596692 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6cc7fb757dblfbc" Oct 14 10:12:36 crc kubenswrapper[4698]: I1014 10:12:36.143867 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-pk47x"] Oct 14 10:12:36 crc kubenswrapper[4698]: I1014 10:12:36.152895 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-pk47x" Oct 14 10:12:36 crc kubenswrapper[4698]: I1014 10:12:36.157982 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns" Oct 14 10:12:36 crc kubenswrapper[4698]: I1014 10:12:36.159634 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dnsmasq-dns-dockercfg-zsn6l" Oct 14 10:12:36 crc kubenswrapper[4698]: I1014 10:12:36.162524 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-pk47x"] Oct 14 10:12:36 crc kubenswrapper[4698]: I1014 10:12:36.222839 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-mxrtr"] Oct 14 10:12:36 crc kubenswrapper[4698]: I1014 10:12:36.224286 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-mxrtr" Oct 14 10:12:36 crc kubenswrapper[4698]: I1014 10:12:36.226212 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-svc" Oct 14 10:12:36 crc kubenswrapper[4698]: I1014 10:12:36.231744 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/814b126e-45fe-4be9-90f6-e95380af0957-config\") pod \"dnsmasq-dns-675f4bcbfc-pk47x\" (UID: \"814b126e-45fe-4be9-90f6-e95380af0957\") " pod="openstack/dnsmasq-dns-675f4bcbfc-pk47x" Oct 14 10:12:36 crc kubenswrapper[4698]: I1014 10:12:36.231802 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h2k42\" (UniqueName: \"kubernetes.io/projected/814b126e-45fe-4be9-90f6-e95380af0957-kube-api-access-h2k42\") pod \"dnsmasq-dns-675f4bcbfc-pk47x\" (UID: \"814b126e-45fe-4be9-90f6-e95380af0957\") " pod="openstack/dnsmasq-dns-675f4bcbfc-pk47x" Oct 14 10:12:36 crc kubenswrapper[4698]: I1014 10:12:36.239162 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/dnsmasq-dns-78dd6ddcc-mxrtr"] Oct 14 10:12:36 crc kubenswrapper[4698]: I1014 10:12:36.333214 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6acfbc9a-615e-4529-b6bb-3aceb87f9ca3-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-mxrtr\" (UID: \"6acfbc9a-615e-4529-b6bb-3aceb87f9ca3\") " pod="openstack/dnsmasq-dns-78dd6ddcc-mxrtr" Oct 14 10:12:36 crc kubenswrapper[4698]: I1014 10:12:36.333278 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6acfbc9a-615e-4529-b6bb-3aceb87f9ca3-config\") pod \"dnsmasq-dns-78dd6ddcc-mxrtr\" (UID: \"6acfbc9a-615e-4529-b6bb-3aceb87f9ca3\") " pod="openstack/dnsmasq-dns-78dd6ddcc-mxrtr" Oct 14 10:12:36 crc kubenswrapper[4698]: I1014 10:12:36.333362 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6df7k\" (UniqueName: \"kubernetes.io/projected/6acfbc9a-615e-4529-b6bb-3aceb87f9ca3-kube-api-access-6df7k\") pod \"dnsmasq-dns-78dd6ddcc-mxrtr\" (UID: \"6acfbc9a-615e-4529-b6bb-3aceb87f9ca3\") " pod="openstack/dnsmasq-dns-78dd6ddcc-mxrtr" Oct 14 10:12:36 crc kubenswrapper[4698]: I1014 10:12:36.333669 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/814b126e-45fe-4be9-90f6-e95380af0957-config\") pod \"dnsmasq-dns-675f4bcbfc-pk47x\" (UID: \"814b126e-45fe-4be9-90f6-e95380af0957\") " pod="openstack/dnsmasq-dns-675f4bcbfc-pk47x" Oct 14 10:12:36 crc kubenswrapper[4698]: I1014 10:12:36.333752 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h2k42\" (UniqueName: \"kubernetes.io/projected/814b126e-45fe-4be9-90f6-e95380af0957-kube-api-access-h2k42\") pod \"dnsmasq-dns-675f4bcbfc-pk47x\" (UID: \"814b126e-45fe-4be9-90f6-e95380af0957\") " 
pod="openstack/dnsmasq-dns-675f4bcbfc-pk47x" Oct 14 10:12:36 crc kubenswrapper[4698]: I1014 10:12:36.334728 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/814b126e-45fe-4be9-90f6-e95380af0957-config\") pod \"dnsmasq-dns-675f4bcbfc-pk47x\" (UID: \"814b126e-45fe-4be9-90f6-e95380af0957\") " pod="openstack/dnsmasq-dns-675f4bcbfc-pk47x" Oct 14 10:12:36 crc kubenswrapper[4698]: I1014 10:12:36.354868 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h2k42\" (UniqueName: \"kubernetes.io/projected/814b126e-45fe-4be9-90f6-e95380af0957-kube-api-access-h2k42\") pod \"dnsmasq-dns-675f4bcbfc-pk47x\" (UID: \"814b126e-45fe-4be9-90f6-e95380af0957\") " pod="openstack/dnsmasq-dns-675f4bcbfc-pk47x" Oct 14 10:12:36 crc kubenswrapper[4698]: I1014 10:12:36.434937 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6acfbc9a-615e-4529-b6bb-3aceb87f9ca3-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-mxrtr\" (UID: \"6acfbc9a-615e-4529-b6bb-3aceb87f9ca3\") " pod="openstack/dnsmasq-dns-78dd6ddcc-mxrtr" Oct 14 10:12:36 crc kubenswrapper[4698]: I1014 10:12:36.435476 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6acfbc9a-615e-4529-b6bb-3aceb87f9ca3-config\") pod \"dnsmasq-dns-78dd6ddcc-mxrtr\" (UID: \"6acfbc9a-615e-4529-b6bb-3aceb87f9ca3\") " pod="openstack/dnsmasq-dns-78dd6ddcc-mxrtr" Oct 14 10:12:36 crc kubenswrapper[4698]: I1014 10:12:36.435522 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6df7k\" (UniqueName: \"kubernetes.io/projected/6acfbc9a-615e-4529-b6bb-3aceb87f9ca3-kube-api-access-6df7k\") pod \"dnsmasq-dns-78dd6ddcc-mxrtr\" (UID: \"6acfbc9a-615e-4529-b6bb-3aceb87f9ca3\") " pod="openstack/dnsmasq-dns-78dd6ddcc-mxrtr" Oct 14 10:12:36 crc kubenswrapper[4698]: I1014 
10:12:36.435888 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6acfbc9a-615e-4529-b6bb-3aceb87f9ca3-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-mxrtr\" (UID: \"6acfbc9a-615e-4529-b6bb-3aceb87f9ca3\") " pod="openstack/dnsmasq-dns-78dd6ddcc-mxrtr" Oct 14 10:12:36 crc kubenswrapper[4698]: I1014 10:12:36.436398 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6acfbc9a-615e-4529-b6bb-3aceb87f9ca3-config\") pod \"dnsmasq-dns-78dd6ddcc-mxrtr\" (UID: \"6acfbc9a-615e-4529-b6bb-3aceb87f9ca3\") " pod="openstack/dnsmasq-dns-78dd6ddcc-mxrtr" Oct 14 10:12:36 crc kubenswrapper[4698]: I1014 10:12:36.454887 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6df7k\" (UniqueName: \"kubernetes.io/projected/6acfbc9a-615e-4529-b6bb-3aceb87f9ca3-kube-api-access-6df7k\") pod \"dnsmasq-dns-78dd6ddcc-mxrtr\" (UID: \"6acfbc9a-615e-4529-b6bb-3aceb87f9ca3\") " pod="openstack/dnsmasq-dns-78dd6ddcc-mxrtr" Oct 14 10:12:36 crc kubenswrapper[4698]: I1014 10:12:36.473363 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-pk47x" Oct 14 10:12:36 crc kubenswrapper[4698]: I1014 10:12:36.542225 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-mxrtr" Oct 14 10:12:37 crc kubenswrapper[4698]: I1014 10:12:37.016607 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-pk47x"] Oct 14 10:12:37 crc kubenswrapper[4698]: I1014 10:12:37.071643 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-mxrtr"] Oct 14 10:12:37 crc kubenswrapper[4698]: W1014 10:12:37.074680 4698 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6acfbc9a_615e_4529_b6bb_3aceb87f9ca3.slice/crio-cf622e93e278703b09fe02a7c043dca1c0efd6ab0a4c04f09f45bdfb1dc1618f WatchSource:0}: Error finding container cf622e93e278703b09fe02a7c043dca1c0efd6ab0a4c04f09f45bdfb1dc1618f: Status 404 returned error can't find the container with id cf622e93e278703b09fe02a7c043dca1c0efd6ab0a4c04f09f45bdfb1dc1618f Oct 14 10:12:37 crc kubenswrapper[4698]: I1014 10:12:37.728926 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78dd6ddcc-mxrtr" event={"ID":"6acfbc9a-615e-4529-b6bb-3aceb87f9ca3","Type":"ContainerStarted","Data":"cf622e93e278703b09fe02a7c043dca1c0efd6ab0a4c04f09f45bdfb1dc1618f"} Oct 14 10:12:37 crc kubenswrapper[4698]: I1014 10:12:37.733108 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f4bcbfc-pk47x" event={"ID":"814b126e-45fe-4be9-90f6-e95380af0957","Type":"ContainerStarted","Data":"0a4696cee387029a547a844dc83b1f1921ba37b2d5e001231becbf3b6fe99890"} Oct 14 10:12:39 crc kubenswrapper[4698]: I1014 10:12:39.041200 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-pk47x"] Oct 14 10:12:39 crc kubenswrapper[4698]: I1014 10:12:39.072613 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-ldb47"] Oct 14 10:12:39 crc kubenswrapper[4698]: I1014 10:12:39.074731 4698 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-ldb47" Oct 14 10:12:39 crc kubenswrapper[4698]: I1014 10:12:39.116927 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-ldb47"] Oct 14 10:12:39 crc kubenswrapper[4698]: I1014 10:12:39.185845 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z9sbw\" (UniqueName: \"kubernetes.io/projected/ffcd39f3-8368-43b1-beaa-ec7e75468bad-kube-api-access-z9sbw\") pod \"dnsmasq-dns-666b6646f7-ldb47\" (UID: \"ffcd39f3-8368-43b1-beaa-ec7e75468bad\") " pod="openstack/dnsmasq-dns-666b6646f7-ldb47" Oct 14 10:12:39 crc kubenswrapper[4698]: I1014 10:12:39.185923 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ffcd39f3-8368-43b1-beaa-ec7e75468bad-config\") pod \"dnsmasq-dns-666b6646f7-ldb47\" (UID: \"ffcd39f3-8368-43b1-beaa-ec7e75468bad\") " pod="openstack/dnsmasq-dns-666b6646f7-ldb47" Oct 14 10:12:39 crc kubenswrapper[4698]: I1014 10:12:39.186011 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ffcd39f3-8368-43b1-beaa-ec7e75468bad-dns-svc\") pod \"dnsmasq-dns-666b6646f7-ldb47\" (UID: \"ffcd39f3-8368-43b1-beaa-ec7e75468bad\") " pod="openstack/dnsmasq-dns-666b6646f7-ldb47" Oct 14 10:12:39 crc kubenswrapper[4698]: I1014 10:12:39.287455 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z9sbw\" (UniqueName: \"kubernetes.io/projected/ffcd39f3-8368-43b1-beaa-ec7e75468bad-kube-api-access-z9sbw\") pod \"dnsmasq-dns-666b6646f7-ldb47\" (UID: \"ffcd39f3-8368-43b1-beaa-ec7e75468bad\") " pod="openstack/dnsmasq-dns-666b6646f7-ldb47" Oct 14 10:12:39 crc kubenswrapper[4698]: I1014 10:12:39.287559 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"config\" (UniqueName: \"kubernetes.io/configmap/ffcd39f3-8368-43b1-beaa-ec7e75468bad-config\") pod \"dnsmasq-dns-666b6646f7-ldb47\" (UID: \"ffcd39f3-8368-43b1-beaa-ec7e75468bad\") " pod="openstack/dnsmasq-dns-666b6646f7-ldb47" Oct 14 10:12:39 crc kubenswrapper[4698]: I1014 10:12:39.287675 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ffcd39f3-8368-43b1-beaa-ec7e75468bad-dns-svc\") pod \"dnsmasq-dns-666b6646f7-ldb47\" (UID: \"ffcd39f3-8368-43b1-beaa-ec7e75468bad\") " pod="openstack/dnsmasq-dns-666b6646f7-ldb47" Oct 14 10:12:39 crc kubenswrapper[4698]: I1014 10:12:39.288780 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ffcd39f3-8368-43b1-beaa-ec7e75468bad-config\") pod \"dnsmasq-dns-666b6646f7-ldb47\" (UID: \"ffcd39f3-8368-43b1-beaa-ec7e75468bad\") " pod="openstack/dnsmasq-dns-666b6646f7-ldb47" Oct 14 10:12:39 crc kubenswrapper[4698]: I1014 10:12:39.288958 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ffcd39f3-8368-43b1-beaa-ec7e75468bad-dns-svc\") pod \"dnsmasq-dns-666b6646f7-ldb47\" (UID: \"ffcd39f3-8368-43b1-beaa-ec7e75468bad\") " pod="openstack/dnsmasq-dns-666b6646f7-ldb47" Oct 14 10:12:39 crc kubenswrapper[4698]: I1014 10:12:39.310093 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z9sbw\" (UniqueName: \"kubernetes.io/projected/ffcd39f3-8368-43b1-beaa-ec7e75468bad-kube-api-access-z9sbw\") pod \"dnsmasq-dns-666b6646f7-ldb47\" (UID: \"ffcd39f3-8368-43b1-beaa-ec7e75468bad\") " pod="openstack/dnsmasq-dns-666b6646f7-ldb47" Oct 14 10:12:39 crc kubenswrapper[4698]: I1014 10:12:39.365368 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-mxrtr"] Oct 14 10:12:39 crc kubenswrapper[4698]: I1014 10:12:39.388250 4698 kubelet.go:2421] "SyncLoop ADD" 
source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-pmpg8"] Oct 14 10:12:39 crc kubenswrapper[4698]: I1014 10:12:39.389642 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-pmpg8" Oct 14 10:12:39 crc kubenswrapper[4698]: I1014 10:12:39.400151 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-pmpg8"] Oct 14 10:12:39 crc kubenswrapper[4698]: I1014 10:12:39.408050 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-ldb47" Oct 14 10:12:39 crc kubenswrapper[4698]: I1014 10:12:39.492825 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8f445316-4106-46e7-9302-0d79c368a6db-config\") pod \"dnsmasq-dns-57d769cc4f-pmpg8\" (UID: \"8f445316-4106-46e7-9302-0d79c368a6db\") " pod="openstack/dnsmasq-dns-57d769cc4f-pmpg8" Oct 14 10:12:39 crc kubenswrapper[4698]: I1014 10:12:39.493090 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8f445316-4106-46e7-9302-0d79c368a6db-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-pmpg8\" (UID: \"8f445316-4106-46e7-9302-0d79c368a6db\") " pod="openstack/dnsmasq-dns-57d769cc4f-pmpg8" Oct 14 10:12:39 crc kubenswrapper[4698]: I1014 10:12:39.493119 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-22zqt\" (UniqueName: \"kubernetes.io/projected/8f445316-4106-46e7-9302-0d79c368a6db-kube-api-access-22zqt\") pod \"dnsmasq-dns-57d769cc4f-pmpg8\" (UID: \"8f445316-4106-46e7-9302-0d79c368a6db\") " pod="openstack/dnsmasq-dns-57d769cc4f-pmpg8" Oct 14 10:12:39 crc kubenswrapper[4698]: I1014 10:12:39.593983 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/8f445316-4106-46e7-9302-0d79c368a6db-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-pmpg8\" (UID: \"8f445316-4106-46e7-9302-0d79c368a6db\") " pod="openstack/dnsmasq-dns-57d769cc4f-pmpg8" Oct 14 10:12:39 crc kubenswrapper[4698]: I1014 10:12:39.594037 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-22zqt\" (UniqueName: \"kubernetes.io/projected/8f445316-4106-46e7-9302-0d79c368a6db-kube-api-access-22zqt\") pod \"dnsmasq-dns-57d769cc4f-pmpg8\" (UID: \"8f445316-4106-46e7-9302-0d79c368a6db\") " pod="openstack/dnsmasq-dns-57d769cc4f-pmpg8" Oct 14 10:12:39 crc kubenswrapper[4698]: I1014 10:12:39.594075 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8f445316-4106-46e7-9302-0d79c368a6db-config\") pod \"dnsmasq-dns-57d769cc4f-pmpg8\" (UID: \"8f445316-4106-46e7-9302-0d79c368a6db\") " pod="openstack/dnsmasq-dns-57d769cc4f-pmpg8" Oct 14 10:12:39 crc kubenswrapper[4698]: I1014 10:12:39.596282 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8f445316-4106-46e7-9302-0d79c368a6db-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-pmpg8\" (UID: \"8f445316-4106-46e7-9302-0d79c368a6db\") " pod="openstack/dnsmasq-dns-57d769cc4f-pmpg8" Oct 14 10:12:39 crc kubenswrapper[4698]: I1014 10:12:39.603436 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8f445316-4106-46e7-9302-0d79c368a6db-config\") pod \"dnsmasq-dns-57d769cc4f-pmpg8\" (UID: \"8f445316-4106-46e7-9302-0d79c368a6db\") " pod="openstack/dnsmasq-dns-57d769cc4f-pmpg8" Oct 14 10:12:39 crc kubenswrapper[4698]: I1014 10:12:39.625392 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-22zqt\" (UniqueName: \"kubernetes.io/projected/8f445316-4106-46e7-9302-0d79c368a6db-kube-api-access-22zqt\") pod 
\"dnsmasq-dns-57d769cc4f-pmpg8\" (UID: \"8f445316-4106-46e7-9302-0d79c368a6db\") " pod="openstack/dnsmasq-dns-57d769cc4f-pmpg8" Oct 14 10:12:39 crc kubenswrapper[4698]: I1014 10:12:39.708149 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-pmpg8" Oct 14 10:12:40 crc kubenswrapper[4698]: I1014 10:12:40.003055 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-pmpg8"] Oct 14 10:12:40 crc kubenswrapper[4698]: W1014 10:12:40.016680 4698 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8f445316_4106_46e7_9302_0d79c368a6db.slice/crio-21434ebb1f4c12a2b495b54e5be0116afdc37b2498543ea5587859ef1308d6ec WatchSource:0}: Error finding container 21434ebb1f4c12a2b495b54e5be0116afdc37b2498543ea5587859ef1308d6ec: Status 404 returned error can't find the container with id 21434ebb1f4c12a2b495b54e5be0116afdc37b2498543ea5587859ef1308d6ec Oct 14 10:12:40 crc kubenswrapper[4698]: I1014 10:12:40.029006 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-ldb47"] Oct 14 10:12:40 crc kubenswrapper[4698]: W1014 10:12:40.036959 4698 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podffcd39f3_8368_43b1_beaa_ec7e75468bad.slice/crio-9c0e4231651c1d36f18775534eef2ea8429bbe8c00f52f8c7fb9b7257706e152 WatchSource:0}: Error finding container 9c0e4231651c1d36f18775534eef2ea8429bbe8c00f52f8c7fb9b7257706e152: Status 404 returned error can't find the container with id 9c0e4231651c1d36f18775534eef2ea8429bbe8c00f52f8c7fb9b7257706e152 Oct 14 10:12:40 crc kubenswrapper[4698]: I1014 10:12:40.218723 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Oct 14 10:12:40 crc kubenswrapper[4698]: I1014 10:12:40.240744 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Oct 14 10:12:40 crc kubenswrapper[4698]: I1014 10:12:40.240844 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Oct 14 10:12:40 crc kubenswrapper[4698]: I1014 10:12:40.248548 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Oct 14 10:12:40 crc kubenswrapper[4698]: I1014 10:12:40.249038 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Oct 14 10:12:40 crc kubenswrapper[4698]: I1014 10:12:40.249289 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Oct 14 10:12:40 crc kubenswrapper[4698]: I1014 10:12:40.249453 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Oct 14 10:12:40 crc kubenswrapper[4698]: I1014 10:12:40.249596 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Oct 14 10:12:40 crc kubenswrapper[4698]: I1014 10:12:40.249840 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Oct 14 10:12:40 crc kubenswrapper[4698]: I1014 10:12:40.249996 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-vczn6" Oct 14 10:12:40 crc kubenswrapper[4698]: I1014 10:12:40.317342 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/4c8cdd03-2ef0-496f-8748-d1495be75e5f-pod-info\") pod \"rabbitmq-server-0\" (UID: \"4c8cdd03-2ef0-496f-8748-d1495be75e5f\") " pod="openstack/rabbitmq-server-0" Oct 14 10:12:40 crc kubenswrapper[4698]: I1014 10:12:40.317402 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: 
\"kubernetes.io/configmap/4c8cdd03-2ef0-496f-8748-d1495be75e5f-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"4c8cdd03-2ef0-496f-8748-d1495be75e5f\") " pod="openstack/rabbitmq-server-0" Oct 14 10:12:40 crc kubenswrapper[4698]: I1014 10:12:40.317422 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/4c8cdd03-2ef0-496f-8748-d1495be75e5f-config-data\") pod \"rabbitmq-server-0\" (UID: \"4c8cdd03-2ef0-496f-8748-d1495be75e5f\") " pod="openstack/rabbitmq-server-0" Oct 14 10:12:40 crc kubenswrapper[4698]: I1014 10:12:40.317463 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/4c8cdd03-2ef0-496f-8748-d1495be75e5f-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"4c8cdd03-2ef0-496f-8748-d1495be75e5f\") " pod="openstack/rabbitmq-server-0" Oct 14 10:12:40 crc kubenswrapper[4698]: I1014 10:12:40.317496 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"rabbitmq-server-0\" (UID: \"4c8cdd03-2ef0-496f-8748-d1495be75e5f\") " pod="openstack/rabbitmq-server-0" Oct 14 10:12:40 crc kubenswrapper[4698]: I1014 10:12:40.317517 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/4c8cdd03-2ef0-496f-8748-d1495be75e5f-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"4c8cdd03-2ef0-496f-8748-d1495be75e5f\") " pod="openstack/rabbitmq-server-0" Oct 14 10:12:40 crc kubenswrapper[4698]: I1014 10:12:40.317537 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5kjg2\" (UniqueName: 
\"kubernetes.io/projected/4c8cdd03-2ef0-496f-8748-d1495be75e5f-kube-api-access-5kjg2\") pod \"rabbitmq-server-0\" (UID: \"4c8cdd03-2ef0-496f-8748-d1495be75e5f\") " pod="openstack/rabbitmq-server-0" Oct 14 10:12:40 crc kubenswrapper[4698]: I1014 10:12:40.317571 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/4c8cdd03-2ef0-496f-8748-d1495be75e5f-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"4c8cdd03-2ef0-496f-8748-d1495be75e5f\") " pod="openstack/rabbitmq-server-0" Oct 14 10:12:40 crc kubenswrapper[4698]: I1014 10:12:40.317588 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/4c8cdd03-2ef0-496f-8748-d1495be75e5f-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"4c8cdd03-2ef0-496f-8748-d1495be75e5f\") " pod="openstack/rabbitmq-server-0" Oct 14 10:12:40 crc kubenswrapper[4698]: I1014 10:12:40.317624 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/4c8cdd03-2ef0-496f-8748-d1495be75e5f-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"4c8cdd03-2ef0-496f-8748-d1495be75e5f\") " pod="openstack/rabbitmq-server-0" Oct 14 10:12:40 crc kubenswrapper[4698]: I1014 10:12:40.317657 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/4c8cdd03-2ef0-496f-8748-d1495be75e5f-server-conf\") pod \"rabbitmq-server-0\" (UID: \"4c8cdd03-2ef0-496f-8748-d1495be75e5f\") " pod="openstack/rabbitmq-server-0" Oct 14 10:12:40 crc kubenswrapper[4698]: I1014 10:12:40.418668 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: 
\"kubernetes.io/secret/4c8cdd03-2ef0-496f-8748-d1495be75e5f-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"4c8cdd03-2ef0-496f-8748-d1495be75e5f\") " pod="openstack/rabbitmq-server-0" Oct 14 10:12:40 crc kubenswrapper[4698]: I1014 10:12:40.418727 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/4c8cdd03-2ef0-496f-8748-d1495be75e5f-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"4c8cdd03-2ef0-496f-8748-d1495be75e5f\") " pod="openstack/rabbitmq-server-0" Oct 14 10:12:40 crc kubenswrapper[4698]: I1014 10:12:40.418785 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/4c8cdd03-2ef0-496f-8748-d1495be75e5f-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"4c8cdd03-2ef0-496f-8748-d1495be75e5f\") " pod="openstack/rabbitmq-server-0" Oct 14 10:12:40 crc kubenswrapper[4698]: I1014 10:12:40.418820 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/4c8cdd03-2ef0-496f-8748-d1495be75e5f-server-conf\") pod \"rabbitmq-server-0\" (UID: \"4c8cdd03-2ef0-496f-8748-d1495be75e5f\") " pod="openstack/rabbitmq-server-0" Oct 14 10:12:40 crc kubenswrapper[4698]: I1014 10:12:40.418844 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/4c8cdd03-2ef0-496f-8748-d1495be75e5f-pod-info\") pod \"rabbitmq-server-0\" (UID: \"4c8cdd03-2ef0-496f-8748-d1495be75e5f\") " pod="openstack/rabbitmq-server-0" Oct 14 10:12:40 crc kubenswrapper[4698]: I1014 10:12:40.418869 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/4c8cdd03-2ef0-496f-8748-d1495be75e5f-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"4c8cdd03-2ef0-496f-8748-d1495be75e5f\") " 
pod="openstack/rabbitmq-server-0" Oct 14 10:12:40 crc kubenswrapper[4698]: I1014 10:12:40.418892 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/4c8cdd03-2ef0-496f-8748-d1495be75e5f-config-data\") pod \"rabbitmq-server-0\" (UID: \"4c8cdd03-2ef0-496f-8748-d1495be75e5f\") " pod="openstack/rabbitmq-server-0" Oct 14 10:12:40 crc kubenswrapper[4698]: I1014 10:12:40.418927 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/4c8cdd03-2ef0-496f-8748-d1495be75e5f-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"4c8cdd03-2ef0-496f-8748-d1495be75e5f\") " pod="openstack/rabbitmq-server-0" Oct 14 10:12:40 crc kubenswrapper[4698]: I1014 10:12:40.418949 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"rabbitmq-server-0\" (UID: \"4c8cdd03-2ef0-496f-8748-d1495be75e5f\") " pod="openstack/rabbitmq-server-0" Oct 14 10:12:40 crc kubenswrapper[4698]: I1014 10:12:40.418968 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/4c8cdd03-2ef0-496f-8748-d1495be75e5f-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"4c8cdd03-2ef0-496f-8748-d1495be75e5f\") " pod="openstack/rabbitmq-server-0" Oct 14 10:12:40 crc kubenswrapper[4698]: I1014 10:12:40.418987 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5kjg2\" (UniqueName: \"kubernetes.io/projected/4c8cdd03-2ef0-496f-8748-d1495be75e5f-kube-api-access-5kjg2\") pod \"rabbitmq-server-0\" (UID: \"4c8cdd03-2ef0-496f-8748-d1495be75e5f\") " pod="openstack/rabbitmq-server-0" Oct 14 10:12:40 crc kubenswrapper[4698]: I1014 10:12:40.420834 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/4c8cdd03-2ef0-496f-8748-d1495be75e5f-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"4c8cdd03-2ef0-496f-8748-d1495be75e5f\") " pod="openstack/rabbitmq-server-0" Oct 14 10:12:40 crc kubenswrapper[4698]: I1014 10:12:40.422112 4698 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"rabbitmq-server-0\" (UID: \"4c8cdd03-2ef0-496f-8748-d1495be75e5f\") device mount path \"/mnt/openstack/pv12\"" pod="openstack/rabbitmq-server-0" Oct 14 10:12:40 crc kubenswrapper[4698]: I1014 10:12:40.422856 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/4c8cdd03-2ef0-496f-8748-d1495be75e5f-server-conf\") pod \"rabbitmq-server-0\" (UID: \"4c8cdd03-2ef0-496f-8748-d1495be75e5f\") " pod="openstack/rabbitmq-server-0" Oct 14 10:12:40 crc kubenswrapper[4698]: I1014 10:12:40.423605 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/4c8cdd03-2ef0-496f-8748-d1495be75e5f-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"4c8cdd03-2ef0-496f-8748-d1495be75e5f\") " pod="openstack/rabbitmq-server-0" Oct 14 10:12:40 crc kubenswrapper[4698]: I1014 10:12:40.423742 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/4c8cdd03-2ef0-496f-8748-d1495be75e5f-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"4c8cdd03-2ef0-496f-8748-d1495be75e5f\") " pod="openstack/rabbitmq-server-0" Oct 14 10:12:40 crc kubenswrapper[4698]: I1014 10:12:40.426516 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/4c8cdd03-2ef0-496f-8748-d1495be75e5f-config-data\") pod \"rabbitmq-server-0\" (UID: 
\"4c8cdd03-2ef0-496f-8748-d1495be75e5f\") " pod="openstack/rabbitmq-server-0" Oct 14 10:12:40 crc kubenswrapper[4698]: I1014 10:12:40.429874 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/4c8cdd03-2ef0-496f-8748-d1495be75e5f-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"4c8cdd03-2ef0-496f-8748-d1495be75e5f\") " pod="openstack/rabbitmq-server-0" Oct 14 10:12:40 crc kubenswrapper[4698]: I1014 10:12:40.434061 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/4c8cdd03-2ef0-496f-8748-d1495be75e5f-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"4c8cdd03-2ef0-496f-8748-d1495be75e5f\") " pod="openstack/rabbitmq-server-0" Oct 14 10:12:40 crc kubenswrapper[4698]: I1014 10:12:40.438730 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5kjg2\" (UniqueName: \"kubernetes.io/projected/4c8cdd03-2ef0-496f-8748-d1495be75e5f-kube-api-access-5kjg2\") pod \"rabbitmq-server-0\" (UID: \"4c8cdd03-2ef0-496f-8748-d1495be75e5f\") " pod="openstack/rabbitmq-server-0" Oct 14 10:12:40 crc kubenswrapper[4698]: I1014 10:12:40.443047 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/4c8cdd03-2ef0-496f-8748-d1495be75e5f-pod-info\") pod \"rabbitmq-server-0\" (UID: \"4c8cdd03-2ef0-496f-8748-d1495be75e5f\") " pod="openstack/rabbitmq-server-0" Oct 14 10:12:40 crc kubenswrapper[4698]: I1014 10:12:40.450474 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"rabbitmq-server-0\" (UID: \"4c8cdd03-2ef0-496f-8748-d1495be75e5f\") " pod="openstack/rabbitmq-server-0" Oct 14 10:12:40 crc kubenswrapper[4698]: I1014 10:12:40.459326 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/4c8cdd03-2ef0-496f-8748-d1495be75e5f-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"4c8cdd03-2ef0-496f-8748-d1495be75e5f\") " pod="openstack/rabbitmq-server-0" Oct 14 10:12:40 crc kubenswrapper[4698]: I1014 10:12:40.516892 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Oct 14 10:12:40 crc kubenswrapper[4698]: I1014 10:12:40.518803 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Oct 14 10:12:40 crc kubenswrapper[4698]: I1014 10:12:40.525701 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-xpvck" Oct 14 10:12:40 crc kubenswrapper[4698]: I1014 10:12:40.526449 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Oct 14 10:12:40 crc kubenswrapper[4698]: I1014 10:12:40.526340 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Oct 14 10:12:40 crc kubenswrapper[4698]: I1014 10:12:40.526924 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Oct 14 10:12:40 crc kubenswrapper[4698]: I1014 10:12:40.527022 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Oct 14 10:12:40 crc kubenswrapper[4698]: I1014 10:12:40.527405 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Oct 14 10:12:40 crc kubenswrapper[4698]: I1014 10:12:40.528153 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Oct 14 10:12:40 crc kubenswrapper[4698]: I1014 10:12:40.542908 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Oct 14 10:12:40 crc kubenswrapper[4698]: I1014 10:12:40.606199 
4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Oct 14 10:12:40 crc kubenswrapper[4698]: I1014 10:12:40.621378 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/a710709f-1c22-4fff-b329-6d446917af01-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"a710709f-1c22-4fff-b329-6d446917af01\") " pod="openstack/rabbitmq-cell1-server-0" Oct 14 10:12:40 crc kubenswrapper[4698]: I1014 10:12:40.621456 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a710709f-1c22-4fff-b329-6d446917af01-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"a710709f-1c22-4fff-b329-6d446917af01\") " pod="openstack/rabbitmq-cell1-server-0" Oct 14 10:12:40 crc kubenswrapper[4698]: I1014 10:12:40.621482 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/a710709f-1c22-4fff-b329-6d446917af01-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"a710709f-1c22-4fff-b329-6d446917af01\") " pod="openstack/rabbitmq-cell1-server-0" Oct 14 10:12:40 crc kubenswrapper[4698]: I1014 10:12:40.621708 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/a710709f-1c22-4fff-b329-6d446917af01-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"a710709f-1c22-4fff-b329-6d446917af01\") " pod="openstack/rabbitmq-cell1-server-0" Oct 14 10:12:40 crc kubenswrapper[4698]: I1014 10:12:40.621756 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/a710709f-1c22-4fff-b329-6d446917af01-server-conf\") pod 
\"rabbitmq-cell1-server-0\" (UID: \"a710709f-1c22-4fff-b329-6d446917af01\") " pod="openstack/rabbitmq-cell1-server-0" Oct 14 10:12:40 crc kubenswrapper[4698]: I1014 10:12:40.621813 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/a710709f-1c22-4fff-b329-6d446917af01-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"a710709f-1c22-4fff-b329-6d446917af01\") " pod="openstack/rabbitmq-cell1-server-0" Oct 14 10:12:40 crc kubenswrapper[4698]: I1014 10:12:40.621838 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/a710709f-1c22-4fff-b329-6d446917af01-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"a710709f-1c22-4fff-b329-6d446917af01\") " pod="openstack/rabbitmq-cell1-server-0" Oct 14 10:12:40 crc kubenswrapper[4698]: I1014 10:12:40.621865 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"a710709f-1c22-4fff-b329-6d446917af01\") " pod="openstack/rabbitmq-cell1-server-0" Oct 14 10:12:40 crc kubenswrapper[4698]: I1014 10:12:40.621886 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/a710709f-1c22-4fff-b329-6d446917af01-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"a710709f-1c22-4fff-b329-6d446917af01\") " pod="openstack/rabbitmq-cell1-server-0" Oct 14 10:12:40 crc kubenswrapper[4698]: I1014 10:12:40.621999 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/a710709f-1c22-4fff-b329-6d446917af01-pod-info\") pod 
\"rabbitmq-cell1-server-0\" (UID: \"a710709f-1c22-4fff-b329-6d446917af01\") " pod="openstack/rabbitmq-cell1-server-0" Oct 14 10:12:40 crc kubenswrapper[4698]: I1014 10:12:40.622059 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xgxbs\" (UniqueName: \"kubernetes.io/projected/a710709f-1c22-4fff-b329-6d446917af01-kube-api-access-xgxbs\") pod \"rabbitmq-cell1-server-0\" (UID: \"a710709f-1c22-4fff-b329-6d446917af01\") " pod="openstack/rabbitmq-cell1-server-0" Oct 14 10:12:40 crc kubenswrapper[4698]: I1014 10:12:40.725400 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/a710709f-1c22-4fff-b329-6d446917af01-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"a710709f-1c22-4fff-b329-6d446917af01\") " pod="openstack/rabbitmq-cell1-server-0" Oct 14 10:12:40 crc kubenswrapper[4698]: I1014 10:12:40.725492 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a710709f-1c22-4fff-b329-6d446917af01-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"a710709f-1c22-4fff-b329-6d446917af01\") " pod="openstack/rabbitmq-cell1-server-0" Oct 14 10:12:40 crc kubenswrapper[4698]: I1014 10:12:40.725522 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/a710709f-1c22-4fff-b329-6d446917af01-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"a710709f-1c22-4fff-b329-6d446917af01\") " pod="openstack/rabbitmq-cell1-server-0" Oct 14 10:12:40 crc kubenswrapper[4698]: I1014 10:12:40.725587 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/a710709f-1c22-4fff-b329-6d446917af01-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: 
\"a710709f-1c22-4fff-b329-6d446917af01\") " pod="openstack/rabbitmq-cell1-server-0" Oct 14 10:12:40 crc kubenswrapper[4698]: I1014 10:12:40.725611 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/a710709f-1c22-4fff-b329-6d446917af01-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"a710709f-1c22-4fff-b329-6d446917af01\") " pod="openstack/rabbitmq-cell1-server-0" Oct 14 10:12:40 crc kubenswrapper[4698]: I1014 10:12:40.725632 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/a710709f-1c22-4fff-b329-6d446917af01-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"a710709f-1c22-4fff-b329-6d446917af01\") " pod="openstack/rabbitmq-cell1-server-0" Oct 14 10:12:40 crc kubenswrapper[4698]: I1014 10:12:40.725658 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/a710709f-1c22-4fff-b329-6d446917af01-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"a710709f-1c22-4fff-b329-6d446917af01\") " pod="openstack/rabbitmq-cell1-server-0" Oct 14 10:12:40 crc kubenswrapper[4698]: I1014 10:12:40.725686 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"a710709f-1c22-4fff-b329-6d446917af01\") " pod="openstack/rabbitmq-cell1-server-0" Oct 14 10:12:40 crc kubenswrapper[4698]: I1014 10:12:40.725714 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/a710709f-1c22-4fff-b329-6d446917af01-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"a710709f-1c22-4fff-b329-6d446917af01\") " pod="openstack/rabbitmq-cell1-server-0" Oct 14 10:12:40 crc 
kubenswrapper[4698]: I1014 10:12:40.725743 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/a710709f-1c22-4fff-b329-6d446917af01-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"a710709f-1c22-4fff-b329-6d446917af01\") " pod="openstack/rabbitmq-cell1-server-0" Oct 14 10:12:40 crc kubenswrapper[4698]: I1014 10:12:40.725781 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xgxbs\" (UniqueName: \"kubernetes.io/projected/a710709f-1c22-4fff-b329-6d446917af01-kube-api-access-xgxbs\") pod \"rabbitmq-cell1-server-0\" (UID: \"a710709f-1c22-4fff-b329-6d446917af01\") " pod="openstack/rabbitmq-cell1-server-0" Oct 14 10:12:40 crc kubenswrapper[4698]: I1014 10:12:40.729405 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/a710709f-1c22-4fff-b329-6d446917af01-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"a710709f-1c22-4fff-b329-6d446917af01\") " pod="openstack/rabbitmq-cell1-server-0" Oct 14 10:12:40 crc kubenswrapper[4698]: I1014 10:12:40.731507 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a710709f-1c22-4fff-b329-6d446917af01-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"a710709f-1c22-4fff-b329-6d446917af01\") " pod="openstack/rabbitmq-cell1-server-0" Oct 14 10:12:40 crc kubenswrapper[4698]: I1014 10:12:40.733042 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/a710709f-1c22-4fff-b329-6d446917af01-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"a710709f-1c22-4fff-b329-6d446917af01\") " pod="openstack/rabbitmq-cell1-server-0" Oct 14 10:12:40 crc kubenswrapper[4698]: I1014 10:12:40.733440 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/a710709f-1c22-4fff-b329-6d446917af01-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"a710709f-1c22-4fff-b329-6d446917af01\") " pod="openstack/rabbitmq-cell1-server-0" Oct 14 10:12:40 crc kubenswrapper[4698]: I1014 10:12:40.733686 4698 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"a710709f-1c22-4fff-b329-6d446917af01\") device mount path \"/mnt/openstack/pv05\"" pod="openstack/rabbitmq-cell1-server-0" Oct 14 10:12:40 crc kubenswrapper[4698]: I1014 10:12:40.734615 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/a710709f-1c22-4fff-b329-6d446917af01-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"a710709f-1c22-4fff-b329-6d446917af01\") " pod="openstack/rabbitmq-cell1-server-0" Oct 14 10:12:40 crc kubenswrapper[4698]: I1014 10:12:40.738068 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/a710709f-1c22-4fff-b329-6d446917af01-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"a710709f-1c22-4fff-b329-6d446917af01\") " pod="openstack/rabbitmq-cell1-server-0" Oct 14 10:12:40 crc kubenswrapper[4698]: I1014 10:12:40.738852 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/a710709f-1c22-4fff-b329-6d446917af01-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"a710709f-1c22-4fff-b329-6d446917af01\") " pod="openstack/rabbitmq-cell1-server-0" Oct 14 10:12:40 crc kubenswrapper[4698]: I1014 10:12:40.743244 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/a710709f-1c22-4fff-b329-6d446917af01-rabbitmq-tls\") pod 
\"rabbitmq-cell1-server-0\" (UID: \"a710709f-1c22-4fff-b329-6d446917af01\") " pod="openstack/rabbitmq-cell1-server-0" Oct 14 10:12:40 crc kubenswrapper[4698]: I1014 10:12:40.749898 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/a710709f-1c22-4fff-b329-6d446917af01-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"a710709f-1c22-4fff-b329-6d446917af01\") " pod="openstack/rabbitmq-cell1-server-0" Oct 14 10:12:40 crc kubenswrapper[4698]: I1014 10:12:40.750760 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xgxbs\" (UniqueName: \"kubernetes.io/projected/a710709f-1c22-4fff-b329-6d446917af01-kube-api-access-xgxbs\") pod \"rabbitmq-cell1-server-0\" (UID: \"a710709f-1c22-4fff-b329-6d446917af01\") " pod="openstack/rabbitmq-cell1-server-0" Oct 14 10:12:40 crc kubenswrapper[4698]: I1014 10:12:40.762278 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"a710709f-1c22-4fff-b329-6d446917af01\") " pod="openstack/rabbitmq-cell1-server-0" Oct 14 10:12:40 crc kubenswrapper[4698]: I1014 10:12:40.775078 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-pmpg8" event={"ID":"8f445316-4106-46e7-9302-0d79c368a6db","Type":"ContainerStarted","Data":"21434ebb1f4c12a2b495b54e5be0116afdc37b2498543ea5587859ef1308d6ec"} Oct 14 10:12:40 crc kubenswrapper[4698]: I1014 10:12:40.776939 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-ldb47" event={"ID":"ffcd39f3-8368-43b1-beaa-ec7e75468bad","Type":"ContainerStarted","Data":"9c0e4231651c1d36f18775534eef2ea8429bbe8c00f52f8c7fb9b7257706e152"} Oct 14 10:12:40 crc kubenswrapper[4698]: I1014 10:12:40.851045 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Oct 14 10:12:42 crc kubenswrapper[4698]: I1014 10:12:42.002218 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-galera-0"] Oct 14 10:12:42 crc kubenswrapper[4698]: I1014 10:12:42.004028 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0" Oct 14 10:12:42 crc kubenswrapper[4698]: I1014 10:12:42.006013 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-dockercfg-vwdw4" Oct 14 10:12:42 crc kubenswrapper[4698]: I1014 10:12:42.014088 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-scripts" Oct 14 10:12:42 crc kubenswrapper[4698]: I1014 10:12:42.016757 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Oct 14 10:12:42 crc kubenswrapper[4698]: I1014 10:12:42.018013 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config-data" Oct 14 10:12:42 crc kubenswrapper[4698]: I1014 10:12:42.018033 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-svc" Oct 14 10:12:42 crc kubenswrapper[4698]: I1014 10:12:42.018756 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"combined-ca-bundle" Oct 14 10:12:42 crc kubenswrapper[4698]: I1014 10:12:42.024569 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Oct 14 10:12:42 crc kubenswrapper[4698]: I1014 10:12:42.153170 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/90244b70-b4fa-4b40-a962-119168333566-operator-scripts\") pod \"openstack-galera-0\" (UID: \"90244b70-b4fa-4b40-a962-119168333566\") " pod="openstack/openstack-galera-0" Oct 14 10:12:42 crc kubenswrapper[4698]: I1014 10:12:42.153255 
4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"openstack-galera-0\" (UID: \"90244b70-b4fa-4b40-a962-119168333566\") " pod="openstack/openstack-galera-0" Oct 14 10:12:42 crc kubenswrapper[4698]: I1014 10:12:42.153464 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/90244b70-b4fa-4b40-a962-119168333566-config-data-generated\") pod \"openstack-galera-0\" (UID: \"90244b70-b4fa-4b40-a962-119168333566\") " pod="openstack/openstack-galera-0" Oct 14 10:12:42 crc kubenswrapper[4698]: I1014 10:12:42.153521 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/90244b70-b4fa-4b40-a962-119168333566-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"90244b70-b4fa-4b40-a962-119168333566\") " pod="openstack/openstack-galera-0" Oct 14 10:12:42 crc kubenswrapper[4698]: I1014 10:12:42.153580 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dfg92\" (UniqueName: \"kubernetes.io/projected/90244b70-b4fa-4b40-a962-119168333566-kube-api-access-dfg92\") pod \"openstack-galera-0\" (UID: \"90244b70-b4fa-4b40-a962-119168333566\") " pod="openstack/openstack-galera-0" Oct 14 10:12:42 crc kubenswrapper[4698]: I1014 10:12:42.153659 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/90244b70-b4fa-4b40-a962-119168333566-config-data-default\") pod \"openstack-galera-0\" (UID: \"90244b70-b4fa-4b40-a962-119168333566\") " pod="openstack/openstack-galera-0" Oct 14 10:12:42 crc kubenswrapper[4698]: I1014 10:12:42.153723 4698 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/90244b70-b4fa-4b40-a962-119168333566-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"90244b70-b4fa-4b40-a962-119168333566\") " pod="openstack/openstack-galera-0" Oct 14 10:12:42 crc kubenswrapper[4698]: I1014 10:12:42.153756 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/90244b70-b4fa-4b40-a962-119168333566-kolla-config\") pod \"openstack-galera-0\" (UID: \"90244b70-b4fa-4b40-a962-119168333566\") " pod="openstack/openstack-galera-0" Oct 14 10:12:42 crc kubenswrapper[4698]: I1014 10:12:42.153809 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/secret/90244b70-b4fa-4b40-a962-119168333566-secrets\") pod \"openstack-galera-0\" (UID: \"90244b70-b4fa-4b40-a962-119168333566\") " pod="openstack/openstack-galera-0" Oct 14 10:12:42 crc kubenswrapper[4698]: I1014 10:12:42.254357 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/90244b70-b4fa-4b40-a962-119168333566-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"90244b70-b4fa-4b40-a962-119168333566\") " pod="openstack/openstack-galera-0" Oct 14 10:12:42 crc kubenswrapper[4698]: I1014 10:12:42.255235 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/90244b70-b4fa-4b40-a962-119168333566-kolla-config\") pod \"openstack-galera-0\" (UID: \"90244b70-b4fa-4b40-a962-119168333566\") " pod="openstack/openstack-galera-0" Oct 14 10:12:42 crc kubenswrapper[4698]: I1014 10:12:42.255284 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secrets\" (UniqueName: 
\"kubernetes.io/secret/90244b70-b4fa-4b40-a962-119168333566-secrets\") pod \"openstack-galera-0\" (UID: \"90244b70-b4fa-4b40-a962-119168333566\") " pod="openstack/openstack-galera-0" Oct 14 10:12:42 crc kubenswrapper[4698]: I1014 10:12:42.255305 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/90244b70-b4fa-4b40-a962-119168333566-operator-scripts\") pod \"openstack-galera-0\" (UID: \"90244b70-b4fa-4b40-a962-119168333566\") " pod="openstack/openstack-galera-0" Oct 14 10:12:42 crc kubenswrapper[4698]: I1014 10:12:42.255356 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"openstack-galera-0\" (UID: \"90244b70-b4fa-4b40-a962-119168333566\") " pod="openstack/openstack-galera-0" Oct 14 10:12:42 crc kubenswrapper[4698]: I1014 10:12:42.255388 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/90244b70-b4fa-4b40-a962-119168333566-config-data-generated\") pod \"openstack-galera-0\" (UID: \"90244b70-b4fa-4b40-a962-119168333566\") " pod="openstack/openstack-galera-0" Oct 14 10:12:42 crc kubenswrapper[4698]: I1014 10:12:42.255437 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/90244b70-b4fa-4b40-a962-119168333566-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"90244b70-b4fa-4b40-a962-119168333566\") " pod="openstack/openstack-galera-0" Oct 14 10:12:42 crc kubenswrapper[4698]: I1014 10:12:42.255462 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dfg92\" (UniqueName: \"kubernetes.io/projected/90244b70-b4fa-4b40-a962-119168333566-kube-api-access-dfg92\") pod \"openstack-galera-0\" (UID: \"90244b70-b4fa-4b40-a962-119168333566\") 
" pod="openstack/openstack-galera-0" Oct 14 10:12:42 crc kubenswrapper[4698]: I1014 10:12:42.255542 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/90244b70-b4fa-4b40-a962-119168333566-config-data-default\") pod \"openstack-galera-0\" (UID: \"90244b70-b4fa-4b40-a962-119168333566\") " pod="openstack/openstack-galera-0" Oct 14 10:12:42 crc kubenswrapper[4698]: I1014 10:12:42.256117 4698 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"openstack-galera-0\" (UID: \"90244b70-b4fa-4b40-a962-119168333566\") device mount path \"/mnt/openstack/pv06\"" pod="openstack/openstack-galera-0" Oct 14 10:12:42 crc kubenswrapper[4698]: I1014 10:12:42.256201 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/90244b70-b4fa-4b40-a962-119168333566-config-data-generated\") pod \"openstack-galera-0\" (UID: \"90244b70-b4fa-4b40-a962-119168333566\") " pod="openstack/openstack-galera-0" Oct 14 10:12:42 crc kubenswrapper[4698]: I1014 10:12:42.258158 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/90244b70-b4fa-4b40-a962-119168333566-config-data-default\") pod \"openstack-galera-0\" (UID: \"90244b70-b4fa-4b40-a962-119168333566\") " pod="openstack/openstack-galera-0" Oct 14 10:12:42 crc kubenswrapper[4698]: I1014 10:12:42.258361 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/90244b70-b4fa-4b40-a962-119168333566-kolla-config\") pod \"openstack-galera-0\" (UID: \"90244b70-b4fa-4b40-a962-119168333566\") " pod="openstack/openstack-galera-0" Oct 14 10:12:42 crc kubenswrapper[4698]: I1014 10:12:42.259360 4698 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/90244b70-b4fa-4b40-a962-119168333566-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"90244b70-b4fa-4b40-a962-119168333566\") " pod="openstack/openstack-galera-0" Oct 14 10:12:42 crc kubenswrapper[4698]: I1014 10:12:42.259411 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/90244b70-b4fa-4b40-a962-119168333566-operator-scripts\") pod \"openstack-galera-0\" (UID: \"90244b70-b4fa-4b40-a962-119168333566\") " pod="openstack/openstack-galera-0" Oct 14 10:12:42 crc kubenswrapper[4698]: I1014 10:12:42.260063 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/90244b70-b4fa-4b40-a962-119168333566-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"90244b70-b4fa-4b40-a962-119168333566\") " pod="openstack/openstack-galera-0" Oct 14 10:12:42 crc kubenswrapper[4698]: I1014 10:12:42.260383 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secrets\" (UniqueName: \"kubernetes.io/secret/90244b70-b4fa-4b40-a962-119168333566-secrets\") pod \"openstack-galera-0\" (UID: \"90244b70-b4fa-4b40-a962-119168333566\") " pod="openstack/openstack-galera-0" Oct 14 10:12:42 crc kubenswrapper[4698]: I1014 10:12:42.278813 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dfg92\" (UniqueName: \"kubernetes.io/projected/90244b70-b4fa-4b40-a962-119168333566-kube-api-access-dfg92\") pod \"openstack-galera-0\" (UID: \"90244b70-b4fa-4b40-a962-119168333566\") " pod="openstack/openstack-galera-0" Oct 14 10:12:42 crc kubenswrapper[4698]: I1014 10:12:42.301008 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"openstack-galera-0\" (UID: 
\"90244b70-b4fa-4b40-a962-119168333566\") " pod="openstack/openstack-galera-0" Oct 14 10:12:42 crc kubenswrapper[4698]: I1014 10:12:42.331281 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0" Oct 14 10:12:43 crc kubenswrapper[4698]: I1014 10:12:43.339263 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-cell1-galera-0"] Oct 14 10:12:43 crc kubenswrapper[4698]: I1014 10:12:43.341336 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0" Oct 14 10:12:43 crc kubenswrapper[4698]: I1014 10:12:43.346914 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-cell1-dockercfg-tmxdb" Oct 14 10:12:43 crc kubenswrapper[4698]: I1014 10:12:43.347175 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-config-data" Oct 14 10:12:43 crc kubenswrapper[4698]: I1014 10:12:43.351990 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-scripts" Oct 14 10:12:43 crc kubenswrapper[4698]: I1014 10:12:43.352612 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-cell1-svc" Oct 14 10:12:43 crc kubenswrapper[4698]: I1014 10:12:43.353491 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Oct 14 10:12:43 crc kubenswrapper[4698]: I1014 10:12:43.389933 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/9f3078d2-396d-4f2a-913f-b5c5555e568d-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"9f3078d2-396d-4f2a-913f-b5c5555e568d\") " pod="openstack/openstack-cell1-galera-0" Oct 14 10:12:43 crc kubenswrapper[4698]: I1014 10:12:43.389994 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/9f3078d2-396d-4f2a-913f-b5c5555e568d-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"9f3078d2-396d-4f2a-913f-b5c5555e568d\") " pod="openstack/openstack-cell1-galera-0" Oct 14 10:12:43 crc kubenswrapper[4698]: I1014 10:12:43.390023 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/9f3078d2-396d-4f2a-913f-b5c5555e568d-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"9f3078d2-396d-4f2a-913f-b5c5555e568d\") " pod="openstack/openstack-cell1-galera-0" Oct 14 10:12:43 crc kubenswrapper[4698]: I1014 10:12:43.390088 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/9f3078d2-396d-4f2a-913f-b5c5555e568d-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"9f3078d2-396d-4f2a-913f-b5c5555e568d\") " pod="openstack/openstack-cell1-galera-0" Oct 14 10:12:43 crc kubenswrapper[4698]: I1014 10:12:43.390165 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/secret/9f3078d2-396d-4f2a-913f-b5c5555e568d-secrets\") pod \"openstack-cell1-galera-0\" (UID: \"9f3078d2-396d-4f2a-913f-b5c5555e568d\") " pod="openstack/openstack-cell1-galera-0" Oct 14 10:12:43 crc kubenswrapper[4698]: I1014 10:12:43.390215 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ft6xf\" (UniqueName: \"kubernetes.io/projected/9f3078d2-396d-4f2a-913f-b5c5555e568d-kube-api-access-ft6xf\") pod \"openstack-cell1-galera-0\" (UID: \"9f3078d2-396d-4f2a-913f-b5c5555e568d\") " pod="openstack/openstack-cell1-galera-0" Oct 14 10:12:43 crc kubenswrapper[4698]: I1014 10:12:43.390258 4698 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9f3078d2-396d-4f2a-913f-b5c5555e568d-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"9f3078d2-396d-4f2a-913f-b5c5555e568d\") " pod="openstack/openstack-cell1-galera-0" Oct 14 10:12:43 crc kubenswrapper[4698]: I1014 10:12:43.390292 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9f3078d2-396d-4f2a-913f-b5c5555e568d-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"9f3078d2-396d-4f2a-913f-b5c5555e568d\") " pod="openstack/openstack-cell1-galera-0" Oct 14 10:12:43 crc kubenswrapper[4698]: I1014 10:12:43.390319 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"openstack-cell1-galera-0\" (UID: \"9f3078d2-396d-4f2a-913f-b5c5555e568d\") " pod="openstack/openstack-cell1-galera-0" Oct 14 10:12:43 crc kubenswrapper[4698]: I1014 10:12:43.492043 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/9f3078d2-396d-4f2a-913f-b5c5555e568d-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"9f3078d2-396d-4f2a-913f-b5c5555e568d\") " pod="openstack/openstack-cell1-galera-0" Oct 14 10:12:43 crc kubenswrapper[4698]: I1014 10:12:43.492121 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/9f3078d2-396d-4f2a-913f-b5c5555e568d-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"9f3078d2-396d-4f2a-913f-b5c5555e568d\") " pod="openstack/openstack-cell1-galera-0" Oct 14 10:12:43 crc kubenswrapper[4698]: I1014 10:12:43.492159 4698 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/9f3078d2-396d-4f2a-913f-b5c5555e568d-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"9f3078d2-396d-4f2a-913f-b5c5555e568d\") " pod="openstack/openstack-cell1-galera-0" Oct 14 10:12:43 crc kubenswrapper[4698]: I1014 10:12:43.492685 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/9f3078d2-396d-4f2a-913f-b5c5555e568d-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"9f3078d2-396d-4f2a-913f-b5c5555e568d\") " pod="openstack/openstack-cell1-galera-0" Oct 14 10:12:43 crc kubenswrapper[4698]: I1014 10:12:43.493542 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/9f3078d2-396d-4f2a-913f-b5c5555e568d-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"9f3078d2-396d-4f2a-913f-b5c5555e568d\") " pod="openstack/openstack-cell1-galera-0" Oct 14 10:12:43 crc kubenswrapper[4698]: I1014 10:12:43.493592 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/9f3078d2-396d-4f2a-913f-b5c5555e568d-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"9f3078d2-396d-4f2a-913f-b5c5555e568d\") " pod="openstack/openstack-cell1-galera-0" Oct 14 10:12:43 crc kubenswrapper[4698]: I1014 10:12:43.492231 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/9f3078d2-396d-4f2a-913f-b5c5555e568d-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"9f3078d2-396d-4f2a-913f-b5c5555e568d\") " pod="openstack/openstack-cell1-galera-0" Oct 14 10:12:43 crc kubenswrapper[4698]: I1014 10:12:43.493687 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secrets\" (UniqueName: 
\"kubernetes.io/secret/9f3078d2-396d-4f2a-913f-b5c5555e568d-secrets\") pod \"openstack-cell1-galera-0\" (UID: \"9f3078d2-396d-4f2a-913f-b5c5555e568d\") " pod="openstack/openstack-cell1-galera-0" Oct 14 10:12:43 crc kubenswrapper[4698]: I1014 10:12:43.493721 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ft6xf\" (UniqueName: \"kubernetes.io/projected/9f3078d2-396d-4f2a-913f-b5c5555e568d-kube-api-access-ft6xf\") pod \"openstack-cell1-galera-0\" (UID: \"9f3078d2-396d-4f2a-913f-b5c5555e568d\") " pod="openstack/openstack-cell1-galera-0" Oct 14 10:12:43 crc kubenswrapper[4698]: I1014 10:12:43.494265 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9f3078d2-396d-4f2a-913f-b5c5555e568d-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"9f3078d2-396d-4f2a-913f-b5c5555e568d\") " pod="openstack/openstack-cell1-galera-0" Oct 14 10:12:43 crc kubenswrapper[4698]: I1014 10:12:43.494304 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9f3078d2-396d-4f2a-913f-b5c5555e568d-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"9f3078d2-396d-4f2a-913f-b5c5555e568d\") " pod="openstack/openstack-cell1-galera-0" Oct 14 10:12:43 crc kubenswrapper[4698]: I1014 10:12:43.494323 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"openstack-cell1-galera-0\" (UID: \"9f3078d2-396d-4f2a-913f-b5c5555e568d\") " pod="openstack/openstack-cell1-galera-0" Oct 14 10:12:43 crc kubenswrapper[4698]: I1014 10:12:43.495559 4698 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"openstack-cell1-galera-0\" (UID: 
\"9f3078d2-396d-4f2a-913f-b5c5555e568d\") device mount path \"/mnt/openstack/pv10\"" pod="openstack/openstack-cell1-galera-0" Oct 14 10:12:43 crc kubenswrapper[4698]: I1014 10:12:43.497000 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9f3078d2-396d-4f2a-913f-b5c5555e568d-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"9f3078d2-396d-4f2a-913f-b5c5555e568d\") " pod="openstack/openstack-cell1-galera-0" Oct 14 10:12:43 crc kubenswrapper[4698]: I1014 10:12:43.499513 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/9f3078d2-396d-4f2a-913f-b5c5555e568d-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"9f3078d2-396d-4f2a-913f-b5c5555e568d\") " pod="openstack/openstack-cell1-galera-0" Oct 14 10:12:43 crc kubenswrapper[4698]: I1014 10:12:43.504407 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9f3078d2-396d-4f2a-913f-b5c5555e568d-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"9f3078d2-396d-4f2a-913f-b5c5555e568d\") " pod="openstack/openstack-cell1-galera-0" Oct 14 10:12:43 crc kubenswrapper[4698]: I1014 10:12:43.506344 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secrets\" (UniqueName: \"kubernetes.io/secret/9f3078d2-396d-4f2a-913f-b5c5555e568d-secrets\") pod \"openstack-cell1-galera-0\" (UID: \"9f3078d2-396d-4f2a-913f-b5c5555e568d\") " pod="openstack/openstack-cell1-galera-0" Oct 14 10:12:43 crc kubenswrapper[4698]: I1014 10:12:43.514431 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ft6xf\" (UniqueName: \"kubernetes.io/projected/9f3078d2-396d-4f2a-913f-b5c5555e568d-kube-api-access-ft6xf\") pod \"openstack-cell1-galera-0\" (UID: \"9f3078d2-396d-4f2a-913f-b5c5555e568d\") " pod="openstack/openstack-cell1-galera-0" 
Oct 14 10:12:43 crc kubenswrapper[4698]: I1014 10:12:43.543219 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"openstack-cell1-galera-0\" (UID: \"9f3078d2-396d-4f2a-913f-b5c5555e568d\") " pod="openstack/openstack-cell1-galera-0" Oct 14 10:12:43 crc kubenswrapper[4698]: I1014 10:12:43.605188 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/memcached-0"] Oct 14 10:12:43 crc kubenswrapper[4698]: I1014 10:12:43.606468 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Oct 14 10:12:43 crc kubenswrapper[4698]: I1014 10:12:43.609136 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"memcached-memcached-dockercfg-w2rl2" Oct 14 10:12:43 crc kubenswrapper[4698]: I1014 10:12:43.609368 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-memcached-svc" Oct 14 10:12:43 crc kubenswrapper[4698]: I1014 10:12:43.611322 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"memcached-config-data" Oct 14 10:12:43 crc kubenswrapper[4698]: I1014 10:12:43.661594 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Oct 14 10:12:43 crc kubenswrapper[4698]: I1014 10:12:43.666866 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-cell1-galera-0" Oct 14 10:12:43 crc kubenswrapper[4698]: I1014 10:12:43.697265 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e3bc7f78-d69f-426c-9aeb-4837d25635ab-combined-ca-bundle\") pod \"memcached-0\" (UID: \"e3bc7f78-d69f-426c-9aeb-4837d25635ab\") " pod="openstack/memcached-0" Oct 14 10:12:43 crc kubenswrapper[4698]: I1014 10:12:43.697343 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-56q2x\" (UniqueName: \"kubernetes.io/projected/e3bc7f78-d69f-426c-9aeb-4837d25635ab-kube-api-access-56q2x\") pod \"memcached-0\" (UID: \"e3bc7f78-d69f-426c-9aeb-4837d25635ab\") " pod="openstack/memcached-0" Oct 14 10:12:43 crc kubenswrapper[4698]: I1014 10:12:43.697374 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/e3bc7f78-d69f-426c-9aeb-4837d25635ab-memcached-tls-certs\") pod \"memcached-0\" (UID: \"e3bc7f78-d69f-426c-9aeb-4837d25635ab\") " pod="openstack/memcached-0" Oct 14 10:12:43 crc kubenswrapper[4698]: I1014 10:12:43.697399 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/e3bc7f78-d69f-426c-9aeb-4837d25635ab-config-data\") pod \"memcached-0\" (UID: \"e3bc7f78-d69f-426c-9aeb-4837d25635ab\") " pod="openstack/memcached-0" Oct 14 10:12:43 crc kubenswrapper[4698]: I1014 10:12:43.697464 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/e3bc7f78-d69f-426c-9aeb-4837d25635ab-kolla-config\") pod \"memcached-0\" (UID: \"e3bc7f78-d69f-426c-9aeb-4837d25635ab\") " pod="openstack/memcached-0" Oct 14 10:12:43 crc kubenswrapper[4698]: I1014 
10:12:43.799583 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e3bc7f78-d69f-426c-9aeb-4837d25635ab-combined-ca-bundle\") pod \"memcached-0\" (UID: \"e3bc7f78-d69f-426c-9aeb-4837d25635ab\") " pod="openstack/memcached-0" Oct 14 10:12:43 crc kubenswrapper[4698]: I1014 10:12:43.799717 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-56q2x\" (UniqueName: \"kubernetes.io/projected/e3bc7f78-d69f-426c-9aeb-4837d25635ab-kube-api-access-56q2x\") pod \"memcached-0\" (UID: \"e3bc7f78-d69f-426c-9aeb-4837d25635ab\") " pod="openstack/memcached-0" Oct 14 10:12:43 crc kubenswrapper[4698]: I1014 10:12:43.799748 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/e3bc7f78-d69f-426c-9aeb-4837d25635ab-config-data\") pod \"memcached-0\" (UID: \"e3bc7f78-d69f-426c-9aeb-4837d25635ab\") " pod="openstack/memcached-0" Oct 14 10:12:43 crc kubenswrapper[4698]: I1014 10:12:43.799783 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/e3bc7f78-d69f-426c-9aeb-4837d25635ab-memcached-tls-certs\") pod \"memcached-0\" (UID: \"e3bc7f78-d69f-426c-9aeb-4837d25635ab\") " pod="openstack/memcached-0" Oct 14 10:12:43 crc kubenswrapper[4698]: I1014 10:12:43.799850 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/e3bc7f78-d69f-426c-9aeb-4837d25635ab-kolla-config\") pod \"memcached-0\" (UID: \"e3bc7f78-d69f-426c-9aeb-4837d25635ab\") " pod="openstack/memcached-0" Oct 14 10:12:43 crc kubenswrapper[4698]: I1014 10:12:43.801136 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/e3bc7f78-d69f-426c-9aeb-4837d25635ab-kolla-config\") pod 
\"memcached-0\" (UID: \"e3bc7f78-d69f-426c-9aeb-4837d25635ab\") " pod="openstack/memcached-0" Oct 14 10:12:43 crc kubenswrapper[4698]: I1014 10:12:43.801249 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/e3bc7f78-d69f-426c-9aeb-4837d25635ab-config-data\") pod \"memcached-0\" (UID: \"e3bc7f78-d69f-426c-9aeb-4837d25635ab\") " pod="openstack/memcached-0" Oct 14 10:12:43 crc kubenswrapper[4698]: I1014 10:12:43.806398 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e3bc7f78-d69f-426c-9aeb-4837d25635ab-combined-ca-bundle\") pod \"memcached-0\" (UID: \"e3bc7f78-d69f-426c-9aeb-4837d25635ab\") " pod="openstack/memcached-0" Oct 14 10:12:43 crc kubenswrapper[4698]: I1014 10:12:43.810281 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/e3bc7f78-d69f-426c-9aeb-4837d25635ab-memcached-tls-certs\") pod \"memcached-0\" (UID: \"e3bc7f78-d69f-426c-9aeb-4837d25635ab\") " pod="openstack/memcached-0" Oct 14 10:12:43 crc kubenswrapper[4698]: I1014 10:12:43.822994 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-56q2x\" (UniqueName: \"kubernetes.io/projected/e3bc7f78-d69f-426c-9aeb-4837d25635ab-kube-api-access-56q2x\") pod \"memcached-0\" (UID: \"e3bc7f78-d69f-426c-9aeb-4837d25635ab\") " pod="openstack/memcached-0" Oct 14 10:12:43 crc kubenswrapper[4698]: I1014 10:12:43.940625 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Oct 14 10:12:45 crc kubenswrapper[4698]: I1014 10:12:45.644714 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Oct 14 10:12:45 crc kubenswrapper[4698]: I1014 10:12:45.646785 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Oct 14 10:12:45 crc kubenswrapper[4698]: I1014 10:12:45.652306 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-ceilometer-dockercfg-g8kz4" Oct 14 10:12:45 crc kubenswrapper[4698]: I1014 10:12:45.668691 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Oct 14 10:12:45 crc kubenswrapper[4698]: I1014 10:12:45.731751 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jmzq9\" (UniqueName: \"kubernetes.io/projected/6702faf6-e3b2-44f8-a033-ba5fd85af368-kube-api-access-jmzq9\") pod \"kube-state-metrics-0\" (UID: \"6702faf6-e3b2-44f8-a033-ba5fd85af368\") " pod="openstack/kube-state-metrics-0" Oct 14 10:12:45 crc kubenswrapper[4698]: I1014 10:12:45.833102 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jmzq9\" (UniqueName: \"kubernetes.io/projected/6702faf6-e3b2-44f8-a033-ba5fd85af368-kube-api-access-jmzq9\") pod \"kube-state-metrics-0\" (UID: \"6702faf6-e3b2-44f8-a033-ba5fd85af368\") " pod="openstack/kube-state-metrics-0" Oct 14 10:12:45 crc kubenswrapper[4698]: I1014 10:12:45.855054 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jmzq9\" (UniqueName: \"kubernetes.io/projected/6702faf6-e3b2-44f8-a033-ba5fd85af368-kube-api-access-jmzq9\") pod \"kube-state-metrics-0\" (UID: \"6702faf6-e3b2-44f8-a033-ba5fd85af368\") " pod="openstack/kube-state-metrics-0" Oct 14 10:12:45 crc kubenswrapper[4698]: I1014 10:12:45.970731 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Oct 14 10:12:48 crc kubenswrapper[4698]: I1014 10:12:48.877891 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-24vqt"] Oct 14 10:12:48 crc kubenswrapper[4698]: I1014 10:12:48.881130 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-24vqt" Oct 14 10:12:48 crc kubenswrapper[4698]: I1014 10:12:48.883419 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovncontroller-ovndbs" Oct 14 10:12:48 crc kubenswrapper[4698]: I1014 10:12:48.888989 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncontroller-ovncontroller-dockercfg-9vkl6" Oct 14 10:12:48 crc kubenswrapper[4698]: I1014 10:12:48.889176 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-scripts" Oct 14 10:12:48 crc kubenswrapper[4698]: I1014 10:12:48.891736 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-24vqt"] Oct 14 10:12:48 crc kubenswrapper[4698]: I1014 10:12:48.919975 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-ovs-2cb6b"] Oct 14 10:12:48 crc kubenswrapper[4698]: I1014 10:12:48.922693 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-ovs-2cb6b" Oct 14 10:12:48 crc kubenswrapper[4698]: I1014 10:12:48.954087 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-2cb6b"] Oct 14 10:12:49 crc kubenswrapper[4698]: I1014 10:12:49.000272 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/b64163c4-e040-4bec-a585-c55f9d05e948-var-log-ovn\") pod \"ovn-controller-24vqt\" (UID: \"b64163c4-e040-4bec-a585-c55f9d05e948\") " pod="openstack/ovn-controller-24vqt" Oct 14 10:12:49 crc kubenswrapper[4698]: I1014 10:12:49.001071 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b64163c4-e040-4bec-a585-c55f9d05e948-combined-ca-bundle\") pod \"ovn-controller-24vqt\" (UID: \"b64163c4-e040-4bec-a585-c55f9d05e948\") " pod="openstack/ovn-controller-24vqt" Oct 14 10:12:49 crc kubenswrapper[4698]: I1014 10:12:49.001218 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b64163c4-e040-4bec-a585-c55f9d05e948-scripts\") pod \"ovn-controller-24vqt\" (UID: \"b64163c4-e040-4bec-a585-c55f9d05e948\") " pod="openstack/ovn-controller-24vqt" Oct 14 10:12:49 crc kubenswrapper[4698]: I1014 10:12:49.001284 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/b64163c4-e040-4bec-a585-c55f9d05e948-ovn-controller-tls-certs\") pod \"ovn-controller-24vqt\" (UID: \"b64163c4-e040-4bec-a585-c55f9d05e948\") " pod="openstack/ovn-controller-24vqt" Oct 14 10:12:49 crc kubenswrapper[4698]: I1014 10:12:49.001310 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5sg7h\" (UniqueName: 
\"kubernetes.io/projected/b64163c4-e040-4bec-a585-c55f9d05e948-kube-api-access-5sg7h\") pod \"ovn-controller-24vqt\" (UID: \"b64163c4-e040-4bec-a585-c55f9d05e948\") " pod="openstack/ovn-controller-24vqt" Oct 14 10:12:49 crc kubenswrapper[4698]: I1014 10:12:49.001375 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/b64163c4-e040-4bec-a585-c55f9d05e948-var-run-ovn\") pod \"ovn-controller-24vqt\" (UID: \"b64163c4-e040-4bec-a585-c55f9d05e948\") " pod="openstack/ovn-controller-24vqt" Oct 14 10:12:49 crc kubenswrapper[4698]: I1014 10:12:49.001410 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/b64163c4-e040-4bec-a585-c55f9d05e948-var-run\") pod \"ovn-controller-24vqt\" (UID: \"b64163c4-e040-4bec-a585-c55f9d05e948\") " pod="openstack/ovn-controller-24vqt" Oct 14 10:12:49 crc kubenswrapper[4698]: I1014 10:12:49.103345 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/62b0f0f2-3cb3-4ba7-8387-a3bd2a38debe-var-lib\") pod \"ovn-controller-ovs-2cb6b\" (UID: \"62b0f0f2-3cb3-4ba7-8387-a3bd2a38debe\") " pod="openstack/ovn-controller-ovs-2cb6b" Oct 14 10:12:49 crc kubenswrapper[4698]: I1014 10:12:49.103416 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/b64163c4-e040-4bec-a585-c55f9d05e948-var-run\") pod \"ovn-controller-24vqt\" (UID: \"b64163c4-e040-4bec-a585-c55f9d05e948\") " pod="openstack/ovn-controller-24vqt" Oct 14 10:12:49 crc kubenswrapper[4698]: I1014 10:12:49.103442 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/62b0f0f2-3cb3-4ba7-8387-a3bd2a38debe-var-run\") pod 
\"ovn-controller-ovs-2cb6b\" (UID: \"62b0f0f2-3cb3-4ba7-8387-a3bd2a38debe\") " pod="openstack/ovn-controller-ovs-2cb6b" Oct 14 10:12:49 crc kubenswrapper[4698]: I1014 10:12:49.103507 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/b64163c4-e040-4bec-a585-c55f9d05e948-var-log-ovn\") pod \"ovn-controller-24vqt\" (UID: \"b64163c4-e040-4bec-a585-c55f9d05e948\") " pod="openstack/ovn-controller-24vqt" Oct 14 10:12:49 crc kubenswrapper[4698]: I1014 10:12:49.103533 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/62b0f0f2-3cb3-4ba7-8387-a3bd2a38debe-etc-ovs\") pod \"ovn-controller-ovs-2cb6b\" (UID: \"62b0f0f2-3cb3-4ba7-8387-a3bd2a38debe\") " pod="openstack/ovn-controller-ovs-2cb6b" Oct 14 10:12:49 crc kubenswrapper[4698]: I1014 10:12:49.103554 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b64163c4-e040-4bec-a585-c55f9d05e948-combined-ca-bundle\") pod \"ovn-controller-24vqt\" (UID: \"b64163c4-e040-4bec-a585-c55f9d05e948\") " pod="openstack/ovn-controller-24vqt" Oct 14 10:12:49 crc kubenswrapper[4698]: I1014 10:12:49.103578 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/62b0f0f2-3cb3-4ba7-8387-a3bd2a38debe-var-log\") pod \"ovn-controller-ovs-2cb6b\" (UID: \"62b0f0f2-3cb3-4ba7-8387-a3bd2a38debe\") " pod="openstack/ovn-controller-ovs-2cb6b" Oct 14 10:12:49 crc kubenswrapper[4698]: I1014 10:12:49.103619 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b64163c4-e040-4bec-a585-c55f9d05e948-scripts\") pod \"ovn-controller-24vqt\" (UID: \"b64163c4-e040-4bec-a585-c55f9d05e948\") " pod="openstack/ovn-controller-24vqt" Oct 
14 10:12:49 crc kubenswrapper[4698]: I1014 10:12:49.103649 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/62b0f0f2-3cb3-4ba7-8387-a3bd2a38debe-scripts\") pod \"ovn-controller-ovs-2cb6b\" (UID: \"62b0f0f2-3cb3-4ba7-8387-a3bd2a38debe\") " pod="openstack/ovn-controller-ovs-2cb6b" Oct 14 10:12:49 crc kubenswrapper[4698]: I1014 10:12:49.103704 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dwfkx\" (UniqueName: \"kubernetes.io/projected/62b0f0f2-3cb3-4ba7-8387-a3bd2a38debe-kube-api-access-dwfkx\") pod \"ovn-controller-ovs-2cb6b\" (UID: \"62b0f0f2-3cb3-4ba7-8387-a3bd2a38debe\") " pod="openstack/ovn-controller-ovs-2cb6b" Oct 14 10:12:49 crc kubenswrapper[4698]: I1014 10:12:49.103727 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/b64163c4-e040-4bec-a585-c55f9d05e948-ovn-controller-tls-certs\") pod \"ovn-controller-24vqt\" (UID: \"b64163c4-e040-4bec-a585-c55f9d05e948\") " pod="openstack/ovn-controller-24vqt" Oct 14 10:12:49 crc kubenswrapper[4698]: I1014 10:12:49.103745 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5sg7h\" (UniqueName: \"kubernetes.io/projected/b64163c4-e040-4bec-a585-c55f9d05e948-kube-api-access-5sg7h\") pod \"ovn-controller-24vqt\" (UID: \"b64163c4-e040-4bec-a585-c55f9d05e948\") " pod="openstack/ovn-controller-24vqt" Oct 14 10:12:49 crc kubenswrapper[4698]: I1014 10:12:49.103791 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/b64163c4-e040-4bec-a585-c55f9d05e948-var-run-ovn\") pod \"ovn-controller-24vqt\" (UID: \"b64163c4-e040-4bec-a585-c55f9d05e948\") " pod="openstack/ovn-controller-24vqt" Oct 14 10:12:49 crc kubenswrapper[4698]: I1014 
10:12:49.104541 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/b64163c4-e040-4bec-a585-c55f9d05e948-var-run-ovn\") pod \"ovn-controller-24vqt\" (UID: \"b64163c4-e040-4bec-a585-c55f9d05e948\") " pod="openstack/ovn-controller-24vqt" Oct 14 10:12:49 crc kubenswrapper[4698]: I1014 10:12:49.104683 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/b64163c4-e040-4bec-a585-c55f9d05e948-var-run\") pod \"ovn-controller-24vqt\" (UID: \"b64163c4-e040-4bec-a585-c55f9d05e948\") " pod="openstack/ovn-controller-24vqt" Oct 14 10:12:49 crc kubenswrapper[4698]: I1014 10:12:49.104931 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/b64163c4-e040-4bec-a585-c55f9d05e948-var-log-ovn\") pod \"ovn-controller-24vqt\" (UID: \"b64163c4-e040-4bec-a585-c55f9d05e948\") " pod="openstack/ovn-controller-24vqt" Oct 14 10:12:49 crc kubenswrapper[4698]: I1014 10:12:49.109567 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b64163c4-e040-4bec-a585-c55f9d05e948-scripts\") pod \"ovn-controller-24vqt\" (UID: \"b64163c4-e040-4bec-a585-c55f9d05e948\") " pod="openstack/ovn-controller-24vqt" Oct 14 10:12:49 crc kubenswrapper[4698]: I1014 10:12:49.115388 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b64163c4-e040-4bec-a585-c55f9d05e948-combined-ca-bundle\") pod \"ovn-controller-24vqt\" (UID: \"b64163c4-e040-4bec-a585-c55f9d05e948\") " pod="openstack/ovn-controller-24vqt" Oct 14 10:12:49 crc kubenswrapper[4698]: I1014 10:12:49.136492 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-controller-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/b64163c4-e040-4bec-a585-c55f9d05e948-ovn-controller-tls-certs\") pod \"ovn-controller-24vqt\" (UID: \"b64163c4-e040-4bec-a585-c55f9d05e948\") " pod="openstack/ovn-controller-24vqt" Oct 14 10:12:49 crc kubenswrapper[4698]: I1014 10:12:49.137051 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5sg7h\" (UniqueName: \"kubernetes.io/projected/b64163c4-e040-4bec-a585-c55f9d05e948-kube-api-access-5sg7h\") pod \"ovn-controller-24vqt\" (UID: \"b64163c4-e040-4bec-a585-c55f9d05e948\") " pod="openstack/ovn-controller-24vqt" Oct 14 10:12:49 crc kubenswrapper[4698]: I1014 10:12:49.206133 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dwfkx\" (UniqueName: \"kubernetes.io/projected/62b0f0f2-3cb3-4ba7-8387-a3bd2a38debe-kube-api-access-dwfkx\") pod \"ovn-controller-ovs-2cb6b\" (UID: \"62b0f0f2-3cb3-4ba7-8387-a3bd2a38debe\") " pod="openstack/ovn-controller-ovs-2cb6b" Oct 14 10:12:49 crc kubenswrapper[4698]: I1014 10:12:49.206231 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/62b0f0f2-3cb3-4ba7-8387-a3bd2a38debe-var-lib\") pod \"ovn-controller-ovs-2cb6b\" (UID: \"62b0f0f2-3cb3-4ba7-8387-a3bd2a38debe\") " pod="openstack/ovn-controller-ovs-2cb6b" Oct 14 10:12:49 crc kubenswrapper[4698]: I1014 10:12:49.206264 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/62b0f0f2-3cb3-4ba7-8387-a3bd2a38debe-var-run\") pod \"ovn-controller-ovs-2cb6b\" (UID: \"62b0f0f2-3cb3-4ba7-8387-a3bd2a38debe\") " pod="openstack/ovn-controller-ovs-2cb6b" Oct 14 10:12:49 crc kubenswrapper[4698]: I1014 10:12:49.206329 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/62b0f0f2-3cb3-4ba7-8387-a3bd2a38debe-etc-ovs\") pod \"ovn-controller-ovs-2cb6b\" (UID: 
\"62b0f0f2-3cb3-4ba7-8387-a3bd2a38debe\") " pod="openstack/ovn-controller-ovs-2cb6b" Oct 14 10:12:49 crc kubenswrapper[4698]: I1014 10:12:49.206360 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/62b0f0f2-3cb3-4ba7-8387-a3bd2a38debe-var-log\") pod \"ovn-controller-ovs-2cb6b\" (UID: \"62b0f0f2-3cb3-4ba7-8387-a3bd2a38debe\") " pod="openstack/ovn-controller-ovs-2cb6b" Oct 14 10:12:49 crc kubenswrapper[4698]: I1014 10:12:49.206412 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/62b0f0f2-3cb3-4ba7-8387-a3bd2a38debe-scripts\") pod \"ovn-controller-ovs-2cb6b\" (UID: \"62b0f0f2-3cb3-4ba7-8387-a3bd2a38debe\") " pod="openstack/ovn-controller-ovs-2cb6b" Oct 14 10:12:49 crc kubenswrapper[4698]: I1014 10:12:49.206574 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/62b0f0f2-3cb3-4ba7-8387-a3bd2a38debe-var-run\") pod \"ovn-controller-ovs-2cb6b\" (UID: \"62b0f0f2-3cb3-4ba7-8387-a3bd2a38debe\") " pod="openstack/ovn-controller-ovs-2cb6b" Oct 14 10:12:49 crc kubenswrapper[4698]: I1014 10:12:49.206617 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/62b0f0f2-3cb3-4ba7-8387-a3bd2a38debe-var-lib\") pod \"ovn-controller-ovs-2cb6b\" (UID: \"62b0f0f2-3cb3-4ba7-8387-a3bd2a38debe\") " pod="openstack/ovn-controller-ovs-2cb6b" Oct 14 10:12:49 crc kubenswrapper[4698]: I1014 10:12:49.206745 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/62b0f0f2-3cb3-4ba7-8387-a3bd2a38debe-etc-ovs\") pod \"ovn-controller-ovs-2cb6b\" (UID: \"62b0f0f2-3cb3-4ba7-8387-a3bd2a38debe\") " pod="openstack/ovn-controller-ovs-2cb6b" Oct 14 10:12:49 crc kubenswrapper[4698]: I1014 10:12:49.206891 4698 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/62b0f0f2-3cb3-4ba7-8387-a3bd2a38debe-var-log\") pod \"ovn-controller-ovs-2cb6b\" (UID: \"62b0f0f2-3cb3-4ba7-8387-a3bd2a38debe\") " pod="openstack/ovn-controller-ovs-2cb6b" Oct 14 10:12:49 crc kubenswrapper[4698]: I1014 10:12:49.211597 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/62b0f0f2-3cb3-4ba7-8387-a3bd2a38debe-scripts\") pod \"ovn-controller-ovs-2cb6b\" (UID: \"62b0f0f2-3cb3-4ba7-8387-a3bd2a38debe\") " pod="openstack/ovn-controller-ovs-2cb6b" Oct 14 10:12:49 crc kubenswrapper[4698]: I1014 10:12:49.217458 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-24vqt" Oct 14 10:12:49 crc kubenswrapper[4698]: I1014 10:12:49.224057 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dwfkx\" (UniqueName: \"kubernetes.io/projected/62b0f0f2-3cb3-4ba7-8387-a3bd2a38debe-kube-api-access-dwfkx\") pod \"ovn-controller-ovs-2cb6b\" (UID: \"62b0f0f2-3cb3-4ba7-8387-a3bd2a38debe\") " pod="openstack/ovn-controller-ovs-2cb6b" Oct 14 10:12:49 crc kubenswrapper[4698]: I1014 10:12:49.256060 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-2cb6b" Oct 14 10:12:49 crc kubenswrapper[4698]: I1014 10:12:49.371442 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-nb-0"] Oct 14 10:12:49 crc kubenswrapper[4698]: I1014 10:12:49.373406 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-nb-0" Oct 14 10:12:49 crc kubenswrapper[4698]: I1014 10:12:49.376573 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-nb-ovndbs" Oct 14 10:12:49 crc kubenswrapper[4698]: I1014 10:12:49.376829 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-config" Oct 14 10:12:49 crc kubenswrapper[4698]: I1014 10:12:49.376965 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovn-metrics" Oct 14 10:12:49 crc kubenswrapper[4698]: I1014 10:12:49.378800 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-scripts" Oct 14 10:12:49 crc kubenswrapper[4698]: I1014 10:12:49.378867 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Oct 14 10:12:49 crc kubenswrapper[4698]: I1014 10:12:49.382124 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-nb-dockercfg-d7r5x" Oct 14 10:12:49 crc kubenswrapper[4698]: I1014 10:12:49.514404 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/884f9a07-9f80-44ff-a1e5-805d6d5ef6fb-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"884f9a07-9f80-44ff-a1e5-805d6d5ef6fb\") " pod="openstack/ovsdbserver-nb-0" Oct 14 10:12:49 crc kubenswrapper[4698]: I1014 10:12:49.514480 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"ovsdbserver-nb-0\" (UID: \"884f9a07-9f80-44ff-a1e5-805d6d5ef6fb\") " pod="openstack/ovsdbserver-nb-0" Oct 14 10:12:49 crc kubenswrapper[4698]: I1014 10:12:49.514512 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/884f9a07-9f80-44ff-a1e5-805d6d5ef6fb-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"884f9a07-9f80-44ff-a1e5-805d6d5ef6fb\") " pod="openstack/ovsdbserver-nb-0" Oct 14 10:12:49 crc kubenswrapper[4698]: I1014 10:12:49.514542 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/884f9a07-9f80-44ff-a1e5-805d6d5ef6fb-config\") pod \"ovsdbserver-nb-0\" (UID: \"884f9a07-9f80-44ff-a1e5-805d6d5ef6fb\") " pod="openstack/ovsdbserver-nb-0" Oct 14 10:12:49 crc kubenswrapper[4698]: I1014 10:12:49.514564 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/884f9a07-9f80-44ff-a1e5-805d6d5ef6fb-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"884f9a07-9f80-44ff-a1e5-805d6d5ef6fb\") " pod="openstack/ovsdbserver-nb-0" Oct 14 10:12:49 crc kubenswrapper[4698]: I1014 10:12:49.514582 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/884f9a07-9f80-44ff-a1e5-805d6d5ef6fb-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"884f9a07-9f80-44ff-a1e5-805d6d5ef6fb\") " pod="openstack/ovsdbserver-nb-0" Oct 14 10:12:49 crc kubenswrapper[4698]: I1014 10:12:49.514609 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sk2vs\" (UniqueName: \"kubernetes.io/projected/884f9a07-9f80-44ff-a1e5-805d6d5ef6fb-kube-api-access-sk2vs\") pod \"ovsdbserver-nb-0\" (UID: \"884f9a07-9f80-44ff-a1e5-805d6d5ef6fb\") " pod="openstack/ovsdbserver-nb-0" Oct 14 10:12:49 crc kubenswrapper[4698]: I1014 10:12:49.514672 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/884f9a07-9f80-44ff-a1e5-805d6d5ef6fb-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"884f9a07-9f80-44ff-a1e5-805d6d5ef6fb\") " pod="openstack/ovsdbserver-nb-0" Oct 14 10:12:49 crc kubenswrapper[4698]: I1014 10:12:49.616524 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/884f9a07-9f80-44ff-a1e5-805d6d5ef6fb-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"884f9a07-9f80-44ff-a1e5-805d6d5ef6fb\") " pod="openstack/ovsdbserver-nb-0" Oct 14 10:12:49 crc kubenswrapper[4698]: I1014 10:12:49.616589 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/884f9a07-9f80-44ff-a1e5-805d6d5ef6fb-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"884f9a07-9f80-44ff-a1e5-805d6d5ef6fb\") " pod="openstack/ovsdbserver-nb-0" Oct 14 10:12:49 crc kubenswrapper[4698]: I1014 10:12:49.616621 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sk2vs\" (UniqueName: \"kubernetes.io/projected/884f9a07-9f80-44ff-a1e5-805d6d5ef6fb-kube-api-access-sk2vs\") pod \"ovsdbserver-nb-0\" (UID: \"884f9a07-9f80-44ff-a1e5-805d6d5ef6fb\") " pod="openstack/ovsdbserver-nb-0" Oct 14 10:12:49 crc kubenswrapper[4698]: I1014 10:12:49.616663 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/884f9a07-9f80-44ff-a1e5-805d6d5ef6fb-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"884f9a07-9f80-44ff-a1e5-805d6d5ef6fb\") " pod="openstack/ovsdbserver-nb-0" Oct 14 10:12:49 crc kubenswrapper[4698]: I1014 10:12:49.616697 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/884f9a07-9f80-44ff-a1e5-805d6d5ef6fb-scripts\") pod \"ovsdbserver-nb-0\" (UID: 
\"884f9a07-9f80-44ff-a1e5-805d6d5ef6fb\") " pod="openstack/ovsdbserver-nb-0" Oct 14 10:12:49 crc kubenswrapper[4698]: I1014 10:12:49.616737 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"ovsdbserver-nb-0\" (UID: \"884f9a07-9f80-44ff-a1e5-805d6d5ef6fb\") " pod="openstack/ovsdbserver-nb-0" Oct 14 10:12:49 crc kubenswrapper[4698]: I1014 10:12:49.616782 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/884f9a07-9f80-44ff-a1e5-805d6d5ef6fb-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"884f9a07-9f80-44ff-a1e5-805d6d5ef6fb\") " pod="openstack/ovsdbserver-nb-0" Oct 14 10:12:49 crc kubenswrapper[4698]: I1014 10:12:49.616806 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/884f9a07-9f80-44ff-a1e5-805d6d5ef6fb-config\") pod \"ovsdbserver-nb-0\" (UID: \"884f9a07-9f80-44ff-a1e5-805d6d5ef6fb\") " pod="openstack/ovsdbserver-nb-0" Oct 14 10:12:49 crc kubenswrapper[4698]: I1014 10:12:49.617239 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/884f9a07-9f80-44ff-a1e5-805d6d5ef6fb-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"884f9a07-9f80-44ff-a1e5-805d6d5ef6fb\") " pod="openstack/ovsdbserver-nb-0" Oct 14 10:12:49 crc kubenswrapper[4698]: I1014 10:12:49.617702 4698 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"ovsdbserver-nb-0\" (UID: \"884f9a07-9f80-44ff-a1e5-805d6d5ef6fb\") device mount path \"/mnt/openstack/pv04\"" pod="openstack/ovsdbserver-nb-0" Oct 14 10:12:49 crc kubenswrapper[4698]: I1014 10:12:49.617729 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"config\" (UniqueName: \"kubernetes.io/configmap/884f9a07-9f80-44ff-a1e5-805d6d5ef6fb-config\") pod \"ovsdbserver-nb-0\" (UID: \"884f9a07-9f80-44ff-a1e5-805d6d5ef6fb\") " pod="openstack/ovsdbserver-nb-0" Oct 14 10:12:49 crc kubenswrapper[4698]: I1014 10:12:49.619140 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/884f9a07-9f80-44ff-a1e5-805d6d5ef6fb-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"884f9a07-9f80-44ff-a1e5-805d6d5ef6fb\") " pod="openstack/ovsdbserver-nb-0" Oct 14 10:12:49 crc kubenswrapper[4698]: I1014 10:12:49.621332 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/884f9a07-9f80-44ff-a1e5-805d6d5ef6fb-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"884f9a07-9f80-44ff-a1e5-805d6d5ef6fb\") " pod="openstack/ovsdbserver-nb-0" Oct 14 10:12:49 crc kubenswrapper[4698]: I1014 10:12:49.622697 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/884f9a07-9f80-44ff-a1e5-805d6d5ef6fb-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"884f9a07-9f80-44ff-a1e5-805d6d5ef6fb\") " pod="openstack/ovsdbserver-nb-0" Oct 14 10:12:49 crc kubenswrapper[4698]: I1014 10:12:49.630682 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/884f9a07-9f80-44ff-a1e5-805d6d5ef6fb-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"884f9a07-9f80-44ff-a1e5-805d6d5ef6fb\") " pod="openstack/ovsdbserver-nb-0" Oct 14 10:12:49 crc kubenswrapper[4698]: I1014 10:12:49.638144 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sk2vs\" (UniqueName: \"kubernetes.io/projected/884f9a07-9f80-44ff-a1e5-805d6d5ef6fb-kube-api-access-sk2vs\") pod \"ovsdbserver-nb-0\" (UID: \"884f9a07-9f80-44ff-a1e5-805d6d5ef6fb\") 
" pod="openstack/ovsdbserver-nb-0" Oct 14 10:12:49 crc kubenswrapper[4698]: I1014 10:12:49.638295 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"ovsdbserver-nb-0\" (UID: \"884f9a07-9f80-44ff-a1e5-805d6d5ef6fb\") " pod="openstack/ovsdbserver-nb-0" Oct 14 10:12:49 crc kubenswrapper[4698]: I1014 10:12:49.710957 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0" Oct 14 10:12:52 crc kubenswrapper[4698]: E1014 10:12:52.652224 4698 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Oct 14 10:12:52 crc kubenswrapper[4698]: E1014 10:12:52.653164 4698 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:ndfhb5h667h568h584h5f9h58dh565h664h587h597h577h64bh5c4h66fh647hbdh68ch5c5h68dh686h5f7h64hd7hc6h55fh57bh98h57fh87h5fh57fq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6df7k,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-78dd6ddcc-mxrtr_openstack(6acfbc9a-615e-4529-b6bb-3aceb87f9ca3): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Oct 14 10:12:52 crc kubenswrapper[4698]: E1014 10:12:52.655023 4698 pod_workers.go:1301] "Error 
syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-78dd6ddcc-mxrtr" podUID="6acfbc9a-615e-4529-b6bb-3aceb87f9ca3" Oct 14 10:12:52 crc kubenswrapper[4698]: E1014 10:12:52.790983 4698 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Oct 14 10:12:52 crc kubenswrapper[4698]: E1014 10:12:52.791677 4698 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nffh5bdhf4h5f8h79h55h77h58fh56dh7bh6fh578hbch55dh68h56bhd9h65dh57ch658hc9h566h666h688h58h65dh684h5d7h6ch575h5d6h88q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-h2k42,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-675f4bcbfc-pk47x_openstack(814b126e-45fe-4be9-90f6-e95380af0957): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Oct 14 10:12:52 crc kubenswrapper[4698]: E1014 10:12:52.793194 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" 
pod="openstack/dnsmasq-dns-675f4bcbfc-pk47x" podUID="814b126e-45fe-4be9-90f6-e95380af0957" Oct 14 10:12:53 crc kubenswrapper[4698]: I1014 10:12:53.129178 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Oct 14 10:12:53 crc kubenswrapper[4698]: W1014 10:12:53.131457 4698 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda710709f_1c22_4fff_b329_6d446917af01.slice/crio-cacb6a3302f9917daab8ef528c2cbd9d6e2204ae718b2b2e946d0a545bf7a40d WatchSource:0}: Error finding container cacb6a3302f9917daab8ef528c2cbd9d6e2204ae718b2b2e946d0a545bf7a40d: Status 404 returned error can't find the container with id cacb6a3302f9917daab8ef528c2cbd9d6e2204ae718b2b2e946d0a545bf7a40d Oct 14 10:12:53 crc kubenswrapper[4698]: I1014 10:12:53.185240 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-sb-0"] Oct 14 10:12:53 crc kubenswrapper[4698]: I1014 10:12:53.189990 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-sb-0" Oct 14 10:12:53 crc kubenswrapper[4698]: I1014 10:12:53.193755 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-config" Oct 14 10:12:53 crc kubenswrapper[4698]: I1014 10:12:53.193988 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-scripts" Oct 14 10:12:53 crc kubenswrapper[4698]: I1014 10:12:53.194264 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-sb-dockercfg-gg2vs" Oct 14 10:12:53 crc kubenswrapper[4698]: I1014 10:12:53.194436 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-sb-ovndbs" Oct 14 10:12:53 crc kubenswrapper[4698]: I1014 10:12:53.198566 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Oct 14 10:12:53 crc kubenswrapper[4698]: I1014 10:12:53.298564 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jswvj\" (UniqueName: \"kubernetes.io/projected/468f15c4-08a4-4e2e-a65d-7a679b1d3a3f-kube-api-access-jswvj\") pod \"ovsdbserver-sb-0\" (UID: \"468f15c4-08a4-4e2e-a65d-7a679b1d3a3f\") " pod="openstack/ovsdbserver-sb-0" Oct 14 10:12:53 crc kubenswrapper[4698]: I1014 10:12:53.298618 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/468f15c4-08a4-4e2e-a65d-7a679b1d3a3f-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"468f15c4-08a4-4e2e-a65d-7a679b1d3a3f\") " pod="openstack/ovsdbserver-sb-0" Oct 14 10:12:53 crc kubenswrapper[4698]: I1014 10:12:53.298646 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/468f15c4-08a4-4e2e-a65d-7a679b1d3a3f-config\") pod \"ovsdbserver-sb-0\" (UID: 
\"468f15c4-08a4-4e2e-a65d-7a679b1d3a3f\") " pod="openstack/ovsdbserver-sb-0" Oct 14 10:12:53 crc kubenswrapper[4698]: I1014 10:12:53.298693 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"ovsdbserver-sb-0\" (UID: \"468f15c4-08a4-4e2e-a65d-7a679b1d3a3f\") " pod="openstack/ovsdbserver-sb-0" Oct 14 10:12:53 crc kubenswrapper[4698]: I1014 10:12:53.298720 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/468f15c4-08a4-4e2e-a65d-7a679b1d3a3f-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"468f15c4-08a4-4e2e-a65d-7a679b1d3a3f\") " pod="openstack/ovsdbserver-sb-0" Oct 14 10:12:53 crc kubenswrapper[4698]: I1014 10:12:53.298743 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/468f15c4-08a4-4e2e-a65d-7a679b1d3a3f-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"468f15c4-08a4-4e2e-a65d-7a679b1d3a3f\") " pod="openstack/ovsdbserver-sb-0" Oct 14 10:12:53 crc kubenswrapper[4698]: I1014 10:12:53.298821 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/468f15c4-08a4-4e2e-a65d-7a679b1d3a3f-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"468f15c4-08a4-4e2e-a65d-7a679b1d3a3f\") " pod="openstack/ovsdbserver-sb-0" Oct 14 10:12:53 crc kubenswrapper[4698]: I1014 10:12:53.298854 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/468f15c4-08a4-4e2e-a65d-7a679b1d3a3f-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"468f15c4-08a4-4e2e-a65d-7a679b1d3a3f\") " pod="openstack/ovsdbserver-sb-0" Oct 14 
10:12:53 crc kubenswrapper[4698]: I1014 10:12:53.354644 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-mxrtr" Oct 14 10:12:53 crc kubenswrapper[4698]: I1014 10:12:53.400447 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6df7k\" (UniqueName: \"kubernetes.io/projected/6acfbc9a-615e-4529-b6bb-3aceb87f9ca3-kube-api-access-6df7k\") pod \"6acfbc9a-615e-4529-b6bb-3aceb87f9ca3\" (UID: \"6acfbc9a-615e-4529-b6bb-3aceb87f9ca3\") " Oct 14 10:12:53 crc kubenswrapper[4698]: I1014 10:12:53.400608 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6acfbc9a-615e-4529-b6bb-3aceb87f9ca3-config\") pod \"6acfbc9a-615e-4529-b6bb-3aceb87f9ca3\" (UID: \"6acfbc9a-615e-4529-b6bb-3aceb87f9ca3\") " Oct 14 10:12:53 crc kubenswrapper[4698]: I1014 10:12:53.400630 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6acfbc9a-615e-4529-b6bb-3aceb87f9ca3-dns-svc\") pod \"6acfbc9a-615e-4529-b6bb-3aceb87f9ca3\" (UID: \"6acfbc9a-615e-4529-b6bb-3aceb87f9ca3\") " Oct 14 10:12:53 crc kubenswrapper[4698]: I1014 10:12:53.400852 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/468f15c4-08a4-4e2e-a65d-7a679b1d3a3f-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"468f15c4-08a4-4e2e-a65d-7a679b1d3a3f\") " pod="openstack/ovsdbserver-sb-0" Oct 14 10:12:53 crc kubenswrapper[4698]: I1014 10:12:53.400876 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/468f15c4-08a4-4e2e-a65d-7a679b1d3a3f-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"468f15c4-08a4-4e2e-a65d-7a679b1d3a3f\") " pod="openstack/ovsdbserver-sb-0" Oct 14 10:12:53 crc 
kubenswrapper[4698]: I1014 10:12:53.400954 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/468f15c4-08a4-4e2e-a65d-7a679b1d3a3f-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"468f15c4-08a4-4e2e-a65d-7a679b1d3a3f\") " pod="openstack/ovsdbserver-sb-0" Oct 14 10:12:53 crc kubenswrapper[4698]: I1014 10:12:53.400986 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/468f15c4-08a4-4e2e-a65d-7a679b1d3a3f-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"468f15c4-08a4-4e2e-a65d-7a679b1d3a3f\") " pod="openstack/ovsdbserver-sb-0" Oct 14 10:12:53 crc kubenswrapper[4698]: I1014 10:12:53.401032 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jswvj\" (UniqueName: \"kubernetes.io/projected/468f15c4-08a4-4e2e-a65d-7a679b1d3a3f-kube-api-access-jswvj\") pod \"ovsdbserver-sb-0\" (UID: \"468f15c4-08a4-4e2e-a65d-7a679b1d3a3f\") " pod="openstack/ovsdbserver-sb-0" Oct 14 10:12:53 crc kubenswrapper[4698]: I1014 10:12:53.401051 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/468f15c4-08a4-4e2e-a65d-7a679b1d3a3f-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"468f15c4-08a4-4e2e-a65d-7a679b1d3a3f\") " pod="openstack/ovsdbserver-sb-0" Oct 14 10:12:53 crc kubenswrapper[4698]: I1014 10:12:53.401070 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/468f15c4-08a4-4e2e-a65d-7a679b1d3a3f-config\") pod \"ovsdbserver-sb-0\" (UID: \"468f15c4-08a4-4e2e-a65d-7a679b1d3a3f\") " pod="openstack/ovsdbserver-sb-0" Oct 14 10:12:53 crc kubenswrapper[4698]: I1014 10:12:53.401105 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage02-crc\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage02-crc\") pod \"ovsdbserver-sb-0\" (UID: \"468f15c4-08a4-4e2e-a65d-7a679b1d3a3f\") " pod="openstack/ovsdbserver-sb-0" Oct 14 10:12:53 crc kubenswrapper[4698]: I1014 10:12:53.401558 4698 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"ovsdbserver-sb-0\" (UID: \"468f15c4-08a4-4e2e-a65d-7a679b1d3a3f\") device mount path \"/mnt/openstack/pv02\"" pod="openstack/ovsdbserver-sb-0" Oct 14 10:12:53 crc kubenswrapper[4698]: I1014 10:12:53.401963 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6acfbc9a-615e-4529-b6bb-3aceb87f9ca3-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "6acfbc9a-615e-4529-b6bb-3aceb87f9ca3" (UID: "6acfbc9a-615e-4529-b6bb-3aceb87f9ca3"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 10:12:53 crc kubenswrapper[4698]: I1014 10:12:53.402460 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/468f15c4-08a4-4e2e-a65d-7a679b1d3a3f-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"468f15c4-08a4-4e2e-a65d-7a679b1d3a3f\") " pod="openstack/ovsdbserver-sb-0" Oct 14 10:12:53 crc kubenswrapper[4698]: I1014 10:12:53.403446 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6acfbc9a-615e-4529-b6bb-3aceb87f9ca3-config" (OuterVolumeSpecName: "config") pod "6acfbc9a-615e-4529-b6bb-3aceb87f9ca3" (UID: "6acfbc9a-615e-4529-b6bb-3aceb87f9ca3"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 10:12:53 crc kubenswrapper[4698]: I1014 10:12:53.403747 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/468f15c4-08a4-4e2e-a65d-7a679b1d3a3f-config\") pod \"ovsdbserver-sb-0\" (UID: \"468f15c4-08a4-4e2e-a65d-7a679b1d3a3f\") " pod="openstack/ovsdbserver-sb-0" Oct 14 10:12:53 crc kubenswrapper[4698]: I1014 10:12:53.407402 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/468f15c4-08a4-4e2e-a65d-7a679b1d3a3f-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"468f15c4-08a4-4e2e-a65d-7a679b1d3a3f\") " pod="openstack/ovsdbserver-sb-0" Oct 14 10:12:53 crc kubenswrapper[4698]: I1014 10:12:53.411264 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/468f15c4-08a4-4e2e-a65d-7a679b1d3a3f-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"468f15c4-08a4-4e2e-a65d-7a679b1d3a3f\") " pod="openstack/ovsdbserver-sb-0" Oct 14 10:12:53 crc kubenswrapper[4698]: I1014 10:12:53.411273 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/468f15c4-08a4-4e2e-a65d-7a679b1d3a3f-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"468f15c4-08a4-4e2e-a65d-7a679b1d3a3f\") " pod="openstack/ovsdbserver-sb-0" Oct 14 10:12:53 crc kubenswrapper[4698]: I1014 10:12:53.411687 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/468f15c4-08a4-4e2e-a65d-7a679b1d3a3f-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"468f15c4-08a4-4e2e-a65d-7a679b1d3a3f\") " pod="openstack/ovsdbserver-sb-0" Oct 14 10:12:53 crc kubenswrapper[4698]: I1014 10:12:53.412049 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/6acfbc9a-615e-4529-b6bb-3aceb87f9ca3-kube-api-access-6df7k" (OuterVolumeSpecName: "kube-api-access-6df7k") pod "6acfbc9a-615e-4529-b6bb-3aceb87f9ca3" (UID: "6acfbc9a-615e-4529-b6bb-3aceb87f9ca3"). InnerVolumeSpecName "kube-api-access-6df7k". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 10:12:53 crc kubenswrapper[4698]: I1014 10:12:53.418415 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jswvj\" (UniqueName: \"kubernetes.io/projected/468f15c4-08a4-4e2e-a65d-7a679b1d3a3f-kube-api-access-jswvj\") pod \"ovsdbserver-sb-0\" (UID: \"468f15c4-08a4-4e2e-a65d-7a679b1d3a3f\") " pod="openstack/ovsdbserver-sb-0" Oct 14 10:12:53 crc kubenswrapper[4698]: I1014 10:12:53.423242 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"ovsdbserver-sb-0\" (UID: \"468f15c4-08a4-4e2e-a65d-7a679b1d3a3f\") " pod="openstack/ovsdbserver-sb-0" Oct 14 10:12:53 crc kubenswrapper[4698]: I1014 10:12:53.432798 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Oct 14 10:12:53 crc kubenswrapper[4698]: I1014 10:12:53.504014 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6df7k\" (UniqueName: \"kubernetes.io/projected/6acfbc9a-615e-4529-b6bb-3aceb87f9ca3-kube-api-access-6df7k\") on node \"crc\" DevicePath \"\"" Oct 14 10:12:53 crc kubenswrapper[4698]: I1014 10:12:53.504054 4698 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6acfbc9a-615e-4529-b6bb-3aceb87f9ca3-config\") on node \"crc\" DevicePath \"\"" Oct 14 10:12:53 crc kubenswrapper[4698]: I1014 10:12:53.504066 4698 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6acfbc9a-615e-4529-b6bb-3aceb87f9ca3-dns-svc\") on node \"crc\" DevicePath \"\"" Oct 14 10:12:53 crc 
kubenswrapper[4698]: I1014 10:12:53.558357 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Oct 14 10:12:53 crc kubenswrapper[4698]: W1014 10:12:53.567155 4698 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4c8cdd03_2ef0_496f_8748_d1495be75e5f.slice/crio-4ec8400d779dd827fe40344717c9809c218e498a424def8f0c6350fb1d95fe72 WatchSource:0}: Error finding container 4ec8400d779dd827fe40344717c9809c218e498a424def8f0c6350fb1d95fe72: Status 404 returned error can't find the container with id 4ec8400d779dd827fe40344717c9809c218e498a424def8f0c6350fb1d95fe72 Oct 14 10:12:53 crc kubenswrapper[4698]: I1014 10:12:53.584895 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-pk47x" Oct 14 10:12:53 crc kubenswrapper[4698]: I1014 10:12:53.591400 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0" Oct 14 10:12:53 crc kubenswrapper[4698]: I1014 10:12:53.714254 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/814b126e-45fe-4be9-90f6-e95380af0957-config\") pod \"814b126e-45fe-4be9-90f6-e95380af0957\" (UID: \"814b126e-45fe-4be9-90f6-e95380af0957\") " Oct 14 10:12:53 crc kubenswrapper[4698]: I1014 10:12:53.714417 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h2k42\" (UniqueName: \"kubernetes.io/projected/814b126e-45fe-4be9-90f6-e95380af0957-kube-api-access-h2k42\") pod \"814b126e-45fe-4be9-90f6-e95380af0957\" (UID: \"814b126e-45fe-4be9-90f6-e95380af0957\") " Oct 14 10:12:53 crc kubenswrapper[4698]: I1014 10:12:53.716532 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/814b126e-45fe-4be9-90f6-e95380af0957-config" (OuterVolumeSpecName: "config") pod 
"814b126e-45fe-4be9-90f6-e95380af0957" (UID: "814b126e-45fe-4be9-90f6-e95380af0957"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 10:12:53 crc kubenswrapper[4698]: I1014 10:12:53.759101 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/814b126e-45fe-4be9-90f6-e95380af0957-kube-api-access-h2k42" (OuterVolumeSpecName: "kube-api-access-h2k42") pod "814b126e-45fe-4be9-90f6-e95380af0957" (UID: "814b126e-45fe-4be9-90f6-e95380af0957"). InnerVolumeSpecName "kube-api-access-h2k42". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 10:12:53 crc kubenswrapper[4698]: I1014 10:12:53.770878 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Oct 14 10:12:53 crc kubenswrapper[4698]: I1014 10:12:53.816482 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-24vqt"] Oct 14 10:12:53 crc kubenswrapper[4698]: I1014 10:12:53.818221 4698 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/814b126e-45fe-4be9-90f6-e95380af0957-config\") on node \"crc\" DevicePath \"\"" Oct 14 10:12:53 crc kubenswrapper[4698]: I1014 10:12:53.818259 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h2k42\" (UniqueName: \"kubernetes.io/projected/814b126e-45fe-4be9-90f6-e95380af0957-kube-api-access-h2k42\") on node \"crc\" DevicePath \"\"" Oct 14 10:12:53 crc kubenswrapper[4698]: I1014 10:12:53.830154 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Oct 14 10:12:53 crc kubenswrapper[4698]: I1014 10:12:53.835490 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Oct 14 10:12:53 crc kubenswrapper[4698]: W1014 10:12:53.838323 4698 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb64163c4_e040_4bec_a585_c55f9d05e948.slice/crio-d6119e17beb6e925500d683b24dd10f15ce27b609ff72e3dabe322a0bf81d5ad WatchSource:0}: Error finding container d6119e17beb6e925500d683b24dd10f15ce27b609ff72e3dabe322a0bf81d5ad: Status 404 returned error can't find the container with id d6119e17beb6e925500d683b24dd10f15ce27b609ff72e3dabe322a0bf81d5ad Oct 14 10:12:53 crc kubenswrapper[4698]: W1014 10:12:53.840501 4698 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9f3078d2_396d_4f2a_913f_b5c5555e568d.slice/crio-b0741614d62dbc0477b28a27d15a3795e17bbc95126b36163264d300ade3181d WatchSource:0}: Error finding container b0741614d62dbc0477b28a27d15a3795e17bbc95126b36163264d300ade3181d: Status 404 returned error can't find the container with id b0741614d62dbc0477b28a27d15a3795e17bbc95126b36163264d300ade3181d Oct 14 10:12:53 crc kubenswrapper[4698]: W1014 10:12:53.844110 4698 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6702faf6_e3b2_44f8_a033_ba5fd85af368.slice/crio-6838ec9cc63bb63b152ad9e3706cacc4c43f2ff68c440a16808ae272e9331044 WatchSource:0}: Error finding container 6838ec9cc63bb63b152ad9e3706cacc4c43f2ff68c440a16808ae272e9331044: Status 404 returned error can't find the container with id 6838ec9cc63bb63b152ad9e3706cacc4c43f2ff68c440a16808ae272e9331044 Oct 14 10:12:53 crc kubenswrapper[4698]: I1014 10:12:53.922733 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Oct 14 10:12:53 crc kubenswrapper[4698]: W1014 10:12:53.930502 4698 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod884f9a07_9f80_44ff_a1e5_805d6d5ef6fb.slice/crio-38948c33c10ce8c7ec53b6c03956a4e7b383f736cda82205a716a7626edd9878 WatchSource:0}: Error finding container 
38948c33c10ce8c7ec53b6c03956a4e7b383f736cda82205a716a7626edd9878: Status 404 returned error can't find the container with id 38948c33c10ce8c7ec53b6c03956a4e7b383f736cda82205a716a7626edd9878 Oct 14 10:12:53 crc kubenswrapper[4698]: I1014 10:12:53.932859 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"e3bc7f78-d69f-426c-9aeb-4837d25635ab","Type":"ContainerStarted","Data":"cad9a0ede951aea32182082df48562a18978432566fcea7718862ce05e13d802"} Oct 14 10:12:53 crc kubenswrapper[4698]: I1014 10:12:53.934910 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f4bcbfc-pk47x" event={"ID":"814b126e-45fe-4be9-90f6-e95380af0957","Type":"ContainerDied","Data":"0a4696cee387029a547a844dc83b1f1921ba37b2d5e001231becbf3b6fe99890"} Oct 14 10:12:53 crc kubenswrapper[4698]: I1014 10:12:53.934994 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-pk47x" Oct 14 10:12:53 crc kubenswrapper[4698]: I1014 10:12:53.941844 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"90244b70-b4fa-4b40-a962-119168333566","Type":"ContainerStarted","Data":"49c64c2ced6a75d2689234d16befb3090433cd9f5e100d7229b9f143a9207e8b"} Oct 14 10:12:53 crc kubenswrapper[4698]: I1014 10:12:53.947697 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"9f3078d2-396d-4f2a-913f-b5c5555e568d","Type":"ContainerStarted","Data":"b0741614d62dbc0477b28a27d15a3795e17bbc95126b36163264d300ade3181d"} Oct 14 10:12:53 crc kubenswrapper[4698]: I1014 10:12:53.950286 4698 generic.go:334] "Generic (PLEG): container finished" podID="8f445316-4106-46e7-9302-0d79c368a6db" containerID="777560b744b9fa09e151e26e2140a3b1110e726574160180a49697d465a53d63" exitCode=0 Oct 14 10:12:53 crc kubenswrapper[4698]: I1014 10:12:53.950344 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/dnsmasq-dns-57d769cc4f-pmpg8" event={"ID":"8f445316-4106-46e7-9302-0d79c368a6db","Type":"ContainerDied","Data":"777560b744b9fa09e151e26e2140a3b1110e726574160180a49697d465a53d63"} Oct 14 10:12:53 crc kubenswrapper[4698]: I1014 10:12:53.955264 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"6702faf6-e3b2-44f8-a033-ba5fd85af368","Type":"ContainerStarted","Data":"6838ec9cc63bb63b152ad9e3706cacc4c43f2ff68c440a16808ae272e9331044"} Oct 14 10:12:53 crc kubenswrapper[4698]: I1014 10:12:53.958349 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"a710709f-1c22-4fff-b329-6d446917af01","Type":"ContainerStarted","Data":"cacb6a3302f9917daab8ef528c2cbd9d6e2204ae718b2b2e946d0a545bf7a40d"} Oct 14 10:12:53 crc kubenswrapper[4698]: I1014 10:12:53.960395 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-24vqt" event={"ID":"b64163c4-e040-4bec-a585-c55f9d05e948","Type":"ContainerStarted","Data":"d6119e17beb6e925500d683b24dd10f15ce27b609ff72e3dabe322a0bf81d5ad"} Oct 14 10:12:53 crc kubenswrapper[4698]: I1014 10:12:53.962998 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"4c8cdd03-2ef0-496f-8748-d1495be75e5f","Type":"ContainerStarted","Data":"4ec8400d779dd827fe40344717c9809c218e498a424def8f0c6350fb1d95fe72"} Oct 14 10:12:53 crc kubenswrapper[4698]: I1014 10:12:53.965070 4698 generic.go:334] "Generic (PLEG): container finished" podID="ffcd39f3-8368-43b1-beaa-ec7e75468bad" containerID="29b6956ea79e45c527b543af6fea627bfc693d1f9b543dd812693e4163fcdcc0" exitCode=0 Oct 14 10:12:53 crc kubenswrapper[4698]: I1014 10:12:53.965926 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-ldb47" event={"ID":"ffcd39f3-8368-43b1-beaa-ec7e75468bad","Type":"ContainerDied","Data":"29b6956ea79e45c527b543af6fea627bfc693d1f9b543dd812693e4163fcdcc0"} Oct 14 
10:12:53 crc kubenswrapper[4698]: I1014 10:12:53.971036 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78dd6ddcc-mxrtr" event={"ID":"6acfbc9a-615e-4529-b6bb-3aceb87f9ca3","Type":"ContainerDied","Data":"cf622e93e278703b09fe02a7c043dca1c0efd6ab0a4c04f09f45bdfb1dc1618f"} Oct 14 10:12:53 crc kubenswrapper[4698]: I1014 10:12:53.971126 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-mxrtr" Oct 14 10:12:54 crc kubenswrapper[4698]: I1014 10:12:54.075344 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-pk47x"] Oct 14 10:12:54 crc kubenswrapper[4698]: I1014 10:12:54.092081 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-2cb6b"] Oct 14 10:12:54 crc kubenswrapper[4698]: I1014 10:12:54.096599 4698 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-pk47x"] Oct 14 10:12:54 crc kubenswrapper[4698]: I1014 10:12:54.125734 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-mxrtr"] Oct 14 10:12:54 crc kubenswrapper[4698]: I1014 10:12:54.132451 4698 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-mxrtr"] Oct 14 10:12:54 crc kubenswrapper[4698]: I1014 10:12:54.218423 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-metrics-m8cgb"] Oct 14 10:12:54 crc kubenswrapper[4698]: I1014 10:12:54.222690 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-metrics-m8cgb" Oct 14 10:12:54 crc kubenswrapper[4698]: I1014 10:12:54.225555 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-m8cgb"] Oct 14 10:12:54 crc kubenswrapper[4698]: I1014 10:12:54.228351 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-metrics-config" Oct 14 10:12:54 crc kubenswrapper[4698]: I1014 10:12:54.263798 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Oct 14 10:12:54 crc kubenswrapper[4698]: I1014 10:12:54.328072 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/47ef312e-a1ef-4635-a052-31f0b3a7e742-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-m8cgb\" (UID: \"47ef312e-a1ef-4635-a052-31f0b3a7e742\") " pod="openstack/ovn-controller-metrics-m8cgb" Oct 14 10:12:54 crc kubenswrapper[4698]: I1014 10:12:54.328132 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/47ef312e-a1ef-4635-a052-31f0b3a7e742-combined-ca-bundle\") pod \"ovn-controller-metrics-m8cgb\" (UID: \"47ef312e-a1ef-4635-a052-31f0b3a7e742\") " pod="openstack/ovn-controller-metrics-m8cgb" Oct 14 10:12:54 crc kubenswrapper[4698]: I1014 10:12:54.328205 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/47ef312e-a1ef-4635-a052-31f0b3a7e742-config\") pod \"ovn-controller-metrics-m8cgb\" (UID: \"47ef312e-a1ef-4635-a052-31f0b3a7e742\") " pod="openstack/ovn-controller-metrics-m8cgb" Oct 14 10:12:54 crc kubenswrapper[4698]: I1014 10:12:54.328253 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tcfqk\" (UniqueName: 
\"kubernetes.io/projected/47ef312e-a1ef-4635-a052-31f0b3a7e742-kube-api-access-tcfqk\") pod \"ovn-controller-metrics-m8cgb\" (UID: \"47ef312e-a1ef-4635-a052-31f0b3a7e742\") " pod="openstack/ovn-controller-metrics-m8cgb" Oct 14 10:12:54 crc kubenswrapper[4698]: I1014 10:12:54.328280 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/47ef312e-a1ef-4635-a052-31f0b3a7e742-ovs-rundir\") pod \"ovn-controller-metrics-m8cgb\" (UID: \"47ef312e-a1ef-4635-a052-31f0b3a7e742\") " pod="openstack/ovn-controller-metrics-m8cgb" Oct 14 10:12:54 crc kubenswrapper[4698]: I1014 10:12:54.328328 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/47ef312e-a1ef-4635-a052-31f0b3a7e742-ovn-rundir\") pod \"ovn-controller-metrics-m8cgb\" (UID: \"47ef312e-a1ef-4635-a052-31f0b3a7e742\") " pod="openstack/ovn-controller-metrics-m8cgb" Oct 14 10:12:54 crc kubenswrapper[4698]: I1014 10:12:54.439131 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-ldb47"] Oct 14 10:12:54 crc kubenswrapper[4698]: I1014 10:12:54.441118 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/47ef312e-a1ef-4635-a052-31f0b3a7e742-config\") pod \"ovn-controller-metrics-m8cgb\" (UID: \"47ef312e-a1ef-4635-a052-31f0b3a7e742\") " pod="openstack/ovn-controller-metrics-m8cgb" Oct 14 10:12:54 crc kubenswrapper[4698]: I1014 10:12:54.441240 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tcfqk\" (UniqueName: \"kubernetes.io/projected/47ef312e-a1ef-4635-a052-31f0b3a7e742-kube-api-access-tcfqk\") pod \"ovn-controller-metrics-m8cgb\" (UID: \"47ef312e-a1ef-4635-a052-31f0b3a7e742\") " pod="openstack/ovn-controller-metrics-m8cgb" Oct 14 10:12:54 crc 
kubenswrapper[4698]: I1014 10:12:54.441285 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/47ef312e-a1ef-4635-a052-31f0b3a7e742-ovs-rundir\") pod \"ovn-controller-metrics-m8cgb\" (UID: \"47ef312e-a1ef-4635-a052-31f0b3a7e742\") " pod="openstack/ovn-controller-metrics-m8cgb" Oct 14 10:12:54 crc kubenswrapper[4698]: I1014 10:12:54.441368 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/47ef312e-a1ef-4635-a052-31f0b3a7e742-ovn-rundir\") pod \"ovn-controller-metrics-m8cgb\" (UID: \"47ef312e-a1ef-4635-a052-31f0b3a7e742\") " pod="openstack/ovn-controller-metrics-m8cgb" Oct 14 10:12:54 crc kubenswrapper[4698]: I1014 10:12:54.441415 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/47ef312e-a1ef-4635-a052-31f0b3a7e742-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-m8cgb\" (UID: \"47ef312e-a1ef-4635-a052-31f0b3a7e742\") " pod="openstack/ovn-controller-metrics-m8cgb" Oct 14 10:12:54 crc kubenswrapper[4698]: I1014 10:12:54.441453 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/47ef312e-a1ef-4635-a052-31f0b3a7e742-combined-ca-bundle\") pod \"ovn-controller-metrics-m8cgb\" (UID: \"47ef312e-a1ef-4635-a052-31f0b3a7e742\") " pod="openstack/ovn-controller-metrics-m8cgb" Oct 14 10:12:54 crc kubenswrapper[4698]: I1014 10:12:54.444155 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/47ef312e-a1ef-4635-a052-31f0b3a7e742-config\") pod \"ovn-controller-metrics-m8cgb\" (UID: \"47ef312e-a1ef-4635-a052-31f0b3a7e742\") " pod="openstack/ovn-controller-metrics-m8cgb" Oct 14 10:12:54 crc kubenswrapper[4698]: I1014 10:12:54.444282 4698 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/47ef312e-a1ef-4635-a052-31f0b3a7e742-ovn-rundir\") pod \"ovn-controller-metrics-m8cgb\" (UID: \"47ef312e-a1ef-4635-a052-31f0b3a7e742\") " pod="openstack/ovn-controller-metrics-m8cgb"
Oct 14 10:12:54 crc kubenswrapper[4698]: I1014 10:12:54.444846 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/47ef312e-a1ef-4635-a052-31f0b3a7e742-ovs-rundir\") pod \"ovn-controller-metrics-m8cgb\" (UID: \"47ef312e-a1ef-4635-a052-31f0b3a7e742\") " pod="openstack/ovn-controller-metrics-m8cgb"
Oct 14 10:12:54 crc kubenswrapper[4698]: I1014 10:12:54.453024 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/47ef312e-a1ef-4635-a052-31f0b3a7e742-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-m8cgb\" (UID: \"47ef312e-a1ef-4635-a052-31f0b3a7e742\") " pod="openstack/ovn-controller-metrics-m8cgb"
Oct 14 10:12:54 crc kubenswrapper[4698]: I1014 10:12:54.453050 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/47ef312e-a1ef-4635-a052-31f0b3a7e742-combined-ca-bundle\") pod \"ovn-controller-metrics-m8cgb\" (UID: \"47ef312e-a1ef-4635-a052-31f0b3a7e742\") " pod="openstack/ovn-controller-metrics-m8cgb"
Oct 14 10:12:54 crc kubenswrapper[4698]: I1014 10:12:54.470500 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7fd796d7df-z6fd5"]
Oct 14 10:12:54 crc kubenswrapper[4698]: I1014 10:12:54.476387 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tcfqk\" (UniqueName: \"kubernetes.io/projected/47ef312e-a1ef-4635-a052-31f0b3a7e742-kube-api-access-tcfqk\") pod \"ovn-controller-metrics-m8cgb\" (UID: \"47ef312e-a1ef-4635-a052-31f0b3a7e742\") " pod="openstack/ovn-controller-metrics-m8cgb"
Oct 14 10:12:54 crc kubenswrapper[4698]: I1014 10:12:54.478130 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7fd796d7df-z6fd5"
Oct 14 10:12:54 crc kubenswrapper[4698]: I1014 10:12:54.482510 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-nb"
Oct 14 10:12:54 crc kubenswrapper[4698]: I1014 10:12:54.548070 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a1d31398-a2ca-48ab-b0b1-425d83342bbe-ovsdbserver-nb\") pod \"dnsmasq-dns-7fd796d7df-z6fd5\" (UID: \"a1d31398-a2ca-48ab-b0b1-425d83342bbe\") " pod="openstack/dnsmasq-dns-7fd796d7df-z6fd5"
Oct 14 10:12:54 crc kubenswrapper[4698]: I1014 10:12:54.548600 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a1d31398-a2ca-48ab-b0b1-425d83342bbe-config\") pod \"dnsmasq-dns-7fd796d7df-z6fd5\" (UID: \"a1d31398-a2ca-48ab-b0b1-425d83342bbe\") " pod="openstack/dnsmasq-dns-7fd796d7df-z6fd5"
Oct 14 10:12:54 crc kubenswrapper[4698]: I1014 10:12:54.548645 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xqjlx\" (UniqueName: \"kubernetes.io/projected/a1d31398-a2ca-48ab-b0b1-425d83342bbe-kube-api-access-xqjlx\") pod \"dnsmasq-dns-7fd796d7df-z6fd5\" (UID: \"a1d31398-a2ca-48ab-b0b1-425d83342bbe\") " pod="openstack/dnsmasq-dns-7fd796d7df-z6fd5"
Oct 14 10:12:54 crc kubenswrapper[4698]: I1014 10:12:54.548701 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a1d31398-a2ca-48ab-b0b1-425d83342bbe-dns-svc\") pod \"dnsmasq-dns-7fd796d7df-z6fd5\" (UID: \"a1d31398-a2ca-48ab-b0b1-425d83342bbe\") " pod="openstack/dnsmasq-dns-7fd796d7df-z6fd5"
Oct 14 10:12:54 crc kubenswrapper[4698]: I1014 10:12:54.556661 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7fd796d7df-z6fd5"]
Oct 14 10:12:54 crc kubenswrapper[4698]: I1014 10:12:54.620799 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-m8cgb"
Oct 14 10:12:54 crc kubenswrapper[4698]: I1014 10:12:54.650620 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a1d31398-a2ca-48ab-b0b1-425d83342bbe-dns-svc\") pod \"dnsmasq-dns-7fd796d7df-z6fd5\" (UID: \"a1d31398-a2ca-48ab-b0b1-425d83342bbe\") " pod="openstack/dnsmasq-dns-7fd796d7df-z6fd5"
Oct 14 10:12:54 crc kubenswrapper[4698]: I1014 10:12:54.650689 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a1d31398-a2ca-48ab-b0b1-425d83342bbe-ovsdbserver-nb\") pod \"dnsmasq-dns-7fd796d7df-z6fd5\" (UID: \"a1d31398-a2ca-48ab-b0b1-425d83342bbe\") " pod="openstack/dnsmasq-dns-7fd796d7df-z6fd5"
Oct 14 10:12:54 crc kubenswrapper[4698]: I1014 10:12:54.650720 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a1d31398-a2ca-48ab-b0b1-425d83342bbe-config\") pod \"dnsmasq-dns-7fd796d7df-z6fd5\" (UID: \"a1d31398-a2ca-48ab-b0b1-425d83342bbe\") " pod="openstack/dnsmasq-dns-7fd796d7df-z6fd5"
Oct 14 10:12:54 crc kubenswrapper[4698]: I1014 10:12:54.650760 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xqjlx\" (UniqueName: \"kubernetes.io/projected/a1d31398-a2ca-48ab-b0b1-425d83342bbe-kube-api-access-xqjlx\") pod \"dnsmasq-dns-7fd796d7df-z6fd5\" (UID: \"a1d31398-a2ca-48ab-b0b1-425d83342bbe\") " pod="openstack/dnsmasq-dns-7fd796d7df-z6fd5"
Oct 14 10:12:54 crc kubenswrapper[4698]: I1014 10:12:54.651776 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a1d31398-a2ca-48ab-b0b1-425d83342bbe-dns-svc\") pod \"dnsmasq-dns-7fd796d7df-z6fd5\" (UID: \"a1d31398-a2ca-48ab-b0b1-425d83342bbe\") " pod="openstack/dnsmasq-dns-7fd796d7df-z6fd5"
Oct 14 10:12:54 crc kubenswrapper[4698]: I1014 10:12:54.652278 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a1d31398-a2ca-48ab-b0b1-425d83342bbe-ovsdbserver-nb\") pod \"dnsmasq-dns-7fd796d7df-z6fd5\" (UID: \"a1d31398-a2ca-48ab-b0b1-425d83342bbe\") " pod="openstack/dnsmasq-dns-7fd796d7df-z6fd5"
Oct 14 10:12:54 crc kubenswrapper[4698]: I1014 10:12:54.662234 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a1d31398-a2ca-48ab-b0b1-425d83342bbe-config\") pod \"dnsmasq-dns-7fd796d7df-z6fd5\" (UID: \"a1d31398-a2ca-48ab-b0b1-425d83342bbe\") " pod="openstack/dnsmasq-dns-7fd796d7df-z6fd5"
Oct 14 10:12:54 crc kubenswrapper[4698]: I1014 10:12:54.672742 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-pmpg8"]
Oct 14 10:12:54 crc kubenswrapper[4698]: I1014 10:12:54.687121 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xqjlx\" (UniqueName: \"kubernetes.io/projected/a1d31398-a2ca-48ab-b0b1-425d83342bbe-kube-api-access-xqjlx\") pod \"dnsmasq-dns-7fd796d7df-z6fd5\" (UID: \"a1d31398-a2ca-48ab-b0b1-425d83342bbe\") " pod="openstack/dnsmasq-dns-7fd796d7df-z6fd5"
Oct 14 10:12:54 crc kubenswrapper[4698]: I1014 10:12:54.724168 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-xfwd9"]
Oct 14 10:12:54 crc kubenswrapper[4698]: I1014 10:12:54.727693 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-86db49b7ff-xfwd9"
Oct 14 10:12:54 crc kubenswrapper[4698]: I1014 10:12:54.738016 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-sb"
Oct 14 10:12:54 crc kubenswrapper[4698]: I1014 10:12:54.755887 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/04ad8eb3-ebe0-46a4-9aa2-1fdd58dd4ea7-config\") pod \"dnsmasq-dns-86db49b7ff-xfwd9\" (UID: \"04ad8eb3-ebe0-46a4-9aa2-1fdd58dd4ea7\") " pod="openstack/dnsmasq-dns-86db49b7ff-xfwd9"
Oct 14 10:12:54 crc kubenswrapper[4698]: I1014 10:12:54.755947 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/04ad8eb3-ebe0-46a4-9aa2-1fdd58dd4ea7-ovsdbserver-nb\") pod \"dnsmasq-dns-86db49b7ff-xfwd9\" (UID: \"04ad8eb3-ebe0-46a4-9aa2-1fdd58dd4ea7\") " pod="openstack/dnsmasq-dns-86db49b7ff-xfwd9"
Oct 14 10:12:54 crc kubenswrapper[4698]: I1014 10:12:54.756163 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/04ad8eb3-ebe0-46a4-9aa2-1fdd58dd4ea7-ovsdbserver-sb\") pod \"dnsmasq-dns-86db49b7ff-xfwd9\" (UID: \"04ad8eb3-ebe0-46a4-9aa2-1fdd58dd4ea7\") " pod="openstack/dnsmasq-dns-86db49b7ff-xfwd9"
Oct 14 10:12:54 crc kubenswrapper[4698]: I1014 10:12:54.756195 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qn76r\" (UniqueName: \"kubernetes.io/projected/04ad8eb3-ebe0-46a4-9aa2-1fdd58dd4ea7-kube-api-access-qn76r\") pod \"dnsmasq-dns-86db49b7ff-xfwd9\" (UID: \"04ad8eb3-ebe0-46a4-9aa2-1fdd58dd4ea7\") " pod="openstack/dnsmasq-dns-86db49b7ff-xfwd9"
Oct 14 10:12:54 crc kubenswrapper[4698]: I1014 10:12:54.756474 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/04ad8eb3-ebe0-46a4-9aa2-1fdd58dd4ea7-dns-svc\") pod \"dnsmasq-dns-86db49b7ff-xfwd9\" (UID: \"04ad8eb3-ebe0-46a4-9aa2-1fdd58dd4ea7\") " pod="openstack/dnsmasq-dns-86db49b7ff-xfwd9"
Oct 14 10:12:54 crc kubenswrapper[4698]: I1014 10:12:54.757507 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-xfwd9"]
Oct 14 10:12:54 crc kubenswrapper[4698]: I1014 10:12:54.847851 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7fd796d7df-z6fd5"
Oct 14 10:12:54 crc kubenswrapper[4698]: I1014 10:12:54.863018 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/04ad8eb3-ebe0-46a4-9aa2-1fdd58dd4ea7-dns-svc\") pod \"dnsmasq-dns-86db49b7ff-xfwd9\" (UID: \"04ad8eb3-ebe0-46a4-9aa2-1fdd58dd4ea7\") " pod="openstack/dnsmasq-dns-86db49b7ff-xfwd9"
Oct 14 10:12:54 crc kubenswrapper[4698]: I1014 10:12:54.863094 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/04ad8eb3-ebe0-46a4-9aa2-1fdd58dd4ea7-config\") pod \"dnsmasq-dns-86db49b7ff-xfwd9\" (UID: \"04ad8eb3-ebe0-46a4-9aa2-1fdd58dd4ea7\") " pod="openstack/dnsmasq-dns-86db49b7ff-xfwd9"
Oct 14 10:12:54 crc kubenswrapper[4698]: I1014 10:12:54.863113 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/04ad8eb3-ebe0-46a4-9aa2-1fdd58dd4ea7-ovsdbserver-nb\") pod \"dnsmasq-dns-86db49b7ff-xfwd9\" (UID: \"04ad8eb3-ebe0-46a4-9aa2-1fdd58dd4ea7\") " pod="openstack/dnsmasq-dns-86db49b7ff-xfwd9"
Oct 14 10:12:54 crc kubenswrapper[4698]: I1014 10:12:54.863164 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/04ad8eb3-ebe0-46a4-9aa2-1fdd58dd4ea7-ovsdbserver-sb\") pod \"dnsmasq-dns-86db49b7ff-xfwd9\" (UID: \"04ad8eb3-ebe0-46a4-9aa2-1fdd58dd4ea7\") " pod="openstack/dnsmasq-dns-86db49b7ff-xfwd9"
Oct 14 10:12:54 crc kubenswrapper[4698]: I1014 10:12:54.863184 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qn76r\" (UniqueName: \"kubernetes.io/projected/04ad8eb3-ebe0-46a4-9aa2-1fdd58dd4ea7-kube-api-access-qn76r\") pod \"dnsmasq-dns-86db49b7ff-xfwd9\" (UID: \"04ad8eb3-ebe0-46a4-9aa2-1fdd58dd4ea7\") " pod="openstack/dnsmasq-dns-86db49b7ff-xfwd9"
Oct 14 10:12:54 crc kubenswrapper[4698]: I1014 10:12:54.864041 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/04ad8eb3-ebe0-46a4-9aa2-1fdd58dd4ea7-dns-svc\") pod \"dnsmasq-dns-86db49b7ff-xfwd9\" (UID: \"04ad8eb3-ebe0-46a4-9aa2-1fdd58dd4ea7\") " pod="openstack/dnsmasq-dns-86db49b7ff-xfwd9"
Oct 14 10:12:54 crc kubenswrapper[4698]: I1014 10:12:54.864234 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/04ad8eb3-ebe0-46a4-9aa2-1fdd58dd4ea7-config\") pod \"dnsmasq-dns-86db49b7ff-xfwd9\" (UID: \"04ad8eb3-ebe0-46a4-9aa2-1fdd58dd4ea7\") " pod="openstack/dnsmasq-dns-86db49b7ff-xfwd9"
Oct 14 10:12:54 crc kubenswrapper[4698]: I1014 10:12:54.864592 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/04ad8eb3-ebe0-46a4-9aa2-1fdd58dd4ea7-ovsdbserver-nb\") pod \"dnsmasq-dns-86db49b7ff-xfwd9\" (UID: \"04ad8eb3-ebe0-46a4-9aa2-1fdd58dd4ea7\") " pod="openstack/dnsmasq-dns-86db49b7ff-xfwd9"
Oct 14 10:12:54 crc kubenswrapper[4698]: I1014 10:12:54.865540 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/04ad8eb3-ebe0-46a4-9aa2-1fdd58dd4ea7-ovsdbserver-sb\") pod \"dnsmasq-dns-86db49b7ff-xfwd9\" (UID: \"04ad8eb3-ebe0-46a4-9aa2-1fdd58dd4ea7\") " pod="openstack/dnsmasq-dns-86db49b7ff-xfwd9"
Oct 14 10:12:54 crc kubenswrapper[4698]: I1014 10:12:54.885222 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qn76r\" (UniqueName: \"kubernetes.io/projected/04ad8eb3-ebe0-46a4-9aa2-1fdd58dd4ea7-kube-api-access-qn76r\") pod \"dnsmasq-dns-86db49b7ff-xfwd9\" (UID: \"04ad8eb3-ebe0-46a4-9aa2-1fdd58dd4ea7\") " pod="openstack/dnsmasq-dns-86db49b7ff-xfwd9"
Oct 14 10:12:54 crc kubenswrapper[4698]: I1014 10:12:54.988943 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"468f15c4-08a4-4e2e-a65d-7a679b1d3a3f","Type":"ContainerStarted","Data":"726a3b0bffac96f365afed85722e895c6c8b2f1b659d5ded925b15168e81d366"}
Oct 14 10:12:54 crc kubenswrapper[4698]: I1014 10:12:54.990990 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-pmpg8" event={"ID":"8f445316-4106-46e7-9302-0d79c368a6db","Type":"ContainerStarted","Data":"d3368011dd8059e870d110fa3dedeaebb33d366c2c19f2f2ecad8cf7982583e3"}
Oct 14 10:12:54 crc kubenswrapper[4698]: I1014 10:12:54.991064 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-57d769cc4f-pmpg8"
Oct 14 10:12:54 crc kubenswrapper[4698]: I1014 10:12:54.993080 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-ldb47" event={"ID":"ffcd39f3-8368-43b1-beaa-ec7e75468bad","Type":"ContainerStarted","Data":"4f8cbc698604c6926d1979429c8d57beafb8cf3227bd05667f92faba84b53011"}
Oct 14 10:12:54 crc kubenswrapper[4698]: I1014 10:12:54.993202 4698 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-666b6646f7-ldb47" podUID="ffcd39f3-8368-43b1-beaa-ec7e75468bad" containerName="dnsmasq-dns" containerID="cri-o://4f8cbc698604c6926d1979429c8d57beafb8cf3227bd05667f92faba84b53011" gracePeriod=10
Oct 14 10:12:54 crc kubenswrapper[4698]: I1014 10:12:54.993220 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-666b6646f7-ldb47"
Oct 14 10:12:54 crc kubenswrapper[4698]: I1014 10:12:54.997469 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-2cb6b" event={"ID":"62b0f0f2-3cb3-4ba7-8387-a3bd2a38debe","Type":"ContainerStarted","Data":"d8797690c8e483225892d48f49e292eb34c3cb813076959ec27f72eacca49488"}
Oct 14 10:12:55 crc kubenswrapper[4698]: I1014 10:12:55.000117 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"884f9a07-9f80-44ff-a1e5-805d6d5ef6fb","Type":"ContainerStarted","Data":"38948c33c10ce8c7ec53b6c03956a4e7b383f736cda82205a716a7626edd9878"}
Oct 14 10:12:55 crc kubenswrapper[4698]: I1014 10:12:55.013617 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-57d769cc4f-pmpg8" podStartSLOduration=3.198190736 podStartE2EDuration="16.013569839s" podCreationTimestamp="2025-10-14 10:12:39 +0000 UTC" firstStartedPulling="2025-10-14 10:12:40.02512389 +0000 UTC m=+941.722423306" lastFinishedPulling="2025-10-14 10:12:52.840502993 +0000 UTC m=+954.537802409" observedRunningTime="2025-10-14 10:12:55.008188623 +0000 UTC m=+956.705488049" watchObservedRunningTime="2025-10-14 10:12:55.013569839 +0000 UTC m=+956.710869255"
Oct 14 10:12:55 crc kubenswrapper[4698]: I1014 10:12:55.033487 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6acfbc9a-615e-4529-b6bb-3aceb87f9ca3" path="/var/lib/kubelet/pods/6acfbc9a-615e-4529-b6bb-3aceb87f9ca3/volumes"
Oct 14 10:12:55 crc kubenswrapper[4698]: I1014 10:12:55.034061 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="814b126e-45fe-4be9-90f6-e95380af0957" path="/var/lib/kubelet/pods/814b126e-45fe-4be9-90f6-e95380af0957/volumes"
Oct 14 10:12:55 crc kubenswrapper[4698]: I1014 10:12:55.043618 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-666b6646f7-ldb47" podStartSLOduration=3.237312938 podStartE2EDuration="16.043599028s" podCreationTimestamp="2025-10-14 10:12:39 +0000 UTC" firstStartedPulling="2025-10-14 10:12:40.040466214 +0000 UTC m=+941.737765630" lastFinishedPulling="2025-10-14 10:12:52.846752304 +0000 UTC m=+954.544051720" observedRunningTime="2025-10-14 10:12:55.039153529 +0000 UTC m=+956.736452955" watchObservedRunningTime="2025-10-14 10:12:55.043599028 +0000 UTC m=+956.740898444"
Oct 14 10:12:55 crc kubenswrapper[4698]: I1014 10:12:55.139801 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-86db49b7ff-xfwd9"
Oct 14 10:12:55 crc kubenswrapper[4698]: I1014 10:12:55.528643 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-m8cgb"]
Oct 14 10:12:55 crc kubenswrapper[4698]: I1014 10:12:55.669836 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7fd796d7df-z6fd5"]
Oct 14 10:12:56 crc kubenswrapper[4698]: I1014 10:12:56.008073 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-xfwd9"]
Oct 14 10:12:56 crc kubenswrapper[4698]: I1014 10:12:56.032427 4698 generic.go:334] "Generic (PLEG): container finished" podID="ffcd39f3-8368-43b1-beaa-ec7e75468bad" containerID="4f8cbc698604c6926d1979429c8d57beafb8cf3227bd05667f92faba84b53011" exitCode=0
Oct 14 10:12:56 crc kubenswrapper[4698]: I1014 10:12:56.032649 4698 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-57d769cc4f-pmpg8" podUID="8f445316-4106-46e7-9302-0d79c368a6db" containerName="dnsmasq-dns" containerID="cri-o://d3368011dd8059e870d110fa3dedeaebb33d366c2c19f2f2ecad8cf7982583e3" gracePeriod=10
Oct 14 10:12:56 crc kubenswrapper[4698]: I1014 10:12:56.032717 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-ldb47" event={"ID":"ffcd39f3-8368-43b1-beaa-ec7e75468bad","Type":"ContainerDied","Data":"4f8cbc698604c6926d1979429c8d57beafb8cf3227bd05667f92faba84b53011"}
Oct 14 10:12:57 crc kubenswrapper[4698]: I1014 10:12:57.048258 4698 generic.go:334] "Generic (PLEG): container finished" podID="8f445316-4106-46e7-9302-0d79c368a6db" containerID="d3368011dd8059e870d110fa3dedeaebb33d366c2c19f2f2ecad8cf7982583e3" exitCode=0
Oct 14 10:12:57 crc kubenswrapper[4698]: I1014 10:12:57.048307 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-pmpg8" event={"ID":"8f445316-4106-46e7-9302-0d79c368a6db","Type":"ContainerDied","Data":"d3368011dd8059e870d110fa3dedeaebb33d366c2c19f2f2ecad8cf7982583e3"}
Oct 14 10:12:57 crc kubenswrapper[4698]: W1014 10:12:57.157811 4698 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod04ad8eb3_ebe0_46a4_9aa2_1fdd58dd4ea7.slice/crio-32e829288e5dffb12c2cd3fe34ec5755b032e97065f976502eca065200b0dd3f WatchSource:0}: Error finding container 32e829288e5dffb12c2cd3fe34ec5755b032e97065f976502eca065200b0dd3f: Status 404 returned error can't find the container with id 32e829288e5dffb12c2cd3fe34ec5755b032e97065f976502eca065200b0dd3f
Oct 14 10:12:57 crc kubenswrapper[4698]: W1014 10:12:57.163834 4698 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod47ef312e_a1ef_4635_a052_31f0b3a7e742.slice/crio-c1da57710350ca54f854c29db4a7523d15c05518ce510d6e61d9638b15eedd21 WatchSource:0}: Error finding container c1da57710350ca54f854c29db4a7523d15c05518ce510d6e61d9638b15eedd21: Status 404 returned error can't find the container with id c1da57710350ca54f854c29db4a7523d15c05518ce510d6e61d9638b15eedd21
Oct 14 10:12:57 crc kubenswrapper[4698]: W1014 10:12:57.166672 4698 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda1d31398_a2ca_48ab_b0b1_425d83342bbe.slice/crio-6d5c7d52b3e90ddbdfccb69bb40323824b244abf42e8c121eb07a634eb2f1d84 WatchSource:0}: Error finding container 6d5c7d52b3e90ddbdfccb69bb40323824b244abf42e8c121eb07a634eb2f1d84: Status 404 returned error can't find the container with id 6d5c7d52b3e90ddbdfccb69bb40323824b244abf42e8c121eb07a634eb2f1d84
Oct 14 10:12:57 crc kubenswrapper[4698]: I1014 10:12:57.249673 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-ldb47"
Oct 14 10:12:57 crc kubenswrapper[4698]: I1014 10:12:57.311791 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z9sbw\" (UniqueName: \"kubernetes.io/projected/ffcd39f3-8368-43b1-beaa-ec7e75468bad-kube-api-access-z9sbw\") pod \"ffcd39f3-8368-43b1-beaa-ec7e75468bad\" (UID: \"ffcd39f3-8368-43b1-beaa-ec7e75468bad\") "
Oct 14 10:12:57 crc kubenswrapper[4698]: I1014 10:12:57.311935 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ffcd39f3-8368-43b1-beaa-ec7e75468bad-config\") pod \"ffcd39f3-8368-43b1-beaa-ec7e75468bad\" (UID: \"ffcd39f3-8368-43b1-beaa-ec7e75468bad\") "
Oct 14 10:12:57 crc kubenswrapper[4698]: I1014 10:12:57.312067 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ffcd39f3-8368-43b1-beaa-ec7e75468bad-dns-svc\") pod \"ffcd39f3-8368-43b1-beaa-ec7e75468bad\" (UID: \"ffcd39f3-8368-43b1-beaa-ec7e75468bad\") "
Oct 14 10:12:57 crc kubenswrapper[4698]: I1014 10:12:57.323379 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ffcd39f3-8368-43b1-beaa-ec7e75468bad-kube-api-access-z9sbw" (OuterVolumeSpecName: "kube-api-access-z9sbw") pod "ffcd39f3-8368-43b1-beaa-ec7e75468bad" (UID: "ffcd39f3-8368-43b1-beaa-ec7e75468bad"). InnerVolumeSpecName "kube-api-access-z9sbw". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 14 10:12:57 crc kubenswrapper[4698]: I1014 10:12:57.363343 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ffcd39f3-8368-43b1-beaa-ec7e75468bad-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "ffcd39f3-8368-43b1-beaa-ec7e75468bad" (UID: "ffcd39f3-8368-43b1-beaa-ec7e75468bad"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 14 10:12:57 crc kubenswrapper[4698]: I1014 10:12:57.414448 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z9sbw\" (UniqueName: \"kubernetes.io/projected/ffcd39f3-8368-43b1-beaa-ec7e75468bad-kube-api-access-z9sbw\") on node \"crc\" DevicePath \"\""
Oct 14 10:12:57 crc kubenswrapper[4698]: I1014 10:12:57.414485 4698 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ffcd39f3-8368-43b1-beaa-ec7e75468bad-dns-svc\") on node \"crc\" DevicePath \"\""
Oct 14 10:12:57 crc kubenswrapper[4698]: I1014 10:12:57.416614 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ffcd39f3-8368-43b1-beaa-ec7e75468bad-config" (OuterVolumeSpecName: "config") pod "ffcd39f3-8368-43b1-beaa-ec7e75468bad" (UID: "ffcd39f3-8368-43b1-beaa-ec7e75468bad"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 14 10:12:57 crc kubenswrapper[4698]: I1014 10:12:57.516323 4698 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ffcd39f3-8368-43b1-beaa-ec7e75468bad-config\") on node \"crc\" DevicePath \"\""
Oct 14 10:12:58 crc kubenswrapper[4698]: I1014 10:12:58.058817 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7fd796d7df-z6fd5" event={"ID":"a1d31398-a2ca-48ab-b0b1-425d83342bbe","Type":"ContainerStarted","Data":"6d5c7d52b3e90ddbdfccb69bb40323824b244abf42e8c121eb07a634eb2f1d84"}
Oct 14 10:12:58 crc kubenswrapper[4698]: I1014 10:12:58.062118 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-ldb47" event={"ID":"ffcd39f3-8368-43b1-beaa-ec7e75468bad","Type":"ContainerDied","Data":"9c0e4231651c1d36f18775534eef2ea8429bbe8c00f52f8c7fb9b7257706e152"}
Oct 14 10:12:58 crc kubenswrapper[4698]: I1014 10:12:58.062177 4698 scope.go:117] "RemoveContainer" containerID="4f8cbc698604c6926d1979429c8d57beafb8cf3227bd05667f92faba84b53011"
Oct 14 10:12:58 crc kubenswrapper[4698]: I1014 10:12:58.062182 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-ldb47"
Oct 14 10:12:58 crc kubenswrapper[4698]: I1014 10:12:58.064719 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-xfwd9" event={"ID":"04ad8eb3-ebe0-46a4-9aa2-1fdd58dd4ea7","Type":"ContainerStarted","Data":"32e829288e5dffb12c2cd3fe34ec5755b032e97065f976502eca065200b0dd3f"}
Oct 14 10:12:58 crc kubenswrapper[4698]: I1014 10:12:58.067299 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-m8cgb" event={"ID":"47ef312e-a1ef-4635-a052-31f0b3a7e742","Type":"ContainerStarted","Data":"c1da57710350ca54f854c29db4a7523d15c05518ce510d6e61d9638b15eedd21"}
Oct 14 10:12:58 crc kubenswrapper[4698]: I1014 10:12:58.096369 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-ldb47"]
Oct 14 10:12:58 crc kubenswrapper[4698]: I1014 10:12:58.102005 4698 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-ldb47"]
Oct 14 10:12:59 crc kubenswrapper[4698]: I1014 10:12:59.033693 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ffcd39f3-8368-43b1-beaa-ec7e75468bad" path="/var/lib/kubelet/pods/ffcd39f3-8368-43b1-beaa-ec7e75468bad/volumes"
Oct 14 10:12:59 crc kubenswrapper[4698]: I1014 10:12:59.865648 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-pmpg8"
Oct 14 10:12:59 crc kubenswrapper[4698]: I1014 10:12:59.966333 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8f445316-4106-46e7-9302-0d79c368a6db-config\") pod \"8f445316-4106-46e7-9302-0d79c368a6db\" (UID: \"8f445316-4106-46e7-9302-0d79c368a6db\") "
Oct 14 10:12:59 crc kubenswrapper[4698]: I1014 10:12:59.966485 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-22zqt\" (UniqueName: \"kubernetes.io/projected/8f445316-4106-46e7-9302-0d79c368a6db-kube-api-access-22zqt\") pod \"8f445316-4106-46e7-9302-0d79c368a6db\" (UID: \"8f445316-4106-46e7-9302-0d79c368a6db\") "
Oct 14 10:12:59 crc kubenswrapper[4698]: I1014 10:12:59.966548 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8f445316-4106-46e7-9302-0d79c368a6db-dns-svc\") pod \"8f445316-4106-46e7-9302-0d79c368a6db\" (UID: \"8f445316-4106-46e7-9302-0d79c368a6db\") "
Oct 14 10:12:59 crc kubenswrapper[4698]: I1014 10:12:59.989594 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f445316-4106-46e7-9302-0d79c368a6db-kube-api-access-22zqt" (OuterVolumeSpecName: "kube-api-access-22zqt") pod "8f445316-4106-46e7-9302-0d79c368a6db" (UID: "8f445316-4106-46e7-9302-0d79c368a6db"). InnerVolumeSpecName "kube-api-access-22zqt". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 14 10:13:00 crc kubenswrapper[4698]: I1014 10:13:00.019913 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f445316-4106-46e7-9302-0d79c368a6db-config" (OuterVolumeSpecName: "config") pod "8f445316-4106-46e7-9302-0d79c368a6db" (UID: "8f445316-4106-46e7-9302-0d79c368a6db"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 14 10:13:00 crc kubenswrapper[4698]: I1014 10:13:00.020005 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f445316-4106-46e7-9302-0d79c368a6db-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "8f445316-4106-46e7-9302-0d79c368a6db" (UID: "8f445316-4106-46e7-9302-0d79c368a6db"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 14 10:13:00 crc kubenswrapper[4698]: I1014 10:13:00.068671 4698 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8f445316-4106-46e7-9302-0d79c368a6db-config\") on node \"crc\" DevicePath \"\""
Oct 14 10:13:00 crc kubenswrapper[4698]: I1014 10:13:00.068715 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-22zqt\" (UniqueName: \"kubernetes.io/projected/8f445316-4106-46e7-9302-0d79c368a6db-kube-api-access-22zqt\") on node \"crc\" DevicePath \"\""
Oct 14 10:13:00 crc kubenswrapper[4698]: I1014 10:13:00.068732 4698 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8f445316-4106-46e7-9302-0d79c368a6db-dns-svc\") on node \"crc\" DevicePath \"\""
Oct 14 10:13:00 crc kubenswrapper[4698]: I1014 10:13:00.086631 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-pmpg8" event={"ID":"8f445316-4106-46e7-9302-0d79c368a6db","Type":"ContainerDied","Data":"21434ebb1f4c12a2b495b54e5be0116afdc37b2498543ea5587859ef1308d6ec"}
Oct 14 10:13:00 crc kubenswrapper[4698]: I1014 10:13:00.086811 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-pmpg8"
Oct 14 10:13:00 crc kubenswrapper[4698]: I1014 10:13:00.124688 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-pmpg8"]
Oct 14 10:13:00 crc kubenswrapper[4698]: I1014 10:13:00.130893 4698 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-pmpg8"]
Oct 14 10:13:01 crc kubenswrapper[4698]: I1014 10:13:01.025784 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f445316-4106-46e7-9302-0d79c368a6db" path="/var/lib/kubelet/pods/8f445316-4106-46e7-9302-0d79c368a6db/volumes"
Oct 14 10:13:03 crc kubenswrapper[4698]: I1014 10:13:03.546563 4698 scope.go:117] "RemoveContainer" containerID="29b6956ea79e45c527b543af6fea627bfc693d1f9b543dd812693e4163fcdcc0"
Oct 14 10:13:04 crc kubenswrapper[4698]: I1014 10:13:04.709670 4698 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-57d769cc4f-pmpg8" podUID="8f445316-4106-46e7-9302-0d79c368a6db" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.99:5353: i/o timeout"
Oct 14 10:13:05 crc kubenswrapper[4698]: I1014 10:13:05.690025 4698 scope.go:117] "RemoveContainer" containerID="d3368011dd8059e870d110fa3dedeaebb33d366c2c19f2f2ecad8cf7982583e3"
Oct 14 10:13:06 crc kubenswrapper[4698]: I1014 10:13:06.173754 4698 scope.go:117] "RemoveContainer" containerID="777560b744b9fa09e151e26e2140a3b1110e726574160180a49697d465a53d63"
Oct 14 10:13:06 crc kubenswrapper[4698]: E1014 10:13:06.902983 4698 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.k8s.io/kube-state-metrics/kube-state-metrics:v2.15.0"
Oct 14 10:13:06 crc kubenswrapper[4698]: E1014 10:13:06.903364 4698 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.k8s.io/kube-state-metrics/kube-state-metrics:v2.15.0"
Oct 14 10:13:06 crc kubenswrapper[4698]: E1014 10:13:06.903499 4698 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-state-metrics,Image:registry.k8s.io/kube-state-metrics/kube-state-metrics:v2.15.0,Command:[],Args:[--resources=pods --namespaces=openstack],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:http-metrics,HostPort:0,ContainerPort:8080,Protocol:TCP,HostIP:,},ContainerPort{Name:telemetry,HostPort:0,ContainerPort:8081,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-jmzq9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{0 8080 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod kube-state-metrics-0_openstack(6702faf6-e3b2-44f8-a033-ba5fd85af368): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError"
Oct 14 10:13:06 crc kubenswrapper[4698]: E1014 10:13:06.904718 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-state-metrics\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openstack/kube-state-metrics-0" podUID="6702faf6-e3b2-44f8-a033-ba5fd85af368"
Oct 14 10:13:07 crc kubenswrapper[4698]: I1014 10:13:07.156049 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"e3bc7f78-d69f-426c-9aeb-4837d25635ab","Type":"ContainerStarted","Data":"e88e41e72ad1998e1829c501a9b7010047c3afb16dc6e9c574aec711cfa553fc"}
Oct 14 10:13:07 crc kubenswrapper[4698]: I1014 10:13:07.156968 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/memcached-0"
Oct 14 10:13:07 crc kubenswrapper[4698]: I1014 10:13:07.160732 4698 generic.go:334] "Generic (PLEG): container finished" podID="a1d31398-a2ca-48ab-b0b1-425d83342bbe" containerID="940951e30daddc15a0b58cabe56fb7baebf3771fb2e7f33e2aadd390499e2633" exitCode=0
Oct 14 10:13:07 crc kubenswrapper[4698]: I1014 10:13:07.160839 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7fd796d7df-z6fd5" event={"ID":"a1d31398-a2ca-48ab-b0b1-425d83342bbe","Type":"ContainerDied","Data":"940951e30daddc15a0b58cabe56fb7baebf3771fb2e7f33e2aadd390499e2633"}
Oct 14 10:13:07 crc kubenswrapper[4698]: I1014 10:13:07.180266 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/memcached-0" podStartSLOduration=12.55031128 podStartE2EDuration="24.180240371s" podCreationTimestamp="2025-10-14 10:12:43 +0000 UTC" firstStartedPulling="2025-10-14 10:12:53.782337795 +0000 UTC m=+955.479637211" lastFinishedPulling="2025-10-14 10:13:05.412266886 +0000 UTC m=+967.109566302" observedRunningTime="2025-10-14 10:13:07.179331335 +0000 UTC m=+968.876630761" watchObservedRunningTime="2025-10-14 10:13:07.180240371 +0000 UTC m=+968.877539787"
Oct 14 10:13:07 crc kubenswrapper[4698]: I1014 10:13:07.185935 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"9f3078d2-396d-4f2a-913f-b5c5555e568d","Type":"ContainerStarted","Data":"077fcebbfb7ad349761b01ebd01c941f7b097f32858712947bb65e6899019cf2"}
Oct 14 10:13:07 crc kubenswrapper[4698]: I1014 10:13:07.195159 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-2cb6b" event={"ID":"62b0f0f2-3cb3-4ba7-8387-a3bd2a38debe","Type":"ContainerStarted","Data":"dc295e78b448b70fcd2a400dc88232bfa4b3991c1b032600b0b1ab4ab96f5d9b"}
Oct 14 10:13:07 crc kubenswrapper[4698]: I1014 10:13:07.216084 4698 generic.go:334] "Generic (PLEG): container finished" podID="04ad8eb3-ebe0-46a4-9aa2-1fdd58dd4ea7" containerID="4bb1c7f00a7d333732c5d090c61b5815920cdb7dccaa42c31a6717861bbcd5d7" exitCode=0
Oct 14 10:13:07 crc kubenswrapper[4698]: I1014 10:13:07.216683 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-xfwd9" event={"ID":"04ad8eb3-ebe0-46a4-9aa2-1fdd58dd4ea7","Type":"ContainerDied","Data":"4bb1c7f00a7d333732c5d090c61b5815920cdb7dccaa42c31a6717861bbcd5d7"}
Oct 14 10:13:07 crc kubenswrapper[4698]: E1014 10:13:07.221274 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-state-metrics\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.k8s.io/kube-state-metrics/kube-state-metrics:v2.15.0\\\"\"" pod="openstack/kube-state-metrics-0" podUID="6702faf6-e3b2-44f8-a033-ba5fd85af368"
Oct 14 10:13:08 crc kubenswrapper[4698]: I1014 10:13:08.229132 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-24vqt" event={"ID":"b64163c4-e040-4bec-a585-c55f9d05e948","Type":"ContainerStarted","Data":"f299a2030a939e8d6d79da65d7bfffbbd7b1ba3060c5d51f86d9146bfbf6ca51"}
Oct 14 10:13:08 crc kubenswrapper[4698]: I1014 10:13:08.229819 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-24vqt"
Oct 14 10:13:08 crc kubenswrapper[4698]: I1014 10:13:08.231010 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-m8cgb" event={"ID":"47ef312e-a1ef-4635-a052-31f0b3a7e742","Type":"ContainerStarted","Data":"85dad1f160b1dd2824ab17f94dd85326dce07827bba71996e86749ccbbea6e5e"}
Oct 14 10:13:08 crc kubenswrapper[4698]: I1014 10:13:08.234224 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"884f9a07-9f80-44ff-a1e5-805d6d5ef6fb","Type":"ContainerStarted","Data":"7323744a88ac068b6d1cf4b3e57790bb87e3f02a005fabc018b5660107a1e9ff"}
Oct 14 10:13:08 crc kubenswrapper[4698]: I1014 10:13:08.234250 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0"
event={"ID":"884f9a07-9f80-44ff-a1e5-805d6d5ef6fb","Type":"ContainerStarted","Data":"bb449108db0847fd3b8d02e2510ba03911f1fa64b0c6dd5a36c495257970c0cf"} Oct 14 10:13:08 crc kubenswrapper[4698]: I1014 10:13:08.237992 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7fd796d7df-z6fd5" event={"ID":"a1d31398-a2ca-48ab-b0b1-425d83342bbe","Type":"ContainerStarted","Data":"137417fabf771bf4419fc7ea30100b10832074675a36e20070f18fdf2f6a721e"} Oct 14 10:13:08 crc kubenswrapper[4698]: I1014 10:13:08.238122 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7fd796d7df-z6fd5" Oct 14 10:13:08 crc kubenswrapper[4698]: I1014 10:13:08.239822 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"90244b70-b4fa-4b40-a962-119168333566","Type":"ContainerStarted","Data":"3210b5059be17de0ff9a869fbad8ab784937fb28ff564eaea96d58b87880a0b5"} Oct 14 10:13:08 crc kubenswrapper[4698]: I1014 10:13:08.241751 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"468f15c4-08a4-4e2e-a65d-7a679b1d3a3f","Type":"ContainerStarted","Data":"bfe3829c962ac0099ec8d32946b43bd036705ee269f1e3c34506fc0a1529b003"} Oct 14 10:13:08 crc kubenswrapper[4698]: I1014 10:13:08.241805 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"468f15c4-08a4-4e2e-a65d-7a679b1d3a3f","Type":"ContainerStarted","Data":"26840b3df91ebcba525d4c5b14c6d5b4b4854b1dcf386dfb8ee36ffd0240d05b"} Oct 14 10:13:08 crc kubenswrapper[4698]: I1014 10:13:08.243639 4698 generic.go:334] "Generic (PLEG): container finished" podID="62b0f0f2-3cb3-4ba7-8387-a3bd2a38debe" containerID="dc295e78b448b70fcd2a400dc88232bfa4b3991c1b032600b0b1ab4ab96f5d9b" exitCode=0 Oct 14 10:13:08 crc kubenswrapper[4698]: I1014 10:13:08.243735 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-2cb6b" 
event={"ID":"62b0f0f2-3cb3-4ba7-8387-a3bd2a38debe","Type":"ContainerDied","Data":"dc295e78b448b70fcd2a400dc88232bfa4b3991c1b032600b0b1ab4ab96f5d9b"} Oct 14 10:13:08 crc kubenswrapper[4698]: I1014 10:13:08.243791 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-2cb6b" event={"ID":"62b0f0f2-3cb3-4ba7-8387-a3bd2a38debe","Type":"ContainerStarted","Data":"92d53ffa99c44069b501e670c4e28eae8b3d81b20950189200431f9da0a37b7f"} Oct 14 10:13:08 crc kubenswrapper[4698]: I1014 10:13:08.257184 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-24vqt" podStartSLOduration=8.397286617 podStartE2EDuration="20.257163081s" podCreationTimestamp="2025-10-14 10:12:48 +0000 UTC" firstStartedPulling="2025-10-14 10:12:53.847446819 +0000 UTC m=+955.544746235" lastFinishedPulling="2025-10-14 10:13:05.707323243 +0000 UTC m=+967.404622699" observedRunningTime="2025-10-14 10:13:08.249627053 +0000 UTC m=+969.946926479" watchObservedRunningTime="2025-10-14 10:13:08.257163081 +0000 UTC m=+969.954462497" Oct 14 10:13:08 crc kubenswrapper[4698]: I1014 10:13:08.259841 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-xfwd9" event={"ID":"04ad8eb3-ebe0-46a4-9aa2-1fdd58dd4ea7","Type":"ContainerStarted","Data":"55838e6f6f901c3ab4baf43f45a89fa474602df690c4b1b499970f90e3306f64"} Oct 14 10:13:08 crc kubenswrapper[4698]: I1014 10:13:08.260007 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-86db49b7ff-xfwd9" Oct 14 10:13:08 crc kubenswrapper[4698]: I1014 10:13:08.269569 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-7fd796d7df-z6fd5" podStartSLOduration=14.269527769 podStartE2EDuration="14.269527769s" podCreationTimestamp="2025-10-14 10:12:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2025-10-14 10:13:08.267128549 +0000 UTC m=+969.964427965" watchObservedRunningTime="2025-10-14 10:13:08.269527769 +0000 UTC m=+969.966827225" Oct 14 10:13:08 crc kubenswrapper[4698]: I1014 10:13:08.299108 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-nb-0" podStartSLOduration=8.192704967 podStartE2EDuration="20.299086234s" podCreationTimestamp="2025-10-14 10:12:48 +0000 UTC" firstStartedPulling="2025-10-14 10:12:53.933041305 +0000 UTC m=+955.630340721" lastFinishedPulling="2025-10-14 10:13:06.039422562 +0000 UTC m=+967.736721988" observedRunningTime="2025-10-14 10:13:08.290212857 +0000 UTC m=+969.987512303" watchObservedRunningTime="2025-10-14 10:13:08.299086234 +0000 UTC m=+969.996385650" Oct 14 10:13:08 crc kubenswrapper[4698]: I1014 10:13:08.314188 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-sb-0" podStartSLOduration=4.645267381 podStartE2EDuration="16.31416603s" podCreationTimestamp="2025-10-14 10:12:52 +0000 UTC" firstStartedPulling="2025-10-14 10:12:54.388133493 +0000 UTC m=+956.085432909" lastFinishedPulling="2025-10-14 10:13:06.057032142 +0000 UTC m=+967.754331558" observedRunningTime="2025-10-14 10:13:08.311560195 +0000 UTC m=+970.008859651" watchObservedRunningTime="2025-10-14 10:13:08.31416603 +0000 UTC m=+970.011465456" Oct 14 10:13:08 crc kubenswrapper[4698]: I1014 10:13:08.362798 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-metrics-m8cgb" podStartSLOduration=5.242564873 podStartE2EDuration="14.362780147s" podCreationTimestamp="2025-10-14 10:12:54 +0000 UTC" firstStartedPulling="2025-10-14 10:12:57.167700298 +0000 UTC m=+958.864999714" lastFinishedPulling="2025-10-14 10:13:06.287915572 +0000 UTC m=+967.985214988" observedRunningTime="2025-10-14 10:13:08.359525873 +0000 UTC m=+970.056825339" watchObservedRunningTime="2025-10-14 10:13:08.362780147 +0000 UTC 
m=+970.060079563" Oct 14 10:13:08 crc kubenswrapper[4698]: I1014 10:13:08.381787 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-86db49b7ff-xfwd9" podStartSLOduration=14.381749496 podStartE2EDuration="14.381749496s" podCreationTimestamp="2025-10-14 10:12:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 10:13:08.380040316 +0000 UTC m=+970.077339732" watchObservedRunningTime="2025-10-14 10:13:08.381749496 +0000 UTC m=+970.079048912" Oct 14 10:13:08 crc kubenswrapper[4698]: I1014 10:13:08.592355 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-sb-0" Oct 14 10:13:08 crc kubenswrapper[4698]: I1014 10:13:08.592433 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-sb-0" Oct 14 10:13:09 crc kubenswrapper[4698]: I1014 10:13:09.711418 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-nb-0" Oct 14 10:13:10 crc kubenswrapper[4698]: I1014 10:13:10.283292 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"4c8cdd03-2ef0-496f-8748-d1495be75e5f","Type":"ContainerStarted","Data":"f340819271df65223c49ce81de67564ac6f8f57281bb2c7494a270aff251bf81"} Oct 14 10:13:10 crc kubenswrapper[4698]: I1014 10:13:10.711471 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-nb-0" Oct 14 10:13:10 crc kubenswrapper[4698]: I1014 10:13:10.771559 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-nb-0" Oct 14 10:13:11 crc kubenswrapper[4698]: I1014 10:13:11.631424 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-sb-0" Oct 14 10:13:12 crc kubenswrapper[4698]: I1014 10:13:12.341903 4698 kubelet.go:2542] 
"SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-nb-0" Oct 14 10:13:13 crc kubenswrapper[4698]: I1014 10:13:13.306933 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-2cb6b" event={"ID":"62b0f0f2-3cb3-4ba7-8387-a3bd2a38debe","Type":"ContainerStarted","Data":"cc709685b6c53d80b3b98f7b2148f7ad2d7c354f848af4677e20c0351eef8fb4"} Oct 14 10:13:13 crc kubenswrapper[4698]: I1014 10:13:13.307025 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-2cb6b" Oct 14 10:13:13 crc kubenswrapper[4698]: I1014 10:13:13.307052 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-2cb6b" Oct 14 10:13:13 crc kubenswrapper[4698]: I1014 10:13:13.309663 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"a710709f-1c22-4fff-b329-6d446917af01","Type":"ContainerStarted","Data":"02ab09bd1ef174e5d51d3059758a373cd337d02d2d18f0123578e5d49f2c0d75"} Oct 14 10:13:13 crc kubenswrapper[4698]: I1014 10:13:13.327877 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-ovs-2cb6b" podStartSLOduration=14.271044548 podStartE2EDuration="25.327860478s" podCreationTimestamp="2025-10-14 10:12:48 +0000 UTC" firstStartedPulling="2025-10-14 10:12:54.062268354 +0000 UTC m=+955.759567770" lastFinishedPulling="2025-10-14 10:13:05.119084294 +0000 UTC m=+966.816383700" observedRunningTime="2025-10-14 10:13:13.325406707 +0000 UTC m=+975.022706143" watchObservedRunningTime="2025-10-14 10:13:13.327860478 +0000 UTC m=+975.025159894" Oct 14 10:13:13 crc kubenswrapper[4698]: I1014 10:13:13.651361 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-sb-0" Oct 14 10:13:13 crc kubenswrapper[4698]: I1014 10:13:13.813882 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-northd-0"] Oct 14 
10:13:13 crc kubenswrapper[4698]: E1014 10:13:13.814575 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8f445316-4106-46e7-9302-0d79c368a6db" containerName="init" Oct 14 10:13:13 crc kubenswrapper[4698]: I1014 10:13:13.814601 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="8f445316-4106-46e7-9302-0d79c368a6db" containerName="init" Oct 14 10:13:13 crc kubenswrapper[4698]: E1014 10:13:13.814632 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ffcd39f3-8368-43b1-beaa-ec7e75468bad" containerName="dnsmasq-dns" Oct 14 10:13:13 crc kubenswrapper[4698]: I1014 10:13:13.814642 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="ffcd39f3-8368-43b1-beaa-ec7e75468bad" containerName="dnsmasq-dns" Oct 14 10:13:13 crc kubenswrapper[4698]: E1014 10:13:13.814665 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8f445316-4106-46e7-9302-0d79c368a6db" containerName="dnsmasq-dns" Oct 14 10:13:13 crc kubenswrapper[4698]: I1014 10:13:13.814674 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="8f445316-4106-46e7-9302-0d79c368a6db" containerName="dnsmasq-dns" Oct 14 10:13:13 crc kubenswrapper[4698]: E1014 10:13:13.814687 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ffcd39f3-8368-43b1-beaa-ec7e75468bad" containerName="init" Oct 14 10:13:13 crc kubenswrapper[4698]: I1014 10:13:13.814695 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="ffcd39f3-8368-43b1-beaa-ec7e75468bad" containerName="init" Oct 14 10:13:13 crc kubenswrapper[4698]: I1014 10:13:13.814966 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="8f445316-4106-46e7-9302-0d79c368a6db" containerName="dnsmasq-dns" Oct 14 10:13:13 crc kubenswrapper[4698]: I1014 10:13:13.814999 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="ffcd39f3-8368-43b1-beaa-ec7e75468bad" containerName="dnsmasq-dns" Oct 14 10:13:13 crc kubenswrapper[4698]: I1014 10:13:13.816286 4698 util.go:30] 
"No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0" Oct 14 10:13:13 crc kubenswrapper[4698]: I1014 10:13:13.821286 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-scripts" Oct 14 10:13:13 crc kubenswrapper[4698]: I1014 10:13:13.821290 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-config" Oct 14 10:13:13 crc kubenswrapper[4698]: I1014 10:13:13.822861 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovnnorthd-ovnnorthd-dockercfg-blptl" Oct 14 10:13:13 crc kubenswrapper[4698]: I1014 10:13:13.823151 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovnnorthd-ovndbs" Oct 14 10:13:13 crc kubenswrapper[4698]: I1014 10:13:13.838640 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Oct 14 10:13:13 crc kubenswrapper[4698]: I1014 10:13:13.908777 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b35b471e-f011-42c9-998a-d23ec21ad1a9-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"b35b471e-f011-42c9-998a-d23ec21ad1a9\") " pod="openstack/ovn-northd-0" Oct 14 10:13:13 crc kubenswrapper[4698]: I1014 10:13:13.908832 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/b35b471e-f011-42c9-998a-d23ec21ad1a9-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"b35b471e-f011-42c9-998a-d23ec21ad1a9\") " pod="openstack/ovn-northd-0" Oct 14 10:13:13 crc kubenswrapper[4698]: I1014 10:13:13.908856 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4sxpk\" (UniqueName: \"kubernetes.io/projected/b35b471e-f011-42c9-998a-d23ec21ad1a9-kube-api-access-4sxpk\") pod \"ovn-northd-0\" (UID: 
\"b35b471e-f011-42c9-998a-d23ec21ad1a9\") " pod="openstack/ovn-northd-0" Oct 14 10:13:13 crc kubenswrapper[4698]: I1014 10:13:13.908890 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/b35b471e-f011-42c9-998a-d23ec21ad1a9-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"b35b471e-f011-42c9-998a-d23ec21ad1a9\") " pod="openstack/ovn-northd-0" Oct 14 10:13:13 crc kubenswrapper[4698]: I1014 10:13:13.908931 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b35b471e-f011-42c9-998a-d23ec21ad1a9-scripts\") pod \"ovn-northd-0\" (UID: \"b35b471e-f011-42c9-998a-d23ec21ad1a9\") " pod="openstack/ovn-northd-0" Oct 14 10:13:13 crc kubenswrapper[4698]: I1014 10:13:13.908954 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b35b471e-f011-42c9-998a-d23ec21ad1a9-config\") pod \"ovn-northd-0\" (UID: \"b35b471e-f011-42c9-998a-d23ec21ad1a9\") " pod="openstack/ovn-northd-0" Oct 14 10:13:13 crc kubenswrapper[4698]: I1014 10:13:13.909020 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/b35b471e-f011-42c9-998a-d23ec21ad1a9-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"b35b471e-f011-42c9-998a-d23ec21ad1a9\") " pod="openstack/ovn-northd-0" Oct 14 10:13:13 crc kubenswrapper[4698]: I1014 10:13:13.941935 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/memcached-0" Oct 14 10:13:14 crc kubenswrapper[4698]: I1014 10:13:14.010501 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b35b471e-f011-42c9-998a-d23ec21ad1a9-combined-ca-bundle\") pod \"ovn-northd-0\" 
(UID: \"b35b471e-f011-42c9-998a-d23ec21ad1a9\") " pod="openstack/ovn-northd-0" Oct 14 10:13:14 crc kubenswrapper[4698]: I1014 10:13:14.010575 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/b35b471e-f011-42c9-998a-d23ec21ad1a9-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"b35b471e-f011-42c9-998a-d23ec21ad1a9\") " pod="openstack/ovn-northd-0" Oct 14 10:13:14 crc kubenswrapper[4698]: I1014 10:13:14.010601 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4sxpk\" (UniqueName: \"kubernetes.io/projected/b35b471e-f011-42c9-998a-d23ec21ad1a9-kube-api-access-4sxpk\") pod \"ovn-northd-0\" (UID: \"b35b471e-f011-42c9-998a-d23ec21ad1a9\") " pod="openstack/ovn-northd-0" Oct 14 10:13:14 crc kubenswrapper[4698]: I1014 10:13:14.011717 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/b35b471e-f011-42c9-998a-d23ec21ad1a9-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"b35b471e-f011-42c9-998a-d23ec21ad1a9\") " pod="openstack/ovn-northd-0" Oct 14 10:13:14 crc kubenswrapper[4698]: I1014 10:13:14.011806 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b35b471e-f011-42c9-998a-d23ec21ad1a9-scripts\") pod \"ovn-northd-0\" (UID: \"b35b471e-f011-42c9-998a-d23ec21ad1a9\") " pod="openstack/ovn-northd-0" Oct 14 10:13:14 crc kubenswrapper[4698]: I1014 10:13:14.011859 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b35b471e-f011-42c9-998a-d23ec21ad1a9-config\") pod \"ovn-northd-0\" (UID: \"b35b471e-f011-42c9-998a-d23ec21ad1a9\") " pod="openstack/ovn-northd-0" Oct 14 10:13:14 crc kubenswrapper[4698]: I1014 10:13:14.011902 4698 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/b35b471e-f011-42c9-998a-d23ec21ad1a9-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"b35b471e-f011-42c9-998a-d23ec21ad1a9\") " pod="openstack/ovn-northd-0" Oct 14 10:13:14 crc kubenswrapper[4698]: I1014 10:13:14.012603 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/b35b471e-f011-42c9-998a-d23ec21ad1a9-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"b35b471e-f011-42c9-998a-d23ec21ad1a9\") " pod="openstack/ovn-northd-0" Oct 14 10:13:14 crc kubenswrapper[4698]: I1014 10:13:14.012738 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b35b471e-f011-42c9-998a-d23ec21ad1a9-scripts\") pod \"ovn-northd-0\" (UID: \"b35b471e-f011-42c9-998a-d23ec21ad1a9\") " pod="openstack/ovn-northd-0" Oct 14 10:13:14 crc kubenswrapper[4698]: I1014 10:13:14.013494 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b35b471e-f011-42c9-998a-d23ec21ad1a9-config\") pod \"ovn-northd-0\" (UID: \"b35b471e-f011-42c9-998a-d23ec21ad1a9\") " pod="openstack/ovn-northd-0" Oct 14 10:13:14 crc kubenswrapper[4698]: I1014 10:13:14.019094 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b35b471e-f011-42c9-998a-d23ec21ad1a9-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"b35b471e-f011-42c9-998a-d23ec21ad1a9\") " pod="openstack/ovn-northd-0" Oct 14 10:13:14 crc kubenswrapper[4698]: I1014 10:13:14.019321 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/b35b471e-f011-42c9-998a-d23ec21ad1a9-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"b35b471e-f011-42c9-998a-d23ec21ad1a9\") " pod="openstack/ovn-northd-0" Oct 14 10:13:14 crc kubenswrapper[4698]: I1014 
10:13:14.024306 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/b35b471e-f011-42c9-998a-d23ec21ad1a9-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"b35b471e-f011-42c9-998a-d23ec21ad1a9\") " pod="openstack/ovn-northd-0" Oct 14 10:13:14 crc kubenswrapper[4698]: I1014 10:13:14.031795 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4sxpk\" (UniqueName: \"kubernetes.io/projected/b35b471e-f011-42c9-998a-d23ec21ad1a9-kube-api-access-4sxpk\") pod \"ovn-northd-0\" (UID: \"b35b471e-f011-42c9-998a-d23ec21ad1a9\") " pod="openstack/ovn-northd-0" Oct 14 10:13:14 crc kubenswrapper[4698]: I1014 10:13:14.137586 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0" Oct 14 10:13:14 crc kubenswrapper[4698]: I1014 10:13:14.625032 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Oct 14 10:13:14 crc kubenswrapper[4698]: W1014 10:13:14.631863 4698 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb35b471e_f011_42c9_998a_d23ec21ad1a9.slice/crio-d5db4a6fca961b4dd04befa8a6f5ecd3d8b2ea0abe5362ef9ab27272408c2c80 WatchSource:0}: Error finding container d5db4a6fca961b4dd04befa8a6f5ecd3d8b2ea0abe5362ef9ab27272408c2c80: Status 404 returned error can't find the container with id d5db4a6fca961b4dd04befa8a6f5ecd3d8b2ea0abe5362ef9ab27272408c2c80 Oct 14 10:13:14 crc kubenswrapper[4698]: I1014 10:13:14.849948 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-7fd796d7df-z6fd5" Oct 14 10:13:15 crc kubenswrapper[4698]: I1014 10:13:15.140960 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-86db49b7ff-xfwd9" Oct 14 10:13:15 crc kubenswrapper[4698]: I1014 10:13:15.211808 4698 kubelet.go:2437] "SyncLoop DELETE" 
source="api" pods=["openstack/dnsmasq-dns-7fd796d7df-z6fd5"] Oct 14 10:13:15 crc kubenswrapper[4698]: I1014 10:13:15.332348 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"b35b471e-f011-42c9-998a-d23ec21ad1a9","Type":"ContainerStarted","Data":"d5db4a6fca961b4dd04befa8a6f5ecd3d8b2ea0abe5362ef9ab27272408c2c80"} Oct 14 10:13:15 crc kubenswrapper[4698]: I1014 10:13:15.332597 4698 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-7fd796d7df-z6fd5" podUID="a1d31398-a2ca-48ab-b0b1-425d83342bbe" containerName="dnsmasq-dns" containerID="cri-o://137417fabf771bf4419fc7ea30100b10832074675a36e20070f18fdf2f6a721e" gracePeriod=10 Oct 14 10:13:16 crc kubenswrapper[4698]: I1014 10:13:16.040035 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-698758b865-9n6ld"] Oct 14 10:13:16 crc kubenswrapper[4698]: I1014 10:13:16.042466 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-9n6ld" Oct 14 10:13:16 crc kubenswrapper[4698]: I1014 10:13:16.066931 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-698758b865-9n6ld"] Oct 14 10:13:16 crc kubenswrapper[4698]: I1014 10:13:16.158342 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/414ba38b-6cfb-48ae-b818-6f8544558bf1-ovsdbserver-sb\") pod \"dnsmasq-dns-698758b865-9n6ld\" (UID: \"414ba38b-6cfb-48ae-b818-6f8544558bf1\") " pod="openstack/dnsmasq-dns-698758b865-9n6ld" Oct 14 10:13:16 crc kubenswrapper[4698]: I1014 10:13:16.158876 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/414ba38b-6cfb-48ae-b818-6f8544558bf1-ovsdbserver-nb\") pod \"dnsmasq-dns-698758b865-9n6ld\" (UID: \"414ba38b-6cfb-48ae-b818-6f8544558bf1\") " 
pod="openstack/dnsmasq-dns-698758b865-9n6ld" Oct 14 10:13:16 crc kubenswrapper[4698]: I1014 10:13:16.159018 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/414ba38b-6cfb-48ae-b818-6f8544558bf1-dns-svc\") pod \"dnsmasq-dns-698758b865-9n6ld\" (UID: \"414ba38b-6cfb-48ae-b818-6f8544558bf1\") " pod="openstack/dnsmasq-dns-698758b865-9n6ld" Oct 14 10:13:16 crc kubenswrapper[4698]: I1014 10:13:16.159076 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/414ba38b-6cfb-48ae-b818-6f8544558bf1-config\") pod \"dnsmasq-dns-698758b865-9n6ld\" (UID: \"414ba38b-6cfb-48ae-b818-6f8544558bf1\") " pod="openstack/dnsmasq-dns-698758b865-9n6ld" Oct 14 10:13:16 crc kubenswrapper[4698]: I1014 10:13:16.159139 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kn546\" (UniqueName: \"kubernetes.io/projected/414ba38b-6cfb-48ae-b818-6f8544558bf1-kube-api-access-kn546\") pod \"dnsmasq-dns-698758b865-9n6ld\" (UID: \"414ba38b-6cfb-48ae-b818-6f8544558bf1\") " pod="openstack/dnsmasq-dns-698758b865-9n6ld" Oct 14 10:13:16 crc kubenswrapper[4698]: I1014 10:13:16.261742 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/414ba38b-6cfb-48ae-b818-6f8544558bf1-dns-svc\") pod \"dnsmasq-dns-698758b865-9n6ld\" (UID: \"414ba38b-6cfb-48ae-b818-6f8544558bf1\") " pod="openstack/dnsmasq-dns-698758b865-9n6ld" Oct 14 10:13:16 crc kubenswrapper[4698]: I1014 10:13:16.261902 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/414ba38b-6cfb-48ae-b818-6f8544558bf1-config\") pod \"dnsmasq-dns-698758b865-9n6ld\" (UID: \"414ba38b-6cfb-48ae-b818-6f8544558bf1\") " 
pod="openstack/dnsmasq-dns-698758b865-9n6ld" Oct 14 10:13:16 crc kubenswrapper[4698]: I1014 10:13:16.261982 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kn546\" (UniqueName: \"kubernetes.io/projected/414ba38b-6cfb-48ae-b818-6f8544558bf1-kube-api-access-kn546\") pod \"dnsmasq-dns-698758b865-9n6ld\" (UID: \"414ba38b-6cfb-48ae-b818-6f8544558bf1\") " pod="openstack/dnsmasq-dns-698758b865-9n6ld" Oct 14 10:13:16 crc kubenswrapper[4698]: I1014 10:13:16.262057 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/414ba38b-6cfb-48ae-b818-6f8544558bf1-ovsdbserver-sb\") pod \"dnsmasq-dns-698758b865-9n6ld\" (UID: \"414ba38b-6cfb-48ae-b818-6f8544558bf1\") " pod="openstack/dnsmasq-dns-698758b865-9n6ld" Oct 14 10:13:16 crc kubenswrapper[4698]: I1014 10:13:16.262089 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/414ba38b-6cfb-48ae-b818-6f8544558bf1-ovsdbserver-nb\") pod \"dnsmasq-dns-698758b865-9n6ld\" (UID: \"414ba38b-6cfb-48ae-b818-6f8544558bf1\") " pod="openstack/dnsmasq-dns-698758b865-9n6ld" Oct 14 10:13:16 crc kubenswrapper[4698]: I1014 10:13:16.262803 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/414ba38b-6cfb-48ae-b818-6f8544558bf1-dns-svc\") pod \"dnsmasq-dns-698758b865-9n6ld\" (UID: \"414ba38b-6cfb-48ae-b818-6f8544558bf1\") " pod="openstack/dnsmasq-dns-698758b865-9n6ld" Oct 14 10:13:16 crc kubenswrapper[4698]: I1014 10:13:16.263582 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/414ba38b-6cfb-48ae-b818-6f8544558bf1-config\") pod \"dnsmasq-dns-698758b865-9n6ld\" (UID: \"414ba38b-6cfb-48ae-b818-6f8544558bf1\") " pod="openstack/dnsmasq-dns-698758b865-9n6ld" Oct 14 10:13:16 crc kubenswrapper[4698]: 
I1014 10:13:16.264230 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/414ba38b-6cfb-48ae-b818-6f8544558bf1-ovsdbserver-sb\") pod \"dnsmasq-dns-698758b865-9n6ld\" (UID: \"414ba38b-6cfb-48ae-b818-6f8544558bf1\") " pod="openstack/dnsmasq-dns-698758b865-9n6ld" Oct 14 10:13:16 crc kubenswrapper[4698]: I1014 10:13:16.264288 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/414ba38b-6cfb-48ae-b818-6f8544558bf1-ovsdbserver-nb\") pod \"dnsmasq-dns-698758b865-9n6ld\" (UID: \"414ba38b-6cfb-48ae-b818-6f8544558bf1\") " pod="openstack/dnsmasq-dns-698758b865-9n6ld" Oct 14 10:13:16 crc kubenswrapper[4698]: I1014 10:13:16.274086 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7fd796d7df-z6fd5" Oct 14 10:13:16 crc kubenswrapper[4698]: I1014 10:13:16.283169 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kn546\" (UniqueName: \"kubernetes.io/projected/414ba38b-6cfb-48ae-b818-6f8544558bf1-kube-api-access-kn546\") pod \"dnsmasq-dns-698758b865-9n6ld\" (UID: \"414ba38b-6cfb-48ae-b818-6f8544558bf1\") " pod="openstack/dnsmasq-dns-698758b865-9n6ld" Oct 14 10:13:16 crc kubenswrapper[4698]: I1014 10:13:16.340624 4698 generic.go:334] "Generic (PLEG): container finished" podID="9f3078d2-396d-4f2a-913f-b5c5555e568d" containerID="077fcebbfb7ad349761b01ebd01c941f7b097f32858712947bb65e6899019cf2" exitCode=0 Oct 14 10:13:16 crc kubenswrapper[4698]: I1014 10:13:16.340707 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"9f3078d2-396d-4f2a-913f-b5c5555e568d","Type":"ContainerDied","Data":"077fcebbfb7ad349761b01ebd01c941f7b097f32858712947bb65e6899019cf2"} Oct 14 10:13:16 crc kubenswrapper[4698]: I1014 10:13:16.346432 4698 generic.go:334] "Generic (PLEG): container finished" 
podID="a1d31398-a2ca-48ab-b0b1-425d83342bbe" containerID="137417fabf771bf4419fc7ea30100b10832074675a36e20070f18fdf2f6a721e" exitCode=0 Oct 14 10:13:16 crc kubenswrapper[4698]: I1014 10:13:16.346490 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7fd796d7df-z6fd5" Oct 14 10:13:16 crc kubenswrapper[4698]: I1014 10:13:16.346481 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7fd796d7df-z6fd5" event={"ID":"a1d31398-a2ca-48ab-b0b1-425d83342bbe","Type":"ContainerDied","Data":"137417fabf771bf4419fc7ea30100b10832074675a36e20070f18fdf2f6a721e"} Oct 14 10:13:16 crc kubenswrapper[4698]: I1014 10:13:16.346696 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7fd796d7df-z6fd5" event={"ID":"a1d31398-a2ca-48ab-b0b1-425d83342bbe","Type":"ContainerDied","Data":"6d5c7d52b3e90ddbdfccb69bb40323824b244abf42e8c121eb07a634eb2f1d84"} Oct 14 10:13:16 crc kubenswrapper[4698]: I1014 10:13:16.346756 4698 scope.go:117] "RemoveContainer" containerID="137417fabf771bf4419fc7ea30100b10832074675a36e20070f18fdf2f6a721e" Oct 14 10:13:16 crc kubenswrapper[4698]: I1014 10:13:16.351662 4698 generic.go:334] "Generic (PLEG): container finished" podID="90244b70-b4fa-4b40-a962-119168333566" containerID="3210b5059be17de0ff9a869fbad8ab784937fb28ff564eaea96d58b87880a0b5" exitCode=0 Oct 14 10:13:16 crc kubenswrapper[4698]: I1014 10:13:16.351810 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"90244b70-b4fa-4b40-a962-119168333566","Type":"ContainerDied","Data":"3210b5059be17de0ff9a869fbad8ab784937fb28ff564eaea96d58b87880a0b5"} Oct 14 10:13:16 crc kubenswrapper[4698]: I1014 10:13:16.363327 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a1d31398-a2ca-48ab-b0b1-425d83342bbe-ovsdbserver-nb\") pod \"a1d31398-a2ca-48ab-b0b1-425d83342bbe\" (UID: 
\"a1d31398-a2ca-48ab-b0b1-425d83342bbe\") " Oct 14 10:13:16 crc kubenswrapper[4698]: I1014 10:13:16.363539 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a1d31398-a2ca-48ab-b0b1-425d83342bbe-dns-svc\") pod \"a1d31398-a2ca-48ab-b0b1-425d83342bbe\" (UID: \"a1d31398-a2ca-48ab-b0b1-425d83342bbe\") " Oct 14 10:13:16 crc kubenswrapper[4698]: I1014 10:13:16.363576 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xqjlx\" (UniqueName: \"kubernetes.io/projected/a1d31398-a2ca-48ab-b0b1-425d83342bbe-kube-api-access-xqjlx\") pod \"a1d31398-a2ca-48ab-b0b1-425d83342bbe\" (UID: \"a1d31398-a2ca-48ab-b0b1-425d83342bbe\") " Oct 14 10:13:16 crc kubenswrapper[4698]: I1014 10:13:16.363643 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a1d31398-a2ca-48ab-b0b1-425d83342bbe-config\") pod \"a1d31398-a2ca-48ab-b0b1-425d83342bbe\" (UID: \"a1d31398-a2ca-48ab-b0b1-425d83342bbe\") " Oct 14 10:13:16 crc kubenswrapper[4698]: I1014 10:13:16.369283 4698 scope.go:117] "RemoveContainer" containerID="940951e30daddc15a0b58cabe56fb7baebf3771fb2e7f33e2aadd390499e2633" Oct 14 10:13:16 crc kubenswrapper[4698]: I1014 10:13:16.370666 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a1d31398-a2ca-48ab-b0b1-425d83342bbe-kube-api-access-xqjlx" (OuterVolumeSpecName: "kube-api-access-xqjlx") pod "a1d31398-a2ca-48ab-b0b1-425d83342bbe" (UID: "a1d31398-a2ca-48ab-b0b1-425d83342bbe"). InnerVolumeSpecName "kube-api-access-xqjlx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 10:13:16 crc kubenswrapper[4698]: I1014 10:13:16.370921 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xqjlx\" (UniqueName: \"kubernetes.io/projected/a1d31398-a2ca-48ab-b0b1-425d83342bbe-kube-api-access-xqjlx\") on node \"crc\" DevicePath \"\"" Oct 14 10:13:16 crc kubenswrapper[4698]: I1014 10:13:16.393588 4698 scope.go:117] "RemoveContainer" containerID="137417fabf771bf4419fc7ea30100b10832074675a36e20070f18fdf2f6a721e" Oct 14 10:13:16 crc kubenswrapper[4698]: E1014 10:13:16.399230 4698 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"137417fabf771bf4419fc7ea30100b10832074675a36e20070f18fdf2f6a721e\": container with ID starting with 137417fabf771bf4419fc7ea30100b10832074675a36e20070f18fdf2f6a721e not found: ID does not exist" containerID="137417fabf771bf4419fc7ea30100b10832074675a36e20070f18fdf2f6a721e" Oct 14 10:13:16 crc kubenswrapper[4698]: I1014 10:13:16.399298 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"137417fabf771bf4419fc7ea30100b10832074675a36e20070f18fdf2f6a721e"} err="failed to get container status \"137417fabf771bf4419fc7ea30100b10832074675a36e20070f18fdf2f6a721e\": rpc error: code = NotFound desc = could not find container \"137417fabf771bf4419fc7ea30100b10832074675a36e20070f18fdf2f6a721e\": container with ID starting with 137417fabf771bf4419fc7ea30100b10832074675a36e20070f18fdf2f6a721e not found: ID does not exist" Oct 14 10:13:16 crc kubenswrapper[4698]: I1014 10:13:16.399343 4698 scope.go:117] "RemoveContainer" containerID="940951e30daddc15a0b58cabe56fb7baebf3771fb2e7f33e2aadd390499e2633" Oct 14 10:13:16 crc kubenswrapper[4698]: E1014 10:13:16.399816 4698 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"940951e30daddc15a0b58cabe56fb7baebf3771fb2e7f33e2aadd390499e2633\": container with ID starting with 940951e30daddc15a0b58cabe56fb7baebf3771fb2e7f33e2aadd390499e2633 not found: ID does not exist" containerID="940951e30daddc15a0b58cabe56fb7baebf3771fb2e7f33e2aadd390499e2633" Oct 14 10:13:16 crc kubenswrapper[4698]: I1014 10:13:16.399885 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"940951e30daddc15a0b58cabe56fb7baebf3771fb2e7f33e2aadd390499e2633"} err="failed to get container status \"940951e30daddc15a0b58cabe56fb7baebf3771fb2e7f33e2aadd390499e2633\": rpc error: code = NotFound desc = could not find container \"940951e30daddc15a0b58cabe56fb7baebf3771fb2e7f33e2aadd390499e2633\": container with ID starting with 940951e30daddc15a0b58cabe56fb7baebf3771fb2e7f33e2aadd390499e2633 not found: ID does not exist" Oct 14 10:13:16 crc kubenswrapper[4698]: I1014 10:13:16.430530 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a1d31398-a2ca-48ab-b0b1-425d83342bbe-config" (OuterVolumeSpecName: "config") pod "a1d31398-a2ca-48ab-b0b1-425d83342bbe" (UID: "a1d31398-a2ca-48ab-b0b1-425d83342bbe"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 10:13:16 crc kubenswrapper[4698]: I1014 10:13:16.431751 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a1d31398-a2ca-48ab-b0b1-425d83342bbe-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "a1d31398-a2ca-48ab-b0b1-425d83342bbe" (UID: "a1d31398-a2ca-48ab-b0b1-425d83342bbe"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 10:13:16 crc kubenswrapper[4698]: I1014 10:13:16.437431 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a1d31398-a2ca-48ab-b0b1-425d83342bbe-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "a1d31398-a2ca-48ab-b0b1-425d83342bbe" (UID: "a1d31398-a2ca-48ab-b0b1-425d83342bbe"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 10:13:16 crc kubenswrapper[4698]: I1014 10:13:16.472448 4698 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a1d31398-a2ca-48ab-b0b1-425d83342bbe-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Oct 14 10:13:16 crc kubenswrapper[4698]: I1014 10:13:16.472485 4698 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a1d31398-a2ca-48ab-b0b1-425d83342bbe-dns-svc\") on node \"crc\" DevicePath \"\"" Oct 14 10:13:16 crc kubenswrapper[4698]: I1014 10:13:16.472498 4698 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a1d31398-a2ca-48ab-b0b1-425d83342bbe-config\") on node \"crc\" DevicePath \"\"" Oct 14 10:13:16 crc kubenswrapper[4698]: I1014 10:13:16.582712 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-9n6ld" Oct 14 10:13:16 crc kubenswrapper[4698]: I1014 10:13:16.718946 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7fd796d7df-z6fd5"] Oct 14 10:13:16 crc kubenswrapper[4698]: I1014 10:13:16.728283 4698 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7fd796d7df-z6fd5"] Oct 14 10:13:17 crc kubenswrapper[4698]: I1014 10:13:17.033316 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a1d31398-a2ca-48ab-b0b1-425d83342bbe" path="/var/lib/kubelet/pods/a1d31398-a2ca-48ab-b0b1-425d83342bbe/volumes" Oct 14 10:13:17 crc kubenswrapper[4698]: I1014 10:13:17.075674 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-698758b865-9n6ld"] Oct 14 10:13:17 crc kubenswrapper[4698]: W1014 10:13:17.094229 4698 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod414ba38b_6cfb_48ae_b818_6f8544558bf1.slice/crio-c67d968e5fc4171575050fd317cbe40aa82715a02641d30ef7cefa664e8368b5 WatchSource:0}: Error finding container c67d968e5fc4171575050fd317cbe40aa82715a02641d30ef7cefa664e8368b5: Status 404 returned error can't find the container with id c67d968e5fc4171575050fd317cbe40aa82715a02641d30ef7cefa664e8368b5 Oct 14 10:13:17 crc kubenswrapper[4698]: I1014 10:13:17.186920 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-storage-0"] Oct 14 10:13:17 crc kubenswrapper[4698]: E1014 10:13:17.187403 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a1d31398-a2ca-48ab-b0b1-425d83342bbe" containerName="init" Oct 14 10:13:17 crc kubenswrapper[4698]: I1014 10:13:17.187465 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="a1d31398-a2ca-48ab-b0b1-425d83342bbe" containerName="init" Oct 14 10:13:17 crc kubenswrapper[4698]: E1014 10:13:17.187519 4698 cpu_manager.go:410] "RemoveStaleState: removing 
container" podUID="a1d31398-a2ca-48ab-b0b1-425d83342bbe" containerName="dnsmasq-dns" Oct 14 10:13:17 crc kubenswrapper[4698]: I1014 10:13:17.187568 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="a1d31398-a2ca-48ab-b0b1-425d83342bbe" containerName="dnsmasq-dns" Oct 14 10:13:17 crc kubenswrapper[4698]: I1014 10:13:17.187839 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="a1d31398-a2ca-48ab-b0b1-425d83342bbe" containerName="dnsmasq-dns" Oct 14 10:13:17 crc kubenswrapper[4698]: I1014 10:13:17.193086 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-storage-0" Oct 14 10:13:17 crc kubenswrapper[4698]: I1014 10:13:17.195892 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-swift-dockercfg-6d54v" Oct 14 10:13:17 crc kubenswrapper[4698]: I1014 10:13:17.196217 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-files" Oct 14 10:13:17 crc kubenswrapper[4698]: I1014 10:13:17.196333 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-conf" Oct 14 10:13:17 crc kubenswrapper[4698]: I1014 10:13:17.196619 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-storage-config-data" Oct 14 10:13:17 crc kubenswrapper[4698]: I1014 10:13:17.258914 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Oct 14 10:13:17 crc kubenswrapper[4698]: I1014 10:13:17.285868 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/0ca6729c-82ca-4f89-b732-7154ec9224bb-cache\") pod \"swift-storage-0\" (UID: \"0ca6729c-82ca-4f89-b732-7154ec9224bb\") " pod="openstack/swift-storage-0" Oct 14 10:13:17 crc kubenswrapper[4698]: I1014 10:13:17.286302 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-dq5p9\" (UniqueName: \"kubernetes.io/projected/0ca6729c-82ca-4f89-b732-7154ec9224bb-kube-api-access-dq5p9\") pod \"swift-storage-0\" (UID: \"0ca6729c-82ca-4f89-b732-7154ec9224bb\") " pod="openstack/swift-storage-0" Oct 14 10:13:17 crc kubenswrapper[4698]: I1014 10:13:17.286343 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"swift-storage-0\" (UID: \"0ca6729c-82ca-4f89-b732-7154ec9224bb\") " pod="openstack/swift-storage-0" Oct 14 10:13:17 crc kubenswrapper[4698]: I1014 10:13:17.286383 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/0ca6729c-82ca-4f89-b732-7154ec9224bb-etc-swift\") pod \"swift-storage-0\" (UID: \"0ca6729c-82ca-4f89-b732-7154ec9224bb\") " pod="openstack/swift-storage-0" Oct 14 10:13:17 crc kubenswrapper[4698]: I1014 10:13:17.286462 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/0ca6729c-82ca-4f89-b732-7154ec9224bb-lock\") pod \"swift-storage-0\" (UID: \"0ca6729c-82ca-4f89-b732-7154ec9224bb\") " pod="openstack/swift-storage-0" Oct 14 10:13:17 crc kubenswrapper[4698]: I1014 10:13:17.361291 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"b35b471e-f011-42c9-998a-d23ec21ad1a9","Type":"ContainerStarted","Data":"1e9d795d3b03f884f88861aa964046640a944b482ecbb0314208eab2cbdac64b"} Oct 14 10:13:17 crc kubenswrapper[4698]: I1014 10:13:17.361344 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"b35b471e-f011-42c9-998a-d23ec21ad1a9","Type":"ContainerStarted","Data":"10b590ba1c59d5d7a8fef4abf29407a9c7949007eb660463fefaf49f022faa0c"} Oct 14 10:13:17 crc kubenswrapper[4698]: I1014 10:13:17.362251 4698 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-northd-0" Oct 14 10:13:17 crc kubenswrapper[4698]: I1014 10:13:17.368125 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"90244b70-b4fa-4b40-a962-119168333566","Type":"ContainerStarted","Data":"91fff6a2414973e104f59588e46aa20e8dfb36f4991875ae16709510775c3d24"} Oct 14 10:13:17 crc kubenswrapper[4698]: I1014 10:13:17.371224 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"9f3078d2-396d-4f2a-913f-b5c5555e568d","Type":"ContainerStarted","Data":"2153ee0e4496330f7b5146d4f6c8301f255245a31f9c9412a010564b7479c3b8"} Oct 14 10:13:17 crc kubenswrapper[4698]: I1014 10:13:17.374160 4698 generic.go:334] "Generic (PLEG): container finished" podID="414ba38b-6cfb-48ae-b818-6f8544558bf1" containerID="3b367c288626c156bc47c704b646c9112c0b0e5a79bb11340b8697a51beb034c" exitCode=0 Oct 14 10:13:17 crc kubenswrapper[4698]: I1014 10:13:17.374223 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-9n6ld" event={"ID":"414ba38b-6cfb-48ae-b818-6f8544558bf1","Type":"ContainerDied","Data":"3b367c288626c156bc47c704b646c9112c0b0e5a79bb11340b8697a51beb034c"} Oct 14 10:13:17 crc kubenswrapper[4698]: I1014 10:13:17.374259 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-9n6ld" event={"ID":"414ba38b-6cfb-48ae-b818-6f8544558bf1","Type":"ContainerStarted","Data":"c67d968e5fc4171575050fd317cbe40aa82715a02641d30ef7cefa664e8368b5"} Oct 14 10:13:17 crc kubenswrapper[4698]: I1014 10:13:17.387573 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/0ca6729c-82ca-4f89-b732-7154ec9224bb-etc-swift\") pod \"swift-storage-0\" (UID: \"0ca6729c-82ca-4f89-b732-7154ec9224bb\") " pod="openstack/swift-storage-0" Oct 14 10:13:17 crc kubenswrapper[4698]: E1014 
10:13:17.387716 4698 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Oct 14 10:13:17 crc kubenswrapper[4698]: E1014 10:13:17.387745 4698 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Oct 14 10:13:17 crc kubenswrapper[4698]: E1014 10:13:17.387813 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/0ca6729c-82ca-4f89-b732-7154ec9224bb-etc-swift podName:0ca6729c-82ca-4f89-b732-7154ec9224bb nodeName:}" failed. No retries permitted until 2025-10-14 10:13:17.887793929 +0000 UTC m=+979.585093345 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/0ca6729c-82ca-4f89-b732-7154ec9224bb-etc-swift") pod "swift-storage-0" (UID: "0ca6729c-82ca-4f89-b732-7154ec9224bb") : configmap "swift-ring-files" not found Oct 14 10:13:17 crc kubenswrapper[4698]: I1014 10:13:17.387724 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/0ca6729c-82ca-4f89-b732-7154ec9224bb-lock\") pod \"swift-storage-0\" (UID: \"0ca6729c-82ca-4f89-b732-7154ec9224bb\") " pod="openstack/swift-storage-0" Oct 14 10:13:17 crc kubenswrapper[4698]: I1014 10:13:17.387991 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/0ca6729c-82ca-4f89-b732-7154ec9224bb-cache\") pod \"swift-storage-0\" (UID: \"0ca6729c-82ca-4f89-b732-7154ec9224bb\") " pod="openstack/swift-storage-0" Oct 14 10:13:17 crc kubenswrapper[4698]: I1014 10:13:17.388222 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dq5p9\" (UniqueName: \"kubernetes.io/projected/0ca6729c-82ca-4f89-b732-7154ec9224bb-kube-api-access-dq5p9\") pod \"swift-storage-0\" (UID: \"0ca6729c-82ca-4f89-b732-7154ec9224bb\") 
" pod="openstack/swift-storage-0" Oct 14 10:13:17 crc kubenswrapper[4698]: I1014 10:13:17.388403 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"swift-storage-0\" (UID: \"0ca6729c-82ca-4f89-b732-7154ec9224bb\") " pod="openstack/swift-storage-0" Oct 14 10:13:17 crc kubenswrapper[4698]: I1014 10:13:17.388347 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/0ca6729c-82ca-4f89-b732-7154ec9224bb-cache\") pod \"swift-storage-0\" (UID: \"0ca6729c-82ca-4f89-b732-7154ec9224bb\") " pod="openstack/swift-storage-0" Oct 14 10:13:17 crc kubenswrapper[4698]: I1014 10:13:17.388309 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/0ca6729c-82ca-4f89-b732-7154ec9224bb-lock\") pod \"swift-storage-0\" (UID: \"0ca6729c-82ca-4f89-b732-7154ec9224bb\") " pod="openstack/swift-storage-0" Oct 14 10:13:17 crc kubenswrapper[4698]: I1014 10:13:17.388865 4698 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"swift-storage-0\" (UID: \"0ca6729c-82ca-4f89-b732-7154ec9224bb\") device mount path \"/mnt/openstack/pv08\"" pod="openstack/swift-storage-0" Oct 14 10:13:17 crc kubenswrapper[4698]: I1014 10:13:17.397656 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-northd-0" podStartSLOduration=2.9996560150000002 podStartE2EDuration="4.397634334s" podCreationTimestamp="2025-10-14 10:13:13 +0000 UTC" firstStartedPulling="2025-10-14 10:13:14.634320899 +0000 UTC m=+976.331620345" lastFinishedPulling="2025-10-14 10:13:16.032299248 +0000 UTC m=+977.729598664" observedRunningTime="2025-10-14 10:13:17.379024165 +0000 UTC m=+979.076323581" watchObservedRunningTime="2025-10-14 
10:13:17.397634334 +0000 UTC m=+979.094933770" Oct 14 10:13:17 crc kubenswrapper[4698]: I1014 10:13:17.411152 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dq5p9\" (UniqueName: \"kubernetes.io/projected/0ca6729c-82ca-4f89-b732-7154ec9224bb-kube-api-access-dq5p9\") pod \"swift-storage-0\" (UID: \"0ca6729c-82ca-4f89-b732-7154ec9224bb\") " pod="openstack/swift-storage-0" Oct 14 10:13:17 crc kubenswrapper[4698]: I1014 10:13:17.414703 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-cell1-galera-0" podStartSLOduration=23.554503265 podStartE2EDuration="35.414665937s" podCreationTimestamp="2025-10-14 10:12:42 +0000 UTC" firstStartedPulling="2025-10-14 10:12:53.847175441 +0000 UTC m=+955.544474847" lastFinishedPulling="2025-10-14 10:13:05.707338103 +0000 UTC m=+967.404637519" observedRunningTime="2025-10-14 10:13:17.414006637 +0000 UTC m=+979.111306063" watchObservedRunningTime="2025-10-14 10:13:17.414665937 +0000 UTC m=+979.111965363" Oct 14 10:13:17 crc kubenswrapper[4698]: I1014 10:13:17.417148 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"swift-storage-0\" (UID: \"0ca6729c-82ca-4f89-b732-7154ec9224bb\") " pod="openstack/swift-storage-0" Oct 14 10:13:17 crc kubenswrapper[4698]: I1014 10:13:17.500318 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-galera-0" podStartSLOduration=24.783530096 podStartE2EDuration="37.500297714s" podCreationTimestamp="2025-10-14 10:12:40 +0000 UTC" firstStartedPulling="2025-10-14 10:12:53.444664194 +0000 UTC m=+955.141963610" lastFinishedPulling="2025-10-14 10:13:06.161431812 +0000 UTC m=+967.858731228" observedRunningTime="2025-10-14 10:13:17.490036817 +0000 UTC m=+979.187336263" watchObservedRunningTime="2025-10-14 10:13:17.500297714 +0000 UTC m=+979.197597130" Oct 14 10:13:17 
crc kubenswrapper[4698]: I1014 10:13:17.898051 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/0ca6729c-82ca-4f89-b732-7154ec9224bb-etc-swift\") pod \"swift-storage-0\" (UID: \"0ca6729c-82ca-4f89-b732-7154ec9224bb\") " pod="openstack/swift-storage-0" Oct 14 10:13:17 crc kubenswrapper[4698]: E1014 10:13:17.898468 4698 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Oct 14 10:13:17 crc kubenswrapper[4698]: E1014 10:13:17.898541 4698 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Oct 14 10:13:17 crc kubenswrapper[4698]: E1014 10:13:17.898659 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/0ca6729c-82ca-4f89-b732-7154ec9224bb-etc-swift podName:0ca6729c-82ca-4f89-b732-7154ec9224bb nodeName:}" failed. No retries permitted until 2025-10-14 10:13:18.898618389 +0000 UTC m=+980.595917815 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/0ca6729c-82ca-4f89-b732-7154ec9224bb-etc-swift") pod "swift-storage-0" (UID: "0ca6729c-82ca-4f89-b732-7154ec9224bb") : configmap "swift-ring-files" not found Oct 14 10:13:18 crc kubenswrapper[4698]: I1014 10:13:18.386681 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-9n6ld" event={"ID":"414ba38b-6cfb-48ae-b818-6f8544558bf1","Type":"ContainerStarted","Data":"c795647333c10b386e2f260c64304f0cb354b12fd52757a250b2ba2949df1ebf"} Oct 14 10:13:18 crc kubenswrapper[4698]: I1014 10:13:18.416122 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-698758b865-9n6ld" podStartSLOduration=2.416100343 podStartE2EDuration="2.416100343s" podCreationTimestamp="2025-10-14 10:13:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 10:13:18.413424765 +0000 UTC m=+980.110724201" watchObservedRunningTime="2025-10-14 10:13:18.416100343 +0000 UTC m=+980.113399749" Oct 14 10:13:18 crc kubenswrapper[4698]: I1014 10:13:18.918262 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/0ca6729c-82ca-4f89-b732-7154ec9224bb-etc-swift\") pod \"swift-storage-0\" (UID: \"0ca6729c-82ca-4f89-b732-7154ec9224bb\") " pod="openstack/swift-storage-0" Oct 14 10:13:18 crc kubenswrapper[4698]: E1014 10:13:18.918664 4698 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Oct 14 10:13:18 crc kubenswrapper[4698]: E1014 10:13:18.918721 4698 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Oct 14 10:13:18 crc kubenswrapper[4698]: E1014 10:13:18.918858 4698 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/projected/0ca6729c-82ca-4f89-b732-7154ec9224bb-etc-swift podName:0ca6729c-82ca-4f89-b732-7154ec9224bb nodeName:}" failed. No retries permitted until 2025-10-14 10:13:20.918818968 +0000 UTC m=+982.616118424 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/0ca6729c-82ca-4f89-b732-7154ec9224bb-etc-swift") pod "swift-storage-0" (UID: "0ca6729c-82ca-4f89-b732-7154ec9224bb") : configmap "swift-ring-files" not found Oct 14 10:13:19 crc kubenswrapper[4698]: I1014 10:13:19.395978 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-698758b865-9n6ld" Oct 14 10:13:20 crc kubenswrapper[4698]: I1014 10:13:20.409164 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"6702faf6-e3b2-44f8-a033-ba5fd85af368","Type":"ContainerStarted","Data":"ac79af97500d7b1d807b20a73e35f673d5d1b199a4225d89ca856b8133da57d9"} Oct 14 10:13:20 crc kubenswrapper[4698]: I1014 10:13:20.409977 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Oct 14 10:13:20 crc kubenswrapper[4698]: I1014 10:13:20.427968 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=9.796084096 podStartE2EDuration="35.427942233s" podCreationTimestamp="2025-10-14 10:12:45 +0000 UTC" firstStartedPulling="2025-10-14 10:12:53.849359504 +0000 UTC m=+955.546658920" lastFinishedPulling="2025-10-14 10:13:19.481217651 +0000 UTC m=+981.178517057" observedRunningTime="2025-10-14 10:13:20.422932148 +0000 UTC m=+982.120231574" watchObservedRunningTime="2025-10-14 10:13:20.427942233 +0000 UTC m=+982.125241649" Oct 14 10:13:20 crc kubenswrapper[4698]: I1014 10:13:20.994640 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: 
\"kubernetes.io/projected/0ca6729c-82ca-4f89-b732-7154ec9224bb-etc-swift\") pod \"swift-storage-0\" (UID: \"0ca6729c-82ca-4f89-b732-7154ec9224bb\") " pod="openstack/swift-storage-0" Oct 14 10:13:20 crc kubenswrapper[4698]: E1014 10:13:20.994942 4698 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Oct 14 10:13:20 crc kubenswrapper[4698]: E1014 10:13:20.995424 4698 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Oct 14 10:13:20 crc kubenswrapper[4698]: E1014 10:13:20.995525 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/0ca6729c-82ca-4f89-b732-7154ec9224bb-etc-swift podName:0ca6729c-82ca-4f89-b732-7154ec9224bb nodeName:}" failed. No retries permitted until 2025-10-14 10:13:24.995503315 +0000 UTC m=+986.692802731 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/0ca6729c-82ca-4f89-b732-7154ec9224bb-etc-swift") pod "swift-storage-0" (UID: "0ca6729c-82ca-4f89-b732-7154ec9224bb") : configmap "swift-ring-files" not found Oct 14 10:13:21 crc kubenswrapper[4698]: I1014 10:13:21.143245 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-ring-rebalance-vwbrf"] Oct 14 10:13:21 crc kubenswrapper[4698]: I1014 10:13:21.144515 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-vwbrf" Oct 14 10:13:21 crc kubenswrapper[4698]: I1014 10:13:21.146928 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-scripts" Oct 14 10:13:21 crc kubenswrapper[4698]: I1014 10:13:21.148229 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Oct 14 10:13:21 crc kubenswrapper[4698]: I1014 10:13:21.148276 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-config-data" Oct 14 10:13:21 crc kubenswrapper[4698]: I1014 10:13:21.172610 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-vwbrf"] Oct 14 10:13:21 crc kubenswrapper[4698]: I1014 10:13:21.183967 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-ring-rebalance-vwbrf"] Oct 14 10:13:21 crc kubenswrapper[4698]: E1014 10:13:21.184829 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[combined-ca-bundle dispersionconf etc-swift kube-api-access-5sx5b ring-data-devices scripts swiftconf], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openstack/swift-ring-rebalance-vwbrf" podUID="b297d5f2-1407-46c4-9365-031883f300ec" Oct 14 10:13:21 crc kubenswrapper[4698]: I1014 10:13:21.199118 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5sx5b\" (UniqueName: \"kubernetes.io/projected/b297d5f2-1407-46c4-9365-031883f300ec-kube-api-access-5sx5b\") pod \"swift-ring-rebalance-vwbrf\" (UID: \"b297d5f2-1407-46c4-9365-031883f300ec\") " pod="openstack/swift-ring-rebalance-vwbrf" Oct 14 10:13:21 crc kubenswrapper[4698]: I1014 10:13:21.199193 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/b297d5f2-1407-46c4-9365-031883f300ec-dispersionconf\") pod 
\"swift-ring-rebalance-vwbrf\" (UID: \"b297d5f2-1407-46c4-9365-031883f300ec\") " pod="openstack/swift-ring-rebalance-vwbrf" Oct 14 10:13:21 crc kubenswrapper[4698]: I1014 10:13:21.199257 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b297d5f2-1407-46c4-9365-031883f300ec-scripts\") pod \"swift-ring-rebalance-vwbrf\" (UID: \"b297d5f2-1407-46c4-9365-031883f300ec\") " pod="openstack/swift-ring-rebalance-vwbrf" Oct 14 10:13:21 crc kubenswrapper[4698]: I1014 10:13:21.199291 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/b297d5f2-1407-46c4-9365-031883f300ec-ring-data-devices\") pod \"swift-ring-rebalance-vwbrf\" (UID: \"b297d5f2-1407-46c4-9365-031883f300ec\") " pod="openstack/swift-ring-rebalance-vwbrf" Oct 14 10:13:21 crc kubenswrapper[4698]: I1014 10:13:21.199349 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/b297d5f2-1407-46c4-9365-031883f300ec-swiftconf\") pod \"swift-ring-rebalance-vwbrf\" (UID: \"b297d5f2-1407-46c4-9365-031883f300ec\") " pod="openstack/swift-ring-rebalance-vwbrf" Oct 14 10:13:21 crc kubenswrapper[4698]: I1014 10:13:21.199370 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/b297d5f2-1407-46c4-9365-031883f300ec-etc-swift\") pod \"swift-ring-rebalance-vwbrf\" (UID: \"b297d5f2-1407-46c4-9365-031883f300ec\") " pod="openstack/swift-ring-rebalance-vwbrf" Oct 14 10:13:21 crc kubenswrapper[4698]: I1014 10:13:21.199389 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b297d5f2-1407-46c4-9365-031883f300ec-combined-ca-bundle\") pod 
\"swift-ring-rebalance-vwbrf\" (UID: \"b297d5f2-1407-46c4-9365-031883f300ec\") " pod="openstack/swift-ring-rebalance-vwbrf" Oct 14 10:13:21 crc kubenswrapper[4698]: I1014 10:13:21.201701 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-ring-rebalance-fmnnt"] Oct 14 10:13:21 crc kubenswrapper[4698]: I1014 10:13:21.205710 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-fmnnt" Oct 14 10:13:21 crc kubenswrapper[4698]: I1014 10:13:21.218971 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-fmnnt"] Oct 14 10:13:21 crc kubenswrapper[4698]: I1014 10:13:21.301273 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5sx5b\" (UniqueName: \"kubernetes.io/projected/b297d5f2-1407-46c4-9365-031883f300ec-kube-api-access-5sx5b\") pod \"swift-ring-rebalance-vwbrf\" (UID: \"b297d5f2-1407-46c4-9365-031883f300ec\") " pod="openstack/swift-ring-rebalance-vwbrf" Oct 14 10:13:21 crc kubenswrapper[4698]: I1014 10:13:21.301351 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/332a15eb-0ada-4f42-a34e-a7d2e9c46af2-ring-data-devices\") pod \"swift-ring-rebalance-fmnnt\" (UID: \"332a15eb-0ada-4f42-a34e-a7d2e9c46af2\") " pod="openstack/swift-ring-rebalance-fmnnt" Oct 14 10:13:21 crc kubenswrapper[4698]: I1014 10:13:21.301383 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/b297d5f2-1407-46c4-9365-031883f300ec-dispersionconf\") pod \"swift-ring-rebalance-vwbrf\" (UID: \"b297d5f2-1407-46c4-9365-031883f300ec\") " pod="openstack/swift-ring-rebalance-vwbrf" Oct 14 10:13:21 crc kubenswrapper[4698]: I1014 10:13:21.301437 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" 
(UniqueName: \"kubernetes.io/configmap/332a15eb-0ada-4f42-a34e-a7d2e9c46af2-scripts\") pod \"swift-ring-rebalance-fmnnt\" (UID: \"332a15eb-0ada-4f42-a34e-a7d2e9c46af2\") " pod="openstack/swift-ring-rebalance-fmnnt" Oct 14 10:13:21 crc kubenswrapper[4698]: I1014 10:13:21.301494 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/332a15eb-0ada-4f42-a34e-a7d2e9c46af2-combined-ca-bundle\") pod \"swift-ring-rebalance-fmnnt\" (UID: \"332a15eb-0ada-4f42-a34e-a7d2e9c46af2\") " pod="openstack/swift-ring-rebalance-fmnnt" Oct 14 10:13:21 crc kubenswrapper[4698]: I1014 10:13:21.301525 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b297d5f2-1407-46c4-9365-031883f300ec-scripts\") pod \"swift-ring-rebalance-vwbrf\" (UID: \"b297d5f2-1407-46c4-9365-031883f300ec\") " pod="openstack/swift-ring-rebalance-vwbrf" Oct 14 10:13:21 crc kubenswrapper[4698]: I1014 10:13:21.301546 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/332a15eb-0ada-4f42-a34e-a7d2e9c46af2-etc-swift\") pod \"swift-ring-rebalance-fmnnt\" (UID: \"332a15eb-0ada-4f42-a34e-a7d2e9c46af2\") " pod="openstack/swift-ring-rebalance-fmnnt" Oct 14 10:13:21 crc kubenswrapper[4698]: I1014 10:13:21.301570 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vbszq\" (UniqueName: \"kubernetes.io/projected/332a15eb-0ada-4f42-a34e-a7d2e9c46af2-kube-api-access-vbszq\") pod \"swift-ring-rebalance-fmnnt\" (UID: \"332a15eb-0ada-4f42-a34e-a7d2e9c46af2\") " pod="openstack/swift-ring-rebalance-fmnnt" Oct 14 10:13:21 crc kubenswrapper[4698]: I1014 10:13:21.301597 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" 
(UniqueName: \"kubernetes.io/secret/332a15eb-0ada-4f42-a34e-a7d2e9c46af2-dispersionconf\") pod \"swift-ring-rebalance-fmnnt\" (UID: \"332a15eb-0ada-4f42-a34e-a7d2e9c46af2\") " pod="openstack/swift-ring-rebalance-fmnnt" Oct 14 10:13:21 crc kubenswrapper[4698]: I1014 10:13:21.301634 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/b297d5f2-1407-46c4-9365-031883f300ec-ring-data-devices\") pod \"swift-ring-rebalance-vwbrf\" (UID: \"b297d5f2-1407-46c4-9365-031883f300ec\") " pod="openstack/swift-ring-rebalance-vwbrf" Oct 14 10:13:21 crc kubenswrapper[4698]: I1014 10:13:21.301671 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/b297d5f2-1407-46c4-9365-031883f300ec-swiftconf\") pod \"swift-ring-rebalance-vwbrf\" (UID: \"b297d5f2-1407-46c4-9365-031883f300ec\") " pod="openstack/swift-ring-rebalance-vwbrf" Oct 14 10:13:21 crc kubenswrapper[4698]: I1014 10:13:21.301692 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/332a15eb-0ada-4f42-a34e-a7d2e9c46af2-swiftconf\") pod \"swift-ring-rebalance-fmnnt\" (UID: \"332a15eb-0ada-4f42-a34e-a7d2e9c46af2\") " pod="openstack/swift-ring-rebalance-fmnnt" Oct 14 10:13:21 crc kubenswrapper[4698]: I1014 10:13:21.301720 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/b297d5f2-1407-46c4-9365-031883f300ec-etc-swift\") pod \"swift-ring-rebalance-vwbrf\" (UID: \"b297d5f2-1407-46c4-9365-031883f300ec\") " pod="openstack/swift-ring-rebalance-vwbrf" Oct 14 10:13:21 crc kubenswrapper[4698]: I1014 10:13:21.301743 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/b297d5f2-1407-46c4-9365-031883f300ec-combined-ca-bundle\") pod \"swift-ring-rebalance-vwbrf\" (UID: \"b297d5f2-1407-46c4-9365-031883f300ec\") " pod="openstack/swift-ring-rebalance-vwbrf" Oct 14 10:13:21 crc kubenswrapper[4698]: I1014 10:13:21.304713 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b297d5f2-1407-46c4-9365-031883f300ec-scripts\") pod \"swift-ring-rebalance-vwbrf\" (UID: \"b297d5f2-1407-46c4-9365-031883f300ec\") " pod="openstack/swift-ring-rebalance-vwbrf" Oct 14 10:13:21 crc kubenswrapper[4698]: I1014 10:13:21.305224 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/b297d5f2-1407-46c4-9365-031883f300ec-etc-swift\") pod \"swift-ring-rebalance-vwbrf\" (UID: \"b297d5f2-1407-46c4-9365-031883f300ec\") " pod="openstack/swift-ring-rebalance-vwbrf" Oct 14 10:13:21 crc kubenswrapper[4698]: I1014 10:13:21.305501 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/b297d5f2-1407-46c4-9365-031883f300ec-ring-data-devices\") pod \"swift-ring-rebalance-vwbrf\" (UID: \"b297d5f2-1407-46c4-9365-031883f300ec\") " pod="openstack/swift-ring-rebalance-vwbrf" Oct 14 10:13:21 crc kubenswrapper[4698]: I1014 10:13:21.311036 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/b297d5f2-1407-46c4-9365-031883f300ec-dispersionconf\") pod \"swift-ring-rebalance-vwbrf\" (UID: \"b297d5f2-1407-46c4-9365-031883f300ec\") " pod="openstack/swift-ring-rebalance-vwbrf" Oct 14 10:13:21 crc kubenswrapper[4698]: I1014 10:13:21.311991 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b297d5f2-1407-46c4-9365-031883f300ec-combined-ca-bundle\") pod \"swift-ring-rebalance-vwbrf\" (UID: 
\"b297d5f2-1407-46c4-9365-031883f300ec\") " pod="openstack/swift-ring-rebalance-vwbrf" Oct 14 10:13:21 crc kubenswrapper[4698]: I1014 10:13:21.313139 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/b297d5f2-1407-46c4-9365-031883f300ec-swiftconf\") pod \"swift-ring-rebalance-vwbrf\" (UID: \"b297d5f2-1407-46c4-9365-031883f300ec\") " pod="openstack/swift-ring-rebalance-vwbrf" Oct 14 10:13:21 crc kubenswrapper[4698]: I1014 10:13:21.329327 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5sx5b\" (UniqueName: \"kubernetes.io/projected/b297d5f2-1407-46c4-9365-031883f300ec-kube-api-access-5sx5b\") pod \"swift-ring-rebalance-vwbrf\" (UID: \"b297d5f2-1407-46c4-9365-031883f300ec\") " pod="openstack/swift-ring-rebalance-vwbrf" Oct 14 10:13:21 crc kubenswrapper[4698]: I1014 10:13:21.403604 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/332a15eb-0ada-4f42-a34e-a7d2e9c46af2-scripts\") pod \"swift-ring-rebalance-fmnnt\" (UID: \"332a15eb-0ada-4f42-a34e-a7d2e9c46af2\") " pod="openstack/swift-ring-rebalance-fmnnt" Oct 14 10:13:21 crc kubenswrapper[4698]: I1014 10:13:21.403718 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/332a15eb-0ada-4f42-a34e-a7d2e9c46af2-combined-ca-bundle\") pod \"swift-ring-rebalance-fmnnt\" (UID: \"332a15eb-0ada-4f42-a34e-a7d2e9c46af2\") " pod="openstack/swift-ring-rebalance-fmnnt" Oct 14 10:13:21 crc kubenswrapper[4698]: I1014 10:13:21.403755 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/332a15eb-0ada-4f42-a34e-a7d2e9c46af2-etc-swift\") pod \"swift-ring-rebalance-fmnnt\" (UID: \"332a15eb-0ada-4f42-a34e-a7d2e9c46af2\") " pod="openstack/swift-ring-rebalance-fmnnt" Oct 14 10:13:21 crc 
kubenswrapper[4698]: I1014 10:13:21.403794 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vbszq\" (UniqueName: \"kubernetes.io/projected/332a15eb-0ada-4f42-a34e-a7d2e9c46af2-kube-api-access-vbszq\") pod \"swift-ring-rebalance-fmnnt\" (UID: \"332a15eb-0ada-4f42-a34e-a7d2e9c46af2\") " pod="openstack/swift-ring-rebalance-fmnnt" Oct 14 10:13:21 crc kubenswrapper[4698]: I1014 10:13:21.403819 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/332a15eb-0ada-4f42-a34e-a7d2e9c46af2-dispersionconf\") pod \"swift-ring-rebalance-fmnnt\" (UID: \"332a15eb-0ada-4f42-a34e-a7d2e9c46af2\") " pod="openstack/swift-ring-rebalance-fmnnt" Oct 14 10:13:21 crc kubenswrapper[4698]: I1014 10:13:21.403867 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/332a15eb-0ada-4f42-a34e-a7d2e9c46af2-swiftconf\") pod \"swift-ring-rebalance-fmnnt\" (UID: \"332a15eb-0ada-4f42-a34e-a7d2e9c46af2\") " pod="openstack/swift-ring-rebalance-fmnnt" Oct 14 10:13:21 crc kubenswrapper[4698]: I1014 10:13:21.403982 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/332a15eb-0ada-4f42-a34e-a7d2e9c46af2-ring-data-devices\") pod \"swift-ring-rebalance-fmnnt\" (UID: \"332a15eb-0ada-4f42-a34e-a7d2e9c46af2\") " pod="openstack/swift-ring-rebalance-fmnnt" Oct 14 10:13:21 crc kubenswrapper[4698]: I1014 10:13:21.404540 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/332a15eb-0ada-4f42-a34e-a7d2e9c46af2-etc-swift\") pod \"swift-ring-rebalance-fmnnt\" (UID: \"332a15eb-0ada-4f42-a34e-a7d2e9c46af2\") " pod="openstack/swift-ring-rebalance-fmnnt" Oct 14 10:13:21 crc kubenswrapper[4698]: I1014 10:13:21.404570 4698 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/332a15eb-0ada-4f42-a34e-a7d2e9c46af2-scripts\") pod \"swift-ring-rebalance-fmnnt\" (UID: \"332a15eb-0ada-4f42-a34e-a7d2e9c46af2\") " pod="openstack/swift-ring-rebalance-fmnnt" Oct 14 10:13:21 crc kubenswrapper[4698]: I1014 10:13:21.404728 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/332a15eb-0ada-4f42-a34e-a7d2e9c46af2-ring-data-devices\") pod \"swift-ring-rebalance-fmnnt\" (UID: \"332a15eb-0ada-4f42-a34e-a7d2e9c46af2\") " pod="openstack/swift-ring-rebalance-fmnnt" Oct 14 10:13:21 crc kubenswrapper[4698]: I1014 10:13:21.408492 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/332a15eb-0ada-4f42-a34e-a7d2e9c46af2-swiftconf\") pod \"swift-ring-rebalance-fmnnt\" (UID: \"332a15eb-0ada-4f42-a34e-a7d2e9c46af2\") " pod="openstack/swift-ring-rebalance-fmnnt" Oct 14 10:13:21 crc kubenswrapper[4698]: I1014 10:13:21.408989 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/332a15eb-0ada-4f42-a34e-a7d2e9c46af2-dispersionconf\") pod \"swift-ring-rebalance-fmnnt\" (UID: \"332a15eb-0ada-4f42-a34e-a7d2e9c46af2\") " pod="openstack/swift-ring-rebalance-fmnnt" Oct 14 10:13:21 crc kubenswrapper[4698]: I1014 10:13:21.411357 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/332a15eb-0ada-4f42-a34e-a7d2e9c46af2-combined-ca-bundle\") pod \"swift-ring-rebalance-fmnnt\" (UID: \"332a15eb-0ada-4f42-a34e-a7d2e9c46af2\") " pod="openstack/swift-ring-rebalance-fmnnt" Oct 14 10:13:21 crc kubenswrapper[4698]: I1014 10:13:21.416686 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-vwbrf" Oct 14 10:13:21 crc kubenswrapper[4698]: I1014 10:13:21.427824 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vbszq\" (UniqueName: \"kubernetes.io/projected/332a15eb-0ada-4f42-a34e-a7d2e9c46af2-kube-api-access-vbszq\") pod \"swift-ring-rebalance-fmnnt\" (UID: \"332a15eb-0ada-4f42-a34e-a7d2e9c46af2\") " pod="openstack/swift-ring-rebalance-fmnnt" Oct 14 10:13:21 crc kubenswrapper[4698]: I1014 10:13:21.483197 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-vwbrf" Oct 14 10:13:21 crc kubenswrapper[4698]: I1014 10:13:21.523189 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-fmnnt" Oct 14 10:13:21 crc kubenswrapper[4698]: I1014 10:13:21.607939 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/b297d5f2-1407-46c4-9365-031883f300ec-etc-swift\") pod \"b297d5f2-1407-46c4-9365-031883f300ec\" (UID: \"b297d5f2-1407-46c4-9365-031883f300ec\") " Oct 14 10:13:21 crc kubenswrapper[4698]: I1014 10:13:21.608076 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5sx5b\" (UniqueName: \"kubernetes.io/projected/b297d5f2-1407-46c4-9365-031883f300ec-kube-api-access-5sx5b\") pod \"b297d5f2-1407-46c4-9365-031883f300ec\" (UID: \"b297d5f2-1407-46c4-9365-031883f300ec\") " Oct 14 10:13:21 crc kubenswrapper[4698]: I1014 10:13:21.608143 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b297d5f2-1407-46c4-9365-031883f300ec-combined-ca-bundle\") pod \"b297d5f2-1407-46c4-9365-031883f300ec\" (UID: \"b297d5f2-1407-46c4-9365-031883f300ec\") " Oct 14 10:13:21 crc kubenswrapper[4698]: I1014 10:13:21.608180 4698 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b297d5f2-1407-46c4-9365-031883f300ec-scripts\") pod \"b297d5f2-1407-46c4-9365-031883f300ec\" (UID: \"b297d5f2-1407-46c4-9365-031883f300ec\") " Oct 14 10:13:21 crc kubenswrapper[4698]: I1014 10:13:21.608253 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/b297d5f2-1407-46c4-9365-031883f300ec-swiftconf\") pod \"b297d5f2-1407-46c4-9365-031883f300ec\" (UID: \"b297d5f2-1407-46c4-9365-031883f300ec\") " Oct 14 10:13:21 crc kubenswrapper[4698]: I1014 10:13:21.608307 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/b297d5f2-1407-46c4-9365-031883f300ec-dispersionconf\") pod \"b297d5f2-1407-46c4-9365-031883f300ec\" (UID: \"b297d5f2-1407-46c4-9365-031883f300ec\") " Oct 14 10:13:21 crc kubenswrapper[4698]: I1014 10:13:21.608342 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/b297d5f2-1407-46c4-9365-031883f300ec-ring-data-devices\") pod \"b297d5f2-1407-46c4-9365-031883f300ec\" (UID: \"b297d5f2-1407-46c4-9365-031883f300ec\") " Oct 14 10:13:21 crc kubenswrapper[4698]: I1014 10:13:21.608337 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b297d5f2-1407-46c4-9365-031883f300ec-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "b297d5f2-1407-46c4-9365-031883f300ec" (UID: "b297d5f2-1407-46c4-9365-031883f300ec"). InnerVolumeSpecName "etc-swift". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 10:13:21 crc kubenswrapper[4698]: I1014 10:13:21.608659 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b297d5f2-1407-46c4-9365-031883f300ec-scripts" (OuterVolumeSpecName: "scripts") pod "b297d5f2-1407-46c4-9365-031883f300ec" (UID: "b297d5f2-1407-46c4-9365-031883f300ec"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 10:13:21 crc kubenswrapper[4698]: I1014 10:13:21.608836 4698 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/b297d5f2-1407-46c4-9365-031883f300ec-etc-swift\") on node \"crc\" DevicePath \"\"" Oct 14 10:13:21 crc kubenswrapper[4698]: I1014 10:13:21.608854 4698 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b297d5f2-1407-46c4-9365-031883f300ec-scripts\") on node \"crc\" DevicePath \"\"" Oct 14 10:13:21 crc kubenswrapper[4698]: I1014 10:13:21.609802 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b297d5f2-1407-46c4-9365-031883f300ec-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod "b297d5f2-1407-46c4-9365-031883f300ec" (UID: "b297d5f2-1407-46c4-9365-031883f300ec"). InnerVolumeSpecName "ring-data-devices". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 10:13:21 crc kubenswrapper[4698]: I1014 10:13:21.614066 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b297d5f2-1407-46c4-9365-031883f300ec-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "b297d5f2-1407-46c4-9365-031883f300ec" (UID: "b297d5f2-1407-46c4-9365-031883f300ec"). InnerVolumeSpecName "swiftconf". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 10:13:21 crc kubenswrapper[4698]: I1014 10:13:21.614095 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b297d5f2-1407-46c4-9365-031883f300ec-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b297d5f2-1407-46c4-9365-031883f300ec" (UID: "b297d5f2-1407-46c4-9365-031883f300ec"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 10:13:21 crc kubenswrapper[4698]: I1014 10:13:21.614258 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b297d5f2-1407-46c4-9365-031883f300ec-kube-api-access-5sx5b" (OuterVolumeSpecName: "kube-api-access-5sx5b") pod "b297d5f2-1407-46c4-9365-031883f300ec" (UID: "b297d5f2-1407-46c4-9365-031883f300ec"). InnerVolumeSpecName "kube-api-access-5sx5b". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 10:13:21 crc kubenswrapper[4698]: I1014 10:13:21.615160 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b297d5f2-1407-46c4-9365-031883f300ec-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "b297d5f2-1407-46c4-9365-031883f300ec" (UID: "b297d5f2-1407-46c4-9365-031883f300ec"). InnerVolumeSpecName "dispersionconf". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 10:13:21 crc kubenswrapper[4698]: I1014 10:13:21.710519 4698 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/b297d5f2-1407-46c4-9365-031883f300ec-ring-data-devices\") on node \"crc\" DevicePath \"\"" Oct 14 10:13:21 crc kubenswrapper[4698]: I1014 10:13:21.710563 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5sx5b\" (UniqueName: \"kubernetes.io/projected/b297d5f2-1407-46c4-9365-031883f300ec-kube-api-access-5sx5b\") on node \"crc\" DevicePath \"\"" Oct 14 10:13:21 crc kubenswrapper[4698]: I1014 10:13:21.710574 4698 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b297d5f2-1407-46c4-9365-031883f300ec-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 14 10:13:21 crc kubenswrapper[4698]: I1014 10:13:21.710583 4698 reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/b297d5f2-1407-46c4-9365-031883f300ec-swiftconf\") on node \"crc\" DevicePath \"\"" Oct 14 10:13:21 crc kubenswrapper[4698]: I1014 10:13:21.710593 4698 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/b297d5f2-1407-46c4-9365-031883f300ec-dispersionconf\") on node \"crc\" DevicePath \"\"" Oct 14 10:13:22 crc kubenswrapper[4698]: I1014 10:13:22.049280 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-fmnnt"] Oct 14 10:13:22 crc kubenswrapper[4698]: I1014 10:13:22.332800 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-galera-0" Oct 14 10:13:22 crc kubenswrapper[4698]: I1014 10:13:22.332889 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-galera-0" Oct 14 10:13:22 crc kubenswrapper[4698]: I1014 10:13:22.415867 4698 kubelet.go:2542] "SyncLoop 
(probe)" probe="startup" status="started" pod="openstack/openstack-galera-0" Oct 14 10:13:22 crc kubenswrapper[4698]: I1014 10:13:22.428486 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-fmnnt" event={"ID":"332a15eb-0ada-4f42-a34e-a7d2e9c46af2","Type":"ContainerStarted","Data":"8670958afd1ee2d0046315247bc769ca5f66f24cbf38a30dca66edcbc986a854"} Oct 14 10:13:22 crc kubenswrapper[4698]: I1014 10:13:22.428516 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-vwbrf" Oct 14 10:13:22 crc kubenswrapper[4698]: I1014 10:13:22.513152 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-ring-rebalance-vwbrf"] Oct 14 10:13:22 crc kubenswrapper[4698]: I1014 10:13:22.539594 4698 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/swift-ring-rebalance-vwbrf"] Oct 14 10:13:22 crc kubenswrapper[4698]: I1014 10:13:22.540038 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-galera-0" Oct 14 10:13:23 crc kubenswrapper[4698]: I1014 10:13:23.034327 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b297d5f2-1407-46c4-9365-031883f300ec" path="/var/lib/kubelet/pods/b297d5f2-1407-46c4-9365-031883f300ec/volumes" Oct 14 10:13:23 crc kubenswrapper[4698]: I1014 10:13:23.652169 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-create-mbhc5"] Oct 14 10:13:23 crc kubenswrapper[4698]: I1014 10:13:23.655102 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-create-mbhc5" Oct 14 10:13:23 crc kubenswrapper[4698]: I1014 10:13:23.662272 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-mbhc5"] Oct 14 10:13:23 crc kubenswrapper[4698]: I1014 10:13:23.667949 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-cell1-galera-0" Oct 14 10:13:23 crc kubenswrapper[4698]: I1014 10:13:23.669201 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-cell1-galera-0" Oct 14 10:13:23 crc kubenswrapper[4698]: I1014 10:13:23.737229 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-cell1-galera-0" Oct 14 10:13:23 crc kubenswrapper[4698]: I1014 10:13:23.755223 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wfgx4\" (UniqueName: \"kubernetes.io/projected/2f8480a1-4b59-4b93-8a6f-1c4d71a0b389-kube-api-access-wfgx4\") pod \"keystone-db-create-mbhc5\" (UID: \"2f8480a1-4b59-4b93-8a6f-1c4d71a0b389\") " pod="openstack/keystone-db-create-mbhc5" Oct 14 10:13:23 crc kubenswrapper[4698]: I1014 10:13:23.857526 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wfgx4\" (UniqueName: \"kubernetes.io/projected/2f8480a1-4b59-4b93-8a6f-1c4d71a0b389-kube-api-access-wfgx4\") pod \"keystone-db-create-mbhc5\" (UID: \"2f8480a1-4b59-4b93-8a6f-1c4d71a0b389\") " pod="openstack/keystone-db-create-mbhc5" Oct 14 10:13:23 crc kubenswrapper[4698]: I1014 10:13:23.858817 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-create-7sv8p"] Oct 14 10:13:23 crc kubenswrapper[4698]: I1014 10:13:23.860636 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-create-7sv8p" Oct 14 10:13:23 crc kubenswrapper[4698]: I1014 10:13:23.867077 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-7sv8p"] Oct 14 10:13:23 crc kubenswrapper[4698]: I1014 10:13:23.908610 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wfgx4\" (UniqueName: \"kubernetes.io/projected/2f8480a1-4b59-4b93-8a6f-1c4d71a0b389-kube-api-access-wfgx4\") pod \"keystone-db-create-mbhc5\" (UID: \"2f8480a1-4b59-4b93-8a6f-1c4d71a0b389\") " pod="openstack/keystone-db-create-mbhc5" Oct 14 10:13:23 crc kubenswrapper[4698]: I1014 10:13:23.908640 4698 patch_prober.go:28] interesting pod/machine-config-daemon-lp4sk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 14 10:13:23 crc kubenswrapper[4698]: I1014 10:13:23.908922 4698 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-lp4sk" podUID="c359a8fc-1e2f-49af-8da2-719d52bd969a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 14 10:13:23 crc kubenswrapper[4698]: I1014 10:13:23.959465 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nknhb\" (UniqueName: \"kubernetes.io/projected/57ba3855-be67-4359-b8c2-f62f45279695-kube-api-access-nknhb\") pod \"placement-db-create-7sv8p\" (UID: \"57ba3855-be67-4359-b8c2-f62f45279695\") " pod="openstack/placement-db-create-7sv8p" Oct 14 10:13:23 crc kubenswrapper[4698]: I1014 10:13:23.975169 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-create-mbhc5" Oct 14 10:13:24 crc kubenswrapper[4698]: I1014 10:13:24.061097 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nknhb\" (UniqueName: \"kubernetes.io/projected/57ba3855-be67-4359-b8c2-f62f45279695-kube-api-access-nknhb\") pod \"placement-db-create-7sv8p\" (UID: \"57ba3855-be67-4359-b8c2-f62f45279695\") " pod="openstack/placement-db-create-7sv8p" Oct 14 10:13:24 crc kubenswrapper[4698]: I1014 10:13:24.098505 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nknhb\" (UniqueName: \"kubernetes.io/projected/57ba3855-be67-4359-b8c2-f62f45279695-kube-api-access-nknhb\") pod \"placement-db-create-7sv8p\" (UID: \"57ba3855-be67-4359-b8c2-f62f45279695\") " pod="openstack/placement-db-create-7sv8p" Oct 14 10:13:24 crc kubenswrapper[4698]: I1014 10:13:24.114571 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-create-g7bp5"] Oct 14 10:13:24 crc kubenswrapper[4698]: I1014 10:13:24.116599 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-g7bp5" Oct 14 10:13:24 crc kubenswrapper[4698]: I1014 10:13:24.139026 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-g7bp5"] Oct 14 10:13:24 crc kubenswrapper[4698]: I1014 10:13:24.163695 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bs5bz\" (UniqueName: \"kubernetes.io/projected/25ae6cb7-176c-4e9d-8c3c-b3bdf54ce71e-kube-api-access-bs5bz\") pod \"glance-db-create-g7bp5\" (UID: \"25ae6cb7-176c-4e9d-8c3c-b3bdf54ce71e\") " pod="openstack/glance-db-create-g7bp5" Oct 14 10:13:24 crc kubenswrapper[4698]: I1014 10:13:24.228079 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-create-7sv8p" Oct 14 10:13:24 crc kubenswrapper[4698]: I1014 10:13:24.265661 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bs5bz\" (UniqueName: \"kubernetes.io/projected/25ae6cb7-176c-4e9d-8c3c-b3bdf54ce71e-kube-api-access-bs5bz\") pod \"glance-db-create-g7bp5\" (UID: \"25ae6cb7-176c-4e9d-8c3c-b3bdf54ce71e\") " pod="openstack/glance-db-create-g7bp5" Oct 14 10:13:24 crc kubenswrapper[4698]: I1014 10:13:24.286822 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bs5bz\" (UniqueName: \"kubernetes.io/projected/25ae6cb7-176c-4e9d-8c3c-b3bdf54ce71e-kube-api-access-bs5bz\") pod \"glance-db-create-g7bp5\" (UID: \"25ae6cb7-176c-4e9d-8c3c-b3bdf54ce71e\") " pod="openstack/glance-db-create-g7bp5" Oct 14 10:13:24 crc kubenswrapper[4698]: I1014 10:13:24.447973 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-g7bp5" Oct 14 10:13:24 crc kubenswrapper[4698]: I1014 10:13:24.516977 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-cell1-galera-0" Oct 14 10:13:25 crc kubenswrapper[4698]: I1014 10:13:25.088849 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/0ca6729c-82ca-4f89-b732-7154ec9224bb-etc-swift\") pod \"swift-storage-0\" (UID: \"0ca6729c-82ca-4f89-b732-7154ec9224bb\") " pod="openstack/swift-storage-0" Oct 14 10:13:25 crc kubenswrapper[4698]: E1014 10:13:25.090752 4698 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Oct 14 10:13:25 crc kubenswrapper[4698]: E1014 10:13:25.090845 4698 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Oct 14 10:13:25 crc kubenswrapper[4698]: E1014 
10:13:25.090937 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/0ca6729c-82ca-4f89-b732-7154ec9224bb-etc-swift podName:0ca6729c-82ca-4f89-b732-7154ec9224bb nodeName:}" failed. No retries permitted until 2025-10-14 10:13:33.090906433 +0000 UTC m=+994.788206039 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/0ca6729c-82ca-4f89-b732-7154ec9224bb-etc-swift") pod "swift-storage-0" (UID: "0ca6729c-82ca-4f89-b732-7154ec9224bb") : configmap "swift-ring-files" not found Oct 14 10:13:25 crc kubenswrapper[4698]: I1014 10:13:25.979743 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Oct 14 10:13:26 crc kubenswrapper[4698]: I1014 10:13:26.480491 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-fmnnt" event={"ID":"332a15eb-0ada-4f42-a34e-a7d2e9c46af2","Type":"ContainerStarted","Data":"fadd7648836a665675efac92f610557a586104c8e90f8926592a0d3867061a63"} Oct 14 10:13:26 crc kubenswrapper[4698]: I1014 10:13:26.484953 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-7sv8p"] Oct 14 10:13:26 crc kubenswrapper[4698]: I1014 10:13:26.490988 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-g7bp5"] Oct 14 10:13:26 crc kubenswrapper[4698]: W1014 10:13:26.503477 4698 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod57ba3855_be67_4359_b8c2_f62f45279695.slice/crio-f3d0c7a3445e7c3aa488f561a25f4d6b391ba428fd8473d1babf343d5e1b835c WatchSource:0}: Error finding container f3d0c7a3445e7c3aa488f561a25f4d6b391ba428fd8473d1babf343d5e1b835c: Status 404 returned error can't find the container with id f3d0c7a3445e7c3aa488f561a25f4d6b391ba428fd8473d1babf343d5e1b835c Oct 14 10:13:26 crc kubenswrapper[4698]: I1014 10:13:26.503781 4698 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-ring-rebalance-fmnnt" podStartSLOduration=1.591135504 podStartE2EDuration="5.503731262s" podCreationTimestamp="2025-10-14 10:13:21 +0000 UTC" firstStartedPulling="2025-10-14 10:13:22.058923415 +0000 UTC m=+983.756222841" lastFinishedPulling="2025-10-14 10:13:25.971519183 +0000 UTC m=+987.668818599" observedRunningTime="2025-10-14 10:13:26.501795966 +0000 UTC m=+988.199095392" watchObservedRunningTime="2025-10-14 10:13:26.503731262 +0000 UTC m=+988.201030678" Oct 14 10:13:26 crc kubenswrapper[4698]: I1014 10:13:26.586838 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-698758b865-9n6ld" Oct 14 10:13:26 crc kubenswrapper[4698]: I1014 10:13:26.665196 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-xfwd9"] Oct 14 10:13:26 crc kubenswrapper[4698]: I1014 10:13:26.665816 4698 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-86db49b7ff-xfwd9" podUID="04ad8eb3-ebe0-46a4-9aa2-1fdd58dd4ea7" containerName="dnsmasq-dns" containerID="cri-o://55838e6f6f901c3ab4baf43f45a89fa474602df690c4b1b499970f90e3306f64" gracePeriod=10 Oct 14 10:13:26 crc kubenswrapper[4698]: I1014 10:13:26.683023 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-mbhc5"] Oct 14 10:13:27 crc kubenswrapper[4698]: I1014 10:13:27.228654 4698 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-86db49b7ff-xfwd9" Oct 14 10:13:27 crc kubenswrapper[4698]: I1014 10:13:27.337719 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/04ad8eb3-ebe0-46a4-9aa2-1fdd58dd4ea7-ovsdbserver-nb\") pod \"04ad8eb3-ebe0-46a4-9aa2-1fdd58dd4ea7\" (UID: \"04ad8eb3-ebe0-46a4-9aa2-1fdd58dd4ea7\") " Oct 14 10:13:27 crc kubenswrapper[4698]: I1014 10:13:27.337881 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/04ad8eb3-ebe0-46a4-9aa2-1fdd58dd4ea7-dns-svc\") pod \"04ad8eb3-ebe0-46a4-9aa2-1fdd58dd4ea7\" (UID: \"04ad8eb3-ebe0-46a4-9aa2-1fdd58dd4ea7\") " Oct 14 10:13:27 crc kubenswrapper[4698]: I1014 10:13:27.337918 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/04ad8eb3-ebe0-46a4-9aa2-1fdd58dd4ea7-ovsdbserver-sb\") pod \"04ad8eb3-ebe0-46a4-9aa2-1fdd58dd4ea7\" (UID: \"04ad8eb3-ebe0-46a4-9aa2-1fdd58dd4ea7\") " Oct 14 10:13:27 crc kubenswrapper[4698]: I1014 10:13:27.338094 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/04ad8eb3-ebe0-46a4-9aa2-1fdd58dd4ea7-config\") pod \"04ad8eb3-ebe0-46a4-9aa2-1fdd58dd4ea7\" (UID: \"04ad8eb3-ebe0-46a4-9aa2-1fdd58dd4ea7\") " Oct 14 10:13:27 crc kubenswrapper[4698]: I1014 10:13:27.338175 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qn76r\" (UniqueName: \"kubernetes.io/projected/04ad8eb3-ebe0-46a4-9aa2-1fdd58dd4ea7-kube-api-access-qn76r\") pod \"04ad8eb3-ebe0-46a4-9aa2-1fdd58dd4ea7\" (UID: \"04ad8eb3-ebe0-46a4-9aa2-1fdd58dd4ea7\") " Oct 14 10:13:27 crc kubenswrapper[4698]: I1014 10:13:27.354050 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/04ad8eb3-ebe0-46a4-9aa2-1fdd58dd4ea7-kube-api-access-qn76r" (OuterVolumeSpecName: "kube-api-access-qn76r") pod "04ad8eb3-ebe0-46a4-9aa2-1fdd58dd4ea7" (UID: "04ad8eb3-ebe0-46a4-9aa2-1fdd58dd4ea7"). InnerVolumeSpecName "kube-api-access-qn76r". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 10:13:27 crc kubenswrapper[4698]: I1014 10:13:27.399465 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/04ad8eb3-ebe0-46a4-9aa2-1fdd58dd4ea7-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "04ad8eb3-ebe0-46a4-9aa2-1fdd58dd4ea7" (UID: "04ad8eb3-ebe0-46a4-9aa2-1fdd58dd4ea7"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 10:13:27 crc kubenswrapper[4698]: I1014 10:13:27.402149 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/04ad8eb3-ebe0-46a4-9aa2-1fdd58dd4ea7-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "04ad8eb3-ebe0-46a4-9aa2-1fdd58dd4ea7" (UID: "04ad8eb3-ebe0-46a4-9aa2-1fdd58dd4ea7"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 10:13:27 crc kubenswrapper[4698]: I1014 10:13:27.406152 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/04ad8eb3-ebe0-46a4-9aa2-1fdd58dd4ea7-config" (OuterVolumeSpecName: "config") pod "04ad8eb3-ebe0-46a4-9aa2-1fdd58dd4ea7" (UID: "04ad8eb3-ebe0-46a4-9aa2-1fdd58dd4ea7"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 10:13:27 crc kubenswrapper[4698]: I1014 10:13:27.420234 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/04ad8eb3-ebe0-46a4-9aa2-1fdd58dd4ea7-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "04ad8eb3-ebe0-46a4-9aa2-1fdd58dd4ea7" (UID: "04ad8eb3-ebe0-46a4-9aa2-1fdd58dd4ea7"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 10:13:27 crc kubenswrapper[4698]: I1014 10:13:27.440826 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qn76r\" (UniqueName: \"kubernetes.io/projected/04ad8eb3-ebe0-46a4-9aa2-1fdd58dd4ea7-kube-api-access-qn76r\") on node \"crc\" DevicePath \"\"" Oct 14 10:13:27 crc kubenswrapper[4698]: I1014 10:13:27.440872 4698 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/04ad8eb3-ebe0-46a4-9aa2-1fdd58dd4ea7-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Oct 14 10:13:27 crc kubenswrapper[4698]: I1014 10:13:27.440886 4698 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/04ad8eb3-ebe0-46a4-9aa2-1fdd58dd4ea7-dns-svc\") on node \"crc\" DevicePath \"\"" Oct 14 10:13:27 crc kubenswrapper[4698]: I1014 10:13:27.440898 4698 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/04ad8eb3-ebe0-46a4-9aa2-1fdd58dd4ea7-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Oct 14 10:13:27 crc kubenswrapper[4698]: I1014 10:13:27.440909 4698 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/04ad8eb3-ebe0-46a4-9aa2-1fdd58dd4ea7-config\") on node \"crc\" DevicePath \"\"" Oct 14 10:13:27 crc kubenswrapper[4698]: I1014 10:13:27.489500 4698 generic.go:334] "Generic (PLEG): container finished" podID="04ad8eb3-ebe0-46a4-9aa2-1fdd58dd4ea7" containerID="55838e6f6f901c3ab4baf43f45a89fa474602df690c4b1b499970f90e3306f64" exitCode=0 Oct 14 10:13:27 crc kubenswrapper[4698]: I1014 10:13:27.489590 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-xfwd9" event={"ID":"04ad8eb3-ebe0-46a4-9aa2-1fdd58dd4ea7","Type":"ContainerDied","Data":"55838e6f6f901c3ab4baf43f45a89fa474602df690c4b1b499970f90e3306f64"} Oct 14 10:13:27 crc kubenswrapper[4698]: I1014 
10:13:27.489605 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-86db49b7ff-xfwd9" Oct 14 10:13:27 crc kubenswrapper[4698]: I1014 10:13:27.489627 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-xfwd9" event={"ID":"04ad8eb3-ebe0-46a4-9aa2-1fdd58dd4ea7","Type":"ContainerDied","Data":"32e829288e5dffb12c2cd3fe34ec5755b032e97065f976502eca065200b0dd3f"} Oct 14 10:13:27 crc kubenswrapper[4698]: I1014 10:13:27.489666 4698 scope.go:117] "RemoveContainer" containerID="55838e6f6f901c3ab4baf43f45a89fa474602df690c4b1b499970f90e3306f64" Oct 14 10:13:27 crc kubenswrapper[4698]: I1014 10:13:27.492084 4698 generic.go:334] "Generic (PLEG): container finished" podID="25ae6cb7-176c-4e9d-8c3c-b3bdf54ce71e" containerID="364ebefb477fcdc353e748d44e22cab6cf13d97dbd07d8e235687087f87531ab" exitCode=0 Oct 14 10:13:27 crc kubenswrapper[4698]: I1014 10:13:27.492243 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-g7bp5" event={"ID":"25ae6cb7-176c-4e9d-8c3c-b3bdf54ce71e","Type":"ContainerDied","Data":"364ebefb477fcdc353e748d44e22cab6cf13d97dbd07d8e235687087f87531ab"} Oct 14 10:13:27 crc kubenswrapper[4698]: I1014 10:13:27.492310 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-g7bp5" event={"ID":"25ae6cb7-176c-4e9d-8c3c-b3bdf54ce71e","Type":"ContainerStarted","Data":"ee3163a2b52118a9b34898b7492f46b67ba95fbcf1ce295b1440a32106a85ceb"} Oct 14 10:13:27 crc kubenswrapper[4698]: I1014 10:13:27.495924 4698 generic.go:334] "Generic (PLEG): container finished" podID="2f8480a1-4b59-4b93-8a6f-1c4d71a0b389" containerID="3d1530f026b17d36b6a0791719bfc9badeddb63aa7d4e272c2aa81d9647a7c4e" exitCode=0 Oct 14 10:13:27 crc kubenswrapper[4698]: I1014 10:13:27.495983 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-mbhc5" 
event={"ID":"2f8480a1-4b59-4b93-8a6f-1c4d71a0b389","Type":"ContainerDied","Data":"3d1530f026b17d36b6a0791719bfc9badeddb63aa7d4e272c2aa81d9647a7c4e"} Oct 14 10:13:27 crc kubenswrapper[4698]: I1014 10:13:27.496010 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-mbhc5" event={"ID":"2f8480a1-4b59-4b93-8a6f-1c4d71a0b389","Type":"ContainerStarted","Data":"36fd41a64d49568b98ece566642b3f1ad1720a9167dcbc77e8ed00f6d371c3a3"} Oct 14 10:13:27 crc kubenswrapper[4698]: I1014 10:13:27.502259 4698 generic.go:334] "Generic (PLEG): container finished" podID="57ba3855-be67-4359-b8c2-f62f45279695" containerID="addd3471a62a7d28439c58402096e7808720bf2d9b76c32706b4ad29d62a77d5" exitCode=0 Oct 14 10:13:27 crc kubenswrapper[4698]: I1014 10:13:27.502351 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-7sv8p" event={"ID":"57ba3855-be67-4359-b8c2-f62f45279695","Type":"ContainerDied","Data":"addd3471a62a7d28439c58402096e7808720bf2d9b76c32706b4ad29d62a77d5"} Oct 14 10:13:27 crc kubenswrapper[4698]: I1014 10:13:27.502393 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-7sv8p" event={"ID":"57ba3855-be67-4359-b8c2-f62f45279695","Type":"ContainerStarted","Data":"f3d0c7a3445e7c3aa488f561a25f4d6b391ba428fd8473d1babf343d5e1b835c"} Oct 14 10:13:27 crc kubenswrapper[4698]: I1014 10:13:27.528283 4698 scope.go:117] "RemoveContainer" containerID="4bb1c7f00a7d333732c5d090c61b5815920cdb7dccaa42c31a6717861bbcd5d7" Oct 14 10:13:27 crc kubenswrapper[4698]: I1014 10:13:27.557605 4698 scope.go:117] "RemoveContainer" containerID="55838e6f6f901c3ab4baf43f45a89fa474602df690c4b1b499970f90e3306f64" Oct 14 10:13:27 crc kubenswrapper[4698]: E1014 10:13:27.558460 4698 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"55838e6f6f901c3ab4baf43f45a89fa474602df690c4b1b499970f90e3306f64\": container with ID starting with 
55838e6f6f901c3ab4baf43f45a89fa474602df690c4b1b499970f90e3306f64 not found: ID does not exist" containerID="55838e6f6f901c3ab4baf43f45a89fa474602df690c4b1b499970f90e3306f64" Oct 14 10:13:27 crc kubenswrapper[4698]: I1014 10:13:27.558594 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"55838e6f6f901c3ab4baf43f45a89fa474602df690c4b1b499970f90e3306f64"} err="failed to get container status \"55838e6f6f901c3ab4baf43f45a89fa474602df690c4b1b499970f90e3306f64\": rpc error: code = NotFound desc = could not find container \"55838e6f6f901c3ab4baf43f45a89fa474602df690c4b1b499970f90e3306f64\": container with ID starting with 55838e6f6f901c3ab4baf43f45a89fa474602df690c4b1b499970f90e3306f64 not found: ID does not exist" Oct 14 10:13:27 crc kubenswrapper[4698]: I1014 10:13:27.558709 4698 scope.go:117] "RemoveContainer" containerID="4bb1c7f00a7d333732c5d090c61b5815920cdb7dccaa42c31a6717861bbcd5d7" Oct 14 10:13:27 crc kubenswrapper[4698]: E1014 10:13:27.559279 4698 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4bb1c7f00a7d333732c5d090c61b5815920cdb7dccaa42c31a6717861bbcd5d7\": container with ID starting with 4bb1c7f00a7d333732c5d090c61b5815920cdb7dccaa42c31a6717861bbcd5d7 not found: ID does not exist" containerID="4bb1c7f00a7d333732c5d090c61b5815920cdb7dccaa42c31a6717861bbcd5d7" Oct 14 10:13:27 crc kubenswrapper[4698]: I1014 10:13:27.559345 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4bb1c7f00a7d333732c5d090c61b5815920cdb7dccaa42c31a6717861bbcd5d7"} err="failed to get container status \"4bb1c7f00a7d333732c5d090c61b5815920cdb7dccaa42c31a6717861bbcd5d7\": rpc error: code = NotFound desc = could not find container \"4bb1c7f00a7d333732c5d090c61b5815920cdb7dccaa42c31a6717861bbcd5d7\": container with ID starting with 4bb1c7f00a7d333732c5d090c61b5815920cdb7dccaa42c31a6717861bbcd5d7 not found: ID does not 
exist" Oct 14 10:13:27 crc kubenswrapper[4698]: I1014 10:13:27.595826 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-xfwd9"] Oct 14 10:13:27 crc kubenswrapper[4698]: I1014 10:13:27.601456 4698 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-xfwd9"] Oct 14 10:13:28 crc kubenswrapper[4698]: I1014 10:13:28.979971 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-7sv8p" Oct 14 10:13:28 crc kubenswrapper[4698]: I1014 10:13:28.985992 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-mbhc5" Oct 14 10:13:28 crc kubenswrapper[4698]: I1014 10:13:28.991872 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-g7bp5" Oct 14 10:13:29 crc kubenswrapper[4698]: I1014 10:13:29.027012 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="04ad8eb3-ebe0-46a4-9aa2-1fdd58dd4ea7" path="/var/lib/kubelet/pods/04ad8eb3-ebe0-46a4-9aa2-1fdd58dd4ea7/volumes" Oct 14 10:13:29 crc kubenswrapper[4698]: I1014 10:13:29.073694 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bs5bz\" (UniqueName: \"kubernetes.io/projected/25ae6cb7-176c-4e9d-8c3c-b3bdf54ce71e-kube-api-access-bs5bz\") pod \"25ae6cb7-176c-4e9d-8c3c-b3bdf54ce71e\" (UID: \"25ae6cb7-176c-4e9d-8c3c-b3bdf54ce71e\") " Oct 14 10:13:29 crc kubenswrapper[4698]: I1014 10:13:29.073781 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wfgx4\" (UniqueName: \"kubernetes.io/projected/2f8480a1-4b59-4b93-8a6f-1c4d71a0b389-kube-api-access-wfgx4\") pod \"2f8480a1-4b59-4b93-8a6f-1c4d71a0b389\" (UID: \"2f8480a1-4b59-4b93-8a6f-1c4d71a0b389\") " Oct 14 10:13:29 crc kubenswrapper[4698]: I1014 10:13:29.073972 4698 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-nknhb\" (UniqueName: \"kubernetes.io/projected/57ba3855-be67-4359-b8c2-f62f45279695-kube-api-access-nknhb\") pod \"57ba3855-be67-4359-b8c2-f62f45279695\" (UID: \"57ba3855-be67-4359-b8c2-f62f45279695\") " Oct 14 10:13:29 crc kubenswrapper[4698]: I1014 10:13:29.085035 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25ae6cb7-176c-4e9d-8c3c-b3bdf54ce71e-kube-api-access-bs5bz" (OuterVolumeSpecName: "kube-api-access-bs5bz") pod "25ae6cb7-176c-4e9d-8c3c-b3bdf54ce71e" (UID: "25ae6cb7-176c-4e9d-8c3c-b3bdf54ce71e"). InnerVolumeSpecName "kube-api-access-bs5bz". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 10:13:29 crc kubenswrapper[4698]: I1014 10:13:29.085091 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57ba3855-be67-4359-b8c2-f62f45279695-kube-api-access-nknhb" (OuterVolumeSpecName: "kube-api-access-nknhb") pod "57ba3855-be67-4359-b8c2-f62f45279695" (UID: "57ba3855-be67-4359-b8c2-f62f45279695"). InnerVolumeSpecName "kube-api-access-nknhb". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 10:13:29 crc kubenswrapper[4698]: I1014 10:13:29.090429 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2f8480a1-4b59-4b93-8a6f-1c4d71a0b389-kube-api-access-wfgx4" (OuterVolumeSpecName: "kube-api-access-wfgx4") pod "2f8480a1-4b59-4b93-8a6f-1c4d71a0b389" (UID: "2f8480a1-4b59-4b93-8a6f-1c4d71a0b389"). InnerVolumeSpecName "kube-api-access-wfgx4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 10:13:29 crc kubenswrapper[4698]: I1014 10:13:29.176927 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bs5bz\" (UniqueName: \"kubernetes.io/projected/25ae6cb7-176c-4e9d-8c3c-b3bdf54ce71e-kube-api-access-bs5bz\") on node \"crc\" DevicePath \"\"" Oct 14 10:13:29 crc kubenswrapper[4698]: I1014 10:13:29.176975 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wfgx4\" (UniqueName: \"kubernetes.io/projected/2f8480a1-4b59-4b93-8a6f-1c4d71a0b389-kube-api-access-wfgx4\") on node \"crc\" DevicePath \"\"" Oct 14 10:13:29 crc kubenswrapper[4698]: I1014 10:13:29.176987 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nknhb\" (UniqueName: \"kubernetes.io/projected/57ba3855-be67-4359-b8c2-f62f45279695-kube-api-access-nknhb\") on node \"crc\" DevicePath \"\"" Oct 14 10:13:29 crc kubenswrapper[4698]: I1014 10:13:29.203370 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-northd-0" Oct 14 10:13:29 crc kubenswrapper[4698]: I1014 10:13:29.526651 4698 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-create-g7bp5" Oct 14 10:13:29 crc kubenswrapper[4698]: I1014 10:13:29.527131 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-g7bp5" event={"ID":"25ae6cb7-176c-4e9d-8c3c-b3bdf54ce71e","Type":"ContainerDied","Data":"ee3163a2b52118a9b34898b7492f46b67ba95fbcf1ce295b1440a32106a85ceb"} Oct 14 10:13:29 crc kubenswrapper[4698]: I1014 10:13:29.527185 4698 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ee3163a2b52118a9b34898b7492f46b67ba95fbcf1ce295b1440a32106a85ceb" Oct 14 10:13:29 crc kubenswrapper[4698]: I1014 10:13:29.529675 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-mbhc5" event={"ID":"2f8480a1-4b59-4b93-8a6f-1c4d71a0b389","Type":"ContainerDied","Data":"36fd41a64d49568b98ece566642b3f1ad1720a9167dcbc77e8ed00f6d371c3a3"} Oct 14 10:13:29 crc kubenswrapper[4698]: I1014 10:13:29.529724 4698 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="36fd41a64d49568b98ece566642b3f1ad1720a9167dcbc77e8ed00f6d371c3a3" Oct 14 10:13:29 crc kubenswrapper[4698]: I1014 10:13:29.529684 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-mbhc5" Oct 14 10:13:29 crc kubenswrapper[4698]: I1014 10:13:29.532616 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-7sv8p" event={"ID":"57ba3855-be67-4359-b8c2-f62f45279695","Type":"ContainerDied","Data":"f3d0c7a3445e7c3aa488f561a25f4d6b391ba428fd8473d1babf343d5e1b835c"} Oct 14 10:13:29 crc kubenswrapper[4698]: I1014 10:13:29.532661 4698 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f3d0c7a3445e7c3aa488f561a25f4d6b391ba428fd8473d1babf343d5e1b835c" Oct 14 10:13:29 crc kubenswrapper[4698]: I1014 10:13:29.533246 4698 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-create-7sv8p" Oct 14 10:13:33 crc kubenswrapper[4698]: I1014 10:13:33.158629 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/0ca6729c-82ca-4f89-b732-7154ec9224bb-etc-swift\") pod \"swift-storage-0\" (UID: \"0ca6729c-82ca-4f89-b732-7154ec9224bb\") " pod="openstack/swift-storage-0" Oct 14 10:13:33 crc kubenswrapper[4698]: E1014 10:13:33.158927 4698 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Oct 14 10:13:33 crc kubenswrapper[4698]: E1014 10:13:33.159736 4698 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Oct 14 10:13:33 crc kubenswrapper[4698]: E1014 10:13:33.159857 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/0ca6729c-82ca-4f89-b732-7154ec9224bb-etc-swift podName:0ca6729c-82ca-4f89-b732-7154ec9224bb nodeName:}" failed. No retries permitted until 2025-10-14 10:13:49.159820611 +0000 UTC m=+1010.857120247 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/0ca6729c-82ca-4f89-b732-7154ec9224bb-etc-swift") pod "swift-storage-0" (UID: "0ca6729c-82ca-4f89-b732-7154ec9224bb") : configmap "swift-ring-files" not found Oct 14 10:13:33 crc kubenswrapper[4698]: I1014 10:13:33.584188 4698 generic.go:334] "Generic (PLEG): container finished" podID="332a15eb-0ada-4f42-a34e-a7d2e9c46af2" containerID="fadd7648836a665675efac92f610557a586104c8e90f8926592a0d3867061a63" exitCode=0 Oct 14 10:13:33 crc kubenswrapper[4698]: I1014 10:13:33.584253 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-fmnnt" event={"ID":"332a15eb-0ada-4f42-a34e-a7d2e9c46af2","Type":"ContainerDied","Data":"fadd7648836a665675efac92f610557a586104c8e90f8926592a0d3867061a63"} Oct 14 10:13:34 crc kubenswrapper[4698]: I1014 10:13:34.946027 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-fmnnt" Oct 14 10:13:35 crc kubenswrapper[4698]: I1014 10:13:35.114390 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/332a15eb-0ada-4f42-a34e-a7d2e9c46af2-swiftconf\") pod \"332a15eb-0ada-4f42-a34e-a7d2e9c46af2\" (UID: \"332a15eb-0ada-4f42-a34e-a7d2e9c46af2\") " Oct 14 10:13:35 crc kubenswrapper[4698]: I1014 10:13:35.114462 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/332a15eb-0ada-4f42-a34e-a7d2e9c46af2-scripts\") pod \"332a15eb-0ada-4f42-a34e-a7d2e9c46af2\" (UID: \"332a15eb-0ada-4f42-a34e-a7d2e9c46af2\") " Oct 14 10:13:35 crc kubenswrapper[4698]: I1014 10:13:35.114540 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/332a15eb-0ada-4f42-a34e-a7d2e9c46af2-combined-ca-bundle\") pod \"332a15eb-0ada-4f42-a34e-a7d2e9c46af2\" 
(UID: \"332a15eb-0ada-4f42-a34e-a7d2e9c46af2\") " Oct 14 10:13:35 crc kubenswrapper[4698]: I1014 10:13:35.114569 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vbszq\" (UniqueName: \"kubernetes.io/projected/332a15eb-0ada-4f42-a34e-a7d2e9c46af2-kube-api-access-vbszq\") pod \"332a15eb-0ada-4f42-a34e-a7d2e9c46af2\" (UID: \"332a15eb-0ada-4f42-a34e-a7d2e9c46af2\") " Oct 14 10:13:35 crc kubenswrapper[4698]: I1014 10:13:35.114626 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/332a15eb-0ada-4f42-a34e-a7d2e9c46af2-etc-swift\") pod \"332a15eb-0ada-4f42-a34e-a7d2e9c46af2\" (UID: \"332a15eb-0ada-4f42-a34e-a7d2e9c46af2\") " Oct 14 10:13:35 crc kubenswrapper[4698]: I1014 10:13:35.114792 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/332a15eb-0ada-4f42-a34e-a7d2e9c46af2-ring-data-devices\") pod \"332a15eb-0ada-4f42-a34e-a7d2e9c46af2\" (UID: \"332a15eb-0ada-4f42-a34e-a7d2e9c46af2\") " Oct 14 10:13:35 crc kubenswrapper[4698]: I1014 10:13:35.114962 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/332a15eb-0ada-4f42-a34e-a7d2e9c46af2-dispersionconf\") pod \"332a15eb-0ada-4f42-a34e-a7d2e9c46af2\" (UID: \"332a15eb-0ada-4f42-a34e-a7d2e9c46af2\") " Oct 14 10:13:35 crc kubenswrapper[4698]: I1014 10:13:35.116259 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/332a15eb-0ada-4f42-a34e-a7d2e9c46af2-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod "332a15eb-0ada-4f42-a34e-a7d2e9c46af2" (UID: "332a15eb-0ada-4f42-a34e-a7d2e9c46af2"). InnerVolumeSpecName "ring-data-devices". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 10:13:35 crc kubenswrapper[4698]: I1014 10:13:35.116540 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/332a15eb-0ada-4f42-a34e-a7d2e9c46af2-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "332a15eb-0ada-4f42-a34e-a7d2e9c46af2" (UID: "332a15eb-0ada-4f42-a34e-a7d2e9c46af2"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 10:13:35 crc kubenswrapper[4698]: I1014 10:13:35.117547 4698 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/332a15eb-0ada-4f42-a34e-a7d2e9c46af2-ring-data-devices\") on node \"crc\" DevicePath \"\"" Oct 14 10:13:35 crc kubenswrapper[4698]: I1014 10:13:35.117600 4698 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/332a15eb-0ada-4f42-a34e-a7d2e9c46af2-etc-swift\") on node \"crc\" DevicePath \"\"" Oct 14 10:13:35 crc kubenswrapper[4698]: I1014 10:13:35.123963 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/332a15eb-0ada-4f42-a34e-a7d2e9c46af2-kube-api-access-vbszq" (OuterVolumeSpecName: "kube-api-access-vbszq") pod "332a15eb-0ada-4f42-a34e-a7d2e9c46af2" (UID: "332a15eb-0ada-4f42-a34e-a7d2e9c46af2"). InnerVolumeSpecName "kube-api-access-vbszq". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 10:13:35 crc kubenswrapper[4698]: I1014 10:13:35.128524 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/332a15eb-0ada-4f42-a34e-a7d2e9c46af2-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "332a15eb-0ada-4f42-a34e-a7d2e9c46af2" (UID: "332a15eb-0ada-4f42-a34e-a7d2e9c46af2"). InnerVolumeSpecName "dispersionconf". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 10:13:35 crc kubenswrapper[4698]: I1014 10:13:35.139731 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/332a15eb-0ada-4f42-a34e-a7d2e9c46af2-scripts" (OuterVolumeSpecName: "scripts") pod "332a15eb-0ada-4f42-a34e-a7d2e9c46af2" (UID: "332a15eb-0ada-4f42-a34e-a7d2e9c46af2"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 10:13:35 crc kubenswrapper[4698]: I1014 10:13:35.145168 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/332a15eb-0ada-4f42-a34e-a7d2e9c46af2-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "332a15eb-0ada-4f42-a34e-a7d2e9c46af2" (UID: "332a15eb-0ada-4f42-a34e-a7d2e9c46af2"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 10:13:35 crc kubenswrapper[4698]: I1014 10:13:35.150345 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/332a15eb-0ada-4f42-a34e-a7d2e9c46af2-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "332a15eb-0ada-4f42-a34e-a7d2e9c46af2" (UID: "332a15eb-0ada-4f42-a34e-a7d2e9c46af2"). InnerVolumeSpecName "swiftconf". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 10:13:35 crc kubenswrapper[4698]: I1014 10:13:35.220017 4698 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/332a15eb-0ada-4f42-a34e-a7d2e9c46af2-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 14 10:13:35 crc kubenswrapper[4698]: I1014 10:13:35.220124 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vbszq\" (UniqueName: \"kubernetes.io/projected/332a15eb-0ada-4f42-a34e-a7d2e9c46af2-kube-api-access-vbszq\") on node \"crc\" DevicePath \"\"" Oct 14 10:13:35 crc kubenswrapper[4698]: I1014 10:13:35.220142 4698 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/332a15eb-0ada-4f42-a34e-a7d2e9c46af2-dispersionconf\") on node \"crc\" DevicePath \"\"" Oct 14 10:13:35 crc kubenswrapper[4698]: I1014 10:13:35.220259 4698 reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/332a15eb-0ada-4f42-a34e-a7d2e9c46af2-swiftconf\") on node \"crc\" DevicePath \"\"" Oct 14 10:13:35 crc kubenswrapper[4698]: I1014 10:13:35.220275 4698 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/332a15eb-0ada-4f42-a34e-a7d2e9c46af2-scripts\") on node \"crc\" DevicePath \"\"" Oct 14 10:13:35 crc kubenswrapper[4698]: I1014 10:13:35.611679 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-fmnnt" event={"ID":"332a15eb-0ada-4f42-a34e-a7d2e9c46af2","Type":"ContainerDied","Data":"8670958afd1ee2d0046315247bc769ca5f66f24cbf38a30dca66edcbc986a854"} Oct 14 10:13:35 crc kubenswrapper[4698]: I1014 10:13:35.611730 4698 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8670958afd1ee2d0046315247bc769ca5f66f24cbf38a30dca66edcbc986a854" Oct 14 10:13:35 crc kubenswrapper[4698]: I1014 10:13:35.611733 4698 util.go:48] "No ready 
sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-fmnnt" Oct 14 10:13:39 crc kubenswrapper[4698]: I1014 10:13:39.274869 4698 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-24vqt" podUID="b64163c4-e040-4bec-a585-c55f9d05e948" containerName="ovn-controller" probeResult="failure" output=< Oct 14 10:13:39 crc kubenswrapper[4698]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Oct 14 10:13:39 crc kubenswrapper[4698]: > Oct 14 10:13:39 crc kubenswrapper[4698]: I1014 10:13:39.303783 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-2cb6b" Oct 14 10:13:42 crc kubenswrapper[4698]: I1014 10:13:42.685513 4698 generic.go:334] "Generic (PLEG): container finished" podID="4c8cdd03-2ef0-496f-8748-d1495be75e5f" containerID="f340819271df65223c49ce81de67564ac6f8f57281bb2c7494a270aff251bf81" exitCode=0 Oct 14 10:13:42 crc kubenswrapper[4698]: I1014 10:13:42.685657 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"4c8cdd03-2ef0-496f-8748-d1495be75e5f","Type":"ContainerDied","Data":"f340819271df65223c49ce81de67564ac6f8f57281bb2c7494a270aff251bf81"} Oct 14 10:13:43 crc kubenswrapper[4698]: I1014 10:13:43.687362 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-ea38-account-create-8crf4"] Oct 14 10:13:43 crc kubenswrapper[4698]: E1014 10:13:43.688285 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="57ba3855-be67-4359-b8c2-f62f45279695" containerName="mariadb-database-create" Oct 14 10:13:43 crc kubenswrapper[4698]: I1014 10:13:43.688307 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="57ba3855-be67-4359-b8c2-f62f45279695" containerName="mariadb-database-create" Oct 14 10:13:43 crc kubenswrapper[4698]: E1014 10:13:43.688342 4698 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="332a15eb-0ada-4f42-a34e-a7d2e9c46af2" containerName="swift-ring-rebalance" Oct 14 10:13:43 crc kubenswrapper[4698]: I1014 10:13:43.688355 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="332a15eb-0ada-4f42-a34e-a7d2e9c46af2" containerName="swift-ring-rebalance" Oct 14 10:13:43 crc kubenswrapper[4698]: E1014 10:13:43.688372 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="25ae6cb7-176c-4e9d-8c3c-b3bdf54ce71e" containerName="mariadb-database-create" Oct 14 10:13:43 crc kubenswrapper[4698]: I1014 10:13:43.688385 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="25ae6cb7-176c-4e9d-8c3c-b3bdf54ce71e" containerName="mariadb-database-create" Oct 14 10:13:43 crc kubenswrapper[4698]: E1014 10:13:43.688447 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2f8480a1-4b59-4b93-8a6f-1c4d71a0b389" containerName="mariadb-database-create" Oct 14 10:13:43 crc kubenswrapper[4698]: I1014 10:13:43.688459 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="2f8480a1-4b59-4b93-8a6f-1c4d71a0b389" containerName="mariadb-database-create" Oct 14 10:13:43 crc kubenswrapper[4698]: E1014 10:13:43.688483 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="04ad8eb3-ebe0-46a4-9aa2-1fdd58dd4ea7" containerName="init" Oct 14 10:13:43 crc kubenswrapper[4698]: I1014 10:13:43.688497 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="04ad8eb3-ebe0-46a4-9aa2-1fdd58dd4ea7" containerName="init" Oct 14 10:13:43 crc kubenswrapper[4698]: E1014 10:13:43.688513 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="04ad8eb3-ebe0-46a4-9aa2-1fdd58dd4ea7" containerName="dnsmasq-dns" Oct 14 10:13:43 crc kubenswrapper[4698]: I1014 10:13:43.688525 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="04ad8eb3-ebe0-46a4-9aa2-1fdd58dd4ea7" containerName="dnsmasq-dns" Oct 14 10:13:43 crc kubenswrapper[4698]: I1014 10:13:43.688840 4698 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="25ae6cb7-176c-4e9d-8c3c-b3bdf54ce71e" containerName="mariadb-database-create" Oct 14 10:13:43 crc kubenswrapper[4698]: I1014 10:13:43.688870 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="2f8480a1-4b59-4b93-8a6f-1c4d71a0b389" containerName="mariadb-database-create" Oct 14 10:13:43 crc kubenswrapper[4698]: I1014 10:13:43.688886 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="332a15eb-0ada-4f42-a34e-a7d2e9c46af2" containerName="swift-ring-rebalance" Oct 14 10:13:43 crc kubenswrapper[4698]: I1014 10:13:43.688916 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="57ba3855-be67-4359-b8c2-f62f45279695" containerName="mariadb-database-create" Oct 14 10:13:43 crc kubenswrapper[4698]: I1014 10:13:43.688942 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="04ad8eb3-ebe0-46a4-9aa2-1fdd58dd4ea7" containerName="dnsmasq-dns" Oct 14 10:13:43 crc kubenswrapper[4698]: I1014 10:13:43.689819 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-ea38-account-create-8crf4" Oct 14 10:13:43 crc kubenswrapper[4698]: I1014 10:13:43.693101 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-db-secret" Oct 14 10:13:43 crc kubenswrapper[4698]: I1014 10:13:43.698895 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"4c8cdd03-2ef0-496f-8748-d1495be75e5f","Type":"ContainerStarted","Data":"994ea4a6472c8ce428376e2983af5f26ee62e523e01201f99d5846c3b32033ed"} Oct 14 10:13:43 crc kubenswrapper[4698]: I1014 10:13:43.700168 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Oct 14 10:13:43 crc kubenswrapper[4698]: I1014 10:13:43.703344 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-ea38-account-create-8crf4"] Oct 14 10:13:43 crc kubenswrapper[4698]: I1014 10:13:43.748380 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=53.199818711 podStartE2EDuration="1m4.748278266s" podCreationTimestamp="2025-10-14 10:12:39 +0000 UTC" firstStartedPulling="2025-10-14 10:12:53.570701291 +0000 UTC m=+955.268000717" lastFinishedPulling="2025-10-14 10:13:05.119160816 +0000 UTC m=+966.816460272" observedRunningTime="2025-10-14 10:13:43.742674544 +0000 UTC m=+1005.439973970" watchObservedRunningTime="2025-10-14 10:13:43.748278266 +0000 UTC m=+1005.445577692" Oct 14 10:13:43 crc kubenswrapper[4698]: I1014 10:13:43.797491 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5b82d\" (UniqueName: \"kubernetes.io/projected/3c6fe535-99c6-42e7-80b0-f28a65ab1778-kube-api-access-5b82d\") pod \"keystone-ea38-account-create-8crf4\" (UID: \"3c6fe535-99c6-42e7-80b0-f28a65ab1778\") " pod="openstack/keystone-ea38-account-create-8crf4" Oct 14 10:13:43 crc kubenswrapper[4698]: I1014 10:13:43.878698 4698 
kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-cefd-account-create-wz2jw"] Oct 14 10:13:43 crc kubenswrapper[4698]: I1014 10:13:43.880082 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-cefd-account-create-wz2jw" Oct 14 10:13:43 crc kubenswrapper[4698]: I1014 10:13:43.883303 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-db-secret" Oct 14 10:13:43 crc kubenswrapper[4698]: I1014 10:13:43.892076 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-cefd-account-create-wz2jw"] Oct 14 10:13:43 crc kubenswrapper[4698]: I1014 10:13:43.899479 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5b82d\" (UniqueName: \"kubernetes.io/projected/3c6fe535-99c6-42e7-80b0-f28a65ab1778-kube-api-access-5b82d\") pod \"keystone-ea38-account-create-8crf4\" (UID: \"3c6fe535-99c6-42e7-80b0-f28a65ab1778\") " pod="openstack/keystone-ea38-account-create-8crf4" Oct 14 10:13:43 crc kubenswrapper[4698]: I1014 10:13:43.937883 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5b82d\" (UniqueName: \"kubernetes.io/projected/3c6fe535-99c6-42e7-80b0-f28a65ab1778-kube-api-access-5b82d\") pod \"keystone-ea38-account-create-8crf4\" (UID: \"3c6fe535-99c6-42e7-80b0-f28a65ab1778\") " pod="openstack/keystone-ea38-account-create-8crf4" Oct 14 10:13:44 crc kubenswrapper[4698]: I1014 10:13:44.001540 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wrkq9\" (UniqueName: \"kubernetes.io/projected/99bc85e6-dfaa-48d5-9ec9-7ea615ba95e1-kube-api-access-wrkq9\") pod \"placement-cefd-account-create-wz2jw\" (UID: \"99bc85e6-dfaa-48d5-9ec9-7ea615ba95e1\") " pod="openstack/placement-cefd-account-create-wz2jw" Oct 14 10:13:44 crc kubenswrapper[4698]: I1014 10:13:44.022622 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-ea38-account-create-8crf4" Oct 14 10:13:44 crc kubenswrapper[4698]: I1014 10:13:44.103580 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wrkq9\" (UniqueName: \"kubernetes.io/projected/99bc85e6-dfaa-48d5-9ec9-7ea615ba95e1-kube-api-access-wrkq9\") pod \"placement-cefd-account-create-wz2jw\" (UID: \"99bc85e6-dfaa-48d5-9ec9-7ea615ba95e1\") " pod="openstack/placement-cefd-account-create-wz2jw" Oct 14 10:13:44 crc kubenswrapper[4698]: I1014 10:13:44.131274 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wrkq9\" (UniqueName: \"kubernetes.io/projected/99bc85e6-dfaa-48d5-9ec9-7ea615ba95e1-kube-api-access-wrkq9\") pod \"placement-cefd-account-create-wz2jw\" (UID: \"99bc85e6-dfaa-48d5-9ec9-7ea615ba95e1\") " pod="openstack/placement-cefd-account-create-wz2jw" Oct 14 10:13:44 crc kubenswrapper[4698]: I1014 10:13:44.198403 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-cefd-account-create-wz2jw" Oct 14 10:13:44 crc kubenswrapper[4698]: I1014 10:13:44.236053 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-b2d0-account-create-7xxwn"] Oct 14 10:13:44 crc kubenswrapper[4698]: I1014 10:13:44.237496 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-b2d0-account-create-7xxwn" Oct 14 10:13:44 crc kubenswrapper[4698]: I1014 10:13:44.241052 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-db-secret" Oct 14 10:13:44 crc kubenswrapper[4698]: I1014 10:13:44.270502 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-b2d0-account-create-7xxwn"] Oct 14 10:13:44 crc kubenswrapper[4698]: I1014 10:13:44.317879 4698 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-24vqt" podUID="b64163c4-e040-4bec-a585-c55f9d05e948" containerName="ovn-controller" probeResult="failure" output=< Oct 14 10:13:44 crc kubenswrapper[4698]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Oct 14 10:13:44 crc kubenswrapper[4698]: > Oct 14 10:13:44 crc kubenswrapper[4698]: I1014 10:13:44.339591 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-2cb6b" Oct 14 10:13:44 crc kubenswrapper[4698]: I1014 10:13:44.425298 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rzdxq\" (UniqueName: \"kubernetes.io/projected/0c2d4210-9870-40ae-b7e1-1569f4b92f37-kube-api-access-rzdxq\") pod \"glance-b2d0-account-create-7xxwn\" (UID: \"0c2d4210-9870-40ae-b7e1-1569f4b92f37\") " pod="openstack/glance-b2d0-account-create-7xxwn" Oct 14 10:13:44 crc kubenswrapper[4698]: I1014 10:13:44.516850 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-cefd-account-create-wz2jw"] Oct 14 10:13:44 crc kubenswrapper[4698]: W1014 10:13:44.520892 4698 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod99bc85e6_dfaa_48d5_9ec9_7ea615ba95e1.slice/crio-a5ba62b1b38884ffb0c445c688930557533e73e5c705bc2a06be5e3e4cb5d94a WatchSource:0}: Error finding container 
a5ba62b1b38884ffb0c445c688930557533e73e5c705bc2a06be5e3e4cb5d94a: Status 404 returned error can't find the container with id a5ba62b1b38884ffb0c445c688930557533e73e5c705bc2a06be5e3e4cb5d94a Oct 14 10:13:44 crc kubenswrapper[4698]: I1014 10:13:44.528920 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rzdxq\" (UniqueName: \"kubernetes.io/projected/0c2d4210-9870-40ae-b7e1-1569f4b92f37-kube-api-access-rzdxq\") pod \"glance-b2d0-account-create-7xxwn\" (UID: \"0c2d4210-9870-40ae-b7e1-1569f4b92f37\") " pod="openstack/glance-b2d0-account-create-7xxwn" Oct 14 10:13:44 crc kubenswrapper[4698]: I1014 10:13:44.561578 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rzdxq\" (UniqueName: \"kubernetes.io/projected/0c2d4210-9870-40ae-b7e1-1569f4b92f37-kube-api-access-rzdxq\") pod \"glance-b2d0-account-create-7xxwn\" (UID: \"0c2d4210-9870-40ae-b7e1-1569f4b92f37\") " pod="openstack/glance-b2d0-account-create-7xxwn" Oct 14 10:13:44 crc kubenswrapper[4698]: I1014 10:13:44.576431 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-24vqt-config-z5gc6"] Oct 14 10:13:44 crc kubenswrapper[4698]: I1014 10:13:44.577835 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-24vqt-config-z5gc6" Oct 14 10:13:44 crc kubenswrapper[4698]: I1014 10:13:44.583703 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts" Oct 14 10:13:44 crc kubenswrapper[4698]: I1014 10:13:44.598851 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-24vqt-config-z5gc6"] Oct 14 10:13:44 crc kubenswrapper[4698]: I1014 10:13:44.605624 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-ea38-account-create-8crf4"] Oct 14 10:13:44 crc kubenswrapper[4698]: I1014 10:13:44.617129 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-b2d0-account-create-7xxwn" Oct 14 10:13:44 crc kubenswrapper[4698]: I1014 10:13:44.711121 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-cefd-account-create-wz2jw" event={"ID":"99bc85e6-dfaa-48d5-9ec9-7ea615ba95e1","Type":"ContainerStarted","Data":"a5ba62b1b38884ffb0c445c688930557533e73e5c705bc2a06be5e3e4cb5d94a"} Oct 14 10:13:44 crc kubenswrapper[4698]: I1014 10:13:44.717374 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-ea38-account-create-8crf4" event={"ID":"3c6fe535-99c6-42e7-80b0-f28a65ab1778","Type":"ContainerStarted","Data":"8c9082b342b776563f684291441ba9b3328a14e7eb2058bf6e4d81ac25d13d5f"} Oct 14 10:13:44 crc kubenswrapper[4698]: I1014 10:13:44.733185 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/1effda31-3b80-4385-b084-fe688dd6229e-var-run\") pod \"ovn-controller-24vqt-config-z5gc6\" (UID: \"1effda31-3b80-4385-b084-fe688dd6229e\") " pod="openstack/ovn-controller-24vqt-config-z5gc6" Oct 14 10:13:44 crc kubenswrapper[4698]: I1014 10:13:44.733248 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/1effda31-3b80-4385-b084-fe688dd6229e-additional-scripts\") pod \"ovn-controller-24vqt-config-z5gc6\" (UID: \"1effda31-3b80-4385-b084-fe688dd6229e\") " pod="openstack/ovn-controller-24vqt-config-z5gc6" Oct 14 10:13:44 crc kubenswrapper[4698]: I1014 10:13:44.733369 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zvc97\" (UniqueName: \"kubernetes.io/projected/1effda31-3b80-4385-b084-fe688dd6229e-kube-api-access-zvc97\") pod \"ovn-controller-24vqt-config-z5gc6\" (UID: \"1effda31-3b80-4385-b084-fe688dd6229e\") " pod="openstack/ovn-controller-24vqt-config-z5gc6" Oct 14 10:13:44 crc 
kubenswrapper[4698]: I1014 10:13:44.733409 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1effda31-3b80-4385-b084-fe688dd6229e-scripts\") pod \"ovn-controller-24vqt-config-z5gc6\" (UID: \"1effda31-3b80-4385-b084-fe688dd6229e\") " pod="openstack/ovn-controller-24vqt-config-z5gc6" Oct 14 10:13:44 crc kubenswrapper[4698]: I1014 10:13:44.733571 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/1effda31-3b80-4385-b084-fe688dd6229e-var-run-ovn\") pod \"ovn-controller-24vqt-config-z5gc6\" (UID: \"1effda31-3b80-4385-b084-fe688dd6229e\") " pod="openstack/ovn-controller-24vqt-config-z5gc6" Oct 14 10:13:44 crc kubenswrapper[4698]: I1014 10:13:44.733643 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/1effda31-3b80-4385-b084-fe688dd6229e-var-log-ovn\") pod \"ovn-controller-24vqt-config-z5gc6\" (UID: \"1effda31-3b80-4385-b084-fe688dd6229e\") " pod="openstack/ovn-controller-24vqt-config-z5gc6" Oct 14 10:13:44 crc kubenswrapper[4698]: I1014 10:13:44.835354 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/1effda31-3b80-4385-b084-fe688dd6229e-var-run\") pod \"ovn-controller-24vqt-config-z5gc6\" (UID: \"1effda31-3b80-4385-b084-fe688dd6229e\") " pod="openstack/ovn-controller-24vqt-config-z5gc6" Oct 14 10:13:44 crc kubenswrapper[4698]: I1014 10:13:44.835428 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/1effda31-3b80-4385-b084-fe688dd6229e-additional-scripts\") pod \"ovn-controller-24vqt-config-z5gc6\" (UID: \"1effda31-3b80-4385-b084-fe688dd6229e\") " pod="openstack/ovn-controller-24vqt-config-z5gc6" 
Oct 14 10:13:44 crc kubenswrapper[4698]: I1014 10:13:44.835537 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zvc97\" (UniqueName: \"kubernetes.io/projected/1effda31-3b80-4385-b084-fe688dd6229e-kube-api-access-zvc97\") pod \"ovn-controller-24vqt-config-z5gc6\" (UID: \"1effda31-3b80-4385-b084-fe688dd6229e\") " pod="openstack/ovn-controller-24vqt-config-z5gc6" Oct 14 10:13:44 crc kubenswrapper[4698]: I1014 10:13:44.835582 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1effda31-3b80-4385-b084-fe688dd6229e-scripts\") pod \"ovn-controller-24vqt-config-z5gc6\" (UID: \"1effda31-3b80-4385-b084-fe688dd6229e\") " pod="openstack/ovn-controller-24vqt-config-z5gc6" Oct 14 10:13:44 crc kubenswrapper[4698]: I1014 10:13:44.835681 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/1effda31-3b80-4385-b084-fe688dd6229e-var-run-ovn\") pod \"ovn-controller-24vqt-config-z5gc6\" (UID: \"1effda31-3b80-4385-b084-fe688dd6229e\") " pod="openstack/ovn-controller-24vqt-config-z5gc6" Oct 14 10:13:44 crc kubenswrapper[4698]: I1014 10:13:44.835714 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/1effda31-3b80-4385-b084-fe688dd6229e-var-log-ovn\") pod \"ovn-controller-24vqt-config-z5gc6\" (UID: \"1effda31-3b80-4385-b084-fe688dd6229e\") " pod="openstack/ovn-controller-24vqt-config-z5gc6" Oct 14 10:13:44 crc kubenswrapper[4698]: I1014 10:13:44.836782 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/1effda31-3b80-4385-b084-fe688dd6229e-var-log-ovn\") pod \"ovn-controller-24vqt-config-z5gc6\" (UID: \"1effda31-3b80-4385-b084-fe688dd6229e\") " pod="openstack/ovn-controller-24vqt-config-z5gc6" Oct 14 10:13:44 crc 
kubenswrapper[4698]: I1014 10:13:44.836879 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/1effda31-3b80-4385-b084-fe688dd6229e-var-run-ovn\") pod \"ovn-controller-24vqt-config-z5gc6\" (UID: \"1effda31-3b80-4385-b084-fe688dd6229e\") " pod="openstack/ovn-controller-24vqt-config-z5gc6" Oct 14 10:13:44 crc kubenswrapper[4698]: I1014 10:13:44.837847 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/1effda31-3b80-4385-b084-fe688dd6229e-var-run\") pod \"ovn-controller-24vqt-config-z5gc6\" (UID: \"1effda31-3b80-4385-b084-fe688dd6229e\") " pod="openstack/ovn-controller-24vqt-config-z5gc6" Oct 14 10:13:44 crc kubenswrapper[4698]: I1014 10:13:44.838471 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/1effda31-3b80-4385-b084-fe688dd6229e-additional-scripts\") pod \"ovn-controller-24vqt-config-z5gc6\" (UID: \"1effda31-3b80-4385-b084-fe688dd6229e\") " pod="openstack/ovn-controller-24vqt-config-z5gc6" Oct 14 10:13:44 crc kubenswrapper[4698]: I1014 10:13:44.839124 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1effda31-3b80-4385-b084-fe688dd6229e-scripts\") pod \"ovn-controller-24vqt-config-z5gc6\" (UID: \"1effda31-3b80-4385-b084-fe688dd6229e\") " pod="openstack/ovn-controller-24vqt-config-z5gc6" Oct 14 10:13:44 crc kubenswrapper[4698]: I1014 10:13:44.870411 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zvc97\" (UniqueName: \"kubernetes.io/projected/1effda31-3b80-4385-b084-fe688dd6229e-kube-api-access-zvc97\") pod \"ovn-controller-24vqt-config-z5gc6\" (UID: \"1effda31-3b80-4385-b084-fe688dd6229e\") " pod="openstack/ovn-controller-24vqt-config-z5gc6" Oct 14 10:13:44 crc kubenswrapper[4698]: I1014 10:13:44.922440 4698 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-24vqt-config-z5gc6" Oct 14 10:13:45 crc kubenswrapper[4698]: I1014 10:13:45.109313 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-b2d0-account-create-7xxwn"] Oct 14 10:13:45 crc kubenswrapper[4698]: I1014 10:13:45.413684 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-24vqt-config-z5gc6"] Oct 14 10:13:45 crc kubenswrapper[4698]: W1014 10:13:45.479348 4698 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1effda31_3b80_4385_b084_fe688dd6229e.slice/crio-7ceb9ea76e8a3f3d21dbcb09945af7a2c72f4174f8f807fe9b59b2ff304226fb WatchSource:0}: Error finding container 7ceb9ea76e8a3f3d21dbcb09945af7a2c72f4174f8f807fe9b59b2ff304226fb: Status 404 returned error can't find the container with id 7ceb9ea76e8a3f3d21dbcb09945af7a2c72f4174f8f807fe9b59b2ff304226fb Oct 14 10:13:45 crc kubenswrapper[4698]: I1014 10:13:45.732261 4698 generic.go:334] "Generic (PLEG): container finished" podID="99bc85e6-dfaa-48d5-9ec9-7ea615ba95e1" containerID="aacbe0f99302dc3cd8c8c7403b69f08179a37a7d41a1b3fa01b7ce2e5fa95f69" exitCode=0 Oct 14 10:13:45 crc kubenswrapper[4698]: I1014 10:13:45.732343 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-cefd-account-create-wz2jw" event={"ID":"99bc85e6-dfaa-48d5-9ec9-7ea615ba95e1","Type":"ContainerDied","Data":"aacbe0f99302dc3cd8c8c7403b69f08179a37a7d41a1b3fa01b7ce2e5fa95f69"} Oct 14 10:13:45 crc kubenswrapper[4698]: I1014 10:13:45.734627 4698 generic.go:334] "Generic (PLEG): container finished" podID="3c6fe535-99c6-42e7-80b0-f28a65ab1778" containerID="02efc2af2b8935ae233da8f34f1583c5629a61462dd3b683a0f47b8f97a12155" exitCode=0 Oct 14 10:13:45 crc kubenswrapper[4698]: I1014 10:13:45.734680 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-ea38-account-create-8crf4" 
event={"ID":"3c6fe535-99c6-42e7-80b0-f28a65ab1778","Type":"ContainerDied","Data":"02efc2af2b8935ae233da8f34f1583c5629a61462dd3b683a0f47b8f97a12155"} Oct 14 10:13:45 crc kubenswrapper[4698]: I1014 10:13:45.737187 4698 generic.go:334] "Generic (PLEG): container finished" podID="a710709f-1c22-4fff-b329-6d446917af01" containerID="02ab09bd1ef174e5d51d3059758a373cd337d02d2d18f0123578e5d49f2c0d75" exitCode=0 Oct 14 10:13:45 crc kubenswrapper[4698]: I1014 10:13:45.737238 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"a710709f-1c22-4fff-b329-6d446917af01","Type":"ContainerDied","Data":"02ab09bd1ef174e5d51d3059758a373cd337d02d2d18f0123578e5d49f2c0d75"} Oct 14 10:13:45 crc kubenswrapper[4698]: I1014 10:13:45.742671 4698 generic.go:334] "Generic (PLEG): container finished" podID="0c2d4210-9870-40ae-b7e1-1569f4b92f37" containerID="3fe718146bae16c4b54c210f21529865e685a6bcfdba7079c8c8699c0645dfe8" exitCode=0 Oct 14 10:13:45 crc kubenswrapper[4698]: I1014 10:13:45.742747 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-b2d0-account-create-7xxwn" event={"ID":"0c2d4210-9870-40ae-b7e1-1569f4b92f37","Type":"ContainerDied","Data":"3fe718146bae16c4b54c210f21529865e685a6bcfdba7079c8c8699c0645dfe8"} Oct 14 10:13:45 crc kubenswrapper[4698]: I1014 10:13:45.742803 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-b2d0-account-create-7xxwn" event={"ID":"0c2d4210-9870-40ae-b7e1-1569f4b92f37","Type":"ContainerStarted","Data":"0c63043bf0a8225ebceee98bbbc407e309975b59e4cfc11613612b1c9ea2f163"} Oct 14 10:13:45 crc kubenswrapper[4698]: I1014 10:13:45.746306 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-24vqt-config-z5gc6" event={"ID":"1effda31-3b80-4385-b084-fe688dd6229e","Type":"ContainerStarted","Data":"7ceb9ea76e8a3f3d21dbcb09945af7a2c72f4174f8f807fe9b59b2ff304226fb"} Oct 14 10:13:46 crc kubenswrapper[4698]: I1014 10:13:46.757533 4698 
generic.go:334] "Generic (PLEG): container finished" podID="1effda31-3b80-4385-b084-fe688dd6229e" containerID="f5f875344e0fdb8640ba85a1742236ef4f7c4e163ae348a014acc6c9f06b3ae2" exitCode=0 Oct 14 10:13:46 crc kubenswrapper[4698]: I1014 10:13:46.757628 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-24vqt-config-z5gc6" event={"ID":"1effda31-3b80-4385-b084-fe688dd6229e","Type":"ContainerDied","Data":"f5f875344e0fdb8640ba85a1742236ef4f7c4e163ae348a014acc6c9f06b3ae2"} Oct 14 10:13:46 crc kubenswrapper[4698]: I1014 10:13:46.763428 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"a710709f-1c22-4fff-b329-6d446917af01","Type":"ContainerStarted","Data":"3c3f0afba5f4ef506546d4373fb1ab912750c618d3655c95fde1229b472d5d2b"} Oct 14 10:13:46 crc kubenswrapper[4698]: I1014 10:13:46.763936 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Oct 14 10:13:46 crc kubenswrapper[4698]: I1014 10:13:46.865703 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=54.941728882 podStartE2EDuration="1m7.865659035s" podCreationTimestamp="2025-10-14 10:12:39 +0000 UTC" firstStartedPulling="2025-10-14 10:12:53.135179829 +0000 UTC m=+954.832479245" lastFinishedPulling="2025-10-14 10:13:06.059109972 +0000 UTC m=+967.756409398" observedRunningTime="2025-10-14 10:13:46.855228323 +0000 UTC m=+1008.552527749" watchObservedRunningTime="2025-10-14 10:13:46.865659035 +0000 UTC m=+1008.562958461" Oct 14 10:13:47 crc kubenswrapper[4698]: I1014 10:13:47.182330 4698 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-b2d0-account-create-7xxwn" Oct 14 10:13:47 crc kubenswrapper[4698]: I1014 10:13:47.287715 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rzdxq\" (UniqueName: \"kubernetes.io/projected/0c2d4210-9870-40ae-b7e1-1569f4b92f37-kube-api-access-rzdxq\") pod \"0c2d4210-9870-40ae-b7e1-1569f4b92f37\" (UID: \"0c2d4210-9870-40ae-b7e1-1569f4b92f37\") " Oct 14 10:13:47 crc kubenswrapper[4698]: I1014 10:13:47.295115 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0c2d4210-9870-40ae-b7e1-1569f4b92f37-kube-api-access-rzdxq" (OuterVolumeSpecName: "kube-api-access-rzdxq") pod "0c2d4210-9870-40ae-b7e1-1569f4b92f37" (UID: "0c2d4210-9870-40ae-b7e1-1569f4b92f37"). InnerVolumeSpecName "kube-api-access-rzdxq". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 10:13:47 crc kubenswrapper[4698]: I1014 10:13:47.336445 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-cefd-account-create-wz2jw" Oct 14 10:13:47 crc kubenswrapper[4698]: I1014 10:13:47.340447 4698 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-ea38-account-create-8crf4" Oct 14 10:13:47 crc kubenswrapper[4698]: I1014 10:13:47.389885 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wrkq9\" (UniqueName: \"kubernetes.io/projected/99bc85e6-dfaa-48d5-9ec9-7ea615ba95e1-kube-api-access-wrkq9\") pod \"99bc85e6-dfaa-48d5-9ec9-7ea615ba95e1\" (UID: \"99bc85e6-dfaa-48d5-9ec9-7ea615ba95e1\") " Oct 14 10:13:47 crc kubenswrapper[4698]: I1014 10:13:47.390006 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5b82d\" (UniqueName: \"kubernetes.io/projected/3c6fe535-99c6-42e7-80b0-f28a65ab1778-kube-api-access-5b82d\") pod \"3c6fe535-99c6-42e7-80b0-f28a65ab1778\" (UID: \"3c6fe535-99c6-42e7-80b0-f28a65ab1778\") " Oct 14 10:13:47 crc kubenswrapper[4698]: I1014 10:13:47.390437 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rzdxq\" (UniqueName: \"kubernetes.io/projected/0c2d4210-9870-40ae-b7e1-1569f4b92f37-kube-api-access-rzdxq\") on node \"crc\" DevicePath \"\"" Oct 14 10:13:47 crc kubenswrapper[4698]: I1014 10:13:47.394730 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/99bc85e6-dfaa-48d5-9ec9-7ea615ba95e1-kube-api-access-wrkq9" (OuterVolumeSpecName: "kube-api-access-wrkq9") pod "99bc85e6-dfaa-48d5-9ec9-7ea615ba95e1" (UID: "99bc85e6-dfaa-48d5-9ec9-7ea615ba95e1"). InnerVolumeSpecName "kube-api-access-wrkq9". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 10:13:47 crc kubenswrapper[4698]: I1014 10:13:47.405109 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3c6fe535-99c6-42e7-80b0-f28a65ab1778-kube-api-access-5b82d" (OuterVolumeSpecName: "kube-api-access-5b82d") pod "3c6fe535-99c6-42e7-80b0-f28a65ab1778" (UID: "3c6fe535-99c6-42e7-80b0-f28a65ab1778"). InnerVolumeSpecName "kube-api-access-5b82d". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 10:13:47 crc kubenswrapper[4698]: I1014 10:13:47.492747 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wrkq9\" (UniqueName: \"kubernetes.io/projected/99bc85e6-dfaa-48d5-9ec9-7ea615ba95e1-kube-api-access-wrkq9\") on node \"crc\" DevicePath \"\"" Oct 14 10:13:47 crc kubenswrapper[4698]: I1014 10:13:47.492821 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5b82d\" (UniqueName: \"kubernetes.io/projected/3c6fe535-99c6-42e7-80b0-f28a65ab1778-kube-api-access-5b82d\") on node \"crc\" DevicePath \"\"" Oct 14 10:13:47 crc kubenswrapper[4698]: I1014 10:13:47.779159 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-b2d0-account-create-7xxwn" event={"ID":"0c2d4210-9870-40ae-b7e1-1569f4b92f37","Type":"ContainerDied","Data":"0c63043bf0a8225ebceee98bbbc407e309975b59e4cfc11613612b1c9ea2f163"} Oct 14 10:13:47 crc kubenswrapper[4698]: I1014 10:13:47.779692 4698 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0c63043bf0a8225ebceee98bbbc407e309975b59e4cfc11613612b1c9ea2f163" Oct 14 10:13:47 crc kubenswrapper[4698]: I1014 10:13:47.779172 4698 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-b2d0-account-create-7xxwn" Oct 14 10:13:47 crc kubenswrapper[4698]: I1014 10:13:47.782316 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-cefd-account-create-wz2jw" event={"ID":"99bc85e6-dfaa-48d5-9ec9-7ea615ba95e1","Type":"ContainerDied","Data":"a5ba62b1b38884ffb0c445c688930557533e73e5c705bc2a06be5e3e4cb5d94a"} Oct 14 10:13:47 crc kubenswrapper[4698]: I1014 10:13:47.782472 4698 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a5ba62b1b38884ffb0c445c688930557533e73e5c705bc2a06be5e3e4cb5d94a" Oct 14 10:13:47 crc kubenswrapper[4698]: I1014 10:13:47.782797 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-cefd-account-create-wz2jw" Oct 14 10:13:47 crc kubenswrapper[4698]: I1014 10:13:47.784852 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-ea38-account-create-8crf4" event={"ID":"3c6fe535-99c6-42e7-80b0-f28a65ab1778","Type":"ContainerDied","Data":"8c9082b342b776563f684291441ba9b3328a14e7eb2058bf6e4d81ac25d13d5f"} Oct 14 10:13:47 crc kubenswrapper[4698]: I1014 10:13:47.784909 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-ea38-account-create-8crf4" Oct 14 10:13:47 crc kubenswrapper[4698]: I1014 10:13:47.784920 4698 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8c9082b342b776563f684291441ba9b3328a14e7eb2058bf6e4d81ac25d13d5f" Oct 14 10:13:48 crc kubenswrapper[4698]: I1014 10:13:48.070588 4698 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-24vqt-config-z5gc6" Oct 14 10:13:48 crc kubenswrapper[4698]: I1014 10:13:48.207842 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/1effda31-3b80-4385-b084-fe688dd6229e-var-run\") pod \"1effda31-3b80-4385-b084-fe688dd6229e\" (UID: \"1effda31-3b80-4385-b084-fe688dd6229e\") " Oct 14 10:13:48 crc kubenswrapper[4698]: I1014 10:13:48.207992 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/1effda31-3b80-4385-b084-fe688dd6229e-additional-scripts\") pod \"1effda31-3b80-4385-b084-fe688dd6229e\" (UID: \"1effda31-3b80-4385-b084-fe688dd6229e\") " Oct 14 10:13:48 crc kubenswrapper[4698]: I1014 10:13:48.208060 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/1effda31-3b80-4385-b084-fe688dd6229e-var-log-ovn\") pod \"1effda31-3b80-4385-b084-fe688dd6229e\" (UID: \"1effda31-3b80-4385-b084-fe688dd6229e\") " Oct 14 10:13:48 crc kubenswrapper[4698]: I1014 10:13:48.208058 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1effda31-3b80-4385-b084-fe688dd6229e-var-run" (OuterVolumeSpecName: "var-run") pod "1effda31-3b80-4385-b084-fe688dd6229e" (UID: "1effda31-3b80-4385-b084-fe688dd6229e"). InnerVolumeSpecName "var-run". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 14 10:13:48 crc kubenswrapper[4698]: I1014 10:13:48.208098 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/1effda31-3b80-4385-b084-fe688dd6229e-var-run-ovn\") pod \"1effda31-3b80-4385-b084-fe688dd6229e\" (UID: \"1effda31-3b80-4385-b084-fe688dd6229e\") " Oct 14 10:13:48 crc kubenswrapper[4698]: I1014 10:13:48.208114 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1effda31-3b80-4385-b084-fe688dd6229e-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "1effda31-3b80-4385-b084-fe688dd6229e" (UID: "1effda31-3b80-4385-b084-fe688dd6229e"). InnerVolumeSpecName "var-log-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 14 10:13:48 crc kubenswrapper[4698]: I1014 10:13:48.208143 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1effda31-3b80-4385-b084-fe688dd6229e-scripts\") pod \"1effda31-3b80-4385-b084-fe688dd6229e\" (UID: \"1effda31-3b80-4385-b084-fe688dd6229e\") " Oct 14 10:13:48 crc kubenswrapper[4698]: I1014 10:13:48.208173 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zvc97\" (UniqueName: \"kubernetes.io/projected/1effda31-3b80-4385-b084-fe688dd6229e-kube-api-access-zvc97\") pod \"1effda31-3b80-4385-b084-fe688dd6229e\" (UID: \"1effda31-3b80-4385-b084-fe688dd6229e\") " Oct 14 10:13:48 crc kubenswrapper[4698]: I1014 10:13:48.208654 4698 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/1effda31-3b80-4385-b084-fe688dd6229e-var-log-ovn\") on node \"crc\" DevicePath \"\"" Oct 14 10:13:48 crc kubenswrapper[4698]: I1014 10:13:48.208667 4698 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: 
\"kubernetes.io/host-path/1effda31-3b80-4385-b084-fe688dd6229e-var-run\") on node \"crc\" DevicePath \"\"" Oct 14 10:13:48 crc kubenswrapper[4698]: I1014 10:13:48.209208 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1effda31-3b80-4385-b084-fe688dd6229e-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "1effda31-3b80-4385-b084-fe688dd6229e" (UID: "1effda31-3b80-4385-b084-fe688dd6229e"). InnerVolumeSpecName "additional-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 10:13:48 crc kubenswrapper[4698]: I1014 10:13:48.209506 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1effda31-3b80-4385-b084-fe688dd6229e-scripts" (OuterVolumeSpecName: "scripts") pod "1effda31-3b80-4385-b084-fe688dd6229e" (UID: "1effda31-3b80-4385-b084-fe688dd6229e"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 10:13:48 crc kubenswrapper[4698]: I1014 10:13:48.208284 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1effda31-3b80-4385-b084-fe688dd6229e-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "1effda31-3b80-4385-b084-fe688dd6229e" (UID: "1effda31-3b80-4385-b084-fe688dd6229e"). InnerVolumeSpecName "var-run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 14 10:13:48 crc kubenswrapper[4698]: I1014 10:13:48.226197 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1effda31-3b80-4385-b084-fe688dd6229e-kube-api-access-zvc97" (OuterVolumeSpecName: "kube-api-access-zvc97") pod "1effda31-3b80-4385-b084-fe688dd6229e" (UID: "1effda31-3b80-4385-b084-fe688dd6229e"). InnerVolumeSpecName "kube-api-access-zvc97". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 10:13:48 crc kubenswrapper[4698]: I1014 10:13:48.310262 4698 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1effda31-3b80-4385-b084-fe688dd6229e-scripts\") on node \"crc\" DevicePath \"\"" Oct 14 10:13:48 crc kubenswrapper[4698]: I1014 10:13:48.310310 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zvc97\" (UniqueName: \"kubernetes.io/projected/1effda31-3b80-4385-b084-fe688dd6229e-kube-api-access-zvc97\") on node \"crc\" DevicePath \"\"" Oct 14 10:13:48 crc kubenswrapper[4698]: I1014 10:13:48.310323 4698 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/1effda31-3b80-4385-b084-fe688dd6229e-additional-scripts\") on node \"crc\" DevicePath \"\"" Oct 14 10:13:48 crc kubenswrapper[4698]: I1014 10:13:48.310331 4698 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/1effda31-3b80-4385-b084-fe688dd6229e-var-run-ovn\") on node \"crc\" DevicePath \"\"" Oct 14 10:13:48 crc kubenswrapper[4698]: I1014 10:13:48.796501 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-24vqt-config-z5gc6" event={"ID":"1effda31-3b80-4385-b084-fe688dd6229e","Type":"ContainerDied","Data":"7ceb9ea76e8a3f3d21dbcb09945af7a2c72f4174f8f807fe9b59b2ff304226fb"} Oct 14 10:13:48 crc kubenswrapper[4698]: I1014 10:13:48.796556 4698 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-24vqt-config-z5gc6" Oct 14 10:13:48 crc kubenswrapper[4698]: I1014 10:13:48.796583 4698 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7ceb9ea76e8a3f3d21dbcb09945af7a2c72f4174f8f807fe9b59b2ff304226fb" Oct 14 10:13:49 crc kubenswrapper[4698]: I1014 10:13:49.210290 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-24vqt-config-z5gc6"] Oct 14 10:13:49 crc kubenswrapper[4698]: I1014 10:13:49.217918 4698 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-24vqt-config-z5gc6"] Oct 14 10:13:49 crc kubenswrapper[4698]: I1014 10:13:49.227105 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/0ca6729c-82ca-4f89-b732-7154ec9224bb-etc-swift\") pod \"swift-storage-0\" (UID: \"0ca6729c-82ca-4f89-b732-7154ec9224bb\") " pod="openstack/swift-storage-0" Oct 14 10:13:49 crc kubenswrapper[4698]: I1014 10:13:49.245278 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/0ca6729c-82ca-4f89-b732-7154ec9224bb-etc-swift\") pod \"swift-storage-0\" (UID: \"0ca6729c-82ca-4f89-b732-7154ec9224bb\") " pod="openstack/swift-storage-0" Oct 14 10:13:49 crc kubenswrapper[4698]: I1014 10:13:49.262909 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-24vqt" Oct 14 10:13:49 crc kubenswrapper[4698]: I1014 10:13:49.351217 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-storage-0" Oct 14 10:13:49 crc kubenswrapper[4698]: I1014 10:13:49.508169 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-sync-rwbwr"] Oct 14 10:13:49 crc kubenswrapper[4698]: E1014 10:13:49.508653 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0c2d4210-9870-40ae-b7e1-1569f4b92f37" containerName="mariadb-account-create" Oct 14 10:13:49 crc kubenswrapper[4698]: I1014 10:13:49.508670 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="0c2d4210-9870-40ae-b7e1-1569f4b92f37" containerName="mariadb-account-create" Oct 14 10:13:49 crc kubenswrapper[4698]: E1014 10:13:49.508680 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1effda31-3b80-4385-b084-fe688dd6229e" containerName="ovn-config" Oct 14 10:13:49 crc kubenswrapper[4698]: I1014 10:13:49.508687 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="1effda31-3b80-4385-b084-fe688dd6229e" containerName="ovn-config" Oct 14 10:13:49 crc kubenswrapper[4698]: E1014 10:13:49.508710 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3c6fe535-99c6-42e7-80b0-f28a65ab1778" containerName="mariadb-account-create" Oct 14 10:13:49 crc kubenswrapper[4698]: I1014 10:13:49.508717 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="3c6fe535-99c6-42e7-80b0-f28a65ab1778" containerName="mariadb-account-create" Oct 14 10:13:49 crc kubenswrapper[4698]: E1014 10:13:49.508724 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="99bc85e6-dfaa-48d5-9ec9-7ea615ba95e1" containerName="mariadb-account-create" Oct 14 10:13:49 crc kubenswrapper[4698]: I1014 10:13:49.508732 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="99bc85e6-dfaa-48d5-9ec9-7ea615ba95e1" containerName="mariadb-account-create" Oct 14 10:13:49 crc kubenswrapper[4698]: I1014 10:13:49.508908 4698 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="0c2d4210-9870-40ae-b7e1-1569f4b92f37" containerName="mariadb-account-create" Oct 14 10:13:49 crc kubenswrapper[4698]: I1014 10:13:49.508920 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="99bc85e6-dfaa-48d5-9ec9-7ea615ba95e1" containerName="mariadb-account-create" Oct 14 10:13:49 crc kubenswrapper[4698]: I1014 10:13:49.508930 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="3c6fe535-99c6-42e7-80b0-f28a65ab1778" containerName="mariadb-account-create" Oct 14 10:13:49 crc kubenswrapper[4698]: I1014 10:13:49.508942 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="1effda31-3b80-4385-b084-fe688dd6229e" containerName="ovn-config" Oct 14 10:13:49 crc kubenswrapper[4698]: I1014 10:13:49.511895 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-rwbwr" Oct 14 10:13:49 crc kubenswrapper[4698]: I1014 10:13:49.515299 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-x9n8v" Oct 14 10:13:49 crc kubenswrapper[4698]: I1014 10:13:49.515504 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-config-data" Oct 14 10:13:49 crc kubenswrapper[4698]: I1014 10:13:49.534356 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-rwbwr"] Oct 14 10:13:49 crc kubenswrapper[4698]: I1014 10:13:49.636177 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mbhnw\" (UniqueName: \"kubernetes.io/projected/53a71c98-4d2e-4aad-908d-0414cc8db1d7-kube-api-access-mbhnw\") pod \"glance-db-sync-rwbwr\" (UID: \"53a71c98-4d2e-4aad-908d-0414cc8db1d7\") " pod="openstack/glance-db-sync-rwbwr" Oct 14 10:13:49 crc kubenswrapper[4698]: I1014 10:13:49.636293 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/53a71c98-4d2e-4aad-908d-0414cc8db1d7-config-data\") pod \"glance-db-sync-rwbwr\" (UID: \"53a71c98-4d2e-4aad-908d-0414cc8db1d7\") " pod="openstack/glance-db-sync-rwbwr" Oct 14 10:13:49 crc kubenswrapper[4698]: I1014 10:13:49.636445 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/53a71c98-4d2e-4aad-908d-0414cc8db1d7-db-sync-config-data\") pod \"glance-db-sync-rwbwr\" (UID: \"53a71c98-4d2e-4aad-908d-0414cc8db1d7\") " pod="openstack/glance-db-sync-rwbwr" Oct 14 10:13:49 crc kubenswrapper[4698]: I1014 10:13:49.636561 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/53a71c98-4d2e-4aad-908d-0414cc8db1d7-combined-ca-bundle\") pod \"glance-db-sync-rwbwr\" (UID: \"53a71c98-4d2e-4aad-908d-0414cc8db1d7\") " pod="openstack/glance-db-sync-rwbwr" Oct 14 10:13:49 crc kubenswrapper[4698]: I1014 10:13:49.738348 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/53a71c98-4d2e-4aad-908d-0414cc8db1d7-config-data\") pod \"glance-db-sync-rwbwr\" (UID: \"53a71c98-4d2e-4aad-908d-0414cc8db1d7\") " pod="openstack/glance-db-sync-rwbwr" Oct 14 10:13:49 crc kubenswrapper[4698]: I1014 10:13:49.738452 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/53a71c98-4d2e-4aad-908d-0414cc8db1d7-db-sync-config-data\") pod \"glance-db-sync-rwbwr\" (UID: \"53a71c98-4d2e-4aad-908d-0414cc8db1d7\") " pod="openstack/glance-db-sync-rwbwr" Oct 14 10:13:49 crc kubenswrapper[4698]: I1014 10:13:49.738543 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/53a71c98-4d2e-4aad-908d-0414cc8db1d7-combined-ca-bundle\") pod 
\"glance-db-sync-rwbwr\" (UID: \"53a71c98-4d2e-4aad-908d-0414cc8db1d7\") " pod="openstack/glance-db-sync-rwbwr" Oct 14 10:13:49 crc kubenswrapper[4698]: I1014 10:13:49.738665 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mbhnw\" (UniqueName: \"kubernetes.io/projected/53a71c98-4d2e-4aad-908d-0414cc8db1d7-kube-api-access-mbhnw\") pod \"glance-db-sync-rwbwr\" (UID: \"53a71c98-4d2e-4aad-908d-0414cc8db1d7\") " pod="openstack/glance-db-sync-rwbwr" Oct 14 10:13:49 crc kubenswrapper[4698]: I1014 10:13:49.744158 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/53a71c98-4d2e-4aad-908d-0414cc8db1d7-db-sync-config-data\") pod \"glance-db-sync-rwbwr\" (UID: \"53a71c98-4d2e-4aad-908d-0414cc8db1d7\") " pod="openstack/glance-db-sync-rwbwr" Oct 14 10:13:49 crc kubenswrapper[4698]: I1014 10:13:49.744615 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/53a71c98-4d2e-4aad-908d-0414cc8db1d7-config-data\") pod \"glance-db-sync-rwbwr\" (UID: \"53a71c98-4d2e-4aad-908d-0414cc8db1d7\") " pod="openstack/glance-db-sync-rwbwr" Oct 14 10:13:49 crc kubenswrapper[4698]: I1014 10:13:49.746153 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/53a71c98-4d2e-4aad-908d-0414cc8db1d7-combined-ca-bundle\") pod \"glance-db-sync-rwbwr\" (UID: \"53a71c98-4d2e-4aad-908d-0414cc8db1d7\") " pod="openstack/glance-db-sync-rwbwr" Oct 14 10:13:49 crc kubenswrapper[4698]: I1014 10:13:49.762373 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mbhnw\" (UniqueName: \"kubernetes.io/projected/53a71c98-4d2e-4aad-908d-0414cc8db1d7-kube-api-access-mbhnw\") pod \"glance-db-sync-rwbwr\" (UID: \"53a71c98-4d2e-4aad-908d-0414cc8db1d7\") " pod="openstack/glance-db-sync-rwbwr" Oct 14 10:13:49 crc 
kubenswrapper[4698]: I1014 10:13:49.850919 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-rwbwr" Oct 14 10:13:50 crc kubenswrapper[4698]: I1014 10:13:50.008724 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Oct 14 10:13:50 crc kubenswrapper[4698]: I1014 10:13:50.430105 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-rwbwr"] Oct 14 10:13:50 crc kubenswrapper[4698]: W1014 10:13:50.438219 4698 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod53a71c98_4d2e_4aad_908d_0414cc8db1d7.slice/crio-0b2cbdc0944590425bb233fbd6b8ffb6b3b810ff524b63c9a49e91c9aec3f054 WatchSource:0}: Error finding container 0b2cbdc0944590425bb233fbd6b8ffb6b3b810ff524b63c9a49e91c9aec3f054: Status 404 returned error can't find the container with id 0b2cbdc0944590425bb233fbd6b8ffb6b3b810ff524b63c9a49e91c9aec3f054 Oct 14 10:13:50 crc kubenswrapper[4698]: I1014 10:13:50.822075 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"0ca6729c-82ca-4f89-b732-7154ec9224bb","Type":"ContainerStarted","Data":"3dd3c99c1903e02b1588ad6b08ac0a0bfb930c5ef6fd9c4b99e51bce23ce1e1e"} Oct 14 10:13:50 crc kubenswrapper[4698]: I1014 10:13:50.823361 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-rwbwr" event={"ID":"53a71c98-4d2e-4aad-908d-0414cc8db1d7","Type":"ContainerStarted","Data":"0b2cbdc0944590425bb233fbd6b8ffb6b3b810ff524b63c9a49e91c9aec3f054"} Oct 14 10:13:51 crc kubenswrapper[4698]: I1014 10:13:51.027625 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1effda31-3b80-4385-b084-fe688dd6229e" path="/var/lib/kubelet/pods/1effda31-3b80-4385-b084-fe688dd6229e/volumes" Oct 14 10:13:51 crc kubenswrapper[4698]: I1014 10:13:51.863124 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/swift-storage-0" event={"ID":"0ca6729c-82ca-4f89-b732-7154ec9224bb","Type":"ContainerStarted","Data":"01a8fe6fac4998c2d0db6e82f238ceabf2e8e8d0ccbfb85bd4f1014c20ad34e6"} Oct 14 10:13:52 crc kubenswrapper[4698]: I1014 10:13:52.876883 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"0ca6729c-82ca-4f89-b732-7154ec9224bb","Type":"ContainerStarted","Data":"306dfcbecfc3b54a27da6a0f0c2be70c14292dc6979fbbd51847be8c32ae1033"} Oct 14 10:13:52 crc kubenswrapper[4698]: I1014 10:13:52.877327 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"0ca6729c-82ca-4f89-b732-7154ec9224bb","Type":"ContainerStarted","Data":"0224f23ba35c3488875c7a08e79cede1ceef4973ef8facaf68afa60441442889"} Oct 14 10:13:52 crc kubenswrapper[4698]: I1014 10:13:52.877342 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"0ca6729c-82ca-4f89-b732-7154ec9224bb","Type":"ContainerStarted","Data":"042a80a3898af5ec97db29406c193dc0078534747eb90f32dbe1d8620ea568b8"} Oct 14 10:13:53 crc kubenswrapper[4698]: I1014 10:13:53.908423 4698 patch_prober.go:28] interesting pod/machine-config-daemon-lp4sk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 14 10:13:53 crc kubenswrapper[4698]: I1014 10:13:53.908510 4698 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-lp4sk" podUID="c359a8fc-1e2f-49af-8da2-719d52bd969a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 14 10:13:57 crc kubenswrapper[4698]: I1014 10:13:57.935662 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" 
event={"ID":"0ca6729c-82ca-4f89-b732-7154ec9224bb","Type":"ContainerStarted","Data":"edd93fbee30c5d6fe83f05589dfcb740ff7266d7358964eaf3de7d006b5d4c6d"} Oct 14 10:14:00 crc kubenswrapper[4698]: I1014 10:14:00.611367 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Oct 14 10:14:00 crc kubenswrapper[4698]: I1014 10:14:00.857993 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Oct 14 10:14:00 crc kubenswrapper[4698]: I1014 10:14:00.980541 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-create-pgwrf"] Oct 14 10:14:00 crc kubenswrapper[4698]: I1014 10:14:00.981972 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-pgwrf" Oct 14 10:14:00 crc kubenswrapper[4698]: I1014 10:14:00.993015 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-pgwrf"] Oct 14 10:14:01 crc kubenswrapper[4698]: I1014 10:14:01.053401 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/manila-db-create-hh2hz"] Oct 14 10:14:01 crc kubenswrapper[4698]: I1014 10:14:01.054942 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-db-create-hh2hz" Oct 14 10:14:01 crc kubenswrapper[4698]: I1014 10:14:01.062472 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mk4ph\" (UniqueName: \"kubernetes.io/projected/a734e759-8d40-4fd6-a208-93382019256b-kube-api-access-mk4ph\") pod \"cinder-db-create-pgwrf\" (UID: \"a734e759-8d40-4fd6-a208-93382019256b\") " pod="openstack/cinder-db-create-pgwrf" Oct 14 10:14:01 crc kubenswrapper[4698]: I1014 10:14:01.088469 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-db-create-hh2hz"] Oct 14 10:14:01 crc kubenswrapper[4698]: I1014 10:14:01.163920 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mk4ph\" (UniqueName: \"kubernetes.io/projected/a734e759-8d40-4fd6-a208-93382019256b-kube-api-access-mk4ph\") pod \"cinder-db-create-pgwrf\" (UID: \"a734e759-8d40-4fd6-a208-93382019256b\") " pod="openstack/cinder-db-create-pgwrf" Oct 14 10:14:01 crc kubenswrapper[4698]: I1014 10:14:01.164147 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fd9kc\" (UniqueName: \"kubernetes.io/projected/8b518c46-ed97-433e-81ea-457a3e6a19fd-kube-api-access-fd9kc\") pod \"manila-db-create-hh2hz\" (UID: \"8b518c46-ed97-433e-81ea-457a3e6a19fd\") " pod="openstack/manila-db-create-hh2hz" Oct 14 10:14:01 crc kubenswrapper[4698]: I1014 10:14:01.185679 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mk4ph\" (UniqueName: \"kubernetes.io/projected/a734e759-8d40-4fd6-a208-93382019256b-kube-api-access-mk4ph\") pod \"cinder-db-create-pgwrf\" (UID: \"a734e759-8d40-4fd6-a208-93382019256b\") " pod="openstack/cinder-db-create-pgwrf" Oct 14 10:14:01 crc kubenswrapper[4698]: I1014 10:14:01.242296 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-create-8ccwm"] Oct 14 10:14:01 crc 
kubenswrapper[4698]: I1014 10:14:01.243464 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-8ccwm" Oct 14 10:14:01 crc kubenswrapper[4698]: I1014 10:14:01.252318 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-8ccwm"] Oct 14 10:14:01 crc kubenswrapper[4698]: I1014 10:14:01.272659 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fd9kc\" (UniqueName: \"kubernetes.io/projected/8b518c46-ed97-433e-81ea-457a3e6a19fd-kube-api-access-fd9kc\") pod \"manila-db-create-hh2hz\" (UID: \"8b518c46-ed97-433e-81ea-457a3e6a19fd\") " pod="openstack/manila-db-create-hh2hz" Oct 14 10:14:01 crc kubenswrapper[4698]: I1014 10:14:01.311385 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fd9kc\" (UniqueName: \"kubernetes.io/projected/8b518c46-ed97-433e-81ea-457a3e6a19fd-kube-api-access-fd9kc\") pod \"manila-db-create-hh2hz\" (UID: \"8b518c46-ed97-433e-81ea-457a3e6a19fd\") " pod="openstack/manila-db-create-hh2hz" Oct 14 10:14:01 crc kubenswrapper[4698]: I1014 10:14:01.323409 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-sync-mnqnk"] Oct 14 10:14:01 crc kubenswrapper[4698]: I1014 10:14:01.325342 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-mnqnk" Oct 14 10:14:01 crc kubenswrapper[4698]: I1014 10:14:01.347893 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-create-pgwrf" Oct 14 10:14:01 crc kubenswrapper[4698]: I1014 10:14:01.348823 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Oct 14 10:14:01 crc kubenswrapper[4698]: I1014 10:14:01.349731 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-mrbzf" Oct 14 10:14:01 crc kubenswrapper[4698]: I1014 10:14:01.350133 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Oct 14 10:14:01 crc kubenswrapper[4698]: I1014 10:14:01.357283 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Oct 14 10:14:01 crc kubenswrapper[4698]: I1014 10:14:01.360534 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-mnqnk"] Oct 14 10:14:01 crc kubenswrapper[4698]: I1014 10:14:01.373275 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-db-create-hh2hz" Oct 14 10:14:01 crc kubenswrapper[4698]: I1014 10:14:01.378368 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mw7mz\" (UniqueName: \"kubernetes.io/projected/e4b63986-3e9d-4741-b06f-43c4932b286b-kube-api-access-mw7mz\") pod \"barbican-db-create-8ccwm\" (UID: \"e4b63986-3e9d-4741-b06f-43c4932b286b\") " pod="openstack/barbican-db-create-8ccwm" Oct 14 10:14:01 crc kubenswrapper[4698]: I1014 10:14:01.448196 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-create-8ccjb"] Oct 14 10:14:01 crc kubenswrapper[4698]: I1014 10:14:01.450533 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-create-8ccjb" Oct 14 10:14:01 crc kubenswrapper[4698]: I1014 10:14:01.458144 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-8ccjb"] Oct 14 10:14:01 crc kubenswrapper[4698]: I1014 10:14:01.479948 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/575ff60e-e52b-40bf-8429-ac5c464ed1ce-config-data\") pod \"keystone-db-sync-mnqnk\" (UID: \"575ff60e-e52b-40bf-8429-ac5c464ed1ce\") " pod="openstack/keystone-db-sync-mnqnk" Oct 14 10:14:01 crc kubenswrapper[4698]: I1014 10:14:01.480076 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mw7mz\" (UniqueName: \"kubernetes.io/projected/e4b63986-3e9d-4741-b06f-43c4932b286b-kube-api-access-mw7mz\") pod \"barbican-db-create-8ccwm\" (UID: \"e4b63986-3e9d-4741-b06f-43c4932b286b\") " pod="openstack/barbican-db-create-8ccwm" Oct 14 10:14:01 crc kubenswrapper[4698]: I1014 10:14:01.480106 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hj24f\" (UniqueName: \"kubernetes.io/projected/575ff60e-e52b-40bf-8429-ac5c464ed1ce-kube-api-access-hj24f\") pod \"keystone-db-sync-mnqnk\" (UID: \"575ff60e-e52b-40bf-8429-ac5c464ed1ce\") " pod="openstack/keystone-db-sync-mnqnk" Oct 14 10:14:01 crc kubenswrapper[4698]: I1014 10:14:01.480138 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/575ff60e-e52b-40bf-8429-ac5c464ed1ce-combined-ca-bundle\") pod \"keystone-db-sync-mnqnk\" (UID: \"575ff60e-e52b-40bf-8429-ac5c464ed1ce\") " pod="openstack/keystone-db-sync-mnqnk" Oct 14 10:14:01 crc kubenswrapper[4698]: I1014 10:14:01.500332 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mw7mz\" (UniqueName: 
\"kubernetes.io/projected/e4b63986-3e9d-4741-b06f-43c4932b286b-kube-api-access-mw7mz\") pod \"barbican-db-create-8ccwm\" (UID: \"e4b63986-3e9d-4741-b06f-43c4932b286b\") " pod="openstack/barbican-db-create-8ccwm" Oct 14 10:14:01 crc kubenswrapper[4698]: I1014 10:14:01.573475 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-8ccwm" Oct 14 10:14:01 crc kubenswrapper[4698]: I1014 10:14:01.581834 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/575ff60e-e52b-40bf-8429-ac5c464ed1ce-config-data\") pod \"keystone-db-sync-mnqnk\" (UID: \"575ff60e-e52b-40bf-8429-ac5c464ed1ce\") " pod="openstack/keystone-db-sync-mnqnk" Oct 14 10:14:01 crc kubenswrapper[4698]: I1014 10:14:01.581937 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-47kxn\" (UniqueName: \"kubernetes.io/projected/7130fceb-fafc-446f-be3a-01d71381b75f-kube-api-access-47kxn\") pod \"neutron-db-create-8ccjb\" (UID: \"7130fceb-fafc-446f-be3a-01d71381b75f\") " pod="openstack/neutron-db-create-8ccjb" Oct 14 10:14:01 crc kubenswrapper[4698]: I1014 10:14:01.582034 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hj24f\" (UniqueName: \"kubernetes.io/projected/575ff60e-e52b-40bf-8429-ac5c464ed1ce-kube-api-access-hj24f\") pod \"keystone-db-sync-mnqnk\" (UID: \"575ff60e-e52b-40bf-8429-ac5c464ed1ce\") " pod="openstack/keystone-db-sync-mnqnk" Oct 14 10:14:01 crc kubenswrapper[4698]: I1014 10:14:01.582079 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/575ff60e-e52b-40bf-8429-ac5c464ed1ce-combined-ca-bundle\") pod \"keystone-db-sync-mnqnk\" (UID: \"575ff60e-e52b-40bf-8429-ac5c464ed1ce\") " pod="openstack/keystone-db-sync-mnqnk" Oct 14 10:14:01 crc kubenswrapper[4698]: I1014 
10:14:01.588234 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/575ff60e-e52b-40bf-8429-ac5c464ed1ce-config-data\") pod \"keystone-db-sync-mnqnk\" (UID: \"575ff60e-e52b-40bf-8429-ac5c464ed1ce\") " pod="openstack/keystone-db-sync-mnqnk" Oct 14 10:14:01 crc kubenswrapper[4698]: I1014 10:14:01.589425 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/575ff60e-e52b-40bf-8429-ac5c464ed1ce-combined-ca-bundle\") pod \"keystone-db-sync-mnqnk\" (UID: \"575ff60e-e52b-40bf-8429-ac5c464ed1ce\") " pod="openstack/keystone-db-sync-mnqnk" Oct 14 10:14:01 crc kubenswrapper[4698]: I1014 10:14:01.604320 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hj24f\" (UniqueName: \"kubernetes.io/projected/575ff60e-e52b-40bf-8429-ac5c464ed1ce-kube-api-access-hj24f\") pod \"keystone-db-sync-mnqnk\" (UID: \"575ff60e-e52b-40bf-8429-ac5c464ed1ce\") " pod="openstack/keystone-db-sync-mnqnk" Oct 14 10:14:01 crc kubenswrapper[4698]: I1014 10:14:01.692057 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-47kxn\" (UniqueName: \"kubernetes.io/projected/7130fceb-fafc-446f-be3a-01d71381b75f-kube-api-access-47kxn\") pod \"neutron-db-create-8ccjb\" (UID: \"7130fceb-fafc-446f-be3a-01d71381b75f\") " pod="openstack/neutron-db-create-8ccjb" Oct 14 10:14:01 crc kubenswrapper[4698]: I1014 10:14:01.699250 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-mnqnk" Oct 14 10:14:01 crc kubenswrapper[4698]: I1014 10:14:01.723934 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-47kxn\" (UniqueName: \"kubernetes.io/projected/7130fceb-fafc-446f-be3a-01d71381b75f-kube-api-access-47kxn\") pod \"neutron-db-create-8ccjb\" (UID: \"7130fceb-fafc-446f-be3a-01d71381b75f\") " pod="openstack/neutron-db-create-8ccjb" Oct 14 10:14:01 crc kubenswrapper[4698]: I1014 10:14:01.770341 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-8ccjb" Oct 14 10:14:06 crc kubenswrapper[4698]: I1014 10:14:06.142532 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-db-create-hh2hz"] Oct 14 10:14:06 crc kubenswrapper[4698]: I1014 10:14:06.154059 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-pgwrf"] Oct 14 10:14:06 crc kubenswrapper[4698]: I1014 10:14:06.165828 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-8ccwm"] Oct 14 10:14:06 crc kubenswrapper[4698]: W1014 10:14:06.171415 4698 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode4b63986_3e9d_4741_b06f_43c4932b286b.slice/crio-7b2a66921238c7dc736d55ec49ea9bed7d949118074164f78115f3f807eefb33 WatchSource:0}: Error finding container 7b2a66921238c7dc736d55ec49ea9bed7d949118074164f78115f3f807eefb33: Status 404 returned error can't find the container with id 7b2a66921238c7dc736d55ec49ea9bed7d949118074164f78115f3f807eefb33 Oct 14 10:14:06 crc kubenswrapper[4698]: I1014 10:14:06.173849 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-mnqnk"] Oct 14 10:14:06 crc kubenswrapper[4698]: I1014 10:14:06.179254 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-8ccjb"] Oct 14 10:14:06 crc 
kubenswrapper[4698]: W1014 10:14:06.189018 4698 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8b518c46_ed97_433e_81ea_457a3e6a19fd.slice/crio-f5b12a6858b7771031ad1cbdcf939b747b797895fc3c2f4229b244a5f16a0294 WatchSource:0}: Error finding container f5b12a6858b7771031ad1cbdcf939b747b797895fc3c2f4229b244a5f16a0294: Status 404 returned error can't find the container with id f5b12a6858b7771031ad1cbdcf939b747b797895fc3c2f4229b244a5f16a0294 Oct 14 10:14:07 crc kubenswrapper[4698]: I1014 10:14:07.049882 4698 generic.go:334] "Generic (PLEG): container finished" podID="a734e759-8d40-4fd6-a208-93382019256b" containerID="e3ba6222cb8d218f88eef3bb6abd80a45ecdec3b016ecb1a8dec4be7fb47dd4b" exitCode=0 Oct 14 10:14:07 crc kubenswrapper[4698]: I1014 10:14:07.049934 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-pgwrf" event={"ID":"a734e759-8d40-4fd6-a208-93382019256b","Type":"ContainerDied","Data":"e3ba6222cb8d218f88eef3bb6abd80a45ecdec3b016ecb1a8dec4be7fb47dd4b"} Oct 14 10:14:07 crc kubenswrapper[4698]: I1014 10:14:07.050468 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-pgwrf" event={"ID":"a734e759-8d40-4fd6-a208-93382019256b","Type":"ContainerStarted","Data":"41f48576d4f07118e66e39118398a7f977f2a7298e81fbd63be67a8fd5a9b23c"} Oct 14 10:14:07 crc kubenswrapper[4698]: I1014 10:14:07.052869 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-mnqnk" event={"ID":"575ff60e-e52b-40bf-8429-ac5c464ed1ce","Type":"ContainerStarted","Data":"752e5353cd777453870d3015392b17c06077af27805ccb4dae4c72f2cab6584a"} Oct 14 10:14:07 crc kubenswrapper[4698]: I1014 10:14:07.055484 4698 generic.go:334] "Generic (PLEG): container finished" podID="7130fceb-fafc-446f-be3a-01d71381b75f" containerID="8119cd8db487dac30f61f07f4f6a727ccc21078985996940a5f3e33beb45c984" exitCode=0 Oct 14 10:14:07 crc kubenswrapper[4698]: 
I1014 10:14:07.055599 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-8ccjb" event={"ID":"7130fceb-fafc-446f-be3a-01d71381b75f","Type":"ContainerDied","Data":"8119cd8db487dac30f61f07f4f6a727ccc21078985996940a5f3e33beb45c984"} Oct 14 10:14:07 crc kubenswrapper[4698]: I1014 10:14:07.055643 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-8ccjb" event={"ID":"7130fceb-fafc-446f-be3a-01d71381b75f","Type":"ContainerStarted","Data":"d900fbec4dde9dd72e5a5ff9001ab56e9c95d08ab96c386139f2f38c7eecc6e6"} Oct 14 10:14:07 crc kubenswrapper[4698]: I1014 10:14:07.057798 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-rwbwr" event={"ID":"53a71c98-4d2e-4aad-908d-0414cc8db1d7","Type":"ContainerStarted","Data":"ebab573d6e1e42ee0c9f9bc8801604653277e7071af495aed2bf2fdcb93bcc82"} Oct 14 10:14:07 crc kubenswrapper[4698]: I1014 10:14:07.060411 4698 generic.go:334] "Generic (PLEG): container finished" podID="e4b63986-3e9d-4741-b06f-43c4932b286b" containerID="fac04a59e9c34c55331742431bff7be1a62d1bb8f98506c78592735b4822928a" exitCode=0 Oct 14 10:14:07 crc kubenswrapper[4698]: I1014 10:14:07.060571 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-8ccwm" event={"ID":"e4b63986-3e9d-4741-b06f-43c4932b286b","Type":"ContainerDied","Data":"fac04a59e9c34c55331742431bff7be1a62d1bb8f98506c78592735b4822928a"} Oct 14 10:14:07 crc kubenswrapper[4698]: I1014 10:14:07.060600 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-8ccwm" event={"ID":"e4b63986-3e9d-4741-b06f-43c4932b286b","Type":"ContainerStarted","Data":"7b2a66921238c7dc736d55ec49ea9bed7d949118074164f78115f3f807eefb33"} Oct 14 10:14:07 crc kubenswrapper[4698]: I1014 10:14:07.074547 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" 
event={"ID":"0ca6729c-82ca-4f89-b732-7154ec9224bb","Type":"ContainerStarted","Data":"ddc25e932d99772c67ee712b4ebcd0b7ec0f836ebd6302bf09039c3c34f97265"} Oct 14 10:14:07 crc kubenswrapper[4698]: I1014 10:14:07.074596 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"0ca6729c-82ca-4f89-b732-7154ec9224bb","Type":"ContainerStarted","Data":"066fd99f0934ea66a0dd5cec1b094f1a4bd08800150e2e97a8442acc2a366d56"} Oct 14 10:14:07 crc kubenswrapper[4698]: I1014 10:14:07.074606 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"0ca6729c-82ca-4f89-b732-7154ec9224bb","Type":"ContainerStarted","Data":"dd867de2d32dff00cebd8696edc92ed2167ddd3f64b39b8ff9a5df07e91d14f5"} Oct 14 10:14:07 crc kubenswrapper[4698]: I1014 10:14:07.076897 4698 generic.go:334] "Generic (PLEG): container finished" podID="8b518c46-ed97-433e-81ea-457a3e6a19fd" containerID="3318db8fca5ac380f68901deacccf2d9292c6d18ee6bdda94c5972bcc822501e" exitCode=0 Oct 14 10:14:07 crc kubenswrapper[4698]: I1014 10:14:07.076937 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-db-create-hh2hz" event={"ID":"8b518c46-ed97-433e-81ea-457a3e6a19fd","Type":"ContainerDied","Data":"3318db8fca5ac380f68901deacccf2d9292c6d18ee6bdda94c5972bcc822501e"} Oct 14 10:14:07 crc kubenswrapper[4698]: I1014 10:14:07.076958 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-db-create-hh2hz" event={"ID":"8b518c46-ed97-433e-81ea-457a3e6a19fd","Type":"ContainerStarted","Data":"f5b12a6858b7771031ad1cbdcf939b747b797895fc3c2f4229b244a5f16a0294"} Oct 14 10:14:07 crc kubenswrapper[4698]: I1014 10:14:07.136679 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-sync-rwbwr" podStartSLOduration=3.026595343 podStartE2EDuration="18.136656251s" podCreationTimestamp="2025-10-14 10:13:49 +0000 UTC" firstStartedPulling="2025-10-14 10:13:50.440945674 +0000 UTC 
m=+1012.138245090" lastFinishedPulling="2025-10-14 10:14:05.551006582 +0000 UTC m=+1027.248305998" observedRunningTime="2025-10-14 10:14:07.120163394 +0000 UTC m=+1028.817462820" watchObservedRunningTime="2025-10-14 10:14:07.136656251 +0000 UTC m=+1028.833955667" Oct 14 10:14:12 crc kubenswrapper[4698]: I1014 10:14:12.136098 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-8ccjb" event={"ID":"7130fceb-fafc-446f-be3a-01d71381b75f","Type":"ContainerDied","Data":"d900fbec4dde9dd72e5a5ff9001ab56e9c95d08ab96c386139f2f38c7eecc6e6"} Oct 14 10:14:12 crc kubenswrapper[4698]: I1014 10:14:12.136969 4698 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d900fbec4dde9dd72e5a5ff9001ab56e9c95d08ab96c386139f2f38c7eecc6e6" Oct 14 10:14:12 crc kubenswrapper[4698]: I1014 10:14:12.141507 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-8ccwm" Oct 14 10:14:12 crc kubenswrapper[4698]: I1014 10:14:12.141550 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-db-create-hh2hz" event={"ID":"8b518c46-ed97-433e-81ea-457a3e6a19fd","Type":"ContainerDied","Data":"f5b12a6858b7771031ad1cbdcf939b747b797895fc3c2f4229b244a5f16a0294"} Oct 14 10:14:12 crc kubenswrapper[4698]: I1014 10:14:12.141605 4698 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f5b12a6858b7771031ad1cbdcf939b747b797895fc3c2f4229b244a5f16a0294" Oct 14 10:14:12 crc kubenswrapper[4698]: I1014 10:14:12.143570 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-pgwrf" event={"ID":"a734e759-8d40-4fd6-a208-93382019256b","Type":"ContainerDied","Data":"41f48576d4f07118e66e39118398a7f977f2a7298e81fbd63be67a8fd5a9b23c"} Oct 14 10:14:12 crc kubenswrapper[4698]: I1014 10:14:12.143606 4698 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="41f48576d4f07118e66e39118398a7f977f2a7298e81fbd63be67a8fd5a9b23c" Oct 14 10:14:12 crc kubenswrapper[4698]: I1014 10:14:12.144782 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-8ccwm" event={"ID":"e4b63986-3e9d-4741-b06f-43c4932b286b","Type":"ContainerDied","Data":"7b2a66921238c7dc736d55ec49ea9bed7d949118074164f78115f3f807eefb33"} Oct 14 10:14:12 crc kubenswrapper[4698]: I1014 10:14:12.144810 4698 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7b2a66921238c7dc736d55ec49ea9bed7d949118074164f78115f3f807eefb33" Oct 14 10:14:12 crc kubenswrapper[4698]: I1014 10:14:12.144879 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-8ccwm" Oct 14 10:14:12 crc kubenswrapper[4698]: I1014 10:14:12.189969 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-pgwrf" Oct 14 10:14:12 crc kubenswrapper[4698]: I1014 10:14:12.209481 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-8ccjb" Oct 14 10:14:12 crc kubenswrapper[4698]: I1014 10:14:12.220687 4698 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-db-create-hh2hz" Oct 14 10:14:12 crc kubenswrapper[4698]: I1014 10:14:12.301926 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mk4ph\" (UniqueName: \"kubernetes.io/projected/a734e759-8d40-4fd6-a208-93382019256b-kube-api-access-mk4ph\") pod \"a734e759-8d40-4fd6-a208-93382019256b\" (UID: \"a734e759-8d40-4fd6-a208-93382019256b\") " Oct 14 10:14:12 crc kubenswrapper[4698]: I1014 10:14:12.302079 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mw7mz\" (UniqueName: \"kubernetes.io/projected/e4b63986-3e9d-4741-b06f-43c4932b286b-kube-api-access-mw7mz\") pod \"e4b63986-3e9d-4741-b06f-43c4932b286b\" (UID: \"e4b63986-3e9d-4741-b06f-43c4932b286b\") " Oct 14 10:14:12 crc kubenswrapper[4698]: I1014 10:14:12.302121 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-47kxn\" (UniqueName: \"kubernetes.io/projected/7130fceb-fafc-446f-be3a-01d71381b75f-kube-api-access-47kxn\") pod \"7130fceb-fafc-446f-be3a-01d71381b75f\" (UID: \"7130fceb-fafc-446f-be3a-01d71381b75f\") " Oct 14 10:14:12 crc kubenswrapper[4698]: I1014 10:14:12.302197 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fd9kc\" (UniqueName: \"kubernetes.io/projected/8b518c46-ed97-433e-81ea-457a3e6a19fd-kube-api-access-fd9kc\") pod \"8b518c46-ed97-433e-81ea-457a3e6a19fd\" (UID: \"8b518c46-ed97-433e-81ea-457a3e6a19fd\") " Oct 14 10:14:12 crc kubenswrapper[4698]: I1014 10:14:12.308864 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7130fceb-fafc-446f-be3a-01d71381b75f-kube-api-access-47kxn" (OuterVolumeSpecName: "kube-api-access-47kxn") pod "7130fceb-fafc-446f-be3a-01d71381b75f" (UID: "7130fceb-fafc-446f-be3a-01d71381b75f"). InnerVolumeSpecName "kube-api-access-47kxn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 10:14:12 crc kubenswrapper[4698]: I1014 10:14:12.309847 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a734e759-8d40-4fd6-a208-93382019256b-kube-api-access-mk4ph" (OuterVolumeSpecName: "kube-api-access-mk4ph") pod "a734e759-8d40-4fd6-a208-93382019256b" (UID: "a734e759-8d40-4fd6-a208-93382019256b"). InnerVolumeSpecName "kube-api-access-mk4ph". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 10:14:12 crc kubenswrapper[4698]: I1014 10:14:12.310507 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e4b63986-3e9d-4741-b06f-43c4932b286b-kube-api-access-mw7mz" (OuterVolumeSpecName: "kube-api-access-mw7mz") pod "e4b63986-3e9d-4741-b06f-43c4932b286b" (UID: "e4b63986-3e9d-4741-b06f-43c4932b286b"). InnerVolumeSpecName "kube-api-access-mw7mz". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 10:14:12 crc kubenswrapper[4698]: I1014 10:14:12.318703 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8b518c46-ed97-433e-81ea-457a3e6a19fd-kube-api-access-fd9kc" (OuterVolumeSpecName: "kube-api-access-fd9kc") pod "8b518c46-ed97-433e-81ea-457a3e6a19fd" (UID: "8b518c46-ed97-433e-81ea-457a3e6a19fd"). InnerVolumeSpecName "kube-api-access-fd9kc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 10:14:12 crc kubenswrapper[4698]: I1014 10:14:12.405876 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mk4ph\" (UniqueName: \"kubernetes.io/projected/a734e759-8d40-4fd6-a208-93382019256b-kube-api-access-mk4ph\") on node \"crc\" DevicePath \"\"" Oct 14 10:14:12 crc kubenswrapper[4698]: I1014 10:14:12.405915 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mw7mz\" (UniqueName: \"kubernetes.io/projected/e4b63986-3e9d-4741-b06f-43c4932b286b-kube-api-access-mw7mz\") on node \"crc\" DevicePath \"\"" Oct 14 10:14:12 crc kubenswrapper[4698]: I1014 10:14:12.405925 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-47kxn\" (UniqueName: \"kubernetes.io/projected/7130fceb-fafc-446f-be3a-01d71381b75f-kube-api-access-47kxn\") on node \"crc\" DevicePath \"\"" Oct 14 10:14:12 crc kubenswrapper[4698]: I1014 10:14:12.406811 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fd9kc\" (UniqueName: \"kubernetes.io/projected/8b518c46-ed97-433e-81ea-457a3e6a19fd-kube-api-access-fd9kc\") on node \"crc\" DevicePath \"\"" Oct 14 10:14:13 crc kubenswrapper[4698]: I1014 10:14:13.156445 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-mnqnk" event={"ID":"575ff60e-e52b-40bf-8429-ac5c464ed1ce","Type":"ContainerStarted","Data":"ab648946264351d69768f3d5b12848c87f0d49bd8222dbc16c71007d44fc23f0"} Oct 14 10:14:13 crc kubenswrapper[4698]: I1014 10:14:13.163824 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"0ca6729c-82ca-4f89-b732-7154ec9224bb","Type":"ContainerStarted","Data":"a76e113c8d7cdf3de66efc89ac4ad71c5c71994cd255cc419a2dba3b4624bb56"} Oct 14 10:14:13 crc kubenswrapper[4698]: I1014 10:14:13.163871 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" 
event={"ID":"0ca6729c-82ca-4f89-b732-7154ec9224bb","Type":"ContainerStarted","Data":"33b60d419d41fcd9b4ea619c0c6874c7ca4446cdf59e4723c92ddc1fc6cde709"} Oct 14 10:14:13 crc kubenswrapper[4698]: I1014 10:14:13.163881 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"0ca6729c-82ca-4f89-b732-7154ec9224bb","Type":"ContainerStarted","Data":"94f09f0173d43d5ae4eedb521970b0ea01fb44bc532a7b7239d2eb438215b4eb"} Oct 14 10:14:13 crc kubenswrapper[4698]: I1014 10:14:13.163893 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"0ca6729c-82ca-4f89-b732-7154ec9224bb","Type":"ContainerStarted","Data":"e46b44c43cfc0f88a9b952ee5c083d8d0a70b035804b709ce3b8c9887e860bd5"} Oct 14 10:14:13 crc kubenswrapper[4698]: I1014 10:14:13.163903 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"0ca6729c-82ca-4f89-b732-7154ec9224bb","Type":"ContainerStarted","Data":"25d39bae4ee81df520429970d023811f3eddcf3a483d41769fc8fb5605296847"} Oct 14 10:14:13 crc kubenswrapper[4698]: I1014 10:14:13.163914 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"0ca6729c-82ca-4f89-b732-7154ec9224bb","Type":"ContainerStarted","Data":"0aa3efe5d3d2866c15ee2facdf01f080f31d8155cf2e2e0de98b750ceb91e48b"} Oct 14 10:14:13 crc kubenswrapper[4698]: I1014 10:14:13.163907 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/manila-db-create-hh2hz" Oct 14 10:14:13 crc kubenswrapper[4698]: I1014 10:14:13.163944 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-pgwrf" Oct 14 10:14:13 crc kubenswrapper[4698]: I1014 10:14:13.163895 4698 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-create-8ccjb" Oct 14 10:14:13 crc kubenswrapper[4698]: I1014 10:14:13.186330 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-sync-mnqnk" podStartSLOduration=6.331401965 podStartE2EDuration="12.186306703s" podCreationTimestamp="2025-10-14 10:14:01 +0000 UTC" firstStartedPulling="2025-10-14 10:14:06.159976341 +0000 UTC m=+1027.857275757" lastFinishedPulling="2025-10-14 10:14:12.014881079 +0000 UTC m=+1033.712180495" observedRunningTime="2025-10-14 10:14:13.174332286 +0000 UTC m=+1034.871631712" watchObservedRunningTime="2025-10-14 10:14:13.186306703 +0000 UTC m=+1034.883606119" Oct 14 10:14:14 crc kubenswrapper[4698]: I1014 10:14:14.191922 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"0ca6729c-82ca-4f89-b732-7154ec9224bb","Type":"ContainerStarted","Data":"99a59de168dbdaae62068b55a4598faa9f0edb2af95dcb106ccdcf2336e334c5"} Oct 14 10:14:14 crc kubenswrapper[4698]: I1014 10:14:14.197602 4698 generic.go:334] "Generic (PLEG): container finished" podID="53a71c98-4d2e-4aad-908d-0414cc8db1d7" containerID="ebab573d6e1e42ee0c9f9bc8801604653277e7071af495aed2bf2fdcb93bcc82" exitCode=0 Oct 14 10:14:14 crc kubenswrapper[4698]: I1014 10:14:14.197719 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-rwbwr" event={"ID":"53a71c98-4d2e-4aad-908d-0414cc8db1d7","Type":"ContainerDied","Data":"ebab573d6e1e42ee0c9f9bc8801604653277e7071af495aed2bf2fdcb93bcc82"} Oct 14 10:14:14 crc kubenswrapper[4698]: I1014 10:14:14.237912 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-storage-0" podStartSLOduration=39.913521407 podStartE2EDuration="58.237887099s" podCreationTimestamp="2025-10-14 10:13:16 +0000 UTC" firstStartedPulling="2025-10-14 10:13:50.020635702 +0000 UTC m=+1011.717935118" lastFinishedPulling="2025-10-14 10:14:08.345001394 +0000 UTC m=+1030.042300810" 
observedRunningTime="2025-10-14 10:14:14.233636916 +0000 UTC m=+1035.930936342" watchObservedRunningTime="2025-10-14 10:14:14.237887099 +0000 UTC m=+1035.935186515" Oct 14 10:14:14 crc kubenswrapper[4698]: I1014 10:14:14.563284 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-77585f5f8c-n6zt9"] Oct 14 10:14:14 crc kubenswrapper[4698]: E1014 10:14:14.563855 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a734e759-8d40-4fd6-a208-93382019256b" containerName="mariadb-database-create" Oct 14 10:14:14 crc kubenswrapper[4698]: I1014 10:14:14.563882 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="a734e759-8d40-4fd6-a208-93382019256b" containerName="mariadb-database-create" Oct 14 10:14:14 crc kubenswrapper[4698]: E1014 10:14:14.563908 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8b518c46-ed97-433e-81ea-457a3e6a19fd" containerName="mariadb-database-create" Oct 14 10:14:14 crc kubenswrapper[4698]: I1014 10:14:14.563917 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="8b518c46-ed97-433e-81ea-457a3e6a19fd" containerName="mariadb-database-create" Oct 14 10:14:14 crc kubenswrapper[4698]: E1014 10:14:14.563932 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e4b63986-3e9d-4741-b06f-43c4932b286b" containerName="mariadb-database-create" Oct 14 10:14:14 crc kubenswrapper[4698]: I1014 10:14:14.563941 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="e4b63986-3e9d-4741-b06f-43c4932b286b" containerName="mariadb-database-create" Oct 14 10:14:14 crc kubenswrapper[4698]: E1014 10:14:14.563958 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7130fceb-fafc-446f-be3a-01d71381b75f" containerName="mariadb-database-create" Oct 14 10:14:14 crc kubenswrapper[4698]: I1014 10:14:14.563967 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="7130fceb-fafc-446f-be3a-01d71381b75f" containerName="mariadb-database-create" Oct 14 10:14:14 crc 
kubenswrapper[4698]: I1014 10:14:14.564184 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="8b518c46-ed97-433e-81ea-457a3e6a19fd" containerName="mariadb-database-create" Oct 14 10:14:14 crc kubenswrapper[4698]: I1014 10:14:14.564219 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="a734e759-8d40-4fd6-a208-93382019256b" containerName="mariadb-database-create" Oct 14 10:14:14 crc kubenswrapper[4698]: I1014 10:14:14.564230 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="7130fceb-fafc-446f-be3a-01d71381b75f" containerName="mariadb-database-create" Oct 14 10:14:14 crc kubenswrapper[4698]: I1014 10:14:14.564244 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="e4b63986-3e9d-4741-b06f-43c4932b286b" containerName="mariadb-database-create" Oct 14 10:14:14 crc kubenswrapper[4698]: I1014 10:14:14.566623 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-77585f5f8c-n6zt9" Oct 14 10:14:14 crc kubenswrapper[4698]: I1014 10:14:14.573575 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-swift-storage-0" Oct 14 10:14:14 crc kubenswrapper[4698]: I1014 10:14:14.577206 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-77585f5f8c-n6zt9"] Oct 14 10:14:14 crc kubenswrapper[4698]: I1014 10:14:14.656721 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9d379d24-0836-4d45-ae16-4e72b32dff28-ovsdbserver-sb\") pod \"dnsmasq-dns-77585f5f8c-n6zt9\" (UID: \"9d379d24-0836-4d45-ae16-4e72b32dff28\") " pod="openstack/dnsmasq-dns-77585f5f8c-n6zt9" Oct 14 10:14:14 crc kubenswrapper[4698]: I1014 10:14:14.656837 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g97tk\" (UniqueName: 
\"kubernetes.io/projected/9d379d24-0836-4d45-ae16-4e72b32dff28-kube-api-access-g97tk\") pod \"dnsmasq-dns-77585f5f8c-n6zt9\" (UID: \"9d379d24-0836-4d45-ae16-4e72b32dff28\") " pod="openstack/dnsmasq-dns-77585f5f8c-n6zt9" Oct 14 10:14:14 crc kubenswrapper[4698]: I1014 10:14:14.656864 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/9d379d24-0836-4d45-ae16-4e72b32dff28-dns-swift-storage-0\") pod \"dnsmasq-dns-77585f5f8c-n6zt9\" (UID: \"9d379d24-0836-4d45-ae16-4e72b32dff28\") " pod="openstack/dnsmasq-dns-77585f5f8c-n6zt9" Oct 14 10:14:14 crc kubenswrapper[4698]: I1014 10:14:14.657822 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9d379d24-0836-4d45-ae16-4e72b32dff28-ovsdbserver-nb\") pod \"dnsmasq-dns-77585f5f8c-n6zt9\" (UID: \"9d379d24-0836-4d45-ae16-4e72b32dff28\") " pod="openstack/dnsmasq-dns-77585f5f8c-n6zt9" Oct 14 10:14:14 crc kubenswrapper[4698]: I1014 10:14:14.657916 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9d379d24-0836-4d45-ae16-4e72b32dff28-dns-svc\") pod \"dnsmasq-dns-77585f5f8c-n6zt9\" (UID: \"9d379d24-0836-4d45-ae16-4e72b32dff28\") " pod="openstack/dnsmasq-dns-77585f5f8c-n6zt9" Oct 14 10:14:14 crc kubenswrapper[4698]: I1014 10:14:14.658565 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d379d24-0836-4d45-ae16-4e72b32dff28-config\") pod \"dnsmasq-dns-77585f5f8c-n6zt9\" (UID: \"9d379d24-0836-4d45-ae16-4e72b32dff28\") " pod="openstack/dnsmasq-dns-77585f5f8c-n6zt9" Oct 14 10:14:14 crc kubenswrapper[4698]: I1014 10:14:14.759902 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" 
(UniqueName: \"kubernetes.io/configmap/9d379d24-0836-4d45-ae16-4e72b32dff28-dns-svc\") pod \"dnsmasq-dns-77585f5f8c-n6zt9\" (UID: \"9d379d24-0836-4d45-ae16-4e72b32dff28\") " pod="openstack/dnsmasq-dns-77585f5f8c-n6zt9" Oct 14 10:14:14 crc kubenswrapper[4698]: I1014 10:14:14.759993 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d379d24-0836-4d45-ae16-4e72b32dff28-config\") pod \"dnsmasq-dns-77585f5f8c-n6zt9\" (UID: \"9d379d24-0836-4d45-ae16-4e72b32dff28\") " pod="openstack/dnsmasq-dns-77585f5f8c-n6zt9" Oct 14 10:14:14 crc kubenswrapper[4698]: I1014 10:14:14.760232 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9d379d24-0836-4d45-ae16-4e72b32dff28-ovsdbserver-sb\") pod \"dnsmasq-dns-77585f5f8c-n6zt9\" (UID: \"9d379d24-0836-4d45-ae16-4e72b32dff28\") " pod="openstack/dnsmasq-dns-77585f5f8c-n6zt9" Oct 14 10:14:14 crc kubenswrapper[4698]: I1014 10:14:14.760269 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g97tk\" (UniqueName: \"kubernetes.io/projected/9d379d24-0836-4d45-ae16-4e72b32dff28-kube-api-access-g97tk\") pod \"dnsmasq-dns-77585f5f8c-n6zt9\" (UID: \"9d379d24-0836-4d45-ae16-4e72b32dff28\") " pod="openstack/dnsmasq-dns-77585f5f8c-n6zt9" Oct 14 10:14:14 crc kubenswrapper[4698]: I1014 10:14:14.760289 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/9d379d24-0836-4d45-ae16-4e72b32dff28-dns-swift-storage-0\") pod \"dnsmasq-dns-77585f5f8c-n6zt9\" (UID: \"9d379d24-0836-4d45-ae16-4e72b32dff28\") " pod="openstack/dnsmasq-dns-77585f5f8c-n6zt9" Oct 14 10:14:14 crc kubenswrapper[4698]: I1014 10:14:14.760330 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: 
\"kubernetes.io/configmap/9d379d24-0836-4d45-ae16-4e72b32dff28-ovsdbserver-nb\") pod \"dnsmasq-dns-77585f5f8c-n6zt9\" (UID: \"9d379d24-0836-4d45-ae16-4e72b32dff28\") " pod="openstack/dnsmasq-dns-77585f5f8c-n6zt9" Oct 14 10:14:14 crc kubenswrapper[4698]: I1014 10:14:14.761457 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9d379d24-0836-4d45-ae16-4e72b32dff28-ovsdbserver-sb\") pod \"dnsmasq-dns-77585f5f8c-n6zt9\" (UID: \"9d379d24-0836-4d45-ae16-4e72b32dff28\") " pod="openstack/dnsmasq-dns-77585f5f8c-n6zt9" Oct 14 10:14:14 crc kubenswrapper[4698]: I1014 10:14:14.761457 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9d379d24-0836-4d45-ae16-4e72b32dff28-dns-svc\") pod \"dnsmasq-dns-77585f5f8c-n6zt9\" (UID: \"9d379d24-0836-4d45-ae16-4e72b32dff28\") " pod="openstack/dnsmasq-dns-77585f5f8c-n6zt9" Oct 14 10:14:14 crc kubenswrapper[4698]: I1014 10:14:14.761826 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9d379d24-0836-4d45-ae16-4e72b32dff28-ovsdbserver-nb\") pod \"dnsmasq-dns-77585f5f8c-n6zt9\" (UID: \"9d379d24-0836-4d45-ae16-4e72b32dff28\") " pod="openstack/dnsmasq-dns-77585f5f8c-n6zt9" Oct 14 10:14:14 crc kubenswrapper[4698]: I1014 10:14:14.762121 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/9d379d24-0836-4d45-ae16-4e72b32dff28-dns-swift-storage-0\") pod \"dnsmasq-dns-77585f5f8c-n6zt9\" (UID: \"9d379d24-0836-4d45-ae16-4e72b32dff28\") " pod="openstack/dnsmasq-dns-77585f5f8c-n6zt9" Oct 14 10:14:14 crc kubenswrapper[4698]: I1014 10:14:14.762221 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d379d24-0836-4d45-ae16-4e72b32dff28-config\") pod 
\"dnsmasq-dns-77585f5f8c-n6zt9\" (UID: \"9d379d24-0836-4d45-ae16-4e72b32dff28\") " pod="openstack/dnsmasq-dns-77585f5f8c-n6zt9" Oct 14 10:14:14 crc kubenswrapper[4698]: I1014 10:14:14.781573 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g97tk\" (UniqueName: \"kubernetes.io/projected/9d379d24-0836-4d45-ae16-4e72b32dff28-kube-api-access-g97tk\") pod \"dnsmasq-dns-77585f5f8c-n6zt9\" (UID: \"9d379d24-0836-4d45-ae16-4e72b32dff28\") " pod="openstack/dnsmasq-dns-77585f5f8c-n6zt9" Oct 14 10:14:14 crc kubenswrapper[4698]: I1014 10:14:14.887594 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-77585f5f8c-n6zt9" Oct 14 10:14:15 crc kubenswrapper[4698]: I1014 10:14:15.443646 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-77585f5f8c-n6zt9"] Oct 14 10:14:15 crc kubenswrapper[4698]: I1014 10:14:15.592318 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-rwbwr" Oct 14 10:14:15 crc kubenswrapper[4698]: I1014 10:14:15.677311 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/53a71c98-4d2e-4aad-908d-0414cc8db1d7-combined-ca-bundle\") pod \"53a71c98-4d2e-4aad-908d-0414cc8db1d7\" (UID: \"53a71c98-4d2e-4aad-908d-0414cc8db1d7\") " Oct 14 10:14:15 crc kubenswrapper[4698]: I1014 10:14:15.677492 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/53a71c98-4d2e-4aad-908d-0414cc8db1d7-config-data\") pod \"53a71c98-4d2e-4aad-908d-0414cc8db1d7\" (UID: \"53a71c98-4d2e-4aad-908d-0414cc8db1d7\") " Oct 14 10:14:15 crc kubenswrapper[4698]: I1014 10:14:15.677902 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mbhnw\" (UniqueName: 
\"kubernetes.io/projected/53a71c98-4d2e-4aad-908d-0414cc8db1d7-kube-api-access-mbhnw\") pod \"53a71c98-4d2e-4aad-908d-0414cc8db1d7\" (UID: \"53a71c98-4d2e-4aad-908d-0414cc8db1d7\") " Oct 14 10:14:15 crc kubenswrapper[4698]: I1014 10:14:15.678083 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/53a71c98-4d2e-4aad-908d-0414cc8db1d7-db-sync-config-data\") pod \"53a71c98-4d2e-4aad-908d-0414cc8db1d7\" (UID: \"53a71c98-4d2e-4aad-908d-0414cc8db1d7\") " Oct 14 10:14:15 crc kubenswrapper[4698]: I1014 10:14:15.682665 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/53a71c98-4d2e-4aad-908d-0414cc8db1d7-kube-api-access-mbhnw" (OuterVolumeSpecName: "kube-api-access-mbhnw") pod "53a71c98-4d2e-4aad-908d-0414cc8db1d7" (UID: "53a71c98-4d2e-4aad-908d-0414cc8db1d7"). InnerVolumeSpecName "kube-api-access-mbhnw". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 10:14:15 crc kubenswrapper[4698]: I1014 10:14:15.682728 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/53a71c98-4d2e-4aad-908d-0414cc8db1d7-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "53a71c98-4d2e-4aad-908d-0414cc8db1d7" (UID: "53a71c98-4d2e-4aad-908d-0414cc8db1d7"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 10:14:15 crc kubenswrapper[4698]: I1014 10:14:15.715672 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/53a71c98-4d2e-4aad-908d-0414cc8db1d7-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "53a71c98-4d2e-4aad-908d-0414cc8db1d7" (UID: "53a71c98-4d2e-4aad-908d-0414cc8db1d7"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 10:14:15 crc kubenswrapper[4698]: I1014 10:14:15.723908 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/53a71c98-4d2e-4aad-908d-0414cc8db1d7-config-data" (OuterVolumeSpecName: "config-data") pod "53a71c98-4d2e-4aad-908d-0414cc8db1d7" (UID: "53a71c98-4d2e-4aad-908d-0414cc8db1d7"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 10:14:15 crc kubenswrapper[4698]: I1014 10:14:15.781439 4698 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/53a71c98-4d2e-4aad-908d-0414cc8db1d7-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 14 10:14:15 crc kubenswrapper[4698]: I1014 10:14:15.781495 4698 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/53a71c98-4d2e-4aad-908d-0414cc8db1d7-config-data\") on node \"crc\" DevicePath \"\"" Oct 14 10:14:15 crc kubenswrapper[4698]: I1014 10:14:15.781514 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mbhnw\" (UniqueName: \"kubernetes.io/projected/53a71c98-4d2e-4aad-908d-0414cc8db1d7-kube-api-access-mbhnw\") on node \"crc\" DevicePath \"\"" Oct 14 10:14:15 crc kubenswrapper[4698]: I1014 10:14:15.781532 4698 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/53a71c98-4d2e-4aad-908d-0414cc8db1d7-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Oct 14 10:14:16 crc kubenswrapper[4698]: I1014 10:14:16.228858 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-rwbwr" event={"ID":"53a71c98-4d2e-4aad-908d-0414cc8db1d7","Type":"ContainerDied","Data":"0b2cbdc0944590425bb233fbd6b8ffb6b3b810ff524b63c9a49e91c9aec3f054"} Oct 14 10:14:16 crc kubenswrapper[4698]: I1014 10:14:16.228901 4698 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-rwbwr" Oct 14 10:14:16 crc kubenswrapper[4698]: I1014 10:14:16.228937 4698 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0b2cbdc0944590425bb233fbd6b8ffb6b3b810ff524b63c9a49e91c9aec3f054" Oct 14 10:14:16 crc kubenswrapper[4698]: I1014 10:14:16.231466 4698 generic.go:334] "Generic (PLEG): container finished" podID="9d379d24-0836-4d45-ae16-4e72b32dff28" containerID="4e3d073cbb28682c8d4e255cbc4cdd742125a0eacb3aa5c38de08ab205f6f1a3" exitCode=0 Oct 14 10:14:16 crc kubenswrapper[4698]: I1014 10:14:16.231577 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-77585f5f8c-n6zt9" event={"ID":"9d379d24-0836-4d45-ae16-4e72b32dff28","Type":"ContainerDied","Data":"4e3d073cbb28682c8d4e255cbc4cdd742125a0eacb3aa5c38de08ab205f6f1a3"} Oct 14 10:14:16 crc kubenswrapper[4698]: I1014 10:14:16.231619 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-77585f5f8c-n6zt9" event={"ID":"9d379d24-0836-4d45-ae16-4e72b32dff28","Type":"ContainerStarted","Data":"ce07d020bc798f17ce1cce24d7c04e0381b3382d9543f2f3b1a6138989eacba2"} Oct 14 10:14:16 crc kubenswrapper[4698]: I1014 10:14:16.233239 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-mnqnk" event={"ID":"575ff60e-e52b-40bf-8429-ac5c464ed1ce","Type":"ContainerDied","Data":"ab648946264351d69768f3d5b12848c87f0d49bd8222dbc16c71007d44fc23f0"} Oct 14 10:14:16 crc kubenswrapper[4698]: I1014 10:14:16.233180 4698 generic.go:334] "Generic (PLEG): container finished" podID="575ff60e-e52b-40bf-8429-ac5c464ed1ce" containerID="ab648946264351d69768f3d5b12848c87f0d49bd8222dbc16c71007d44fc23f0" exitCode=0 Oct 14 10:14:16 crc kubenswrapper[4698]: I1014 10:14:16.675970 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-77585f5f8c-n6zt9"] Oct 14 10:14:16 crc kubenswrapper[4698]: I1014 10:14:16.718585 4698 kubelet.go:2421] "SyncLoop ADD" 
source="api" pods=["openstack/dnsmasq-dns-7ff5475cc9-hqhm8"] Oct 14 10:14:16 crc kubenswrapper[4698]: E1014 10:14:16.719836 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="53a71c98-4d2e-4aad-908d-0414cc8db1d7" containerName="glance-db-sync" Oct 14 10:14:16 crc kubenswrapper[4698]: I1014 10:14:16.722784 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="53a71c98-4d2e-4aad-908d-0414cc8db1d7" containerName="glance-db-sync" Oct 14 10:14:16 crc kubenswrapper[4698]: I1014 10:14:16.723108 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="53a71c98-4d2e-4aad-908d-0414cc8db1d7" containerName="glance-db-sync" Oct 14 10:14:16 crc kubenswrapper[4698]: I1014 10:14:16.724267 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7ff5475cc9-hqhm8" Oct 14 10:14:16 crc kubenswrapper[4698]: I1014 10:14:16.742164 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7ff5475cc9-hqhm8"] Oct 14 10:14:16 crc kubenswrapper[4698]: I1014 10:14:16.809946 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e7c2867d-8760-4adb-99a7-720da1f16049-ovsdbserver-sb\") pod \"dnsmasq-dns-7ff5475cc9-hqhm8\" (UID: \"e7c2867d-8760-4adb-99a7-720da1f16049\") " pod="openstack/dnsmasq-dns-7ff5475cc9-hqhm8" Oct 14 10:14:16 crc kubenswrapper[4698]: I1014 10:14:16.810005 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e7c2867d-8760-4adb-99a7-720da1f16049-ovsdbserver-nb\") pod \"dnsmasq-dns-7ff5475cc9-hqhm8\" (UID: \"e7c2867d-8760-4adb-99a7-720da1f16049\") " pod="openstack/dnsmasq-dns-7ff5475cc9-hqhm8" Oct 14 10:14:16 crc kubenswrapper[4698]: I1014 10:14:16.810031 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" 
(UniqueName: \"kubernetes.io/configmap/e7c2867d-8760-4adb-99a7-720da1f16049-config\") pod \"dnsmasq-dns-7ff5475cc9-hqhm8\" (UID: \"e7c2867d-8760-4adb-99a7-720da1f16049\") " pod="openstack/dnsmasq-dns-7ff5475cc9-hqhm8" Oct 14 10:14:16 crc kubenswrapper[4698]: I1014 10:14:16.810113 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lb5mw\" (UniqueName: \"kubernetes.io/projected/e7c2867d-8760-4adb-99a7-720da1f16049-kube-api-access-lb5mw\") pod \"dnsmasq-dns-7ff5475cc9-hqhm8\" (UID: \"e7c2867d-8760-4adb-99a7-720da1f16049\") " pod="openstack/dnsmasq-dns-7ff5475cc9-hqhm8" Oct 14 10:14:16 crc kubenswrapper[4698]: I1014 10:14:16.810231 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e7c2867d-8760-4adb-99a7-720da1f16049-dns-svc\") pod \"dnsmasq-dns-7ff5475cc9-hqhm8\" (UID: \"e7c2867d-8760-4adb-99a7-720da1f16049\") " pod="openstack/dnsmasq-dns-7ff5475cc9-hqhm8" Oct 14 10:14:16 crc kubenswrapper[4698]: I1014 10:14:16.810268 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e7c2867d-8760-4adb-99a7-720da1f16049-dns-swift-storage-0\") pod \"dnsmasq-dns-7ff5475cc9-hqhm8\" (UID: \"e7c2867d-8760-4adb-99a7-720da1f16049\") " pod="openstack/dnsmasq-dns-7ff5475cc9-hqhm8" Oct 14 10:14:16 crc kubenswrapper[4698]: I1014 10:14:16.912260 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e7c2867d-8760-4adb-99a7-720da1f16049-ovsdbserver-sb\") pod \"dnsmasq-dns-7ff5475cc9-hqhm8\" (UID: \"e7c2867d-8760-4adb-99a7-720da1f16049\") " pod="openstack/dnsmasq-dns-7ff5475cc9-hqhm8" Oct 14 10:14:16 crc kubenswrapper[4698]: I1014 10:14:16.912327 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e7c2867d-8760-4adb-99a7-720da1f16049-ovsdbserver-nb\") pod \"dnsmasq-dns-7ff5475cc9-hqhm8\" (UID: \"e7c2867d-8760-4adb-99a7-720da1f16049\") " pod="openstack/dnsmasq-dns-7ff5475cc9-hqhm8" Oct 14 10:14:16 crc kubenswrapper[4698]: I1014 10:14:16.912357 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7c2867d-8760-4adb-99a7-720da1f16049-config\") pod \"dnsmasq-dns-7ff5475cc9-hqhm8\" (UID: \"e7c2867d-8760-4adb-99a7-720da1f16049\") " pod="openstack/dnsmasq-dns-7ff5475cc9-hqhm8" Oct 14 10:14:16 crc kubenswrapper[4698]: I1014 10:14:16.912379 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lb5mw\" (UniqueName: \"kubernetes.io/projected/e7c2867d-8760-4adb-99a7-720da1f16049-kube-api-access-lb5mw\") pod \"dnsmasq-dns-7ff5475cc9-hqhm8\" (UID: \"e7c2867d-8760-4adb-99a7-720da1f16049\") " pod="openstack/dnsmasq-dns-7ff5475cc9-hqhm8" Oct 14 10:14:16 crc kubenswrapper[4698]: I1014 10:14:16.912446 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e7c2867d-8760-4adb-99a7-720da1f16049-dns-svc\") pod \"dnsmasq-dns-7ff5475cc9-hqhm8\" (UID: \"e7c2867d-8760-4adb-99a7-720da1f16049\") " pod="openstack/dnsmasq-dns-7ff5475cc9-hqhm8" Oct 14 10:14:16 crc kubenswrapper[4698]: I1014 10:14:16.912474 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e7c2867d-8760-4adb-99a7-720da1f16049-dns-swift-storage-0\") pod \"dnsmasq-dns-7ff5475cc9-hqhm8\" (UID: \"e7c2867d-8760-4adb-99a7-720da1f16049\") " pod="openstack/dnsmasq-dns-7ff5475cc9-hqhm8" Oct 14 10:14:16 crc kubenswrapper[4698]: I1014 10:14:16.913734 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: 
\"kubernetes.io/configmap/e7c2867d-8760-4adb-99a7-720da1f16049-dns-swift-storage-0\") pod \"dnsmasq-dns-7ff5475cc9-hqhm8\" (UID: \"e7c2867d-8760-4adb-99a7-720da1f16049\") " pod="openstack/dnsmasq-dns-7ff5475cc9-hqhm8" Oct 14 10:14:16 crc kubenswrapper[4698]: I1014 10:14:16.913855 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7c2867d-8760-4adb-99a7-720da1f16049-config\") pod \"dnsmasq-dns-7ff5475cc9-hqhm8\" (UID: \"e7c2867d-8760-4adb-99a7-720da1f16049\") " pod="openstack/dnsmasq-dns-7ff5475cc9-hqhm8" Oct 14 10:14:16 crc kubenswrapper[4698]: I1014 10:14:16.913926 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e7c2867d-8760-4adb-99a7-720da1f16049-ovsdbserver-nb\") pod \"dnsmasq-dns-7ff5475cc9-hqhm8\" (UID: \"e7c2867d-8760-4adb-99a7-720da1f16049\") " pod="openstack/dnsmasq-dns-7ff5475cc9-hqhm8" Oct 14 10:14:16 crc kubenswrapper[4698]: I1014 10:14:16.914150 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e7c2867d-8760-4adb-99a7-720da1f16049-dns-svc\") pod \"dnsmasq-dns-7ff5475cc9-hqhm8\" (UID: \"e7c2867d-8760-4adb-99a7-720da1f16049\") " pod="openstack/dnsmasq-dns-7ff5475cc9-hqhm8" Oct 14 10:14:16 crc kubenswrapper[4698]: I1014 10:14:16.914428 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e7c2867d-8760-4adb-99a7-720da1f16049-ovsdbserver-sb\") pod \"dnsmasq-dns-7ff5475cc9-hqhm8\" (UID: \"e7c2867d-8760-4adb-99a7-720da1f16049\") " pod="openstack/dnsmasq-dns-7ff5475cc9-hqhm8" Oct 14 10:14:16 crc kubenswrapper[4698]: I1014 10:14:16.933401 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lb5mw\" (UniqueName: \"kubernetes.io/projected/e7c2867d-8760-4adb-99a7-720da1f16049-kube-api-access-lb5mw\") pod 
\"dnsmasq-dns-7ff5475cc9-hqhm8\" (UID: \"e7c2867d-8760-4adb-99a7-720da1f16049\") " pod="openstack/dnsmasq-dns-7ff5475cc9-hqhm8" Oct 14 10:14:17 crc kubenswrapper[4698]: I1014 10:14:17.048646 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7ff5475cc9-hqhm8" Oct 14 10:14:17 crc kubenswrapper[4698]: I1014 10:14:17.265209 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-77585f5f8c-n6zt9" event={"ID":"9d379d24-0836-4d45-ae16-4e72b32dff28","Type":"ContainerStarted","Data":"b5bddde3cecb54b811e8f62b9cc27542132cab97dfde43705f6de84bf0fb30b6"} Oct 14 10:14:17 crc kubenswrapper[4698]: I1014 10:14:17.265317 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-77585f5f8c-n6zt9" Oct 14 10:14:17 crc kubenswrapper[4698]: I1014 10:14:17.296474 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-77585f5f8c-n6zt9" podStartSLOduration=3.296452186 podStartE2EDuration="3.296452186s" podCreationTimestamp="2025-10-14 10:14:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 10:14:17.28897932 +0000 UTC m=+1038.986278726" watchObservedRunningTime="2025-10-14 10:14:17.296452186 +0000 UTC m=+1038.993751602" Oct 14 10:14:17 crc kubenswrapper[4698]: W1014 10:14:17.510299 4698 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode7c2867d_8760_4adb_99a7_720da1f16049.slice/crio-e2ec31ce7c0ef29ac280762981ff3b1663be0dc6e3cbd418d186fbcd6acc164b WatchSource:0}: Error finding container e2ec31ce7c0ef29ac280762981ff3b1663be0dc6e3cbd418d186fbcd6acc164b: Status 404 returned error can't find the container with id e2ec31ce7c0ef29ac280762981ff3b1663be0dc6e3cbd418d186fbcd6acc164b Oct 14 10:14:17 crc kubenswrapper[4698]: I1014 10:14:17.512894 4698 kubelet.go:2428] 
"SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7ff5475cc9-hqhm8"] Oct 14 10:14:17 crc kubenswrapper[4698]: I1014 10:14:17.595803 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-mnqnk" Oct 14 10:14:17 crc kubenswrapper[4698]: I1014 10:14:17.730944 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/575ff60e-e52b-40bf-8429-ac5c464ed1ce-combined-ca-bundle\") pod \"575ff60e-e52b-40bf-8429-ac5c464ed1ce\" (UID: \"575ff60e-e52b-40bf-8429-ac5c464ed1ce\") " Oct 14 10:14:17 crc kubenswrapper[4698]: I1014 10:14:17.731026 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hj24f\" (UniqueName: \"kubernetes.io/projected/575ff60e-e52b-40bf-8429-ac5c464ed1ce-kube-api-access-hj24f\") pod \"575ff60e-e52b-40bf-8429-ac5c464ed1ce\" (UID: \"575ff60e-e52b-40bf-8429-ac5c464ed1ce\") " Oct 14 10:14:17 crc kubenswrapper[4698]: I1014 10:14:17.731099 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/575ff60e-e52b-40bf-8429-ac5c464ed1ce-config-data\") pod \"575ff60e-e52b-40bf-8429-ac5c464ed1ce\" (UID: \"575ff60e-e52b-40bf-8429-ac5c464ed1ce\") " Oct 14 10:14:17 crc kubenswrapper[4698]: I1014 10:14:17.740143 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/575ff60e-e52b-40bf-8429-ac5c464ed1ce-kube-api-access-hj24f" (OuterVolumeSpecName: "kube-api-access-hj24f") pod "575ff60e-e52b-40bf-8429-ac5c464ed1ce" (UID: "575ff60e-e52b-40bf-8429-ac5c464ed1ce"). InnerVolumeSpecName "kube-api-access-hj24f". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 10:14:17 crc kubenswrapper[4698]: I1014 10:14:17.793437 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/575ff60e-e52b-40bf-8429-ac5c464ed1ce-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "575ff60e-e52b-40bf-8429-ac5c464ed1ce" (UID: "575ff60e-e52b-40bf-8429-ac5c464ed1ce"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 10:14:17 crc kubenswrapper[4698]: I1014 10:14:17.821858 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/575ff60e-e52b-40bf-8429-ac5c464ed1ce-config-data" (OuterVolumeSpecName: "config-data") pod "575ff60e-e52b-40bf-8429-ac5c464ed1ce" (UID: "575ff60e-e52b-40bf-8429-ac5c464ed1ce"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 10:14:17 crc kubenswrapper[4698]: I1014 10:14:17.833276 4698 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/575ff60e-e52b-40bf-8429-ac5c464ed1ce-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 14 10:14:17 crc kubenswrapper[4698]: I1014 10:14:17.833320 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hj24f\" (UniqueName: \"kubernetes.io/projected/575ff60e-e52b-40bf-8429-ac5c464ed1ce-kube-api-access-hj24f\") on node \"crc\" DevicePath \"\"" Oct 14 10:14:17 crc kubenswrapper[4698]: I1014 10:14:17.833336 4698 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/575ff60e-e52b-40bf-8429-ac5c464ed1ce-config-data\") on node \"crc\" DevicePath \"\"" Oct 14 10:14:18 crc kubenswrapper[4698]: I1014 10:14:18.278808 4698 generic.go:334] "Generic (PLEG): container finished" podID="e7c2867d-8760-4adb-99a7-720da1f16049" containerID="c282ecb3f12837b972ef9d22eaeb2066b46bf8d37694f65b6d4791b246eeb11d" 
exitCode=0 Oct 14 10:14:18 crc kubenswrapper[4698]: I1014 10:14:18.279911 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7ff5475cc9-hqhm8" event={"ID":"e7c2867d-8760-4adb-99a7-720da1f16049","Type":"ContainerDied","Data":"c282ecb3f12837b972ef9d22eaeb2066b46bf8d37694f65b6d4791b246eeb11d"} Oct 14 10:14:18 crc kubenswrapper[4698]: I1014 10:14:18.279955 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7ff5475cc9-hqhm8" event={"ID":"e7c2867d-8760-4adb-99a7-720da1f16049","Type":"ContainerStarted","Data":"e2ec31ce7c0ef29ac280762981ff3b1663be0dc6e3cbd418d186fbcd6acc164b"} Oct 14 10:14:18 crc kubenswrapper[4698]: I1014 10:14:18.282574 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-mnqnk" event={"ID":"575ff60e-e52b-40bf-8429-ac5c464ed1ce","Type":"ContainerDied","Data":"752e5353cd777453870d3015392b17c06077af27805ccb4dae4c72f2cab6584a"} Oct 14 10:14:18 crc kubenswrapper[4698]: I1014 10:14:18.282642 4698 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-mnqnk" Oct 14 10:14:18 crc kubenswrapper[4698]: I1014 10:14:18.282656 4698 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="752e5353cd777453870d3015392b17c06077af27805ccb4dae4c72f2cab6584a" Oct 14 10:14:18 crc kubenswrapper[4698]: I1014 10:14:18.283466 4698 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-77585f5f8c-n6zt9" podUID="9d379d24-0836-4d45-ae16-4e72b32dff28" containerName="dnsmasq-dns" containerID="cri-o://b5bddde3cecb54b811e8f62b9cc27542132cab97dfde43705f6de84bf0fb30b6" gracePeriod=10 Oct 14 10:14:18 crc kubenswrapper[4698]: I1014 10:14:18.536571 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-c9bp9"] Oct 14 10:14:18 crc kubenswrapper[4698]: E1014 10:14:18.537552 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="575ff60e-e52b-40bf-8429-ac5c464ed1ce" containerName="keystone-db-sync" Oct 14 10:14:18 crc kubenswrapper[4698]: I1014 10:14:18.537570 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="575ff60e-e52b-40bf-8429-ac5c464ed1ce" containerName="keystone-db-sync" Oct 14 10:14:18 crc kubenswrapper[4698]: I1014 10:14:18.537891 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="575ff60e-e52b-40bf-8429-ac5c464ed1ce" containerName="keystone-db-sync" Oct 14 10:14:18 crc kubenswrapper[4698]: I1014 10:14:18.538990 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-c9bp9" Oct 14 10:14:18 crc kubenswrapper[4698]: I1014 10:14:18.555988 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7ff5475cc9-hqhm8"] Oct 14 10:14:18 crc kubenswrapper[4698]: I1014 10:14:18.561365 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Oct 14 10:14:18 crc kubenswrapper[4698]: I1014 10:14:18.561365 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Oct 14 10:14:18 crc kubenswrapper[4698]: I1014 10:14:18.561721 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Oct 14 10:14:18 crc kubenswrapper[4698]: I1014 10:14:18.561738 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-mrbzf" Oct 14 10:14:18 crc kubenswrapper[4698]: I1014 10:14:18.589564 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-c9bp9"] Oct 14 10:14:18 crc kubenswrapper[4698]: I1014 10:14:18.634609 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5c5cc7c5ff-lj2n6"] Oct 14 10:14:18 crc kubenswrapper[4698]: I1014 10:14:18.650280 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5c5cc7c5ff-lj2n6" Oct 14 10:14:18 crc kubenswrapper[4698]: I1014 10:14:18.657861 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/29fd1594-5c07-4283-bcb2-6ac29907c35c-credential-keys\") pod \"keystone-bootstrap-c9bp9\" (UID: \"29fd1594-5c07-4283-bcb2-6ac29907c35c\") " pod="openstack/keystone-bootstrap-c9bp9" Oct 14 10:14:18 crc kubenswrapper[4698]: I1014 10:14:18.657904 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/29fd1594-5c07-4283-bcb2-6ac29907c35c-fernet-keys\") pod \"keystone-bootstrap-c9bp9\" (UID: \"29fd1594-5c07-4283-bcb2-6ac29907c35c\") " pod="openstack/keystone-bootstrap-c9bp9" Oct 14 10:14:18 crc kubenswrapper[4698]: I1014 10:14:18.657932 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/29fd1594-5c07-4283-bcb2-6ac29907c35c-scripts\") pod \"keystone-bootstrap-c9bp9\" (UID: \"29fd1594-5c07-4283-bcb2-6ac29907c35c\") " pod="openstack/keystone-bootstrap-c9bp9" Oct 14 10:14:18 crc kubenswrapper[4698]: I1014 10:14:18.657967 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sx9fb\" (UniqueName: \"kubernetes.io/projected/29fd1594-5c07-4283-bcb2-6ac29907c35c-kube-api-access-sx9fb\") pod \"keystone-bootstrap-c9bp9\" (UID: \"29fd1594-5c07-4283-bcb2-6ac29907c35c\") " pod="openstack/keystone-bootstrap-c9bp9" Oct 14 10:14:18 crc kubenswrapper[4698]: I1014 10:14:18.658016 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/29fd1594-5c07-4283-bcb2-6ac29907c35c-config-data\") pod \"keystone-bootstrap-c9bp9\" (UID: 
\"29fd1594-5c07-4283-bcb2-6ac29907c35c\") " pod="openstack/keystone-bootstrap-c9bp9" Oct 14 10:14:18 crc kubenswrapper[4698]: I1014 10:14:18.658047 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/29fd1594-5c07-4283-bcb2-6ac29907c35c-combined-ca-bundle\") pod \"keystone-bootstrap-c9bp9\" (UID: \"29fd1594-5c07-4283-bcb2-6ac29907c35c\") " pod="openstack/keystone-bootstrap-c9bp9" Oct 14 10:14:18 crc kubenswrapper[4698]: I1014 10:14:18.660565 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5c5cc7c5ff-lj2n6"] Oct 14 10:14:18 crc kubenswrapper[4698]: I1014 10:14:18.748602 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-6f9d769b87-j9282"] Oct 14 10:14:18 crc kubenswrapper[4698]: I1014 10:14:18.760838 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/29fd1594-5c07-4283-bcb2-6ac29907c35c-config-data\") pod \"keystone-bootstrap-c9bp9\" (UID: \"29fd1594-5c07-4283-bcb2-6ac29907c35c\") " pod="openstack/keystone-bootstrap-c9bp9" Oct 14 10:14:18 crc kubenswrapper[4698]: I1014 10:14:18.760893 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/73499f8b-9988-4c5a-a049-e0f842b02370-ovsdbserver-nb\") pod \"dnsmasq-dns-5c5cc7c5ff-lj2n6\" (UID: \"73499f8b-9988-4c5a-a049-e0f842b02370\") " pod="openstack/dnsmasq-dns-5c5cc7c5ff-lj2n6" Oct 14 10:14:18 crc kubenswrapper[4698]: I1014 10:14:18.760917 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xm48k\" (UniqueName: \"kubernetes.io/projected/73499f8b-9988-4c5a-a049-e0f842b02370-kube-api-access-xm48k\") pod \"dnsmasq-dns-5c5cc7c5ff-lj2n6\" (UID: \"73499f8b-9988-4c5a-a049-e0f842b02370\") " 
pod="openstack/dnsmasq-dns-5c5cc7c5ff-lj2n6" Oct 14 10:14:18 crc kubenswrapper[4698]: I1014 10:14:18.760943 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/29fd1594-5c07-4283-bcb2-6ac29907c35c-combined-ca-bundle\") pod \"keystone-bootstrap-c9bp9\" (UID: \"29fd1594-5c07-4283-bcb2-6ac29907c35c\") " pod="openstack/keystone-bootstrap-c9bp9" Oct 14 10:14:18 crc kubenswrapper[4698]: I1014 10:14:18.760986 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/73499f8b-9988-4c5a-a049-e0f842b02370-config\") pod \"dnsmasq-dns-5c5cc7c5ff-lj2n6\" (UID: \"73499f8b-9988-4c5a-a049-e0f842b02370\") " pod="openstack/dnsmasq-dns-5c5cc7c5ff-lj2n6" Oct 14 10:14:18 crc kubenswrapper[4698]: I1014 10:14:18.761003 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/73499f8b-9988-4c5a-a049-e0f842b02370-dns-svc\") pod \"dnsmasq-dns-5c5cc7c5ff-lj2n6\" (UID: \"73499f8b-9988-4c5a-a049-e0f842b02370\") " pod="openstack/dnsmasq-dns-5c5cc7c5ff-lj2n6" Oct 14 10:14:18 crc kubenswrapper[4698]: I1014 10:14:18.761020 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/73499f8b-9988-4c5a-a049-e0f842b02370-dns-swift-storage-0\") pod \"dnsmasq-dns-5c5cc7c5ff-lj2n6\" (UID: \"73499f8b-9988-4c5a-a049-e0f842b02370\") " pod="openstack/dnsmasq-dns-5c5cc7c5ff-lj2n6" Oct 14 10:14:18 crc kubenswrapper[4698]: I1014 10:14:18.761053 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/29fd1594-5c07-4283-bcb2-6ac29907c35c-credential-keys\") pod \"keystone-bootstrap-c9bp9\" (UID: \"29fd1594-5c07-4283-bcb2-6ac29907c35c\") " 
pod="openstack/keystone-bootstrap-c9bp9" Oct 14 10:14:18 crc kubenswrapper[4698]: I1014 10:14:18.761078 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/73499f8b-9988-4c5a-a049-e0f842b02370-ovsdbserver-sb\") pod \"dnsmasq-dns-5c5cc7c5ff-lj2n6\" (UID: \"73499f8b-9988-4c5a-a049-e0f842b02370\") " pod="openstack/dnsmasq-dns-5c5cc7c5ff-lj2n6" Oct 14 10:14:18 crc kubenswrapper[4698]: I1014 10:14:18.761101 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/29fd1594-5c07-4283-bcb2-6ac29907c35c-fernet-keys\") pod \"keystone-bootstrap-c9bp9\" (UID: \"29fd1594-5c07-4283-bcb2-6ac29907c35c\") " pod="openstack/keystone-bootstrap-c9bp9" Oct 14 10:14:18 crc kubenswrapper[4698]: I1014 10:14:18.761129 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/29fd1594-5c07-4283-bcb2-6ac29907c35c-scripts\") pod \"keystone-bootstrap-c9bp9\" (UID: \"29fd1594-5c07-4283-bcb2-6ac29907c35c\") " pod="openstack/keystone-bootstrap-c9bp9" Oct 14 10:14:18 crc kubenswrapper[4698]: I1014 10:14:18.761160 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sx9fb\" (UniqueName: \"kubernetes.io/projected/29fd1594-5c07-4283-bcb2-6ac29907c35c-kube-api-access-sx9fb\") pod \"keystone-bootstrap-c9bp9\" (UID: \"29fd1594-5c07-4283-bcb2-6ac29907c35c\") " pod="openstack/keystone-bootstrap-c9bp9" Oct 14 10:14:18 crc kubenswrapper[4698]: I1014 10:14:18.769886 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-6f9d769b87-j9282" Oct 14 10:14:18 crc kubenswrapper[4698]: I1014 10:14:18.786193 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"horizon-config-data" Oct 14 10:14:18 crc kubenswrapper[4698]: I1014 10:14:18.786421 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"horizon-horizon-dockercfg-5r56x" Oct 14 10:14:18 crc kubenswrapper[4698]: I1014 10:14:18.786534 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"horizon" Oct 14 10:14:18 crc kubenswrapper[4698]: I1014 10:14:18.786663 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"horizon-scripts" Oct 14 10:14:18 crc kubenswrapper[4698]: I1014 10:14:18.790893 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/29fd1594-5c07-4283-bcb2-6ac29907c35c-config-data\") pod \"keystone-bootstrap-c9bp9\" (UID: \"29fd1594-5c07-4283-bcb2-6ac29907c35c\") " pod="openstack/keystone-bootstrap-c9bp9" Oct 14 10:14:18 crc kubenswrapper[4698]: I1014 10:14:18.791480 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/29fd1594-5c07-4283-bcb2-6ac29907c35c-combined-ca-bundle\") pod \"keystone-bootstrap-c9bp9\" (UID: \"29fd1594-5c07-4283-bcb2-6ac29907c35c\") " pod="openstack/keystone-bootstrap-c9bp9" Oct 14 10:14:18 crc kubenswrapper[4698]: I1014 10:14:18.791502 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-6f9d769b87-j9282"] Oct 14 10:14:18 crc kubenswrapper[4698]: I1014 10:14:18.792007 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/29fd1594-5c07-4283-bcb2-6ac29907c35c-fernet-keys\") pod \"keystone-bootstrap-c9bp9\" (UID: \"29fd1594-5c07-4283-bcb2-6ac29907c35c\") " pod="openstack/keystone-bootstrap-c9bp9" Oct 14 
10:14:18 crc kubenswrapper[4698]: I1014 10:14:18.808414 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/29fd1594-5c07-4283-bcb2-6ac29907c35c-scripts\") pod \"keystone-bootstrap-c9bp9\" (UID: \"29fd1594-5c07-4283-bcb2-6ac29907c35c\") " pod="openstack/keystone-bootstrap-c9bp9" Oct 14 10:14:18 crc kubenswrapper[4698]: I1014 10:14:18.830534 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/29fd1594-5c07-4283-bcb2-6ac29907c35c-credential-keys\") pod \"keystone-bootstrap-c9bp9\" (UID: \"29fd1594-5c07-4283-bcb2-6ac29907c35c\") " pod="openstack/keystone-bootstrap-c9bp9" Oct 14 10:14:18 crc kubenswrapper[4698]: I1014 10:14:18.840887 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sx9fb\" (UniqueName: \"kubernetes.io/projected/29fd1594-5c07-4283-bcb2-6ac29907c35c-kube-api-access-sx9fb\") pod \"keystone-bootstrap-c9bp9\" (UID: \"29fd1594-5c07-4283-bcb2-6ac29907c35c\") " pod="openstack/keystone-bootstrap-c9bp9" Oct 14 10:14:18 crc kubenswrapper[4698]: I1014 10:14:18.863843 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/73499f8b-9988-4c5a-a049-e0f842b02370-ovsdbserver-sb\") pod \"dnsmasq-dns-5c5cc7c5ff-lj2n6\" (UID: \"73499f8b-9988-4c5a-a049-e0f842b02370\") " pod="openstack/dnsmasq-dns-5c5cc7c5ff-lj2n6" Oct 14 10:14:18 crc kubenswrapper[4698]: I1014 10:14:18.863934 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f4f888e1-886b-4ab1-8c5d-e0894bf1e065-config-data\") pod \"horizon-6f9d769b87-j9282\" (UID: \"f4f888e1-886b-4ab1-8c5d-e0894bf1e065\") " pod="openstack/horizon-6f9d769b87-j9282" Oct 14 10:14:18 crc kubenswrapper[4698]: I1014 10:14:18.863959 4698 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wwxf4\" (UniqueName: \"kubernetes.io/projected/f4f888e1-886b-4ab1-8c5d-e0894bf1e065-kube-api-access-wwxf4\") pod \"horizon-6f9d769b87-j9282\" (UID: \"f4f888e1-886b-4ab1-8c5d-e0894bf1e065\") " pod="openstack/horizon-6f9d769b87-j9282" Oct 14 10:14:18 crc kubenswrapper[4698]: I1014 10:14:18.864030 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f4f888e1-886b-4ab1-8c5d-e0894bf1e065-scripts\") pod \"horizon-6f9d769b87-j9282\" (UID: \"f4f888e1-886b-4ab1-8c5d-e0894bf1e065\") " pod="openstack/horizon-6f9d769b87-j9282" Oct 14 10:14:18 crc kubenswrapper[4698]: I1014 10:14:18.864056 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f4f888e1-886b-4ab1-8c5d-e0894bf1e065-logs\") pod \"horizon-6f9d769b87-j9282\" (UID: \"f4f888e1-886b-4ab1-8c5d-e0894bf1e065\") " pod="openstack/horizon-6f9d769b87-j9282" Oct 14 10:14:18 crc kubenswrapper[4698]: I1014 10:14:18.864103 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/73499f8b-9988-4c5a-a049-e0f842b02370-ovsdbserver-nb\") pod \"dnsmasq-dns-5c5cc7c5ff-lj2n6\" (UID: \"73499f8b-9988-4c5a-a049-e0f842b02370\") " pod="openstack/dnsmasq-dns-5c5cc7c5ff-lj2n6" Oct 14 10:14:18 crc kubenswrapper[4698]: I1014 10:14:18.864127 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xm48k\" (UniqueName: \"kubernetes.io/projected/73499f8b-9988-4c5a-a049-e0f842b02370-kube-api-access-xm48k\") pod \"dnsmasq-dns-5c5cc7c5ff-lj2n6\" (UID: \"73499f8b-9988-4c5a-a049-e0f842b02370\") " pod="openstack/dnsmasq-dns-5c5cc7c5ff-lj2n6" Oct 14 10:14:18 crc kubenswrapper[4698]: I1014 10:14:18.864166 4698 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/f4f888e1-886b-4ab1-8c5d-e0894bf1e065-horizon-secret-key\") pod \"horizon-6f9d769b87-j9282\" (UID: \"f4f888e1-886b-4ab1-8c5d-e0894bf1e065\") " pod="openstack/horizon-6f9d769b87-j9282" Oct 14 10:14:18 crc kubenswrapper[4698]: I1014 10:14:18.864218 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/73499f8b-9988-4c5a-a049-e0f842b02370-config\") pod \"dnsmasq-dns-5c5cc7c5ff-lj2n6\" (UID: \"73499f8b-9988-4c5a-a049-e0f842b02370\") " pod="openstack/dnsmasq-dns-5c5cc7c5ff-lj2n6" Oct 14 10:14:18 crc kubenswrapper[4698]: I1014 10:14:18.864243 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/73499f8b-9988-4c5a-a049-e0f842b02370-dns-svc\") pod \"dnsmasq-dns-5c5cc7c5ff-lj2n6\" (UID: \"73499f8b-9988-4c5a-a049-e0f842b02370\") " pod="openstack/dnsmasq-dns-5c5cc7c5ff-lj2n6" Oct 14 10:14:18 crc kubenswrapper[4698]: I1014 10:14:18.864265 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/73499f8b-9988-4c5a-a049-e0f842b02370-dns-swift-storage-0\") pod \"dnsmasq-dns-5c5cc7c5ff-lj2n6\" (UID: \"73499f8b-9988-4c5a-a049-e0f842b02370\") " pod="openstack/dnsmasq-dns-5c5cc7c5ff-lj2n6" Oct 14 10:14:18 crc kubenswrapper[4698]: I1014 10:14:18.865166 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/73499f8b-9988-4c5a-a049-e0f842b02370-dns-swift-storage-0\") pod \"dnsmasq-dns-5c5cc7c5ff-lj2n6\" (UID: \"73499f8b-9988-4c5a-a049-e0f842b02370\") " pod="openstack/dnsmasq-dns-5c5cc7c5ff-lj2n6" Oct 14 10:14:18 crc kubenswrapper[4698]: I1014 10:14:18.865870 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/73499f8b-9988-4c5a-a049-e0f842b02370-ovsdbserver-sb\") pod \"dnsmasq-dns-5c5cc7c5ff-lj2n6\" (UID: \"73499f8b-9988-4c5a-a049-e0f842b02370\") " pod="openstack/dnsmasq-dns-5c5cc7c5ff-lj2n6" Oct 14 10:14:18 crc kubenswrapper[4698]: I1014 10:14:18.866885 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/73499f8b-9988-4c5a-a049-e0f842b02370-ovsdbserver-nb\") pod \"dnsmasq-dns-5c5cc7c5ff-lj2n6\" (UID: \"73499f8b-9988-4c5a-a049-e0f842b02370\") " pod="openstack/dnsmasq-dns-5c5cc7c5ff-lj2n6" Oct 14 10:14:18 crc kubenswrapper[4698]: I1014 10:14:18.867899 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/73499f8b-9988-4c5a-a049-e0f842b02370-config\") pod \"dnsmasq-dns-5c5cc7c5ff-lj2n6\" (UID: \"73499f8b-9988-4c5a-a049-e0f842b02370\") " pod="openstack/dnsmasq-dns-5c5cc7c5ff-lj2n6" Oct 14 10:14:18 crc kubenswrapper[4698]: I1014 10:14:18.867948 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/73499f8b-9988-4c5a-a049-e0f842b02370-dns-svc\") pod \"dnsmasq-dns-5c5cc7c5ff-lj2n6\" (UID: \"73499f8b-9988-4c5a-a049-e0f842b02370\") " pod="openstack/dnsmasq-dns-5c5cc7c5ff-lj2n6" Oct 14 10:14:18 crc kubenswrapper[4698]: I1014 10:14:18.874503 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-c9bp9" Oct 14 10:14:18 crc kubenswrapper[4698]: I1014 10:14:18.910709 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xm48k\" (UniqueName: \"kubernetes.io/projected/73499f8b-9988-4c5a-a049-e0f842b02370-kube-api-access-xm48k\") pod \"dnsmasq-dns-5c5cc7c5ff-lj2n6\" (UID: \"73499f8b-9988-4c5a-a049-e0f842b02370\") " pod="openstack/dnsmasq-dns-5c5cc7c5ff-lj2n6" Oct 14 10:14:18 crc kubenswrapper[4698]: I1014 10:14:18.918827 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-67d675854f-5dgkt"] Oct 14 10:14:18 crc kubenswrapper[4698]: I1014 10:14:18.920562 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-67d675854f-5dgkt" Oct 14 10:14:18 crc kubenswrapper[4698]: I1014 10:14:18.932371 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5c5cc7c5ff-lj2n6"] Oct 14 10:14:18 crc kubenswrapper[4698]: I1014 10:14:18.933219 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5c5cc7c5ff-lj2n6" Oct 14 10:14:18 crc kubenswrapper[4698]: I1014 10:14:18.965934 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f4f888e1-886b-4ab1-8c5d-e0894bf1e065-scripts\") pod \"horizon-6f9d769b87-j9282\" (UID: \"f4f888e1-886b-4ab1-8c5d-e0894bf1e065\") " pod="openstack/horizon-6f9d769b87-j9282" Oct 14 10:14:18 crc kubenswrapper[4698]: I1014 10:14:18.966216 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f4f888e1-886b-4ab1-8c5d-e0894bf1e065-logs\") pod \"horizon-6f9d769b87-j9282\" (UID: \"f4f888e1-886b-4ab1-8c5d-e0894bf1e065\") " pod="openstack/horizon-6f9d769b87-j9282" Oct 14 10:14:18 crc kubenswrapper[4698]: I1014 10:14:18.966331 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/f4f888e1-886b-4ab1-8c5d-e0894bf1e065-horizon-secret-key\") pod \"horizon-6f9d769b87-j9282\" (UID: \"f4f888e1-886b-4ab1-8c5d-e0894bf1e065\") " pod="openstack/horizon-6f9d769b87-j9282" Oct 14 10:14:18 crc kubenswrapper[4698]: I1014 10:14:18.966458 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f4f888e1-886b-4ab1-8c5d-e0894bf1e065-config-data\") pod \"horizon-6f9d769b87-j9282\" (UID: \"f4f888e1-886b-4ab1-8c5d-e0894bf1e065\") " pod="openstack/horizon-6f9d769b87-j9282" Oct 14 10:14:18 crc kubenswrapper[4698]: I1014 10:14:18.966565 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wwxf4\" (UniqueName: \"kubernetes.io/projected/f4f888e1-886b-4ab1-8c5d-e0894bf1e065-kube-api-access-wwxf4\") pod \"horizon-6f9d769b87-j9282\" (UID: \"f4f888e1-886b-4ab1-8c5d-e0894bf1e065\") " pod="openstack/horizon-6f9d769b87-j9282" Oct 14 10:14:18 crc kubenswrapper[4698]: 
I1014 10:14:18.967503 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f4f888e1-886b-4ab1-8c5d-e0894bf1e065-scripts\") pod \"horizon-6f9d769b87-j9282\" (UID: \"f4f888e1-886b-4ab1-8c5d-e0894bf1e065\") " pod="openstack/horizon-6f9d769b87-j9282" Oct 14 10:14:18 crc kubenswrapper[4698]: I1014 10:14:18.967834 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f4f888e1-886b-4ab1-8c5d-e0894bf1e065-logs\") pod \"horizon-6f9d769b87-j9282\" (UID: \"f4f888e1-886b-4ab1-8c5d-e0894bf1e065\") " pod="openstack/horizon-6f9d769b87-j9282" Oct 14 10:14:18 crc kubenswrapper[4698]: I1014 10:14:18.971934 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f4f888e1-886b-4ab1-8c5d-e0894bf1e065-config-data\") pod \"horizon-6f9d769b87-j9282\" (UID: \"f4f888e1-886b-4ab1-8c5d-e0894bf1e065\") " pod="openstack/horizon-6f9d769b87-j9282" Oct 14 10:14:18 crc kubenswrapper[4698]: I1014 10:14:18.978144 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/f4f888e1-886b-4ab1-8c5d-e0894bf1e065-horizon-secret-key\") pod \"horizon-6f9d769b87-j9282\" (UID: \"f4f888e1-886b-4ab1-8c5d-e0894bf1e065\") " pod="openstack/horizon-6f9d769b87-j9282" Oct 14 10:14:19 crc kubenswrapper[4698]: I1014 10:14:19.002733 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-67d675854f-5dgkt"] Oct 14 10:14:19 crc kubenswrapper[4698]: I1014 10:14:19.002925 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wwxf4\" (UniqueName: \"kubernetes.io/projected/f4f888e1-886b-4ab1-8c5d-e0894bf1e065-kube-api-access-wwxf4\") pod \"horizon-6f9d769b87-j9282\" (UID: \"f4f888e1-886b-4ab1-8c5d-e0894bf1e065\") " pod="openstack/horizon-6f9d769b87-j9282" Oct 14 10:14:19 crc 
kubenswrapper[4698]: I1014 10:14:19.072306 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/367b799b-362f-491f-8bb4-58d617a09769-horizon-secret-key\") pod \"horizon-67d675854f-5dgkt\" (UID: \"367b799b-362f-491f-8bb4-58d617a09769\") " pod="openstack/horizon-67d675854f-5dgkt" Oct 14 10:14:19 crc kubenswrapper[4698]: I1014 10:14:19.072391 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/367b799b-362f-491f-8bb4-58d617a09769-config-data\") pod \"horizon-67d675854f-5dgkt\" (UID: \"367b799b-362f-491f-8bb4-58d617a09769\") " pod="openstack/horizon-67d675854f-5dgkt" Oct 14 10:14:19 crc kubenswrapper[4698]: I1014 10:14:19.072442 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/367b799b-362f-491f-8bb4-58d617a09769-scripts\") pod \"horizon-67d675854f-5dgkt\" (UID: \"367b799b-362f-491f-8bb4-58d617a09769\") " pod="openstack/horizon-67d675854f-5dgkt" Oct 14 10:14:19 crc kubenswrapper[4698]: I1014 10:14:19.072460 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tgzl9\" (UniqueName: \"kubernetes.io/projected/367b799b-362f-491f-8bb4-58d617a09769-kube-api-access-tgzl9\") pod \"horizon-67d675854f-5dgkt\" (UID: \"367b799b-362f-491f-8bb4-58d617a09769\") " pod="openstack/horizon-67d675854f-5dgkt" Oct 14 10:14:19 crc kubenswrapper[4698]: I1014 10:14:19.072477 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/367b799b-362f-491f-8bb4-58d617a09769-logs\") pod \"horizon-67d675854f-5dgkt\" (UID: \"367b799b-362f-491f-8bb4-58d617a09769\") " pod="openstack/horizon-67d675854f-5dgkt" Oct 14 10:14:19 crc kubenswrapper[4698]: 
I1014 10:14:19.084348 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-sync-2qdqz"] Oct 14 10:14:19 crc kubenswrapper[4698]: I1014 10:14:19.085781 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-2qdqz" Oct 14 10:14:19 crc kubenswrapper[4698]: I1014 10:14:19.087086 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-8b5c85b87-f5kv7"] Oct 14 10:14:19 crc kubenswrapper[4698]: I1014 10:14:19.088671 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8b5c85b87-f5kv7" Oct 14 10:14:19 crc kubenswrapper[4698]: I1014 10:14:19.089961 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-nfpmv" Oct 14 10:14:19 crc kubenswrapper[4698]: I1014 10:14:19.090163 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Oct 14 10:14:19 crc kubenswrapper[4698]: I1014 10:14:19.090269 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Oct 14 10:14:19 crc kubenswrapper[4698]: I1014 10:14:19.110838 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Oct 14 10:14:19 crc kubenswrapper[4698]: I1014 10:14:19.112921 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Oct 14 10:14:19 crc kubenswrapper[4698]: I1014 10:14:19.116724 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Oct 14 10:14:19 crc kubenswrapper[4698]: I1014 10:14:19.134451 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Oct 14 10:14:19 crc kubenswrapper[4698]: I1014 10:14:19.177909 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-6f9d769b87-j9282" Oct 14 10:14:19 crc kubenswrapper[4698]: I1014 10:14:19.198172 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-77585f5f8c-n6zt9" Oct 14 10:14:19 crc kubenswrapper[4698]: I1014 10:14:19.205394 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/367b799b-362f-491f-8bb4-58d617a09769-config-data\") pod \"horizon-67d675854f-5dgkt\" (UID: \"367b799b-362f-491f-8bb4-58d617a09769\") " pod="openstack/horizon-67d675854f-5dgkt" Oct 14 10:14:19 crc kubenswrapper[4698]: I1014 10:14:19.205742 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9d9ba4f9-bf43-4076-ab3d-9ca34bcbed4e-log-httpd\") pod \"ceilometer-0\" (UID: \"9d9ba4f9-bf43-4076-ab3d-9ca34bcbed4e\") " pod="openstack/ceilometer-0" Oct 14 10:14:19 crc kubenswrapper[4698]: I1014 10:14:19.205891 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/19d4fb38-f09b-4383-adfc-12bb06107bfb-combined-ca-bundle\") pod \"placement-db-sync-2qdqz\" (UID: \"19d4fb38-f09b-4383-adfc-12bb06107bfb\") " pod="openstack/placement-db-sync-2qdqz" Oct 14 10:14:19 crc kubenswrapper[4698]: I1014 10:14:19.205949 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ca4258ec-6a3b-414c-9556-4ce7c99349bd-dns-swift-storage-0\") pod \"dnsmasq-dns-8b5c85b87-f5kv7\" (UID: \"ca4258ec-6a3b-414c-9556-4ce7c99349bd\") " pod="openstack/dnsmasq-dns-8b5c85b87-f5kv7" Oct 14 10:14:19 crc kubenswrapper[4698]: I1014 10:14:19.206114 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" 
(UniqueName: \"kubernetes.io/configmap/ca4258ec-6a3b-414c-9556-4ce7c99349bd-ovsdbserver-nb\") pod \"dnsmasq-dns-8b5c85b87-f5kv7\" (UID: \"ca4258ec-6a3b-414c-9556-4ce7c99349bd\") " pod="openstack/dnsmasq-dns-8b5c85b87-f5kv7" Oct 14 10:14:19 crc kubenswrapper[4698]: I1014 10:14:19.206156 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ca4258ec-6a3b-414c-9556-4ce7c99349bd-ovsdbserver-sb\") pod \"dnsmasq-dns-8b5c85b87-f5kv7\" (UID: \"ca4258ec-6a3b-414c-9556-4ce7c99349bd\") " pod="openstack/dnsmasq-dns-8b5c85b87-f5kv7" Oct 14 10:14:19 crc kubenswrapper[4698]: I1014 10:14:19.206186 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/367b799b-362f-491f-8bb4-58d617a09769-scripts\") pod \"horizon-67d675854f-5dgkt\" (UID: \"367b799b-362f-491f-8bb4-58d617a09769\") " pod="openstack/horizon-67d675854f-5dgkt" Oct 14 10:14:19 crc kubenswrapper[4698]: I1014 10:14:19.206259 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/367b799b-362f-491f-8bb4-58d617a09769-logs\") pod \"horizon-67d675854f-5dgkt\" (UID: \"367b799b-362f-491f-8bb4-58d617a09769\") " pod="openstack/horizon-67d675854f-5dgkt" Oct 14 10:14:19 crc kubenswrapper[4698]: I1014 10:14:19.206302 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tgzl9\" (UniqueName: \"kubernetes.io/projected/367b799b-362f-491f-8bb4-58d617a09769-kube-api-access-tgzl9\") pod \"horizon-67d675854f-5dgkt\" (UID: \"367b799b-362f-491f-8bb4-58d617a09769\") " pod="openstack/horizon-67d675854f-5dgkt" Oct 14 10:14:19 crc kubenswrapper[4698]: I1014 10:14:19.206331 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/ca4258ec-6a3b-414c-9556-4ce7c99349bd-dns-svc\") pod \"dnsmasq-dns-8b5c85b87-f5kv7\" (UID: \"ca4258ec-6a3b-414c-9556-4ce7c99349bd\") " pod="openstack/dnsmasq-dns-8b5c85b87-f5kv7" Oct 14 10:14:19 crc kubenswrapper[4698]: I1014 10:14:19.206681 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w9jgp\" (UniqueName: \"kubernetes.io/projected/19d4fb38-f09b-4383-adfc-12bb06107bfb-kube-api-access-w9jgp\") pod \"placement-db-sync-2qdqz\" (UID: \"19d4fb38-f09b-4383-adfc-12bb06107bfb\") " pod="openstack/placement-db-sync-2qdqz" Oct 14 10:14:19 crc kubenswrapper[4698]: I1014 10:14:19.206814 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7bmbk\" (UniqueName: \"kubernetes.io/projected/ca4258ec-6a3b-414c-9556-4ce7c99349bd-kube-api-access-7bmbk\") pod \"dnsmasq-dns-8b5c85b87-f5kv7\" (UID: \"ca4258ec-6a3b-414c-9556-4ce7c99349bd\") " pod="openstack/dnsmasq-dns-8b5c85b87-f5kv7" Oct 14 10:14:19 crc kubenswrapper[4698]: I1014 10:14:19.206850 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2klf5\" (UniqueName: \"kubernetes.io/projected/9d9ba4f9-bf43-4076-ab3d-9ca34bcbed4e-kube-api-access-2klf5\") pod \"ceilometer-0\" (UID: \"9d9ba4f9-bf43-4076-ab3d-9ca34bcbed4e\") " pod="openstack/ceilometer-0" Oct 14 10:14:19 crc kubenswrapper[4698]: I1014 10:14:19.206872 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ca4258ec-6a3b-414c-9556-4ce7c99349bd-config\") pod \"dnsmasq-dns-8b5c85b87-f5kv7\" (UID: \"ca4258ec-6a3b-414c-9556-4ce7c99349bd\") " pod="openstack/dnsmasq-dns-8b5c85b87-f5kv7" Oct 14 10:14:19 crc kubenswrapper[4698]: I1014 10:14:19.206900 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9d9ba4f9-bf43-4076-ab3d-9ca34bcbed4e-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"9d9ba4f9-bf43-4076-ab3d-9ca34bcbed4e\") " pod="openstack/ceilometer-0" Oct 14 10:14:19 crc kubenswrapper[4698]: I1014 10:14:19.206924 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/19d4fb38-f09b-4383-adfc-12bb06107bfb-scripts\") pod \"placement-db-sync-2qdqz\" (UID: \"19d4fb38-f09b-4383-adfc-12bb06107bfb\") " pod="openstack/placement-db-sync-2qdqz" Oct 14 10:14:19 crc kubenswrapper[4698]: I1014 10:14:19.206959 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9d9ba4f9-bf43-4076-ab3d-9ca34bcbed4e-scripts\") pod \"ceilometer-0\" (UID: \"9d9ba4f9-bf43-4076-ab3d-9ca34bcbed4e\") " pod="openstack/ceilometer-0" Oct 14 10:14:19 crc kubenswrapper[4698]: I1014 10:14:19.206995 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9d9ba4f9-bf43-4076-ab3d-9ca34bcbed4e-run-httpd\") pod \"ceilometer-0\" (UID: \"9d9ba4f9-bf43-4076-ab3d-9ca34bcbed4e\") " pod="openstack/ceilometer-0" Oct 14 10:14:19 crc kubenswrapper[4698]: I1014 10:14:19.207076 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/19d4fb38-f09b-4383-adfc-12bb06107bfb-config-data\") pod \"placement-db-sync-2qdqz\" (UID: \"19d4fb38-f09b-4383-adfc-12bb06107bfb\") " pod="openstack/placement-db-sync-2qdqz" Oct 14 10:14:19 crc kubenswrapper[4698]: I1014 10:14:19.207102 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/19d4fb38-f09b-4383-adfc-12bb06107bfb-logs\") pod 
\"placement-db-sync-2qdqz\" (UID: \"19d4fb38-f09b-4383-adfc-12bb06107bfb\") " pod="openstack/placement-db-sync-2qdqz" Oct 14 10:14:19 crc kubenswrapper[4698]: I1014 10:14:19.207146 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/9d9ba4f9-bf43-4076-ab3d-9ca34bcbed4e-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"9d9ba4f9-bf43-4076-ab3d-9ca34bcbed4e\") " pod="openstack/ceilometer-0" Oct 14 10:14:19 crc kubenswrapper[4698]: I1014 10:14:19.207216 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9d9ba4f9-bf43-4076-ab3d-9ca34bcbed4e-config-data\") pod \"ceilometer-0\" (UID: \"9d9ba4f9-bf43-4076-ab3d-9ca34bcbed4e\") " pod="openstack/ceilometer-0" Oct 14 10:14:19 crc kubenswrapper[4698]: I1014 10:14:19.207253 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/367b799b-362f-491f-8bb4-58d617a09769-horizon-secret-key\") pod \"horizon-67d675854f-5dgkt\" (UID: \"367b799b-362f-491f-8bb4-58d617a09769\") " pod="openstack/horizon-67d675854f-5dgkt" Oct 14 10:14:19 crc kubenswrapper[4698]: I1014 10:14:19.210206 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/367b799b-362f-491f-8bb4-58d617a09769-config-data\") pod \"horizon-67d675854f-5dgkt\" (UID: \"367b799b-362f-491f-8bb4-58d617a09769\") " pod="openstack/horizon-67d675854f-5dgkt" Oct 14 10:14:19 crc kubenswrapper[4698]: I1014 10:14:19.211483 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-8b5c85b87-f5kv7"] Oct 14 10:14:19 crc kubenswrapper[4698]: I1014 10:14:19.248304 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/367b799b-362f-491f-8bb4-58d617a09769-logs\") pod \"horizon-67d675854f-5dgkt\" (UID: \"367b799b-362f-491f-8bb4-58d617a09769\") " pod="openstack/horizon-67d675854f-5dgkt" Oct 14 10:14:19 crc kubenswrapper[4698]: I1014 10:14:19.250677 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/367b799b-362f-491f-8bb4-58d617a09769-scripts\") pod \"horizon-67d675854f-5dgkt\" (UID: \"367b799b-362f-491f-8bb4-58d617a09769\") " pod="openstack/horizon-67d675854f-5dgkt" Oct 14 10:14:19 crc kubenswrapper[4698]: I1014 10:14:19.266777 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/367b799b-362f-491f-8bb4-58d617a09769-horizon-secret-key\") pod \"horizon-67d675854f-5dgkt\" (UID: \"367b799b-362f-491f-8bb4-58d617a09769\") " pod="openstack/horizon-67d675854f-5dgkt" Oct 14 10:14:19 crc kubenswrapper[4698]: I1014 10:14:19.317046 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-2qdqz"] Oct 14 10:14:19 crc kubenswrapper[4698]: I1014 10:14:19.317842 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9d379d24-0836-4d45-ae16-4e72b32dff28-ovsdbserver-nb\") pod \"9d379d24-0836-4d45-ae16-4e72b32dff28\" (UID: \"9d379d24-0836-4d45-ae16-4e72b32dff28\") " Oct 14 10:14:19 crc kubenswrapper[4698]: I1014 10:14:19.317907 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9d379d24-0836-4d45-ae16-4e72b32dff28-ovsdbserver-sb\") pod \"9d379d24-0836-4d45-ae16-4e72b32dff28\" (UID: \"9d379d24-0836-4d45-ae16-4e72b32dff28\") " Oct 14 10:14:19 crc kubenswrapper[4698]: I1014 10:14:19.317982 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/9d379d24-0836-4d45-ae16-4e72b32dff28-dns-svc\") pod \"9d379d24-0836-4d45-ae16-4e72b32dff28\" (UID: \"9d379d24-0836-4d45-ae16-4e72b32dff28\") " Oct 14 10:14:19 crc kubenswrapper[4698]: I1014 10:14:19.318074 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/9d379d24-0836-4d45-ae16-4e72b32dff28-dns-swift-storage-0\") pod \"9d379d24-0836-4d45-ae16-4e72b32dff28\" (UID: \"9d379d24-0836-4d45-ae16-4e72b32dff28\") " Oct 14 10:14:19 crc kubenswrapper[4698]: I1014 10:14:19.318112 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d379d24-0836-4d45-ae16-4e72b32dff28-config\") pod \"9d379d24-0836-4d45-ae16-4e72b32dff28\" (UID: \"9d379d24-0836-4d45-ae16-4e72b32dff28\") " Oct 14 10:14:19 crc kubenswrapper[4698]: I1014 10:14:19.318308 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g97tk\" (UniqueName: \"kubernetes.io/projected/9d379d24-0836-4d45-ae16-4e72b32dff28-kube-api-access-g97tk\") pod \"9d379d24-0836-4d45-ae16-4e72b32dff28\" (UID: \"9d379d24-0836-4d45-ae16-4e72b32dff28\") " Oct 14 10:14:19 crc kubenswrapper[4698]: I1014 10:14:19.318661 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9d9ba4f9-bf43-4076-ab3d-9ca34bcbed4e-run-httpd\") pod \"ceilometer-0\" (UID: \"9d9ba4f9-bf43-4076-ab3d-9ca34bcbed4e\") " pod="openstack/ceilometer-0" Oct 14 10:14:19 crc kubenswrapper[4698]: I1014 10:14:19.318736 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/19d4fb38-f09b-4383-adfc-12bb06107bfb-config-data\") pod \"placement-db-sync-2qdqz\" (UID: \"19d4fb38-f09b-4383-adfc-12bb06107bfb\") " pod="openstack/placement-db-sync-2qdqz" Oct 14 10:14:19 crc kubenswrapper[4698]: 
I1014 10:14:19.318752 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/19d4fb38-f09b-4383-adfc-12bb06107bfb-logs\") pod \"placement-db-sync-2qdqz\" (UID: \"19d4fb38-f09b-4383-adfc-12bb06107bfb\") " pod="openstack/placement-db-sync-2qdqz" Oct 14 10:14:19 crc kubenswrapper[4698]: I1014 10:14:19.318823 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/9d9ba4f9-bf43-4076-ab3d-9ca34bcbed4e-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"9d9ba4f9-bf43-4076-ab3d-9ca34bcbed4e\") " pod="openstack/ceilometer-0" Oct 14 10:14:19 crc kubenswrapper[4698]: I1014 10:14:19.318974 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9d9ba4f9-bf43-4076-ab3d-9ca34bcbed4e-config-data\") pod \"ceilometer-0\" (UID: \"9d9ba4f9-bf43-4076-ab3d-9ca34bcbed4e\") " pod="openstack/ceilometer-0" Oct 14 10:14:19 crc kubenswrapper[4698]: I1014 10:14:19.319073 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9d9ba4f9-bf43-4076-ab3d-9ca34bcbed4e-log-httpd\") pod \"ceilometer-0\" (UID: \"9d9ba4f9-bf43-4076-ab3d-9ca34bcbed4e\") " pod="openstack/ceilometer-0" Oct 14 10:14:19 crc kubenswrapper[4698]: I1014 10:14:19.319158 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/19d4fb38-f09b-4383-adfc-12bb06107bfb-combined-ca-bundle\") pod \"placement-db-sync-2qdqz\" (UID: \"19d4fb38-f09b-4383-adfc-12bb06107bfb\") " pod="openstack/placement-db-sync-2qdqz" Oct 14 10:14:19 crc kubenswrapper[4698]: I1014 10:14:19.319226 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: 
\"kubernetes.io/configmap/ca4258ec-6a3b-414c-9556-4ce7c99349bd-dns-swift-storage-0\") pod \"dnsmasq-dns-8b5c85b87-f5kv7\" (UID: \"ca4258ec-6a3b-414c-9556-4ce7c99349bd\") " pod="openstack/dnsmasq-dns-8b5c85b87-f5kv7" Oct 14 10:14:19 crc kubenswrapper[4698]: I1014 10:14:19.319519 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ca4258ec-6a3b-414c-9556-4ce7c99349bd-ovsdbserver-nb\") pod \"dnsmasq-dns-8b5c85b87-f5kv7\" (UID: \"ca4258ec-6a3b-414c-9556-4ce7c99349bd\") " pod="openstack/dnsmasq-dns-8b5c85b87-f5kv7" Oct 14 10:14:19 crc kubenswrapper[4698]: I1014 10:14:19.319564 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ca4258ec-6a3b-414c-9556-4ce7c99349bd-ovsdbserver-sb\") pod \"dnsmasq-dns-8b5c85b87-f5kv7\" (UID: \"ca4258ec-6a3b-414c-9556-4ce7c99349bd\") " pod="openstack/dnsmasq-dns-8b5c85b87-f5kv7" Oct 14 10:14:19 crc kubenswrapper[4698]: I1014 10:14:19.319592 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ca4258ec-6a3b-414c-9556-4ce7c99349bd-dns-svc\") pod \"dnsmasq-dns-8b5c85b87-f5kv7\" (UID: \"ca4258ec-6a3b-414c-9556-4ce7c99349bd\") " pod="openstack/dnsmasq-dns-8b5c85b87-f5kv7" Oct 14 10:14:19 crc kubenswrapper[4698]: I1014 10:14:19.319657 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w9jgp\" (UniqueName: \"kubernetes.io/projected/19d4fb38-f09b-4383-adfc-12bb06107bfb-kube-api-access-w9jgp\") pod \"placement-db-sync-2qdqz\" (UID: \"19d4fb38-f09b-4383-adfc-12bb06107bfb\") " pod="openstack/placement-db-sync-2qdqz" Oct 14 10:14:19 crc kubenswrapper[4698]: I1014 10:14:19.323275 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7bmbk\" (UniqueName: 
\"kubernetes.io/projected/ca4258ec-6a3b-414c-9556-4ce7c99349bd-kube-api-access-7bmbk\") pod \"dnsmasq-dns-8b5c85b87-f5kv7\" (UID: \"ca4258ec-6a3b-414c-9556-4ce7c99349bd\") " pod="openstack/dnsmasq-dns-8b5c85b87-f5kv7" Oct 14 10:14:19 crc kubenswrapper[4698]: I1014 10:14:19.323356 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2klf5\" (UniqueName: \"kubernetes.io/projected/9d9ba4f9-bf43-4076-ab3d-9ca34bcbed4e-kube-api-access-2klf5\") pod \"ceilometer-0\" (UID: \"9d9ba4f9-bf43-4076-ab3d-9ca34bcbed4e\") " pod="openstack/ceilometer-0" Oct 14 10:14:19 crc kubenswrapper[4698]: I1014 10:14:19.323384 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ca4258ec-6a3b-414c-9556-4ce7c99349bd-config\") pod \"dnsmasq-dns-8b5c85b87-f5kv7\" (UID: \"ca4258ec-6a3b-414c-9556-4ce7c99349bd\") " pod="openstack/dnsmasq-dns-8b5c85b87-f5kv7" Oct 14 10:14:19 crc kubenswrapper[4698]: I1014 10:14:19.323412 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9d9ba4f9-bf43-4076-ab3d-9ca34bcbed4e-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"9d9ba4f9-bf43-4076-ab3d-9ca34bcbed4e\") " pod="openstack/ceilometer-0" Oct 14 10:14:19 crc kubenswrapper[4698]: I1014 10:14:19.323438 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/19d4fb38-f09b-4383-adfc-12bb06107bfb-scripts\") pod \"placement-db-sync-2qdqz\" (UID: \"19d4fb38-f09b-4383-adfc-12bb06107bfb\") " pod="openstack/placement-db-sync-2qdqz" Oct 14 10:14:19 crc kubenswrapper[4698]: I1014 10:14:19.323465 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9d9ba4f9-bf43-4076-ab3d-9ca34bcbed4e-scripts\") pod \"ceilometer-0\" (UID: \"9d9ba4f9-bf43-4076-ab3d-9ca34bcbed4e\") " 
pod="openstack/ceilometer-0" Oct 14 10:14:19 crc kubenswrapper[4698]: I1014 10:14:19.334055 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ca4258ec-6a3b-414c-9556-4ce7c99349bd-config\") pod \"dnsmasq-dns-8b5c85b87-f5kv7\" (UID: \"ca4258ec-6a3b-414c-9556-4ce7c99349bd\") " pod="openstack/dnsmasq-dns-8b5c85b87-f5kv7" Oct 14 10:14:19 crc kubenswrapper[4698]: I1014 10:14:19.335346 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9d9ba4f9-bf43-4076-ab3d-9ca34bcbed4e-config-data\") pod \"ceilometer-0\" (UID: \"9d9ba4f9-bf43-4076-ab3d-9ca34bcbed4e\") " pod="openstack/ceilometer-0" Oct 14 10:14:19 crc kubenswrapper[4698]: I1014 10:14:19.343398 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ca4258ec-6a3b-414c-9556-4ce7c99349bd-ovsdbserver-sb\") pod \"dnsmasq-dns-8b5c85b87-f5kv7\" (UID: \"ca4258ec-6a3b-414c-9556-4ce7c99349bd\") " pod="openstack/dnsmasq-dns-8b5c85b87-f5kv7" Oct 14 10:14:19 crc kubenswrapper[4698]: I1014 10:14:19.344861 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9d9ba4f9-bf43-4076-ab3d-9ca34bcbed4e-run-httpd\") pod \"ceilometer-0\" (UID: \"9d9ba4f9-bf43-4076-ab3d-9ca34bcbed4e\") " pod="openstack/ceilometer-0" Oct 14 10:14:19 crc kubenswrapper[4698]: I1014 10:14:19.344994 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ca4258ec-6a3b-414c-9556-4ce7c99349bd-dns-svc\") pod \"dnsmasq-dns-8b5c85b87-f5kv7\" (UID: \"ca4258ec-6a3b-414c-9556-4ce7c99349bd\") " pod="openstack/dnsmasq-dns-8b5c85b87-f5kv7" Oct 14 10:14:19 crc kubenswrapper[4698]: I1014 10:14:19.347542 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/19d4fb38-f09b-4383-adfc-12bb06107bfb-logs\") pod \"placement-db-sync-2qdqz\" (UID: \"19d4fb38-f09b-4383-adfc-12bb06107bfb\") " pod="openstack/placement-db-sync-2qdqz" Oct 14 10:14:19 crc kubenswrapper[4698]: I1014 10:14:19.338104 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9d9ba4f9-bf43-4076-ab3d-9ca34bcbed4e-log-httpd\") pod \"ceilometer-0\" (UID: \"9d9ba4f9-bf43-4076-ab3d-9ca34bcbed4e\") " pod="openstack/ceilometer-0" Oct 14 10:14:19 crc kubenswrapper[4698]: I1014 10:14:19.356511 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ca4258ec-6a3b-414c-9556-4ce7c99349bd-ovsdbserver-nb\") pod \"dnsmasq-dns-8b5c85b87-f5kv7\" (UID: \"ca4258ec-6a3b-414c-9556-4ce7c99349bd\") " pod="openstack/dnsmasq-dns-8b5c85b87-f5kv7" Oct 14 10:14:19 crc kubenswrapper[4698]: I1014 10:14:19.358256 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tgzl9\" (UniqueName: \"kubernetes.io/projected/367b799b-362f-491f-8bb4-58d617a09769-kube-api-access-tgzl9\") pod \"horizon-67d675854f-5dgkt\" (UID: \"367b799b-362f-491f-8bb4-58d617a09769\") " pod="openstack/horizon-67d675854f-5dgkt" Oct 14 10:14:19 crc kubenswrapper[4698]: I1014 10:14:19.368641 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Oct 14 10:14:19 crc kubenswrapper[4698]: I1014 10:14:19.374593 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9d9ba4f9-bf43-4076-ab3d-9ca34bcbed4e-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"9d9ba4f9-bf43-4076-ab3d-9ca34bcbed4e\") " pod="openstack/ceilometer-0" Oct 14 10:14:19 crc kubenswrapper[4698]: I1014 10:14:19.375010 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/19d4fb38-f09b-4383-adfc-12bb06107bfb-scripts\") pod \"placement-db-sync-2qdqz\" (UID: \"19d4fb38-f09b-4383-adfc-12bb06107bfb\") " pod="openstack/placement-db-sync-2qdqz" Oct 14 10:14:19 crc kubenswrapper[4698]: I1014 10:14:19.377111 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ca4258ec-6a3b-414c-9556-4ce7c99349bd-dns-swift-storage-0\") pod \"dnsmasq-dns-8b5c85b87-f5kv7\" (UID: \"ca4258ec-6a3b-414c-9556-4ce7c99349bd\") " pod="openstack/dnsmasq-dns-8b5c85b87-f5kv7" Oct 14 10:14:19 crc kubenswrapper[4698]: I1014 10:14:19.378567 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9d9ba4f9-bf43-4076-ab3d-9ca34bcbed4e-scripts\") pod \"ceilometer-0\" (UID: \"9d9ba4f9-bf43-4076-ab3d-9ca34bcbed4e\") " pod="openstack/ceilometer-0" Oct 14 10:14:19 crc kubenswrapper[4698]: I1014 10:14:19.380182 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7bmbk\" (UniqueName: \"kubernetes.io/projected/ca4258ec-6a3b-414c-9556-4ce7c99349bd-kube-api-access-7bmbk\") pod \"dnsmasq-dns-8b5c85b87-f5kv7\" (UID: \"ca4258ec-6a3b-414c-9556-4ce7c99349bd\") " pod="openstack/dnsmasq-dns-8b5c85b87-f5kv7" Oct 14 10:14:19 crc kubenswrapper[4698]: I1014 10:14:19.382368 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w9jgp\" (UniqueName: \"kubernetes.io/projected/19d4fb38-f09b-4383-adfc-12bb06107bfb-kube-api-access-w9jgp\") pod \"placement-db-sync-2qdqz\" (UID: \"19d4fb38-f09b-4383-adfc-12bb06107bfb\") " pod="openstack/placement-db-sync-2qdqz" Oct 14 10:14:19 crc kubenswrapper[4698]: I1014 10:14:19.384262 4698 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-7ff5475cc9-hqhm8" podUID="e7c2867d-8760-4adb-99a7-720da1f16049" containerName="dnsmasq-dns" 
containerID="cri-o://2a2cfe2cdbb9b735a25f0745b597e058d1c920b92cac74fdeb5a0a8ab032df70" gracePeriod=10 Oct 14 10:14:19 crc kubenswrapper[4698]: I1014 10:14:19.384349 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7ff5475cc9-hqhm8" event={"ID":"e7c2867d-8760-4adb-99a7-720da1f16049","Type":"ContainerStarted","Data":"2a2cfe2cdbb9b735a25f0745b597e058d1c920b92cac74fdeb5a0a8ab032df70"} Oct 14 10:14:19 crc kubenswrapper[4698]: I1014 10:14:19.384391 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7ff5475cc9-hqhm8" Oct 14 10:14:19 crc kubenswrapper[4698]: I1014 10:14:19.391634 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d379d24-0836-4d45-ae16-4e72b32dff28-kube-api-access-g97tk" (OuterVolumeSpecName: "kube-api-access-g97tk") pod "9d379d24-0836-4d45-ae16-4e72b32dff28" (UID: "9d379d24-0836-4d45-ae16-4e72b32dff28"). InnerVolumeSpecName "kube-api-access-g97tk". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 10:14:19 crc kubenswrapper[4698]: I1014 10:14:19.394228 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/19d4fb38-f09b-4383-adfc-12bb06107bfb-combined-ca-bundle\") pod \"placement-db-sync-2qdqz\" (UID: \"19d4fb38-f09b-4383-adfc-12bb06107bfb\") " pod="openstack/placement-db-sync-2qdqz" Oct 14 10:14:19 crc kubenswrapper[4698]: I1014 10:14:19.409708 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Oct 14 10:14:19 crc kubenswrapper[4698]: I1014 10:14:19.410845 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/19d4fb38-f09b-4383-adfc-12bb06107bfb-config-data\") pod \"placement-db-sync-2qdqz\" (UID: \"19d4fb38-f09b-4383-adfc-12bb06107bfb\") " pod="openstack/placement-db-sync-2qdqz" Oct 14 10:14:19 crc kubenswrapper[4698]: 
I1014 10:14:19.413146 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/9d9ba4f9-bf43-4076-ab3d-9ca34bcbed4e-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"9d9ba4f9-bf43-4076-ab3d-9ca34bcbed4e\") " pod="openstack/ceilometer-0" Oct 14 10:14:19 crc kubenswrapper[4698]: I1014 10:14:19.413390 4698 generic.go:334] "Generic (PLEG): container finished" podID="9d379d24-0836-4d45-ae16-4e72b32dff28" containerID="b5bddde3cecb54b811e8f62b9cc27542132cab97dfde43705f6de84bf0fb30b6" exitCode=0 Oct 14 10:14:19 crc kubenswrapper[4698]: I1014 10:14:19.413482 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-77585f5f8c-n6zt9" Oct 14 10:14:19 crc kubenswrapper[4698]: E1014 10:14:19.413555 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9d379d24-0836-4d45-ae16-4e72b32dff28" containerName="dnsmasq-dns" Oct 14 10:14:19 crc kubenswrapper[4698]: I1014 10:14:19.413576 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="9d379d24-0836-4d45-ae16-4e72b32dff28" containerName="dnsmasq-dns" Oct 14 10:14:19 crc kubenswrapper[4698]: E1014 10:14:19.413614 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9d379d24-0836-4d45-ae16-4e72b32dff28" containerName="init" Oct 14 10:14:19 crc kubenswrapper[4698]: I1014 10:14:19.413621 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="9d379d24-0836-4d45-ae16-4e72b32dff28" containerName="init" Oct 14 10:14:19 crc kubenswrapper[4698]: I1014 10:14:19.413953 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="9d379d24-0836-4d45-ae16-4e72b32dff28" containerName="dnsmasq-dns" Oct 14 10:14:19 crc kubenswrapper[4698]: I1014 10:14:19.415356 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-77585f5f8c-n6zt9" 
event={"ID":"9d379d24-0836-4d45-ae16-4e72b32dff28","Type":"ContainerDied","Data":"b5bddde3cecb54b811e8f62b9cc27542132cab97dfde43705f6de84bf0fb30b6"} Oct 14 10:14:19 crc kubenswrapper[4698]: I1014 10:14:19.415389 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-77585f5f8c-n6zt9" event={"ID":"9d379d24-0836-4d45-ae16-4e72b32dff28","Type":"ContainerDied","Data":"ce07d020bc798f17ce1cce24d7c04e0381b3382d9543f2f3b1a6138989eacba2"} Oct 14 10:14:19 crc kubenswrapper[4698]: I1014 10:14:19.415407 4698 scope.go:117] "RemoveContainer" containerID="b5bddde3cecb54b811e8f62b9cc27542132cab97dfde43705f6de84bf0fb30b6" Oct 14 10:14:19 crc kubenswrapper[4698]: I1014 10:14:19.415630 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Oct 14 10:14:19 crc kubenswrapper[4698]: I1014 10:14:19.417339 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Oct 14 10:14:19 crc kubenswrapper[4698]: I1014 10:14:19.418166 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Oct 14 10:14:19 crc kubenswrapper[4698]: I1014 10:14:19.418301 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Oct 14 10:14:19 crc kubenswrapper[4698]: I1014 10:14:19.419304 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-x9n8v" Oct 14 10:14:19 crc kubenswrapper[4698]: I1014 10:14:19.423494 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2klf5\" (UniqueName: \"kubernetes.io/projected/9d9ba4f9-bf43-4076-ab3d-9ca34bcbed4e-kube-api-access-2klf5\") pod \"ceilometer-0\" (UID: \"9d9ba4f9-bf43-4076-ab3d-9ca34bcbed4e\") " pod="openstack/ceilometer-0" Oct 14 10:14:19 crc kubenswrapper[4698]: I1014 10:14:19.425478 4698 reconciler_common.go:293] "Volume detached for volume 
\"kube-api-access-g97tk\" (UniqueName: \"kubernetes.io/projected/9d379d24-0836-4d45-ae16-4e72b32dff28-kube-api-access-g97tk\") on node \"crc\" DevicePath \"\"" Oct 14 10:14:19 crc kubenswrapper[4698]: I1014 10:14:19.433261 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-67d675854f-5dgkt" Oct 14 10:14:19 crc kubenswrapper[4698]: I1014 10:14:19.442688 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Oct 14 10:14:19 crc kubenswrapper[4698]: I1014 10:14:19.463704 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Oct 14 10:14:19 crc kubenswrapper[4698]: I1014 10:14:19.467779 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d379d24-0836-4d45-ae16-4e72b32dff28-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "9d379d24-0836-4d45-ae16-4e72b32dff28" (UID: "9d379d24-0836-4d45-ae16-4e72b32dff28"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 10:14:19 crc kubenswrapper[4698]: I1014 10:14:19.468589 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Oct 14 10:14:19 crc kubenswrapper[4698]: I1014 10:14:19.482339 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Oct 14 10:14:19 crc kubenswrapper[4698]: I1014 10:14:19.488143 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Oct 14 10:14:19 crc kubenswrapper[4698]: I1014 10:14:19.514517 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-2qdqz" Oct 14 10:14:19 crc kubenswrapper[4698]: I1014 10:14:19.529113 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-7ff5475cc9-hqhm8" podStartSLOduration=3.5290867930000003 podStartE2EDuration="3.529086793s" podCreationTimestamp="2025-10-14 10:14:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 10:14:19.411246605 +0000 UTC m=+1041.108546031" watchObservedRunningTime="2025-10-14 10:14:19.529086793 +0000 UTC m=+1041.226386209" Oct 14 10:14:19 crc kubenswrapper[4698]: I1014 10:14:19.535186 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8b5c85b87-f5kv7" Oct 14 10:14:19 crc kubenswrapper[4698]: I1014 10:14:19.536258 4698 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9d379d24-0836-4d45-ae16-4e72b32dff28-dns-svc\") on node \"crc\" DevicePath \"\"" Oct 14 10:14:19 crc kubenswrapper[4698]: I1014 10:14:19.578098 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d379d24-0836-4d45-ae16-4e72b32dff28-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "9d379d24-0836-4d45-ae16-4e72b32dff28" (UID: "9d379d24-0836-4d45-ae16-4e72b32dff28"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 10:14:19 crc kubenswrapper[4698]: I1014 10:14:19.583967 4698 scope.go:117] "RemoveContainer" containerID="4e3d073cbb28682c8d4e255cbc4cdd742125a0eacb3aa5c38de08ab205f6f1a3" Oct 14 10:14:19 crc kubenswrapper[4698]: W1014 10:14:19.603150 4698 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod29fd1594_5c07_4283_bcb2_6ac29907c35c.slice/crio-1d3a5cc47518a6febb1b197c93d8d765f4ef40af9ba8b18e30b74dcc297eddfa WatchSource:0}: Error finding container 1d3a5cc47518a6febb1b197c93d8d765f4ef40af9ba8b18e30b74dcc297eddfa: Status 404 returned error can't find the container with id 1d3a5cc47518a6febb1b197c93d8d765f4ef40af9ba8b18e30b74dcc297eddfa Oct 14 10:14:19 crc kubenswrapper[4698]: I1014 10:14:19.603241 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d379d24-0836-4d45-ae16-4e72b32dff28-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "9d379d24-0836-4d45-ae16-4e72b32dff28" (UID: "9d379d24-0836-4d45-ae16-4e72b32dff28"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 10:14:19 crc kubenswrapper[4698]: I1014 10:14:19.603757 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d379d24-0836-4d45-ae16-4e72b32dff28-config" (OuterVolumeSpecName: "config") pod "9d379d24-0836-4d45-ae16-4e72b32dff28" (UID: "9d379d24-0836-4d45-ae16-4e72b32dff28"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 10:14:19 crc kubenswrapper[4698]: I1014 10:14:19.611168 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d379d24-0836-4d45-ae16-4e72b32dff28-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "9d379d24-0836-4d45-ae16-4e72b32dff28" (UID: "9d379d24-0836-4d45-ae16-4e72b32dff28"). 
InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 10:14:19 crc kubenswrapper[4698]: I1014 10:14:19.614042 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-c9bp9"] Oct 14 10:14:19 crc kubenswrapper[4698]: I1014 10:14:19.638393 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Oct 14 10:14:19 crc kubenswrapper[4698]: I1014 10:14:19.641146 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nf42c\" (UniqueName: \"kubernetes.io/projected/535a7b3a-eda4-4540-b96a-e3bac0fb3a16-kube-api-access-nf42c\") pod \"glance-default-external-api-0\" (UID: \"535a7b3a-eda4-4540-b96a-e3bac0fb3a16\") " pod="openstack/glance-default-external-api-0" Oct 14 10:14:19 crc kubenswrapper[4698]: I1014 10:14:19.641206 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/535a7b3a-eda4-4540-b96a-e3bac0fb3a16-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"535a7b3a-eda4-4540-b96a-e3bac0fb3a16\") " pod="openstack/glance-default-external-api-0" Oct 14 10:14:19 crc kubenswrapper[4698]: I1014 10:14:19.641226 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f5fmg\" (UniqueName: \"kubernetes.io/projected/eb6f43ee-215b-4fdb-b298-7df52d8ebd92-kube-api-access-f5fmg\") pod \"glance-default-internal-api-0\" (UID: \"eb6f43ee-215b-4fdb-b298-7df52d8ebd92\") " pod="openstack/glance-default-internal-api-0" Oct 14 10:14:19 crc kubenswrapper[4698]: I1014 10:14:19.641247 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/535a7b3a-eda4-4540-b96a-e3bac0fb3a16-scripts\") pod \"glance-default-external-api-0\" (UID: 
\"535a7b3a-eda4-4540-b96a-e3bac0fb3a16\") " pod="openstack/glance-default-external-api-0" Oct 14 10:14:19 crc kubenswrapper[4698]: I1014 10:14:19.641271 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eb6f43ee-215b-4fdb-b298-7df52d8ebd92-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"eb6f43ee-215b-4fdb-b298-7df52d8ebd92\") " pod="openstack/glance-default-internal-api-0" Oct 14 10:14:19 crc kubenswrapper[4698]: I1014 10:14:19.641326 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/eb6f43ee-215b-4fdb-b298-7df52d8ebd92-logs\") pod \"glance-default-internal-api-0\" (UID: \"eb6f43ee-215b-4fdb-b298-7df52d8ebd92\") " pod="openstack/glance-default-internal-api-0" Oct 14 10:14:19 crc kubenswrapper[4698]: I1014 10:14:19.641353 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/535a7b3a-eda4-4540-b96a-e3bac0fb3a16-ceph\") pod \"glance-default-external-api-0\" (UID: \"535a7b3a-eda4-4540-b96a-e3bac0fb3a16\") " pod="openstack/glance-default-external-api-0" Oct 14 10:14:19 crc kubenswrapper[4698]: I1014 10:14:19.641374 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eb6f43ee-215b-4fdb-b298-7df52d8ebd92-config-data\") pod \"glance-default-internal-api-0\" (UID: \"eb6f43ee-215b-4fdb-b298-7df52d8ebd92\") " pod="openstack/glance-default-internal-api-0" Oct 14 10:14:19 crc kubenswrapper[4698]: I1014 10:14:19.641389 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/535a7b3a-eda4-4540-b96a-e3bac0fb3a16-logs\") pod \"glance-default-external-api-0\" (UID: 
\"535a7b3a-eda4-4540-b96a-e3bac0fb3a16\") " pod="openstack/glance-default-external-api-0" Oct 14 10:14:19 crc kubenswrapper[4698]: I1014 10:14:19.641412 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/eb6f43ee-215b-4fdb-b298-7df52d8ebd92-scripts\") pod \"glance-default-internal-api-0\" (UID: \"eb6f43ee-215b-4fdb-b298-7df52d8ebd92\") " pod="openstack/glance-default-internal-api-0" Oct 14 10:14:19 crc kubenswrapper[4698]: I1014 10:14:19.641431 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/535a7b3a-eda4-4540-b96a-e3bac0fb3a16-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"535a7b3a-eda4-4540-b96a-e3bac0fb3a16\") " pod="openstack/glance-default-external-api-0" Oct 14 10:14:19 crc kubenswrapper[4698]: I1014 10:14:19.641456 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/eb6f43ee-215b-4fdb-b298-7df52d8ebd92-ceph\") pod \"glance-default-internal-api-0\" (UID: \"eb6f43ee-215b-4fdb-b298-7df52d8ebd92\") " pod="openstack/glance-default-internal-api-0" Oct 14 10:14:19 crc kubenswrapper[4698]: I1014 10:14:19.641494 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-internal-api-0\" (UID: \"eb6f43ee-215b-4fdb-b298-7df52d8ebd92\") " pod="openstack/glance-default-internal-api-0" Oct 14 10:14:19 crc kubenswrapper[4698]: I1014 10:14:19.641510 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/eb6f43ee-215b-4fdb-b298-7df52d8ebd92-httpd-run\") pod \"glance-default-internal-api-0\" (UID: 
\"eb6f43ee-215b-4fdb-b298-7df52d8ebd92\") " pod="openstack/glance-default-internal-api-0" Oct 14 10:14:19 crc kubenswrapper[4698]: I1014 10:14:19.641537 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/535a7b3a-eda4-4540-b96a-e3bac0fb3a16-config-data\") pod \"glance-default-external-api-0\" (UID: \"535a7b3a-eda4-4540-b96a-e3bac0fb3a16\") " pod="openstack/glance-default-external-api-0" Oct 14 10:14:19 crc kubenswrapper[4698]: I1014 10:14:19.641587 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-external-api-0\" (UID: \"535a7b3a-eda4-4540-b96a-e3bac0fb3a16\") " pod="openstack/glance-default-external-api-0" Oct 14 10:14:19 crc kubenswrapper[4698]: I1014 10:14:19.641639 4698 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9d379d24-0836-4d45-ae16-4e72b32dff28-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Oct 14 10:14:19 crc kubenswrapper[4698]: I1014 10:14:19.641649 4698 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9d379d24-0836-4d45-ae16-4e72b32dff28-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Oct 14 10:14:19 crc kubenswrapper[4698]: I1014 10:14:19.641658 4698 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/9d379d24-0836-4d45-ae16-4e72b32dff28-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Oct 14 10:14:19 crc kubenswrapper[4698]: I1014 10:14:19.641668 4698 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d379d24-0836-4d45-ae16-4e72b32dff28-config\") on node \"crc\" DevicePath \"\"" Oct 14 10:14:19 crc kubenswrapper[4698]: I1014 
10:14:19.724102 4698 scope.go:117] "RemoveContainer" containerID="b5bddde3cecb54b811e8f62b9cc27542132cab97dfde43705f6de84bf0fb30b6" Oct 14 10:14:19 crc kubenswrapper[4698]: E1014 10:14:19.730844 4698 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b5bddde3cecb54b811e8f62b9cc27542132cab97dfde43705f6de84bf0fb30b6\": container with ID starting with b5bddde3cecb54b811e8f62b9cc27542132cab97dfde43705f6de84bf0fb30b6 not found: ID does not exist" containerID="b5bddde3cecb54b811e8f62b9cc27542132cab97dfde43705f6de84bf0fb30b6" Oct 14 10:14:19 crc kubenswrapper[4698]: I1014 10:14:19.730899 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b5bddde3cecb54b811e8f62b9cc27542132cab97dfde43705f6de84bf0fb30b6"} err="failed to get container status \"b5bddde3cecb54b811e8f62b9cc27542132cab97dfde43705f6de84bf0fb30b6\": rpc error: code = NotFound desc = could not find container \"b5bddde3cecb54b811e8f62b9cc27542132cab97dfde43705f6de84bf0fb30b6\": container with ID starting with b5bddde3cecb54b811e8f62b9cc27542132cab97dfde43705f6de84bf0fb30b6 not found: ID does not exist" Oct 14 10:14:19 crc kubenswrapper[4698]: I1014 10:14:19.730954 4698 scope.go:117] "RemoveContainer" containerID="4e3d073cbb28682c8d4e255cbc4cdd742125a0eacb3aa5c38de08ab205f6f1a3" Oct 14 10:14:19 crc kubenswrapper[4698]: E1014 10:14:19.731968 4698 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4e3d073cbb28682c8d4e255cbc4cdd742125a0eacb3aa5c38de08ab205f6f1a3\": container with ID starting with 4e3d073cbb28682c8d4e255cbc4cdd742125a0eacb3aa5c38de08ab205f6f1a3 not found: ID does not exist" containerID="4e3d073cbb28682c8d4e255cbc4cdd742125a0eacb3aa5c38de08ab205f6f1a3" Oct 14 10:14:19 crc kubenswrapper[4698]: I1014 10:14:19.731992 4698 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"4e3d073cbb28682c8d4e255cbc4cdd742125a0eacb3aa5c38de08ab205f6f1a3"} err="failed to get container status \"4e3d073cbb28682c8d4e255cbc4cdd742125a0eacb3aa5c38de08ab205f6f1a3\": rpc error: code = NotFound desc = could not find container \"4e3d073cbb28682c8d4e255cbc4cdd742125a0eacb3aa5c38de08ab205f6f1a3\": container with ID starting with 4e3d073cbb28682c8d4e255cbc4cdd742125a0eacb3aa5c38de08ab205f6f1a3 not found: ID does not exist" Oct 14 10:14:19 crc kubenswrapper[4698]: I1014 10:14:19.753340 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eb6f43ee-215b-4fdb-b298-7df52d8ebd92-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"eb6f43ee-215b-4fdb-b298-7df52d8ebd92\") " pod="openstack/glance-default-internal-api-0" Oct 14 10:14:19 crc kubenswrapper[4698]: I1014 10:14:19.753491 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/eb6f43ee-215b-4fdb-b298-7df52d8ebd92-logs\") pod \"glance-default-internal-api-0\" (UID: \"eb6f43ee-215b-4fdb-b298-7df52d8ebd92\") " pod="openstack/glance-default-internal-api-0" Oct 14 10:14:19 crc kubenswrapper[4698]: I1014 10:14:19.753547 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/535a7b3a-eda4-4540-b96a-e3bac0fb3a16-ceph\") pod \"glance-default-external-api-0\" (UID: \"535a7b3a-eda4-4540-b96a-e3bac0fb3a16\") " pod="openstack/glance-default-external-api-0" Oct 14 10:14:19 crc kubenswrapper[4698]: I1014 10:14:19.753584 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eb6f43ee-215b-4fdb-b298-7df52d8ebd92-config-data\") pod \"glance-default-internal-api-0\" (UID: \"eb6f43ee-215b-4fdb-b298-7df52d8ebd92\") " pod="openstack/glance-default-internal-api-0" Oct 14 10:14:19 
crc kubenswrapper[4698]: I1014 10:14:19.753600 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/535a7b3a-eda4-4540-b96a-e3bac0fb3a16-logs\") pod \"glance-default-external-api-0\" (UID: \"535a7b3a-eda4-4540-b96a-e3bac0fb3a16\") " pod="openstack/glance-default-external-api-0" Oct 14 10:14:19 crc kubenswrapper[4698]: I1014 10:14:19.753629 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/eb6f43ee-215b-4fdb-b298-7df52d8ebd92-scripts\") pod \"glance-default-internal-api-0\" (UID: \"eb6f43ee-215b-4fdb-b298-7df52d8ebd92\") " pod="openstack/glance-default-internal-api-0" Oct 14 10:14:19 crc kubenswrapper[4698]: I1014 10:14:19.753657 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/535a7b3a-eda4-4540-b96a-e3bac0fb3a16-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"535a7b3a-eda4-4540-b96a-e3bac0fb3a16\") " pod="openstack/glance-default-external-api-0" Oct 14 10:14:19 crc kubenswrapper[4698]: I1014 10:14:19.753706 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/eb6f43ee-215b-4fdb-b298-7df52d8ebd92-ceph\") pod \"glance-default-internal-api-0\" (UID: \"eb6f43ee-215b-4fdb-b298-7df52d8ebd92\") " pod="openstack/glance-default-internal-api-0" Oct 14 10:14:19 crc kubenswrapper[4698]: I1014 10:14:19.753823 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-internal-api-0\" (UID: \"eb6f43ee-215b-4fdb-b298-7df52d8ebd92\") " pod="openstack/glance-default-internal-api-0" Oct 14 10:14:19 crc kubenswrapper[4698]: I1014 10:14:19.753851 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/eb6f43ee-215b-4fdb-b298-7df52d8ebd92-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"eb6f43ee-215b-4fdb-b298-7df52d8ebd92\") " pod="openstack/glance-default-internal-api-0" Oct 14 10:14:19 crc kubenswrapper[4698]: I1014 10:14:19.754152 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/535a7b3a-eda4-4540-b96a-e3bac0fb3a16-config-data\") pod \"glance-default-external-api-0\" (UID: \"535a7b3a-eda4-4540-b96a-e3bac0fb3a16\") " pod="openstack/glance-default-external-api-0" Oct 14 10:14:19 crc kubenswrapper[4698]: I1014 10:14:19.754186 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-external-api-0\" (UID: \"535a7b3a-eda4-4540-b96a-e3bac0fb3a16\") " pod="openstack/glance-default-external-api-0" Oct 14 10:14:19 crc kubenswrapper[4698]: I1014 10:14:19.754234 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nf42c\" (UniqueName: \"kubernetes.io/projected/535a7b3a-eda4-4540-b96a-e3bac0fb3a16-kube-api-access-nf42c\") pod \"glance-default-external-api-0\" (UID: \"535a7b3a-eda4-4540-b96a-e3bac0fb3a16\") " pod="openstack/glance-default-external-api-0" Oct 14 10:14:19 crc kubenswrapper[4698]: I1014 10:14:19.754277 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/535a7b3a-eda4-4540-b96a-e3bac0fb3a16-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"535a7b3a-eda4-4540-b96a-e3bac0fb3a16\") " pod="openstack/glance-default-external-api-0" Oct 14 10:14:19 crc kubenswrapper[4698]: I1014 10:14:19.754306 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f5fmg\" (UniqueName: 
\"kubernetes.io/projected/eb6f43ee-215b-4fdb-b298-7df52d8ebd92-kube-api-access-f5fmg\") pod \"glance-default-internal-api-0\" (UID: \"eb6f43ee-215b-4fdb-b298-7df52d8ebd92\") " pod="openstack/glance-default-internal-api-0" Oct 14 10:14:19 crc kubenswrapper[4698]: I1014 10:14:19.754339 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/535a7b3a-eda4-4540-b96a-e3bac0fb3a16-scripts\") pod \"glance-default-external-api-0\" (UID: \"535a7b3a-eda4-4540-b96a-e3bac0fb3a16\") " pod="openstack/glance-default-external-api-0" Oct 14 10:14:19 crc kubenswrapper[4698]: I1014 10:14:19.754486 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/535a7b3a-eda4-4540-b96a-e3bac0fb3a16-logs\") pod \"glance-default-external-api-0\" (UID: \"535a7b3a-eda4-4540-b96a-e3bac0fb3a16\") " pod="openstack/glance-default-external-api-0" Oct 14 10:14:19 crc kubenswrapper[4698]: I1014 10:14:19.755705 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/eb6f43ee-215b-4fdb-b298-7df52d8ebd92-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"eb6f43ee-215b-4fdb-b298-7df52d8ebd92\") " pod="openstack/glance-default-internal-api-0" Oct 14 10:14:19 crc kubenswrapper[4698]: I1014 10:14:19.756878 4698 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-external-api-0\" (UID: \"535a7b3a-eda4-4540-b96a-e3bac0fb3a16\") device mount path \"/mnt/openstack/pv09\"" pod="openstack/glance-default-external-api-0" Oct 14 10:14:19 crc kubenswrapper[4698]: I1014 10:14:19.756991 4698 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod 
\"glance-default-internal-api-0\" (UID: \"eb6f43ee-215b-4fdb-b298-7df52d8ebd92\") device mount path \"/mnt/openstack/pv01\"" pod="openstack/glance-default-internal-api-0" Oct 14 10:14:19 crc kubenswrapper[4698]: I1014 10:14:19.759157 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/eb6f43ee-215b-4fdb-b298-7df52d8ebd92-logs\") pod \"glance-default-internal-api-0\" (UID: \"eb6f43ee-215b-4fdb-b298-7df52d8ebd92\") " pod="openstack/glance-default-internal-api-0" Oct 14 10:14:19 crc kubenswrapper[4698]: I1014 10:14:19.759700 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/535a7b3a-eda4-4540-b96a-e3bac0fb3a16-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"535a7b3a-eda4-4540-b96a-e3bac0fb3a16\") " pod="openstack/glance-default-external-api-0" Oct 14 10:14:19 crc kubenswrapper[4698]: I1014 10:14:19.766475 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/eb6f43ee-215b-4fdb-b298-7df52d8ebd92-ceph\") pod \"glance-default-internal-api-0\" (UID: \"eb6f43ee-215b-4fdb-b298-7df52d8ebd92\") " pod="openstack/glance-default-internal-api-0" Oct 14 10:14:19 crc kubenswrapper[4698]: I1014 10:14:19.770977 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eb6f43ee-215b-4fdb-b298-7df52d8ebd92-config-data\") pod \"glance-default-internal-api-0\" (UID: \"eb6f43ee-215b-4fdb-b298-7df52d8ebd92\") " pod="openstack/glance-default-internal-api-0" Oct 14 10:14:19 crc kubenswrapper[4698]: I1014 10:14:19.772976 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/535a7b3a-eda4-4540-b96a-e3bac0fb3a16-config-data\") pod \"glance-default-external-api-0\" (UID: \"535a7b3a-eda4-4540-b96a-e3bac0fb3a16\") " 
pod="openstack/glance-default-external-api-0" Oct 14 10:14:19 crc kubenswrapper[4698]: I1014 10:14:19.774651 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eb6f43ee-215b-4fdb-b298-7df52d8ebd92-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"eb6f43ee-215b-4fdb-b298-7df52d8ebd92\") " pod="openstack/glance-default-internal-api-0" Oct 14 10:14:19 crc kubenswrapper[4698]: I1014 10:14:19.775369 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/eb6f43ee-215b-4fdb-b298-7df52d8ebd92-scripts\") pod \"glance-default-internal-api-0\" (UID: \"eb6f43ee-215b-4fdb-b298-7df52d8ebd92\") " pod="openstack/glance-default-internal-api-0" Oct 14 10:14:19 crc kubenswrapper[4698]: I1014 10:14:19.777044 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/535a7b3a-eda4-4540-b96a-e3bac0fb3a16-ceph\") pod \"glance-default-external-api-0\" (UID: \"535a7b3a-eda4-4540-b96a-e3bac0fb3a16\") " pod="openstack/glance-default-external-api-0" Oct 14 10:14:19 crc kubenswrapper[4698]: I1014 10:14:19.780833 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/535a7b3a-eda4-4540-b96a-e3bac0fb3a16-scripts\") pod \"glance-default-external-api-0\" (UID: \"535a7b3a-eda4-4540-b96a-e3bac0fb3a16\") " pod="openstack/glance-default-external-api-0" Oct 14 10:14:19 crc kubenswrapper[4698]: I1014 10:14:19.780897 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/535a7b3a-eda4-4540-b96a-e3bac0fb3a16-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"535a7b3a-eda4-4540-b96a-e3bac0fb3a16\") " pod="openstack/glance-default-external-api-0" Oct 14 10:14:19 crc kubenswrapper[4698]: I1014 10:14:19.791312 4698 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f5fmg\" (UniqueName: \"kubernetes.io/projected/eb6f43ee-215b-4fdb-b298-7df52d8ebd92-kube-api-access-f5fmg\") pod \"glance-default-internal-api-0\" (UID: \"eb6f43ee-215b-4fdb-b298-7df52d8ebd92\") " pod="openstack/glance-default-internal-api-0" Oct 14 10:14:19 crc kubenswrapper[4698]: I1014 10:14:19.806530 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nf42c\" (UniqueName: \"kubernetes.io/projected/535a7b3a-eda4-4540-b96a-e3bac0fb3a16-kube-api-access-nf42c\") pod \"glance-default-external-api-0\" (UID: \"535a7b3a-eda4-4540-b96a-e3bac0fb3a16\") " pod="openstack/glance-default-external-api-0" Oct 14 10:14:19 crc kubenswrapper[4698]: I1014 10:14:19.819678 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-77585f5f8c-n6zt9"] Oct 14 10:14:19 crc kubenswrapper[4698]: I1014 10:14:19.835475 4698 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-77585f5f8c-n6zt9"] Oct 14 10:14:19 crc kubenswrapper[4698]: I1014 10:14:19.863831 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-internal-api-0\" (UID: \"eb6f43ee-215b-4fdb-b298-7df52d8ebd92\") " pod="openstack/glance-default-internal-api-0" Oct 14 10:14:19 crc kubenswrapper[4698]: I1014 10:14:19.869970 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-external-api-0\" (UID: \"535a7b3a-eda4-4540-b96a-e3bac0fb3a16\") " pod="openstack/glance-default-external-api-0" Oct 14 10:14:19 crc kubenswrapper[4698]: I1014 10:14:19.877639 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5c5cc7c5ff-lj2n6"] Oct 14 10:14:19 crc kubenswrapper[4698]: I1014 10:14:19.936268 
4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-6f9d769b87-j9282"] Oct 14 10:14:20 crc kubenswrapper[4698]: I1014 10:14:20.106189 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Oct 14 10:14:20 crc kubenswrapper[4698]: I1014 10:14:20.123220 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Oct 14 10:14:20 crc kubenswrapper[4698]: I1014 10:14:20.222400 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-67d675854f-5dgkt"] Oct 14 10:14:20 crc kubenswrapper[4698]: I1014 10:14:20.285438 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7ff5475cc9-hqhm8" Oct 14 10:14:20 crc kubenswrapper[4698]: I1014 10:14:20.483060 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-c9bp9" event={"ID":"29fd1594-5c07-4283-bcb2-6ac29907c35c","Type":"ContainerStarted","Data":"1d3a5cc47518a6febb1b197c93d8d765f4ef40af9ba8b18e30b74dcc297eddfa"} Oct 14 10:14:20 crc kubenswrapper[4698]: I1014 10:14:20.483942 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e7c2867d-8760-4adb-99a7-720da1f16049-ovsdbserver-sb\") pod \"e7c2867d-8760-4adb-99a7-720da1f16049\" (UID: \"e7c2867d-8760-4adb-99a7-720da1f16049\") " Oct 14 10:14:20 crc kubenswrapper[4698]: I1014 10:14:20.492493 4698 generic.go:334] "Generic (PLEG): container finished" podID="e7c2867d-8760-4adb-99a7-720da1f16049" containerID="2a2cfe2cdbb9b735a25f0745b597e058d1c920b92cac74fdeb5a0a8ab032df70" exitCode=0 Oct 14 10:14:20 crc kubenswrapper[4698]: I1014 10:14:20.492567 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7ff5475cc9-hqhm8" 
event={"ID":"e7c2867d-8760-4adb-99a7-720da1f16049","Type":"ContainerDied","Data":"2a2cfe2cdbb9b735a25f0745b597e058d1c920b92cac74fdeb5a0a8ab032df70"} Oct 14 10:14:20 crc kubenswrapper[4698]: I1014 10:14:20.492601 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7ff5475cc9-hqhm8" event={"ID":"e7c2867d-8760-4adb-99a7-720da1f16049","Type":"ContainerDied","Data":"e2ec31ce7c0ef29ac280762981ff3b1663be0dc6e3cbd418d186fbcd6acc164b"} Oct 14 10:14:20 crc kubenswrapper[4698]: I1014 10:14:20.492618 4698 scope.go:117] "RemoveContainer" containerID="2a2cfe2cdbb9b735a25f0745b597e058d1c920b92cac74fdeb5a0a8ab032df70" Oct 14 10:14:20 crc kubenswrapper[4698]: I1014 10:14:20.492782 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7ff5475cc9-hqhm8" Oct 14 10:14:20 crc kubenswrapper[4698]: I1014 10:14:20.504866 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lb5mw\" (UniqueName: \"kubernetes.io/projected/e7c2867d-8760-4adb-99a7-720da1f16049-kube-api-access-lb5mw\") pod \"e7c2867d-8760-4adb-99a7-720da1f16049\" (UID: \"e7c2867d-8760-4adb-99a7-720da1f16049\") " Oct 14 10:14:20 crc kubenswrapper[4698]: I1014 10:14:20.504967 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7c2867d-8760-4adb-99a7-720da1f16049-config\") pod \"e7c2867d-8760-4adb-99a7-720da1f16049\" (UID: \"e7c2867d-8760-4adb-99a7-720da1f16049\") " Oct 14 10:14:20 crc kubenswrapper[4698]: I1014 10:14:20.505107 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e7c2867d-8760-4adb-99a7-720da1f16049-dns-svc\") pod \"e7c2867d-8760-4adb-99a7-720da1f16049\" (UID: \"e7c2867d-8760-4adb-99a7-720da1f16049\") " Oct 14 10:14:20 crc kubenswrapper[4698]: I1014 10:14:20.505202 4698 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e7c2867d-8760-4adb-99a7-720da1f16049-dns-swift-storage-0\") pod \"e7c2867d-8760-4adb-99a7-720da1f16049\" (UID: \"e7c2867d-8760-4adb-99a7-720da1f16049\") " Oct 14 10:14:20 crc kubenswrapper[4698]: I1014 10:14:20.505238 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e7c2867d-8760-4adb-99a7-720da1f16049-ovsdbserver-nb\") pod \"e7c2867d-8760-4adb-99a7-720da1f16049\" (UID: \"e7c2867d-8760-4adb-99a7-720da1f16049\") " Oct 14 10:14:20 crc kubenswrapper[4698]: I1014 10:14:20.508982 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-67d675854f-5dgkt" event={"ID":"367b799b-362f-491f-8bb4-58d617a09769","Type":"ContainerStarted","Data":"42f07e78e6918310dce107ccb32db631ecf2ab5609354216158939ee6bf33541"} Oct 14 10:14:20 crc kubenswrapper[4698]: I1014 10:14:20.512636 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6f9d769b87-j9282" event={"ID":"f4f888e1-886b-4ab1-8c5d-e0894bf1e065","Type":"ContainerStarted","Data":"9040d27862c07cb326367535ab9a0c5edd8d0f76cc5bf4b0321ba2e9e6021ed6"} Oct 14 10:14:20 crc kubenswrapper[4698]: I1014 10:14:20.519083 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c5cc7c5ff-lj2n6" event={"ID":"73499f8b-9988-4c5a-a049-e0f842b02370","Type":"ContainerStarted","Data":"422eadce0d259bf047b4fe22f97ac70050da51be6990566f3b50563aec369656"} Oct 14 10:14:20 crc kubenswrapper[4698]: I1014 10:14:20.519276 4698 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5c5cc7c5ff-lj2n6" podUID="73499f8b-9988-4c5a-a049-e0f842b02370" containerName="init" containerID="cri-o://1aee8a8f22bf8e50d8e786138d868a06d355de1ad4845d201ee10c440cad074f" gracePeriod=10 Oct 14 10:14:20 crc kubenswrapper[4698]: I1014 10:14:20.544716 4698 kubelet.go:2428] 
"SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-2qdqz"] Oct 14 10:14:20 crc kubenswrapper[4698]: I1014 10:14:20.545322 4698 scope.go:117] "RemoveContainer" containerID="c282ecb3f12837b972ef9d22eaeb2066b46bf8d37694f65b6d4791b246eeb11d" Oct 14 10:14:20 crc kubenswrapper[4698]: I1014 10:14:20.548170 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7c2867d-8760-4adb-99a7-720da1f16049-kube-api-access-lb5mw" (OuterVolumeSpecName: "kube-api-access-lb5mw") pod "e7c2867d-8760-4adb-99a7-720da1f16049" (UID: "e7c2867d-8760-4adb-99a7-720da1f16049"). InnerVolumeSpecName "kube-api-access-lb5mw". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 10:14:20 crc kubenswrapper[4698]: I1014 10:14:20.618401 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lb5mw\" (UniqueName: \"kubernetes.io/projected/e7c2867d-8760-4adb-99a7-720da1f16049-kube-api-access-lb5mw\") on node \"crc\" DevicePath \"\"" Oct 14 10:14:20 crc kubenswrapper[4698]: I1014 10:14:20.627317 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7c2867d-8760-4adb-99a7-720da1f16049-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "e7c2867d-8760-4adb-99a7-720da1f16049" (UID: "e7c2867d-8760-4adb-99a7-720da1f16049"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 10:14:20 crc kubenswrapper[4698]: I1014 10:14:20.632578 4698 scope.go:117] "RemoveContainer" containerID="2a2cfe2cdbb9b735a25f0745b597e058d1c920b92cac74fdeb5a0a8ab032df70" Oct 14 10:14:20 crc kubenswrapper[4698]: E1014 10:14:20.633009 4698 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2a2cfe2cdbb9b735a25f0745b597e058d1c920b92cac74fdeb5a0a8ab032df70\": container with ID starting with 2a2cfe2cdbb9b735a25f0745b597e058d1c920b92cac74fdeb5a0a8ab032df70 not found: ID does not exist" containerID="2a2cfe2cdbb9b735a25f0745b597e058d1c920b92cac74fdeb5a0a8ab032df70" Oct 14 10:14:20 crc kubenswrapper[4698]: I1014 10:14:20.633044 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2a2cfe2cdbb9b735a25f0745b597e058d1c920b92cac74fdeb5a0a8ab032df70"} err="failed to get container status \"2a2cfe2cdbb9b735a25f0745b597e058d1c920b92cac74fdeb5a0a8ab032df70\": rpc error: code = NotFound desc = could not find container \"2a2cfe2cdbb9b735a25f0745b597e058d1c920b92cac74fdeb5a0a8ab032df70\": container with ID starting with 2a2cfe2cdbb9b735a25f0745b597e058d1c920b92cac74fdeb5a0a8ab032df70 not found: ID does not exist" Oct 14 10:14:20 crc kubenswrapper[4698]: I1014 10:14:20.633073 4698 scope.go:117] "RemoveContainer" containerID="c282ecb3f12837b972ef9d22eaeb2066b46bf8d37694f65b6d4791b246eeb11d" Oct 14 10:14:20 crc kubenswrapper[4698]: I1014 10:14:20.634541 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7c2867d-8760-4adb-99a7-720da1f16049-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "e7c2867d-8760-4adb-99a7-720da1f16049" (UID: "e7c2867d-8760-4adb-99a7-720da1f16049"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 10:14:20 crc kubenswrapper[4698]: I1014 10:14:20.633019 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7c2867d-8760-4adb-99a7-720da1f16049-config" (OuterVolumeSpecName: "config") pod "e7c2867d-8760-4adb-99a7-720da1f16049" (UID: "e7c2867d-8760-4adb-99a7-720da1f16049"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 10:14:20 crc kubenswrapper[4698]: E1014 10:14:20.645096 4698 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c282ecb3f12837b972ef9d22eaeb2066b46bf8d37694f65b6d4791b246eeb11d\": container with ID starting with c282ecb3f12837b972ef9d22eaeb2066b46bf8d37694f65b6d4791b246eeb11d not found: ID does not exist" containerID="c282ecb3f12837b972ef9d22eaeb2066b46bf8d37694f65b6d4791b246eeb11d" Oct 14 10:14:20 crc kubenswrapper[4698]: I1014 10:14:20.645151 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c282ecb3f12837b972ef9d22eaeb2066b46bf8d37694f65b6d4791b246eeb11d"} err="failed to get container status \"c282ecb3f12837b972ef9d22eaeb2066b46bf8d37694f65b6d4791b246eeb11d\": rpc error: code = NotFound desc = could not find container \"c282ecb3f12837b972ef9d22eaeb2066b46bf8d37694f65b6d4791b246eeb11d\": container with ID starting with c282ecb3f12837b972ef9d22eaeb2066b46bf8d37694f65b6d4791b246eeb11d not found: ID does not exist" Oct 14 10:14:20 crc kubenswrapper[4698]: I1014 10:14:20.650174 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-8b5c85b87-f5kv7"] Oct 14 10:14:20 crc kubenswrapper[4698]: I1014 10:14:20.651418 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7c2867d-8760-4adb-99a7-720da1f16049-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "e7c2867d-8760-4adb-99a7-720da1f16049" (UID: 
"e7c2867d-8760-4adb-99a7-720da1f16049"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 10:14:20 crc kubenswrapper[4698]: I1014 10:14:20.655414 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7c2867d-8760-4adb-99a7-720da1f16049-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "e7c2867d-8760-4adb-99a7-720da1f16049" (UID: "e7c2867d-8760-4adb-99a7-720da1f16049"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 10:14:20 crc kubenswrapper[4698]: I1014 10:14:20.721434 4698 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e7c2867d-8760-4adb-99a7-720da1f16049-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Oct 14 10:14:20 crc kubenswrapper[4698]: I1014 10:14:20.721624 4698 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7c2867d-8760-4adb-99a7-720da1f16049-config\") on node \"crc\" DevicePath \"\"" Oct 14 10:14:20 crc kubenswrapper[4698]: I1014 10:14:20.721717 4698 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e7c2867d-8760-4adb-99a7-720da1f16049-dns-svc\") on node \"crc\" DevicePath \"\"" Oct 14 10:14:20 crc kubenswrapper[4698]: I1014 10:14:20.721790 4698 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e7c2867d-8760-4adb-99a7-720da1f16049-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Oct 14 10:14:20 crc kubenswrapper[4698]: I1014 10:14:20.721891 4698 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e7c2867d-8760-4adb-99a7-720da1f16049-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Oct 14 10:14:20 crc kubenswrapper[4698]: I1014 10:14:20.840163 4698 kubelet.go:2428] "SyncLoop 
UPDATE" source="api" pods=["openstack/ceilometer-0"] Oct 14 10:14:20 crc kubenswrapper[4698]: I1014 10:14:20.861460 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7ff5475cc9-hqhm8"] Oct 14 10:14:20 crc kubenswrapper[4698]: I1014 10:14:20.868270 4698 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7ff5475cc9-hqhm8"] Oct 14 10:14:21 crc kubenswrapper[4698]: I1014 10:14:21.035823 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d379d24-0836-4d45-ae16-4e72b32dff28" path="/var/lib/kubelet/pods/9d379d24-0836-4d45-ae16-4e72b32dff28/volumes" Oct 14 10:14:21 crc kubenswrapper[4698]: I1014 10:14:21.036936 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7c2867d-8760-4adb-99a7-720da1f16049" path="/var/lib/kubelet/pods/e7c2867d-8760-4adb-99a7-720da1f16049/volumes" Oct 14 10:14:21 crc kubenswrapper[4698]: I1014 10:14:21.090530 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Oct 14 10:14:21 crc kubenswrapper[4698]: I1014 10:14:21.137246 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/manila-5da3-account-create-z7lfc"] Oct 14 10:14:21 crc kubenswrapper[4698]: E1014 10:14:21.138912 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e7c2867d-8760-4adb-99a7-720da1f16049" containerName="init" Oct 14 10:14:21 crc kubenswrapper[4698]: I1014 10:14:21.138992 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="e7c2867d-8760-4adb-99a7-720da1f16049" containerName="init" Oct 14 10:14:21 crc kubenswrapper[4698]: E1014 10:14:21.139076 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e7c2867d-8760-4adb-99a7-720da1f16049" containerName="dnsmasq-dns" Oct 14 10:14:21 crc kubenswrapper[4698]: I1014 10:14:21.139131 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="e7c2867d-8760-4adb-99a7-720da1f16049" containerName="dnsmasq-dns" Oct 14 10:14:21 crc 
kubenswrapper[4698]: I1014 10:14:21.139380 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="e7c2867d-8760-4adb-99a7-720da1f16049" containerName="dnsmasq-dns" Oct 14 10:14:21 crc kubenswrapper[4698]: I1014 10:14:21.140487 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-5da3-account-create-z7lfc" Oct 14 10:14:21 crc kubenswrapper[4698]: I1014 10:14:21.146856 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-db-secret" Oct 14 10:14:21 crc kubenswrapper[4698]: I1014 10:14:21.150327 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-5da3-account-create-z7lfc"] Oct 14 10:14:21 crc kubenswrapper[4698]: I1014 10:14:21.237060 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qjc7d\" (UniqueName: \"kubernetes.io/projected/674b9a2a-192d-4f43-b2c8-bfb55a2775fe-kube-api-access-qjc7d\") pod \"manila-5da3-account-create-z7lfc\" (UID: \"674b9a2a-192d-4f43-b2c8-bfb55a2775fe\") " pod="openstack/manila-5da3-account-create-z7lfc" Oct 14 10:14:21 crc kubenswrapper[4698]: I1014 10:14:21.301063 4698 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5c5cc7c5ff-lj2n6" Oct 14 10:14:21 crc kubenswrapper[4698]: I1014 10:14:21.340464 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/73499f8b-9988-4c5a-a049-e0f842b02370-config\") pod \"73499f8b-9988-4c5a-a049-e0f842b02370\" (UID: \"73499f8b-9988-4c5a-a049-e0f842b02370\") " Oct 14 10:14:21 crc kubenswrapper[4698]: I1014 10:14:21.340704 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/73499f8b-9988-4c5a-a049-e0f842b02370-dns-swift-storage-0\") pod \"73499f8b-9988-4c5a-a049-e0f842b02370\" (UID: \"73499f8b-9988-4c5a-a049-e0f842b02370\") " Oct 14 10:14:21 crc kubenswrapper[4698]: I1014 10:14:21.340838 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/73499f8b-9988-4c5a-a049-e0f842b02370-ovsdbserver-nb\") pod \"73499f8b-9988-4c5a-a049-e0f842b02370\" (UID: \"73499f8b-9988-4c5a-a049-e0f842b02370\") " Oct 14 10:14:21 crc kubenswrapper[4698]: I1014 10:14:21.341059 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xm48k\" (UniqueName: \"kubernetes.io/projected/73499f8b-9988-4c5a-a049-e0f842b02370-kube-api-access-xm48k\") pod \"73499f8b-9988-4c5a-a049-e0f842b02370\" (UID: \"73499f8b-9988-4c5a-a049-e0f842b02370\") " Oct 14 10:14:21 crc kubenswrapper[4698]: I1014 10:14:21.341128 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/73499f8b-9988-4c5a-a049-e0f842b02370-ovsdbserver-sb\") pod \"73499f8b-9988-4c5a-a049-e0f842b02370\" (UID: \"73499f8b-9988-4c5a-a049-e0f842b02370\") " Oct 14 10:14:21 crc kubenswrapper[4698]: I1014 10:14:21.341232 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"dns-svc\" (UniqueName: \"kubernetes.io/configmap/73499f8b-9988-4c5a-a049-e0f842b02370-dns-svc\") pod \"73499f8b-9988-4c5a-a049-e0f842b02370\" (UID: \"73499f8b-9988-4c5a-a049-e0f842b02370\") " Oct 14 10:14:21 crc kubenswrapper[4698]: I1014 10:14:21.349330 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qjc7d\" (UniqueName: \"kubernetes.io/projected/674b9a2a-192d-4f43-b2c8-bfb55a2775fe-kube-api-access-qjc7d\") pod \"manila-5da3-account-create-z7lfc\" (UID: \"674b9a2a-192d-4f43-b2c8-bfb55a2775fe\") " pod="openstack/manila-5da3-account-create-z7lfc" Oct 14 10:14:21 crc kubenswrapper[4698]: I1014 10:14:21.379096 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-18d1-account-create-bjrtb"] Oct 14 10:14:21 crc kubenswrapper[4698]: E1014 10:14:21.379560 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="73499f8b-9988-4c5a-a049-e0f842b02370" containerName="init" Oct 14 10:14:21 crc kubenswrapper[4698]: I1014 10:14:21.379577 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="73499f8b-9988-4c5a-a049-e0f842b02370" containerName="init" Oct 14 10:14:21 crc kubenswrapper[4698]: I1014 10:14:21.379752 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/73499f8b-9988-4c5a-a049-e0f842b02370-kube-api-access-xm48k" (OuterVolumeSpecName: "kube-api-access-xm48k") pod "73499f8b-9988-4c5a-a049-e0f842b02370" (UID: "73499f8b-9988-4c5a-a049-e0f842b02370"). InnerVolumeSpecName "kube-api-access-xm48k". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 10:14:21 crc kubenswrapper[4698]: I1014 10:14:21.379807 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="73499f8b-9988-4c5a-a049-e0f842b02370" containerName="init" Oct 14 10:14:21 crc kubenswrapper[4698]: I1014 10:14:21.380439 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-18d1-account-create-bjrtb" Oct 14 10:14:21 crc kubenswrapper[4698]: I1014 10:14:21.387402 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qjc7d\" (UniqueName: \"kubernetes.io/projected/674b9a2a-192d-4f43-b2c8-bfb55a2775fe-kube-api-access-qjc7d\") pod \"manila-5da3-account-create-z7lfc\" (UID: \"674b9a2a-192d-4f43-b2c8-bfb55a2775fe\") " pod="openstack/manila-5da3-account-create-z7lfc" Oct 14 10:14:21 crc kubenswrapper[4698]: I1014 10:14:21.389093 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-db-secret" Oct 14 10:14:21 crc kubenswrapper[4698]: I1014 10:14:21.400310 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-18d1-account-create-bjrtb"] Oct 14 10:14:21 crc kubenswrapper[4698]: I1014 10:14:21.402947 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/73499f8b-9988-4c5a-a049-e0f842b02370-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "73499f8b-9988-4c5a-a049-e0f842b02370" (UID: "73499f8b-9988-4c5a-a049-e0f842b02370"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 10:14:21 crc kubenswrapper[4698]: I1014 10:14:21.404750 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/73499f8b-9988-4c5a-a049-e0f842b02370-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "73499f8b-9988-4c5a-a049-e0f842b02370" (UID: "73499f8b-9988-4c5a-a049-e0f842b02370"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 10:14:21 crc kubenswrapper[4698]: I1014 10:14:21.426885 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/73499f8b-9988-4c5a-a049-e0f842b02370-config" (OuterVolumeSpecName: "config") pod "73499f8b-9988-4c5a-a049-e0f842b02370" (UID: "73499f8b-9988-4c5a-a049-e0f842b02370"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 10:14:21 crc kubenswrapper[4698]: I1014 10:14:21.451965 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hcsxw\" (UniqueName: \"kubernetes.io/projected/ae8ed93f-2876-4954-89f6-e169a445631d-kube-api-access-hcsxw\") pod \"cinder-18d1-account-create-bjrtb\" (UID: \"ae8ed93f-2876-4954-89f6-e169a445631d\") " pod="openstack/cinder-18d1-account-create-bjrtb" Oct 14 10:14:21 crc kubenswrapper[4698]: I1014 10:14:21.452387 4698 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/73499f8b-9988-4c5a-a049-e0f842b02370-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Oct 14 10:14:21 crc kubenswrapper[4698]: I1014 10:14:21.452488 4698 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/73499f8b-9988-4c5a-a049-e0f842b02370-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Oct 14 10:14:21 crc kubenswrapper[4698]: I1014 10:14:21.452558 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xm48k\" (UniqueName: \"kubernetes.io/projected/73499f8b-9988-4c5a-a049-e0f842b02370-kube-api-access-xm48k\") on node \"crc\" DevicePath \"\"" Oct 14 10:14:21 crc kubenswrapper[4698]: I1014 10:14:21.453385 4698 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/73499f8b-9988-4c5a-a049-e0f842b02370-config\") on node \"crc\" DevicePath \"\"" Oct 14 10:14:21 
crc kubenswrapper[4698]: I1014 10:14:21.453508 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/73499f8b-9988-4c5a-a049-e0f842b02370-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "73499f8b-9988-4c5a-a049-e0f842b02370" (UID: "73499f8b-9988-4c5a-a049-e0f842b02370"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 10:14:21 crc kubenswrapper[4698]: I1014 10:14:21.459010 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/73499f8b-9988-4c5a-a049-e0f842b02370-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "73499f8b-9988-4c5a-a049-e0f842b02370" (UID: "73499f8b-9988-4c5a-a049-e0f842b02370"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 10:14:21 crc kubenswrapper[4698]: I1014 10:14:21.465256 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-83ed-account-create-jdm2f"] Oct 14 10:14:21 crc kubenswrapper[4698]: I1014 10:14:21.467030 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-83ed-account-create-jdm2f" Oct 14 10:14:21 crc kubenswrapper[4698]: I1014 10:14:21.473189 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-db-secret" Oct 14 10:14:21 crc kubenswrapper[4698]: I1014 10:14:21.486322 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-83ed-account-create-jdm2f"] Oct 14 10:14:21 crc kubenswrapper[4698]: I1014 10:14:21.554194 4698 generic.go:334] "Generic (PLEG): container finished" podID="ca4258ec-6a3b-414c-9556-4ce7c99349bd" containerID="a8e8be342f14796918ca627b4fefb824e1335aba0789f699207169607765aec5" exitCode=0 Oct 14 10:14:21 crc kubenswrapper[4698]: I1014 10:14:21.555017 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8b5c85b87-f5kv7" event={"ID":"ca4258ec-6a3b-414c-9556-4ce7c99349bd","Type":"ContainerDied","Data":"a8e8be342f14796918ca627b4fefb824e1335aba0789f699207169607765aec5"} Oct 14 10:14:21 crc kubenswrapper[4698]: I1014 10:14:21.555068 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8b5c85b87-f5kv7" event={"ID":"ca4258ec-6a3b-414c-9556-4ce7c99349bd","Type":"ContainerStarted","Data":"89c98fb9abe660157d5943b56afe4d5d61b541a11daaeb1a0c3327451bff110a"} Oct 14 10:14:21 crc kubenswrapper[4698]: I1014 10:14:21.555623 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hcsxw\" (UniqueName: \"kubernetes.io/projected/ae8ed93f-2876-4954-89f6-e169a445631d-kube-api-access-hcsxw\") pod \"cinder-18d1-account-create-bjrtb\" (UID: \"ae8ed93f-2876-4954-89f6-e169a445631d\") " pod="openstack/cinder-18d1-account-create-bjrtb" Oct 14 10:14:21 crc kubenswrapper[4698]: I1014 10:14:21.555714 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cqwbc\" (UniqueName: \"kubernetes.io/projected/b1dfd964-476d-40ea-942f-0f2ef2a6314f-kube-api-access-cqwbc\") pod 
\"barbican-83ed-account-create-jdm2f\" (UID: \"b1dfd964-476d-40ea-942f-0f2ef2a6314f\") " pod="openstack/barbican-83ed-account-create-jdm2f" Oct 14 10:14:21 crc kubenswrapper[4698]: I1014 10:14:21.558711 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Oct 14 10:14:21 crc kubenswrapper[4698]: I1014 10:14:21.563645 4698 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/73499f8b-9988-4c5a-a049-e0f842b02370-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Oct 14 10:14:21 crc kubenswrapper[4698]: I1014 10:14:21.564047 4698 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/73499f8b-9988-4c5a-a049-e0f842b02370-dns-svc\") on node \"crc\" DevicePath \"\"" Oct 14 10:14:21 crc kubenswrapper[4698]: I1014 10:14:21.595476 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-5da3-account-create-z7lfc" Oct 14 10:14:21 crc kubenswrapper[4698]: I1014 10:14:21.596849 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-67d675854f-5dgkt"] Oct 14 10:14:21 crc kubenswrapper[4698]: I1014 10:14:21.647368 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hcsxw\" (UniqueName: \"kubernetes.io/projected/ae8ed93f-2876-4954-89f6-e169a445631d-kube-api-access-hcsxw\") pod \"cinder-18d1-account-create-bjrtb\" (UID: \"ae8ed93f-2876-4954-89f6-e169a445631d\") " pod="openstack/cinder-18d1-account-create-bjrtb" Oct 14 10:14:21 crc kubenswrapper[4698]: I1014 10:14:21.637241 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-58986b5dd5-xvhvn"] Oct 14 10:14:21 crc kubenswrapper[4698]: I1014 10:14:21.651815 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-2qdqz" 
event={"ID":"19d4fb38-f09b-4383-adfc-12bb06107bfb","Type":"ContainerStarted","Data":"9b2fd36f3a5c712dac7eb00e41120b0e29f8e750ed08898aeef71d2b3fe2296d"} Oct 14 10:14:21 crc kubenswrapper[4698]: I1014 10:14:21.651936 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-58986b5dd5-xvhvn" Oct 14 10:14:21 crc kubenswrapper[4698]: I1014 10:14:21.665494 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqwbc\" (UniqueName: \"kubernetes.io/projected/b1dfd964-476d-40ea-942f-0f2ef2a6314f-kube-api-access-cqwbc\") pod \"barbican-83ed-account-create-jdm2f\" (UID: \"b1dfd964-476d-40ea-942f-0f2ef2a6314f\") " pod="openstack/barbican-83ed-account-create-jdm2f" Oct 14 10:14:21 crc kubenswrapper[4698]: I1014 10:14:21.674328 4698 generic.go:334] "Generic (PLEG): container finished" podID="73499f8b-9988-4c5a-a049-e0f842b02370" containerID="1aee8a8f22bf8e50d8e786138d868a06d355de1ad4845d201ee10c440cad074f" exitCode=0 Oct 14 10:14:21 crc kubenswrapper[4698]: I1014 10:14:21.674425 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c5cc7c5ff-lj2n6" event={"ID":"73499f8b-9988-4c5a-a049-e0f842b02370","Type":"ContainerDied","Data":"422eadce0d259bf047b4fe22f97ac70050da51be6990566f3b50563aec369656"} Oct 14 10:14:21 crc kubenswrapper[4698]: I1014 10:14:21.674455 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c5cc7c5ff-lj2n6" event={"ID":"73499f8b-9988-4c5a-a049-e0f842b02370","Type":"ContainerDied","Data":"1aee8a8f22bf8e50d8e786138d868a06d355de1ad4845d201ee10c440cad074f"} Oct 14 10:14:21 crc kubenswrapper[4698]: I1014 10:14:21.674471 4698 scope.go:117] "RemoveContainer" containerID="1aee8a8f22bf8e50d8e786138d868a06d355de1ad4845d201ee10c440cad074f" Oct 14 10:14:21 crc kubenswrapper[4698]: I1014 10:14:21.674557 4698 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5c5cc7c5ff-lj2n6" Oct 14 10:14:21 crc kubenswrapper[4698]: I1014 10:14:21.693960 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqwbc\" (UniqueName: \"kubernetes.io/projected/b1dfd964-476d-40ea-942f-0f2ef2a6314f-kube-api-access-cqwbc\") pod \"barbican-83ed-account-create-jdm2f\" (UID: \"b1dfd964-476d-40ea-942f-0f2ef2a6314f\") " pod="openstack/barbican-83ed-account-create-jdm2f" Oct 14 10:14:21 crc kubenswrapper[4698]: I1014 10:14:21.705610 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Oct 14 10:14:21 crc kubenswrapper[4698]: I1014 10:14:21.706110 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9d9ba4f9-bf43-4076-ab3d-9ca34bcbed4e","Type":"ContainerStarted","Data":"7eed8b9c87eb272ad1caec3b1e6dfa86d1ca855d22850ed9a873c86851ce357b"} Oct 14 10:14:21 crc kubenswrapper[4698]: I1014 10:14:21.744429 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-c9bp9" event={"ID":"29fd1594-5c07-4283-bcb2-6ac29907c35c","Type":"ContainerStarted","Data":"883639fa3508d2924a6bfc2cb0fdbdd2dee926e8b7c81c063de63f0ac38f5194"} Oct 14 10:14:21 crc kubenswrapper[4698]: I1014 10:14:21.762197 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-18d1-account-create-bjrtb" Oct 14 10:14:21 crc kubenswrapper[4698]: I1014 10:14:21.763015 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Oct 14 10:14:21 crc kubenswrapper[4698]: I1014 10:14:21.764440 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"eb6f43ee-215b-4fdb-b298-7df52d8ebd92","Type":"ContainerStarted","Data":"43f2f2cdb68cb2d05d90553f370337a68c12d5e203f35da0b4a5827e64ea9a75"} Oct 14 10:14:21 crc kubenswrapper[4698]: I1014 10:14:21.770991 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/2d11f65a-1351-4490-842c-259c6611ed6f-horizon-secret-key\") pod \"horizon-58986b5dd5-xvhvn\" (UID: \"2d11f65a-1351-4490-842c-259c6611ed6f\") " pod="openstack/horizon-58986b5dd5-xvhvn" Oct 14 10:14:21 crc kubenswrapper[4698]: I1014 10:14:21.771082 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qhqzf\" (UniqueName: \"kubernetes.io/projected/2d11f65a-1351-4490-842c-259c6611ed6f-kube-api-access-qhqzf\") pod \"horizon-58986b5dd5-xvhvn\" (UID: \"2d11f65a-1351-4490-842c-259c6611ed6f\") " pod="openstack/horizon-58986b5dd5-xvhvn" Oct 14 10:14:21 crc kubenswrapper[4698]: I1014 10:14:21.771104 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2d11f65a-1351-4490-842c-259c6611ed6f-logs\") pod \"horizon-58986b5dd5-xvhvn\" (UID: \"2d11f65a-1351-4490-842c-259c6611ed6f\") " pod="openstack/horizon-58986b5dd5-xvhvn" Oct 14 10:14:21 crc kubenswrapper[4698]: I1014 10:14:21.771146 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2d11f65a-1351-4490-842c-259c6611ed6f-scripts\") 
pod \"horizon-58986b5dd5-xvhvn\" (UID: \"2d11f65a-1351-4490-842c-259c6611ed6f\") " pod="openstack/horizon-58986b5dd5-xvhvn" Oct 14 10:14:21 crc kubenswrapper[4698]: I1014 10:14:21.771178 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/2d11f65a-1351-4490-842c-259c6611ed6f-config-data\") pod \"horizon-58986b5dd5-xvhvn\" (UID: \"2d11f65a-1351-4490-842c-259c6611ed6f\") " pod="openstack/horizon-58986b5dd5-xvhvn" Oct 14 10:14:21 crc kubenswrapper[4698]: I1014 10:14:21.792297 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-58986b5dd5-xvhvn"] Oct 14 10:14:21 crc kubenswrapper[4698]: I1014 10:14:21.823024 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-83ed-account-create-jdm2f" Oct 14 10:14:21 crc kubenswrapper[4698]: I1014 10:14:21.831249 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-c711-account-create-sh8sx"] Oct 14 10:14:21 crc kubenswrapper[4698]: I1014 10:14:21.850293 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-c711-account-create-sh8sx" Oct 14 10:14:21 crc kubenswrapper[4698]: I1014 10:14:21.857148 4698 scope.go:117] "RemoveContainer" containerID="1aee8a8f22bf8e50d8e786138d868a06d355de1ad4845d201ee10c440cad074f" Oct 14 10:14:21 crc kubenswrapper[4698]: I1014 10:14:21.860304 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-db-secret" Oct 14 10:14:21 crc kubenswrapper[4698]: E1014 10:14:21.869277 4698 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1aee8a8f22bf8e50d8e786138d868a06d355de1ad4845d201ee10c440cad074f\": container with ID starting with 1aee8a8f22bf8e50d8e786138d868a06d355de1ad4845d201ee10c440cad074f not found: ID does not exist" containerID="1aee8a8f22bf8e50d8e786138d868a06d355de1ad4845d201ee10c440cad074f" Oct 14 10:14:21 crc kubenswrapper[4698]: I1014 10:14:21.869347 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1aee8a8f22bf8e50d8e786138d868a06d355de1ad4845d201ee10c440cad074f"} err="failed to get container status \"1aee8a8f22bf8e50d8e786138d868a06d355de1ad4845d201ee10c440cad074f\": rpc error: code = NotFound desc = could not find container \"1aee8a8f22bf8e50d8e786138d868a06d355de1ad4845d201ee10c440cad074f\": container with ID starting with 1aee8a8f22bf8e50d8e786138d868a06d355de1ad4845d201ee10c440cad074f not found: ID does not exist" Oct 14 10:14:21 crc kubenswrapper[4698]: I1014 10:14:21.869416 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-c711-account-create-sh8sx"] Oct 14 10:14:21 crc kubenswrapper[4698]: I1014 10:14:21.873076 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qhqzf\" (UniqueName: \"kubernetes.io/projected/2d11f65a-1351-4490-842c-259c6611ed6f-kube-api-access-qhqzf\") pod \"horizon-58986b5dd5-xvhvn\" (UID: \"2d11f65a-1351-4490-842c-259c6611ed6f\") " 
pod="openstack/horizon-58986b5dd5-xvhvn" Oct 14 10:14:21 crc kubenswrapper[4698]: I1014 10:14:21.873119 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2d11f65a-1351-4490-842c-259c6611ed6f-logs\") pod \"horizon-58986b5dd5-xvhvn\" (UID: \"2d11f65a-1351-4490-842c-259c6611ed6f\") " pod="openstack/horizon-58986b5dd5-xvhvn" Oct 14 10:14:21 crc kubenswrapper[4698]: I1014 10:14:21.873167 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2d11f65a-1351-4490-842c-259c6611ed6f-scripts\") pod \"horizon-58986b5dd5-xvhvn\" (UID: \"2d11f65a-1351-4490-842c-259c6611ed6f\") " pod="openstack/horizon-58986b5dd5-xvhvn" Oct 14 10:14:21 crc kubenswrapper[4698]: I1014 10:14:21.873215 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/2d11f65a-1351-4490-842c-259c6611ed6f-config-data\") pod \"horizon-58986b5dd5-xvhvn\" (UID: \"2d11f65a-1351-4490-842c-259c6611ed6f\") " pod="openstack/horizon-58986b5dd5-xvhvn" Oct 14 10:14:21 crc kubenswrapper[4698]: I1014 10:14:21.873289 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kvrnr\" (UniqueName: \"kubernetes.io/projected/0ce16763-8bd4-4dfc-a5cb-975622a0bb5e-kube-api-access-kvrnr\") pod \"neutron-c711-account-create-sh8sx\" (UID: \"0ce16763-8bd4-4dfc-a5cb-975622a0bb5e\") " pod="openstack/neutron-c711-account-create-sh8sx" Oct 14 10:14:21 crc kubenswrapper[4698]: I1014 10:14:21.873381 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/2d11f65a-1351-4490-842c-259c6611ed6f-horizon-secret-key\") pod \"horizon-58986b5dd5-xvhvn\" (UID: \"2d11f65a-1351-4490-842c-259c6611ed6f\") " pod="openstack/horizon-58986b5dd5-xvhvn" Oct 14 10:14:21 crc 
kubenswrapper[4698]: I1014 10:14:21.886264 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2d11f65a-1351-4490-842c-259c6611ed6f-logs\") pod \"horizon-58986b5dd5-xvhvn\" (UID: \"2d11f65a-1351-4490-842c-259c6611ed6f\") " pod="openstack/horizon-58986b5dd5-xvhvn" Oct 14 10:14:21 crc kubenswrapper[4698]: I1014 10:14:21.887414 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2d11f65a-1351-4490-842c-259c6611ed6f-scripts\") pod \"horizon-58986b5dd5-xvhvn\" (UID: \"2d11f65a-1351-4490-842c-259c6611ed6f\") " pod="openstack/horizon-58986b5dd5-xvhvn" Oct 14 10:14:21 crc kubenswrapper[4698]: I1014 10:14:21.889605 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/2d11f65a-1351-4490-842c-259c6611ed6f-config-data\") pod \"horizon-58986b5dd5-xvhvn\" (UID: \"2d11f65a-1351-4490-842c-259c6611ed6f\") " pod="openstack/horizon-58986b5dd5-xvhvn" Oct 14 10:14:21 crc kubenswrapper[4698]: I1014 10:14:21.891536 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qhqzf\" (UniqueName: \"kubernetes.io/projected/2d11f65a-1351-4490-842c-259c6611ed6f-kube-api-access-qhqzf\") pod \"horizon-58986b5dd5-xvhvn\" (UID: \"2d11f65a-1351-4490-842c-259c6611ed6f\") " pod="openstack/horizon-58986b5dd5-xvhvn" Oct 14 10:14:21 crc kubenswrapper[4698]: I1014 10:14:21.891823 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/2d11f65a-1351-4490-842c-259c6611ed6f-horizon-secret-key\") pod \"horizon-58986b5dd5-xvhvn\" (UID: \"2d11f65a-1351-4490-842c-259c6611ed6f\") " pod="openstack/horizon-58986b5dd5-xvhvn" Oct 14 10:14:21 crc kubenswrapper[4698]: I1014 10:14:21.910187 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5c5cc7c5ff-lj2n6"] Oct 14 
10:14:21 crc kubenswrapper[4698]: W1014 10:14:21.924167 4698 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod535a7b3a_eda4_4540_b96a_e3bac0fb3a16.slice/crio-6c2445a7eccf6e384123fb4ec6fb8c2c1fce7e6acd1cad128174495a41b4987b WatchSource:0}: Error finding container 6c2445a7eccf6e384123fb4ec6fb8c2c1fce7e6acd1cad128174495a41b4987b: Status 404 returned error can't find the container with id 6c2445a7eccf6e384123fb4ec6fb8c2c1fce7e6acd1cad128174495a41b4987b Oct 14 10:14:21 crc kubenswrapper[4698]: I1014 10:14:21.933057 4698 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5c5cc7c5ff-lj2n6"] Oct 14 10:14:21 crc kubenswrapper[4698]: I1014 10:14:21.943501 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-c9bp9" podStartSLOduration=3.94348497 podStartE2EDuration="3.94348497s" podCreationTimestamp="2025-10-14 10:14:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 10:14:21.801085977 +0000 UTC m=+1043.498385393" watchObservedRunningTime="2025-10-14 10:14:21.94348497 +0000 UTC m=+1043.640784386" Oct 14 10:14:21 crc kubenswrapper[4698]: I1014 10:14:21.957946 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Oct 14 10:14:21 crc kubenswrapper[4698]: I1014 10:14:21.975206 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kvrnr\" (UniqueName: \"kubernetes.io/projected/0ce16763-8bd4-4dfc-a5cb-975622a0bb5e-kube-api-access-kvrnr\") pod \"neutron-c711-account-create-sh8sx\" (UID: \"0ce16763-8bd4-4dfc-a5cb-975622a0bb5e\") " pod="openstack/neutron-c711-account-create-sh8sx" Oct 14 10:14:21 crc kubenswrapper[4698]: I1014 10:14:21.992687 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-kvrnr\" (UniqueName: \"kubernetes.io/projected/0ce16763-8bd4-4dfc-a5cb-975622a0bb5e-kube-api-access-kvrnr\") pod \"neutron-c711-account-create-sh8sx\" (UID: \"0ce16763-8bd4-4dfc-a5cb-975622a0bb5e\") " pod="openstack/neutron-c711-account-create-sh8sx" Oct 14 10:14:22 crc kubenswrapper[4698]: I1014 10:14:21.994725 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-58986b5dd5-xvhvn" Oct 14 10:14:22 crc kubenswrapper[4698]: I1014 10:14:22.188619 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-c711-account-create-sh8sx" Oct 14 10:14:22 crc kubenswrapper[4698]: I1014 10:14:22.510359 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-5da3-account-create-z7lfc"] Oct 14 10:14:22 crc kubenswrapper[4698]: W1014 10:14:22.537863 4698 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod674b9a2a_192d_4f43_b2c8_bfb55a2775fe.slice/crio-a050f5afc8acaa6a0031debbd8aa8618baebcb4ce6cbb4dfe469b10314784182 WatchSource:0}: Error finding container a050f5afc8acaa6a0031debbd8aa8618baebcb4ce6cbb4dfe469b10314784182: Status 404 returned error can't find the container with id a050f5afc8acaa6a0031debbd8aa8618baebcb4ce6cbb4dfe469b10314784182 Oct 14 10:14:22 crc kubenswrapper[4698]: I1014 10:14:22.706071 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-18d1-account-create-bjrtb"] Oct 14 10:14:22 crc kubenswrapper[4698]: I1014 10:14:22.725387 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-83ed-account-create-jdm2f"] Oct 14 10:14:22 crc kubenswrapper[4698]: I1014 10:14:22.753444 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-58986b5dd5-xvhvn"] Oct 14 10:14:22 crc kubenswrapper[4698]: I1014 10:14:22.837680 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/glance-default-external-api-0" event={"ID":"535a7b3a-eda4-4540-b96a-e3bac0fb3a16","Type":"ContainerStarted","Data":"6c2445a7eccf6e384123fb4ec6fb8c2c1fce7e6acd1cad128174495a41b4987b"} Oct 14 10:14:22 crc kubenswrapper[4698]: I1014 10:14:22.850900 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"eb6f43ee-215b-4fdb-b298-7df52d8ebd92","Type":"ContainerStarted","Data":"ae0390bf5e25ddaab490c245cb31d134dcbcfa295390eb1373a9591ff99c8bdd"} Oct 14 10:14:22 crc kubenswrapper[4698]: I1014 10:14:22.854729 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8b5c85b87-f5kv7" event={"ID":"ca4258ec-6a3b-414c-9556-4ce7c99349bd","Type":"ContainerStarted","Data":"01151292314f1cceb68d555254fd933620a67e54f6178273863d8cf097cd36fa"} Oct 14 10:14:22 crc kubenswrapper[4698]: I1014 10:14:22.856240 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-8b5c85b87-f5kv7" Oct 14 10:14:22 crc kubenswrapper[4698]: I1014 10:14:22.881792 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-5da3-account-create-z7lfc" event={"ID":"674b9a2a-192d-4f43-b2c8-bfb55a2775fe","Type":"ContainerStarted","Data":"a050f5afc8acaa6a0031debbd8aa8618baebcb4ce6cbb4dfe469b10314784182"} Oct 14 10:14:22 crc kubenswrapper[4698]: I1014 10:14:22.926950 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-8b5c85b87-f5kv7" podStartSLOduration=4.926932007 podStartE2EDuration="4.926932007s" podCreationTimestamp="2025-10-14 10:14:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 10:14:22.906236703 +0000 UTC m=+1044.603536139" watchObservedRunningTime="2025-10-14 10:14:22.926932007 +0000 UTC m=+1044.624231423" Oct 14 10:14:23 crc kubenswrapper[4698]: W1014 10:14:23.041741 4698 manager.go:1169] Failed to process 
watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb1dfd964_476d_40ea_942f_0f2ef2a6314f.slice/crio-57aae29392e5e662de5b67c4ed27f02dd9673d62a04b6262e576c3c5bc6625ba WatchSource:0}: Error finding container 57aae29392e5e662de5b67c4ed27f02dd9673d62a04b6262e576c3c5bc6625ba: Status 404 returned error can't find the container with id 57aae29392e5e662de5b67c4ed27f02dd9673d62a04b6262e576c3c5bc6625ba Oct 14 10:14:23 crc kubenswrapper[4698]: I1014 10:14:23.043569 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="73499f8b-9988-4c5a-a049-e0f842b02370" path="/var/lib/kubelet/pods/73499f8b-9988-4c5a-a049-e0f842b02370/volumes" Oct 14 10:14:23 crc kubenswrapper[4698]: I1014 10:14:23.080812 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-c711-account-create-sh8sx"] Oct 14 10:14:23 crc kubenswrapper[4698]: W1014 10:14:23.138911 4698 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0ce16763_8bd4_4dfc_a5cb_975622a0bb5e.slice/crio-bf5c410c2a4cacb8cc5f5969cd3047259da0bcd10859fad094c4f90efdbb3f06 WatchSource:0}: Error finding container bf5c410c2a4cacb8cc5f5969cd3047259da0bcd10859fad094c4f90efdbb3f06: Status 404 returned error can't find the container with id bf5c410c2a4cacb8cc5f5969cd3047259da0bcd10859fad094c4f90efdbb3f06 Oct 14 10:14:23 crc kubenswrapper[4698]: I1014 10:14:23.901654 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-58986b5dd5-xvhvn" event={"ID":"2d11f65a-1351-4490-842c-259c6611ed6f","Type":"ContainerStarted","Data":"4835b9008550f1a016dd3c5100a7df4337deffad22e763ed4a322110e155d6ff"} Oct 14 10:14:23 crc kubenswrapper[4698]: I1014 10:14:23.904231 4698 generic.go:334] "Generic (PLEG): container finished" podID="ae8ed93f-2876-4954-89f6-e169a445631d" containerID="cd53c12dd7f654493c82f3fdcfb6921fa68fb99468cd6449474b7eefb0b3cd72" exitCode=0 Oct 14 10:14:23 crc 
kubenswrapper[4698]: I1014 10:14:23.904299 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-18d1-account-create-bjrtb" event={"ID":"ae8ed93f-2876-4954-89f6-e169a445631d","Type":"ContainerDied","Data":"cd53c12dd7f654493c82f3fdcfb6921fa68fb99468cd6449474b7eefb0b3cd72"}
Oct 14 10:14:23 crc kubenswrapper[4698]: I1014 10:14:23.904368 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-18d1-account-create-bjrtb" event={"ID":"ae8ed93f-2876-4954-89f6-e169a445631d","Type":"ContainerStarted","Data":"cdd0bef9b0593b9d859951c7ba2072763451e847c49eb4c04092357f87453375"}
Oct 14 10:14:23 crc kubenswrapper[4698]: I1014 10:14:23.907669 4698 patch_prober.go:28] interesting pod/machine-config-daemon-lp4sk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Oct 14 10:14:23 crc kubenswrapper[4698]: I1014 10:14:23.907729 4698 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-lp4sk" podUID="c359a8fc-1e2f-49af-8da2-719d52bd969a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Oct 14 10:14:23 crc kubenswrapper[4698]: I1014 10:14:23.907794 4698 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-lp4sk"
Oct 14 10:14:23 crc kubenswrapper[4698]: I1014 10:14:23.908714 4698 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"7096d53cbfbfab54f87b9b6c9da1611d27bf89715408c9583f5d8cbefe8b54b2"} pod="openshift-machine-config-operator/machine-config-daemon-lp4sk" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Oct 14 10:14:23 crc kubenswrapper[4698]: I1014 10:14:23.908799 4698 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-lp4sk" podUID="c359a8fc-1e2f-49af-8da2-719d52bd969a" containerName="machine-config-daemon" containerID="cri-o://7096d53cbfbfab54f87b9b6c9da1611d27bf89715408c9583f5d8cbefe8b54b2" gracePeriod=600
Oct 14 10:14:23 crc kubenswrapper[4698]: I1014 10:14:23.910433 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"eb6f43ee-215b-4fdb-b298-7df52d8ebd92","Type":"ContainerStarted","Data":"9191bed1aa7b8af4aa629794ea040c7cb50ef49572e2a2f671a22fc6da780602"}
Oct 14 10:14:23 crc kubenswrapper[4698]: I1014 10:14:23.910584 4698 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="eb6f43ee-215b-4fdb-b298-7df52d8ebd92" containerName="glance-httpd" containerID="cri-o://9191bed1aa7b8af4aa629794ea040c7cb50ef49572e2a2f671a22fc6da780602" gracePeriod=30
Oct 14 10:14:23 crc kubenswrapper[4698]: I1014 10:14:23.910574 4698 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="eb6f43ee-215b-4fdb-b298-7df52d8ebd92" containerName="glance-log" containerID="cri-o://ae0390bf5e25ddaab490c245cb31d134dcbcfa295390eb1373a9591ff99c8bdd" gracePeriod=30
Oct 14 10:14:23 crc kubenswrapper[4698]: I1014 10:14:23.927540 4698 generic.go:334] "Generic (PLEG): container finished" podID="0ce16763-8bd4-4dfc-a5cb-975622a0bb5e" containerID="7c0ce52fbf295c5fcd33ffb01f39e076b2c095c359acf1596be11ed8efe5e173" exitCode=0
Oct 14 10:14:23 crc kubenswrapper[4698]: I1014 10:14:23.927740 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-c711-account-create-sh8sx" event={"ID":"0ce16763-8bd4-4dfc-a5cb-975622a0bb5e","Type":"ContainerDied","Data":"7c0ce52fbf295c5fcd33ffb01f39e076b2c095c359acf1596be11ed8efe5e173"}
Oct 14 10:14:23 crc kubenswrapper[4698]: I1014 10:14:23.927803 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-c711-account-create-sh8sx" event={"ID":"0ce16763-8bd4-4dfc-a5cb-975622a0bb5e","Type":"ContainerStarted","Data":"bf5c410c2a4cacb8cc5f5969cd3047259da0bcd10859fad094c4f90efdbb3f06"}
Oct 14 10:14:23 crc kubenswrapper[4698]: I1014 10:14:23.939130 4698 generic.go:334] "Generic (PLEG): container finished" podID="674b9a2a-192d-4f43-b2c8-bfb55a2775fe" containerID="bf3474ee1154961d298a74c907031a1ad35c67f5be7bf603ba590b32a6b34985" exitCode=0
Oct 14 10:14:23 crc kubenswrapper[4698]: I1014 10:14:23.939195 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-5da3-account-create-z7lfc" event={"ID":"674b9a2a-192d-4f43-b2c8-bfb55a2775fe","Type":"ContainerDied","Data":"bf3474ee1154961d298a74c907031a1ad35c67f5be7bf603ba590b32a6b34985"}
Oct 14 10:14:23 crc kubenswrapper[4698]: I1014 10:14:23.950325 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=4.950302172 podStartE2EDuration="4.950302172s" podCreationTimestamp="2025-10-14 10:14:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 10:14:23.935553279 +0000 UTC m=+1045.632852695" watchObservedRunningTime="2025-10-14 10:14:23.950302172 +0000 UTC m=+1045.647601588"
Oct 14 10:14:23 crc kubenswrapper[4698]: I1014 10:14:23.951086 4698 generic.go:334] "Generic (PLEG): container finished" podID="b1dfd964-476d-40ea-942f-0f2ef2a6314f" containerID="c1d7114f97e2cc88c1ad000f67176aff33c47e2aebbed4a23c4ab10c754c6022" exitCode=0
Oct 14 10:14:23 crc kubenswrapper[4698]: I1014 10:14:23.951201 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-83ed-account-create-jdm2f" event={"ID":"b1dfd964-476d-40ea-942f-0f2ef2a6314f","Type":"ContainerDied","Data":"c1d7114f97e2cc88c1ad000f67176aff33c47e2aebbed4a23c4ab10c754c6022"}
Oct 14 10:14:23 crc kubenswrapper[4698]: I1014 10:14:23.951236 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-83ed-account-create-jdm2f" event={"ID":"b1dfd964-476d-40ea-942f-0f2ef2a6314f","Type":"ContainerStarted","Data":"57aae29392e5e662de5b67c4ed27f02dd9673d62a04b6262e576c3c5bc6625ba"}
Oct 14 10:14:23 crc kubenswrapper[4698]: I1014 10:14:23.957031 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"535a7b3a-eda4-4540-b96a-e3bac0fb3a16","Type":"ContainerStarted","Data":"9355c8664fa4675a54ec1986e00d34ed899603233a4c6f4c4bd3ee8f8a7ac681"}
Oct 14 10:14:24 crc kubenswrapper[4698]: I1014 10:14:24.711517 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0"
Oct 14 10:14:24 crc kubenswrapper[4698]: I1014 10:14:24.776379 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/eb6f43ee-215b-4fdb-b298-7df52d8ebd92-ceph\") pod \"eb6f43ee-215b-4fdb-b298-7df52d8ebd92\" (UID: \"eb6f43ee-215b-4fdb-b298-7df52d8ebd92\") "
Oct 14 10:14:24 crc kubenswrapper[4698]: I1014 10:14:24.776437 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eb6f43ee-215b-4fdb-b298-7df52d8ebd92-config-data\") pod \"eb6f43ee-215b-4fdb-b298-7df52d8ebd92\" (UID: \"eb6f43ee-215b-4fdb-b298-7df52d8ebd92\") "
Oct 14 10:14:24 crc kubenswrapper[4698]: I1014 10:14:24.776531 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/eb6f43ee-215b-4fdb-b298-7df52d8ebd92-httpd-run\") pod \"eb6f43ee-215b-4fdb-b298-7df52d8ebd92\" (UID: \"eb6f43ee-215b-4fdb-b298-7df52d8ebd92\") "
Oct 14 10:14:24 crc kubenswrapper[4698]: I1014 10:14:24.776581 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/eb6f43ee-215b-4fdb-b298-7df52d8ebd92-logs\") pod \"eb6f43ee-215b-4fdb-b298-7df52d8ebd92\" (UID: \"eb6f43ee-215b-4fdb-b298-7df52d8ebd92\") "
Oct 14 10:14:24 crc kubenswrapper[4698]: I1014 10:14:24.776600 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"eb6f43ee-215b-4fdb-b298-7df52d8ebd92\" (UID: \"eb6f43ee-215b-4fdb-b298-7df52d8ebd92\") "
Oct 14 10:14:24 crc kubenswrapper[4698]: I1014 10:14:24.776628 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/eb6f43ee-215b-4fdb-b298-7df52d8ebd92-scripts\") pod \"eb6f43ee-215b-4fdb-b298-7df52d8ebd92\" (UID: \"eb6f43ee-215b-4fdb-b298-7df52d8ebd92\") "
Oct 14 10:14:24 crc kubenswrapper[4698]: I1014 10:14:24.776645 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eb6f43ee-215b-4fdb-b298-7df52d8ebd92-combined-ca-bundle\") pod \"eb6f43ee-215b-4fdb-b298-7df52d8ebd92\" (UID: \"eb6f43ee-215b-4fdb-b298-7df52d8ebd92\") "
Oct 14 10:14:24 crc kubenswrapper[4698]: I1014 10:14:24.776679 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f5fmg\" (UniqueName: \"kubernetes.io/projected/eb6f43ee-215b-4fdb-b298-7df52d8ebd92-kube-api-access-f5fmg\") pod \"eb6f43ee-215b-4fdb-b298-7df52d8ebd92\" (UID: \"eb6f43ee-215b-4fdb-b298-7df52d8ebd92\") "
Oct 14 10:14:24 crc kubenswrapper[4698]: I1014 10:14:24.777105 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/eb6f43ee-215b-4fdb-b298-7df52d8ebd92-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "eb6f43ee-215b-4fdb-b298-7df52d8ebd92" (UID: "eb6f43ee-215b-4fdb-b298-7df52d8ebd92"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Oct 14 10:14:24 crc kubenswrapper[4698]: I1014 10:14:24.777442 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/eb6f43ee-215b-4fdb-b298-7df52d8ebd92-logs" (OuterVolumeSpecName: "logs") pod "eb6f43ee-215b-4fdb-b298-7df52d8ebd92" (UID: "eb6f43ee-215b-4fdb-b298-7df52d8ebd92"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Oct 14 10:14:24 crc kubenswrapper[4698]: I1014 10:14:24.777966 4698 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/eb6f43ee-215b-4fdb-b298-7df52d8ebd92-httpd-run\") on node \"crc\" DevicePath \"\""
Oct 14 10:14:24 crc kubenswrapper[4698]: I1014 10:14:24.777983 4698 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/eb6f43ee-215b-4fdb-b298-7df52d8ebd92-logs\") on node \"crc\" DevicePath \"\""
Oct 14 10:14:24 crc kubenswrapper[4698]: I1014 10:14:24.783726 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage01-crc" (OuterVolumeSpecName: "glance") pod "eb6f43ee-215b-4fdb-b298-7df52d8ebd92" (UID: "eb6f43ee-215b-4fdb-b298-7df52d8ebd92"). InnerVolumeSpecName "local-storage01-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue ""
Oct 14 10:14:24 crc kubenswrapper[4698]: I1014 10:14:24.787133 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eb6f43ee-215b-4fdb-b298-7df52d8ebd92-scripts" (OuterVolumeSpecName: "scripts") pod "eb6f43ee-215b-4fdb-b298-7df52d8ebd92" (UID: "eb6f43ee-215b-4fdb-b298-7df52d8ebd92"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 14 10:14:24 crc kubenswrapper[4698]: I1014 10:14:24.792902 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eb6f43ee-215b-4fdb-b298-7df52d8ebd92-ceph" (OuterVolumeSpecName: "ceph") pod "eb6f43ee-215b-4fdb-b298-7df52d8ebd92" (UID: "eb6f43ee-215b-4fdb-b298-7df52d8ebd92"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 14 10:14:24 crc kubenswrapper[4698]: I1014 10:14:24.794022 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eb6f43ee-215b-4fdb-b298-7df52d8ebd92-kube-api-access-f5fmg" (OuterVolumeSpecName: "kube-api-access-f5fmg") pod "eb6f43ee-215b-4fdb-b298-7df52d8ebd92" (UID: "eb6f43ee-215b-4fdb-b298-7df52d8ebd92"). InnerVolumeSpecName "kube-api-access-f5fmg". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 14 10:14:24 crc kubenswrapper[4698]: I1014 10:14:24.831281 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eb6f43ee-215b-4fdb-b298-7df52d8ebd92-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "eb6f43ee-215b-4fdb-b298-7df52d8ebd92" (UID: "eb6f43ee-215b-4fdb-b298-7df52d8ebd92"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 14 10:14:24 crc kubenswrapper[4698]: I1014 10:14:24.879317 4698 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") on node \"crc\" "
Oct 14 10:14:24 crc kubenswrapper[4698]: I1014 10:14:24.879345 4698 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/eb6f43ee-215b-4fdb-b298-7df52d8ebd92-scripts\") on node \"crc\" DevicePath \"\""
Oct 14 10:14:24 crc kubenswrapper[4698]: I1014 10:14:24.879355 4698 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eb6f43ee-215b-4fdb-b298-7df52d8ebd92-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Oct 14 10:14:24 crc kubenswrapper[4698]: I1014 10:14:24.879365 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f5fmg\" (UniqueName: \"kubernetes.io/projected/eb6f43ee-215b-4fdb-b298-7df52d8ebd92-kube-api-access-f5fmg\") on node \"crc\" DevicePath \"\""
Oct 14 10:14:24 crc kubenswrapper[4698]: I1014 10:14:24.879375 4698 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/eb6f43ee-215b-4fdb-b298-7df52d8ebd92-ceph\") on node \"crc\" DevicePath \"\""
Oct 14 10:14:24 crc kubenswrapper[4698]: I1014 10:14:24.884181 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eb6f43ee-215b-4fdb-b298-7df52d8ebd92-config-data" (OuterVolumeSpecName: "config-data") pod "eb6f43ee-215b-4fdb-b298-7df52d8ebd92" (UID: "eb6f43ee-215b-4fdb-b298-7df52d8ebd92"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 14 10:14:24 crc kubenswrapper[4698]: I1014 10:14:24.904373 4698 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage01-crc" (UniqueName: "kubernetes.io/local-volume/local-storage01-crc") on node "crc"
Oct 14 10:14:24 crc kubenswrapper[4698]: I1014 10:14:24.973115 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"535a7b3a-eda4-4540-b96a-e3bac0fb3a16","Type":"ContainerStarted","Data":"27d3071cf41f37dfef03099440b31ebbd8a582899e065896b23df38bc756e8d1"}
Oct 14 10:14:24 crc kubenswrapper[4698]: I1014 10:14:24.973281 4698 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="535a7b3a-eda4-4540-b96a-e3bac0fb3a16" containerName="glance-log" containerID="cri-o://9355c8664fa4675a54ec1986e00d34ed899603233a4c6f4c4bd3ee8f8a7ac681" gracePeriod=30
Oct 14 10:14:24 crc kubenswrapper[4698]: I1014 10:14:24.974001 4698 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="535a7b3a-eda4-4540-b96a-e3bac0fb3a16" containerName="glance-httpd" containerID="cri-o://27d3071cf41f37dfef03099440b31ebbd8a582899e065896b23df38bc756e8d1" gracePeriod=30
Oct 14 10:14:24 crc kubenswrapper[4698]: I1014 10:14:24.987528 4698 generic.go:334] "Generic (PLEG): container finished" podID="c359a8fc-1e2f-49af-8da2-719d52bd969a" containerID="7096d53cbfbfab54f87b9b6c9da1611d27bf89715408c9583f5d8cbefe8b54b2" exitCode=0
Oct 14 10:14:24 crc kubenswrapper[4698]: I1014 10:14:24.987594 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-lp4sk" event={"ID":"c359a8fc-1e2f-49af-8da2-719d52bd969a","Type":"ContainerDied","Data":"7096d53cbfbfab54f87b9b6c9da1611d27bf89715408c9583f5d8cbefe8b54b2"}
Oct 14 10:14:24 crc kubenswrapper[4698]: I1014 10:14:24.987624 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-lp4sk" event={"ID":"c359a8fc-1e2f-49af-8da2-719d52bd969a","Type":"ContainerStarted","Data":"a4afbcf56453a0a6e9f269b4b6668c5bb2f9345d8d8d81fe69dd3ad317e2716b"}
Oct 14 10:14:24 crc kubenswrapper[4698]: I1014 10:14:24.987640 4698 scope.go:117] "RemoveContainer" containerID="026bd43a3644ff6f93d5e8e267ea83431aafa74f0511660ce40aba31e77b93d7"
Oct 14 10:14:24 crc kubenswrapper[4698]: I1014 10:14:24.987849 4698 reconciler_common.go:293] "Volume detached for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") on node \"crc\" DevicePath \"\""
Oct 14 10:14:24 crc kubenswrapper[4698]: I1014 10:14:24.987870 4698 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eb6f43ee-215b-4fdb-b298-7df52d8ebd92-config-data\") on node \"crc\" DevicePath \"\""
Oct 14 10:14:25 crc kubenswrapper[4698]: I1014 10:14:25.004977 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=6.004958917 podStartE2EDuration="6.004958917s" podCreationTimestamp="2025-10-14 10:14:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 10:14:25.00300067 +0000 UTC m=+1046.700300096" watchObservedRunningTime="2025-10-14 10:14:25.004958917 +0000 UTC m=+1046.702258333"
Oct 14 10:14:25 crc kubenswrapper[4698]: I1014 10:14:25.008036 4698 generic.go:334] "Generic (PLEG): container finished" podID="eb6f43ee-215b-4fdb-b298-7df52d8ebd92" containerID="9191bed1aa7b8af4aa629794ea040c7cb50ef49572e2a2f671a22fc6da780602" exitCode=0
Oct 14 10:14:25 crc kubenswrapper[4698]: I1014 10:14:25.008070 4698 generic.go:334] "Generic (PLEG): container finished" podID="eb6f43ee-215b-4fdb-b298-7df52d8ebd92" containerID="ae0390bf5e25ddaab490c245cb31d134dcbcfa295390eb1373a9591ff99c8bdd" exitCode=143
Oct 14 10:14:25 crc kubenswrapper[4698]: I1014 10:14:25.008191 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"eb6f43ee-215b-4fdb-b298-7df52d8ebd92","Type":"ContainerDied","Data":"9191bed1aa7b8af4aa629794ea040c7cb50ef49572e2a2f671a22fc6da780602"}
Oct 14 10:14:25 crc kubenswrapper[4698]: I1014 10:14:25.008265 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"eb6f43ee-215b-4fdb-b298-7df52d8ebd92","Type":"ContainerDied","Data":"ae0390bf5e25ddaab490c245cb31d134dcbcfa295390eb1373a9591ff99c8bdd"}
Oct 14 10:14:25 crc kubenswrapper[4698]: I1014 10:14:25.008271 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0"
Oct 14 10:14:25 crc kubenswrapper[4698]: I1014 10:14:25.008276 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"eb6f43ee-215b-4fdb-b298-7df52d8ebd92","Type":"ContainerDied","Data":"43f2f2cdb68cb2d05d90553f370337a68c12d5e203f35da0b4a5827e64ea9a75"}
Oct 14 10:14:25 crc kubenswrapper[4698]: I1014 10:14:25.091311 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"]
Oct 14 10:14:25 crc kubenswrapper[4698]: I1014 10:14:25.103227 4698 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"]
Oct 14 10:14:25 crc kubenswrapper[4698]: I1014 10:14:25.110432 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"]
Oct 14 10:14:25 crc kubenswrapper[4698]: E1014 10:14:25.110890 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eb6f43ee-215b-4fdb-b298-7df52d8ebd92" containerName="glance-log"
Oct 14 10:14:25 crc kubenswrapper[4698]: I1014 10:14:25.110906 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="eb6f43ee-215b-4fdb-b298-7df52d8ebd92" containerName="glance-log"
Oct 14 10:14:25 crc kubenswrapper[4698]: E1014 10:14:25.110940 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eb6f43ee-215b-4fdb-b298-7df52d8ebd92" containerName="glance-httpd"
Oct 14 10:14:25 crc kubenswrapper[4698]: I1014 10:14:25.110947 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="eb6f43ee-215b-4fdb-b298-7df52d8ebd92" containerName="glance-httpd"
Oct 14 10:14:25 crc kubenswrapper[4698]: I1014 10:14:25.111135 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="eb6f43ee-215b-4fdb-b298-7df52d8ebd92" containerName="glance-log"
Oct 14 10:14:25 crc kubenswrapper[4698]: I1014 10:14:25.111151 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="eb6f43ee-215b-4fdb-b298-7df52d8ebd92" containerName="glance-httpd"
Oct 14 10:14:25 crc kubenswrapper[4698]: I1014 10:14:25.125285 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0"
Oct 14 10:14:25 crc kubenswrapper[4698]: I1014 10:14:25.127363 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"]
Oct 14 10:14:25 crc kubenswrapper[4698]: I1014 10:14:25.128313 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data"
Oct 14 10:14:25 crc kubenswrapper[4698]: I1014 10:14:25.190708 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/df1e3a4b-9dec-4b59-80a8-071b9dd62651-logs\") pod \"glance-default-internal-api-0\" (UID: \"df1e3a4b-9dec-4b59-80a8-071b9dd62651\") " pod="openstack/glance-default-internal-api-0"
Oct 14 10:14:25 crc kubenswrapper[4698]: I1014 10:14:25.190756 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-85s2q\" (UniqueName: \"kubernetes.io/projected/df1e3a4b-9dec-4b59-80a8-071b9dd62651-kube-api-access-85s2q\") pod \"glance-default-internal-api-0\" (UID: \"df1e3a4b-9dec-4b59-80a8-071b9dd62651\") " pod="openstack/glance-default-internal-api-0"
Oct 14 10:14:25 crc kubenswrapper[4698]: I1014 10:14:25.190823 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/df1e3a4b-9dec-4b59-80a8-071b9dd62651-ceph\") pod \"glance-default-internal-api-0\" (UID: \"df1e3a4b-9dec-4b59-80a8-071b9dd62651\") " pod="openstack/glance-default-internal-api-0"
Oct 14 10:14:25 crc kubenswrapper[4698]: I1014 10:14:25.190901 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/df1e3a4b-9dec-4b59-80a8-071b9dd62651-scripts\") pod \"glance-default-internal-api-0\" (UID: \"df1e3a4b-9dec-4b59-80a8-071b9dd62651\") " pod="openstack/glance-default-internal-api-0"
Oct 14 10:14:25 crc kubenswrapper[4698]: I1014 10:14:25.190928 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/df1e3a4b-9dec-4b59-80a8-071b9dd62651-config-data\") pod \"glance-default-internal-api-0\" (UID: \"df1e3a4b-9dec-4b59-80a8-071b9dd62651\") " pod="openstack/glance-default-internal-api-0"
Oct 14 10:14:25 crc kubenswrapper[4698]: I1014 10:14:25.190973 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/df1e3a4b-9dec-4b59-80a8-071b9dd62651-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"df1e3a4b-9dec-4b59-80a8-071b9dd62651\") " pod="openstack/glance-default-internal-api-0"
Oct 14 10:14:25 crc kubenswrapper[4698]: I1014 10:14:25.191034 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-internal-api-0\" (UID: \"df1e3a4b-9dec-4b59-80a8-071b9dd62651\") " pod="openstack/glance-default-internal-api-0"
Oct 14 10:14:25 crc kubenswrapper[4698]: I1014 10:14:25.191096 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/df1e3a4b-9dec-4b59-80a8-071b9dd62651-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"df1e3a4b-9dec-4b59-80a8-071b9dd62651\") " pod="openstack/glance-default-internal-api-0"
Oct 14 10:14:25 crc kubenswrapper[4698]: E1014 10:14:25.234664 4698 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod535a7b3a_eda4_4540_b96a_e3bac0fb3a16.slice/crio-9355c8664fa4675a54ec1986e00d34ed899603233a4c6f4c4bd3ee8f8a7ac681.scope\": RecentStats: unable to find data in memory cache]"
Oct 14 10:14:25 crc kubenswrapper[4698]: I1014 10:14:25.293912 4698 scope.go:117] "RemoveContainer" containerID="9191bed1aa7b8af4aa629794ea040c7cb50ef49572e2a2f671a22fc6da780602"
Oct 14 10:14:25 crc kubenswrapper[4698]: I1014 10:14:25.295048 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/df1e3a4b-9dec-4b59-80a8-071b9dd62651-scripts\") pod \"glance-default-internal-api-0\" (UID: \"df1e3a4b-9dec-4b59-80a8-071b9dd62651\") " pod="openstack/glance-default-internal-api-0"
Oct 14 10:14:25 crc kubenswrapper[4698]: I1014 10:14:25.295102 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/df1e3a4b-9dec-4b59-80a8-071b9dd62651-config-data\") pod \"glance-default-internal-api-0\" (UID: \"df1e3a4b-9dec-4b59-80a8-071b9dd62651\") " pod="openstack/glance-default-internal-api-0"
Oct 14 10:14:25 crc kubenswrapper[4698]: I1014 10:14:25.295265 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/df1e3a4b-9dec-4b59-80a8-071b9dd62651-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"df1e3a4b-9dec-4b59-80a8-071b9dd62651\") " pod="openstack/glance-default-internal-api-0"
Oct 14 10:14:25 crc kubenswrapper[4698]: I1014 10:14:25.295286 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-internal-api-0\" (UID: \"df1e3a4b-9dec-4b59-80a8-071b9dd62651\") " pod="openstack/glance-default-internal-api-0"
Oct 14 10:14:25 crc kubenswrapper[4698]: I1014 10:14:25.295314 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/df1e3a4b-9dec-4b59-80a8-071b9dd62651-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"df1e3a4b-9dec-4b59-80a8-071b9dd62651\") " pod="openstack/glance-default-internal-api-0"
Oct 14 10:14:25 crc kubenswrapper[4698]: I1014 10:14:25.295365 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/df1e3a4b-9dec-4b59-80a8-071b9dd62651-logs\") pod \"glance-default-internal-api-0\" (UID: \"df1e3a4b-9dec-4b59-80a8-071b9dd62651\") " pod="openstack/glance-default-internal-api-0"
Oct 14 10:14:25 crc kubenswrapper[4698]: I1014 10:14:25.295381 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-85s2q\" (UniqueName: \"kubernetes.io/projected/df1e3a4b-9dec-4b59-80a8-071b9dd62651-kube-api-access-85s2q\") pod \"glance-default-internal-api-0\" (UID: \"df1e3a4b-9dec-4b59-80a8-071b9dd62651\") " pod="openstack/glance-default-internal-api-0"
Oct 14 10:14:25 crc kubenswrapper[4698]: I1014 10:14:25.295413 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/df1e3a4b-9dec-4b59-80a8-071b9dd62651-ceph\") pod \"glance-default-internal-api-0\" (UID: \"df1e3a4b-9dec-4b59-80a8-071b9dd62651\") " pod="openstack/glance-default-internal-api-0"
Oct 14 10:14:25 crc kubenswrapper[4698]: I1014 10:14:25.297446 4698 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-internal-api-0\" (UID: \"df1e3a4b-9dec-4b59-80a8-071b9dd62651\") device mount path \"/mnt/openstack/pv01\"" pod="openstack/glance-default-internal-api-0"
Oct 14 10:14:25 crc kubenswrapper[4698]: I1014 10:14:25.301821 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/df1e3a4b-9dec-4b59-80a8-071b9dd62651-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"df1e3a4b-9dec-4b59-80a8-071b9dd62651\") " pod="openstack/glance-default-internal-api-0"
Oct 14 10:14:25 crc kubenswrapper[4698]: I1014 10:14:25.301832 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/df1e3a4b-9dec-4b59-80a8-071b9dd62651-logs\") pod \"glance-default-internal-api-0\" (UID: \"df1e3a4b-9dec-4b59-80a8-071b9dd62651\") " pod="openstack/glance-default-internal-api-0"
Oct 14 10:14:25 crc kubenswrapper[4698]: I1014 10:14:25.303064 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/df1e3a4b-9dec-4b59-80a8-071b9dd62651-scripts\") pod \"glance-default-internal-api-0\" (UID: \"df1e3a4b-9dec-4b59-80a8-071b9dd62651\") " pod="openstack/glance-default-internal-api-0"
Oct 14 10:14:25 crc kubenswrapper[4698]: I1014 10:14:25.307468 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/df1e3a4b-9dec-4b59-80a8-071b9dd62651-ceph\") pod \"glance-default-internal-api-0\" (UID: \"df1e3a4b-9dec-4b59-80a8-071b9dd62651\") " pod="openstack/glance-default-internal-api-0"
Oct 14 10:14:25 crc kubenswrapper[4698]: I1014 10:14:25.315421 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/df1e3a4b-9dec-4b59-80a8-071b9dd62651-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"df1e3a4b-9dec-4b59-80a8-071b9dd62651\") " pod="openstack/glance-default-internal-api-0"
Oct 14 10:14:25 crc kubenswrapper[4698]: I1014 10:14:25.325697 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/df1e3a4b-9dec-4b59-80a8-071b9dd62651-config-data\") pod \"glance-default-internal-api-0\" (UID: \"df1e3a4b-9dec-4b59-80a8-071b9dd62651\") " pod="openstack/glance-default-internal-api-0"
Oct 14 10:14:25 crc kubenswrapper[4698]: I1014 10:14:25.333583 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-85s2q\" (UniqueName: \"kubernetes.io/projected/df1e3a4b-9dec-4b59-80a8-071b9dd62651-kube-api-access-85s2q\") pod \"glance-default-internal-api-0\" (UID: \"df1e3a4b-9dec-4b59-80a8-071b9dd62651\") " pod="openstack/glance-default-internal-api-0"
Oct 14 10:14:25 crc kubenswrapper[4698]: I1014 10:14:25.337750 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-internal-api-0\" (UID: \"df1e3a4b-9dec-4b59-80a8-071b9dd62651\") " pod="openstack/glance-default-internal-api-0"
Oct 14 10:14:25 crc kubenswrapper[4698]: I1014 10:14:25.423251 4698 scope.go:117] "RemoveContainer" containerID="ae0390bf5e25ddaab490c245cb31d134dcbcfa295390eb1373a9591ff99c8bdd"
Oct 14 10:14:25 crc kubenswrapper[4698]: I1014 10:14:25.478757 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-18d1-account-create-bjrtb"
Oct 14 10:14:25 crc kubenswrapper[4698]: I1014 10:14:25.530689 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0"
Oct 14 10:14:25 crc kubenswrapper[4698]: I1014 10:14:25.601013 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hcsxw\" (UniqueName: \"kubernetes.io/projected/ae8ed93f-2876-4954-89f6-e169a445631d-kube-api-access-hcsxw\") pod \"ae8ed93f-2876-4954-89f6-e169a445631d\" (UID: \"ae8ed93f-2876-4954-89f6-e169a445631d\") "
Oct 14 10:14:25 crc kubenswrapper[4698]: I1014 10:14:25.605594 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ae8ed93f-2876-4954-89f6-e169a445631d-kube-api-access-hcsxw" (OuterVolumeSpecName: "kube-api-access-hcsxw") pod "ae8ed93f-2876-4954-89f6-e169a445631d" (UID: "ae8ed93f-2876-4954-89f6-e169a445631d"). InnerVolumeSpecName "kube-api-access-hcsxw". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 14 10:14:25 crc kubenswrapper[4698]: I1014 10:14:25.710104 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hcsxw\" (UniqueName: \"kubernetes.io/projected/ae8ed93f-2876-4954-89f6-e169a445631d-kube-api-access-hcsxw\") on node \"crc\" DevicePath \"\""
Oct 14 10:14:26 crc kubenswrapper[4698]: I1014 10:14:26.058332 4698 generic.go:334] "Generic (PLEG): container finished" podID="535a7b3a-eda4-4540-b96a-e3bac0fb3a16" containerID="27d3071cf41f37dfef03099440b31ebbd8a582899e065896b23df38bc756e8d1" exitCode=0
Oct 14 10:14:26 crc kubenswrapper[4698]: I1014 10:14:26.058361 4698 generic.go:334] "Generic (PLEG): container finished" podID="535a7b3a-eda4-4540-b96a-e3bac0fb3a16" containerID="9355c8664fa4675a54ec1986e00d34ed899603233a4c6f4c4bd3ee8f8a7ac681" exitCode=143
Oct 14 10:14:26 crc kubenswrapper[4698]: I1014 10:14:26.058414 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"535a7b3a-eda4-4540-b96a-e3bac0fb3a16","Type":"ContainerDied","Data":"27d3071cf41f37dfef03099440b31ebbd8a582899e065896b23df38bc756e8d1"}
Oct 14 10:14:26 crc kubenswrapper[4698]: I1014 10:14:26.058440 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"535a7b3a-eda4-4540-b96a-e3bac0fb3a16","Type":"ContainerDied","Data":"9355c8664fa4675a54ec1986e00d34ed899603233a4c6f4c4bd3ee8f8a7ac681"}
Oct 14 10:14:26 crc kubenswrapper[4698]: I1014 10:14:26.062897 4698 generic.go:334] "Generic (PLEG): container finished" podID="29fd1594-5c07-4283-bcb2-6ac29907c35c" containerID="883639fa3508d2924a6bfc2cb0fdbdd2dee926e8b7c81c063de63f0ac38f5194" exitCode=0
Oct 14 10:14:26 crc kubenswrapper[4698]: I1014 10:14:26.062935 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-c9bp9" event={"ID":"29fd1594-5c07-4283-bcb2-6ac29907c35c","Type":"ContainerDied","Data":"883639fa3508d2924a6bfc2cb0fdbdd2dee926e8b7c81c063de63f0ac38f5194"}
Oct 14 10:14:26 crc kubenswrapper[4698]: I1014 10:14:26.065199 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-18d1-account-create-bjrtb" event={"ID":"ae8ed93f-2876-4954-89f6-e169a445631d","Type":"ContainerDied","Data":"cdd0bef9b0593b9d859951c7ba2072763451e847c49eb4c04092357f87453375"}
Oct 14 10:14:26 crc kubenswrapper[4698]: I1014 10:14:26.065237 4698 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cdd0bef9b0593b9d859951c7ba2072763451e847c49eb4c04092357f87453375"
Oct 14 10:14:26 crc kubenswrapper[4698]: I1014 10:14:26.065283 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-18d1-account-create-bjrtb"
Oct 14 10:14:27 crc kubenswrapper[4698]: I1014 10:14:27.032232 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="eb6f43ee-215b-4fdb-b298-7df52d8ebd92" path="/var/lib/kubelet/pods/eb6f43ee-215b-4fdb-b298-7df52d8ebd92/volumes"
Oct 14 10:14:27 crc kubenswrapper[4698]: I1014 10:14:27.745403 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"]
Oct 14 10:14:28 crc kubenswrapper[4698]: I1014 10:14:28.483133 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0"
Oct 14 10:14:28 crc kubenswrapper[4698]: I1014 10:14:28.575040 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/535a7b3a-eda4-4540-b96a-e3bac0fb3a16-scripts\") pod \"535a7b3a-eda4-4540-b96a-e3bac0fb3a16\" (UID: \"535a7b3a-eda4-4540-b96a-e3bac0fb3a16\") "
Oct 14 10:14:28 crc kubenswrapper[4698]: I1014 10:14:28.575188 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/535a7b3a-eda4-4540-b96a-e3bac0fb3a16-combined-ca-bundle\") pod \"535a7b3a-eda4-4540-b96a-e3bac0fb3a16\" (UID: \"535a7b3a-eda4-4540-b96a-e3bac0fb3a16\") "
Oct 14 10:14:28 crc kubenswrapper[4698]: I1014 10:14:28.575220 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/535a7b3a-eda4-4540-b96a-e3bac0fb3a16-httpd-run\") pod \"535a7b3a-eda4-4540-b96a-e3bac0fb3a16\" (UID: \"535a7b3a-eda4-4540-b96a-e3bac0fb3a16\") "
Oct 14 10:14:28 crc kubenswrapper[4698]: I1014 10:14:28.575318 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/535a7b3a-eda4-4540-b96a-e3bac0fb3a16-ceph\") pod \"535a7b3a-eda4-4540-b96a-e3bac0fb3a16\" (UID: \"535a7b3a-eda4-4540-b96a-e3bac0fb3a16\") "
Oct 14 10:14:28 crc kubenswrapper[4698]: I1014 10:14:28.575419 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nf42c\" (UniqueName: \"kubernetes.io/projected/535a7b3a-eda4-4540-b96a-e3bac0fb3a16-kube-api-access-nf42c\") pod \"535a7b3a-eda4-4540-b96a-e3bac0fb3a16\" (UID: \"535a7b3a-eda4-4540-b96a-e3bac0fb3a16\") "
Oct 14 10:14:28 crc kubenswrapper[4698]: I1014 10:14:28.575483 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/535a7b3a-eda4-4540-b96a-e3bac0fb3a16-config-data\") pod \"535a7b3a-eda4-4540-b96a-e3bac0fb3a16\" (UID: \"535a7b3a-eda4-4540-b96a-e3bac0fb3a16\") "
Oct 14 10:14:28 crc kubenswrapper[4698]: I1014 10:14:28.575502 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"535a7b3a-eda4-4540-b96a-e3bac0fb3a16\" (UID: \"535a7b3a-eda4-4540-b96a-e3bac0fb3a16\") "
Oct 14 10:14:28 crc kubenswrapper[4698]: I1014 10:14:28.575551 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/535a7b3a-eda4-4540-b96a-e3bac0fb3a16-logs\") pod \"535a7b3a-eda4-4540-b96a-e3bac0fb3a16\" (UID: \"535a7b3a-eda4-4540-b96a-e3bac0fb3a16\") "
Oct 14 10:14:28 crc kubenswrapper[4698]: I1014 10:14:28.576456 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/535a7b3a-eda4-4540-b96a-e3bac0fb3a16-logs" (OuterVolumeSpecName: "logs") pod "535a7b3a-eda4-4540-b96a-e3bac0fb3a16" (UID: "535a7b3a-eda4-4540-b96a-e3bac0fb3a16"). InnerVolumeSpecName "logs".
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 10:14:28 crc kubenswrapper[4698]: I1014 10:14:28.580536 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/535a7b3a-eda4-4540-b96a-e3bac0fb3a16-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "535a7b3a-eda4-4540-b96a-e3bac0fb3a16" (UID: "535a7b3a-eda4-4540-b96a-e3bac0fb3a16"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 10:14:28 crc kubenswrapper[4698]: I1014 10:14:28.584007 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/535a7b3a-eda4-4540-b96a-e3bac0fb3a16-ceph" (OuterVolumeSpecName: "ceph") pod "535a7b3a-eda4-4540-b96a-e3bac0fb3a16" (UID: "535a7b3a-eda4-4540-b96a-e3bac0fb3a16"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 10:14:28 crc kubenswrapper[4698]: I1014 10:14:28.588501 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage09-crc" (OuterVolumeSpecName: "glance") pod "535a7b3a-eda4-4540-b96a-e3bac0fb3a16" (UID: "535a7b3a-eda4-4540-b96a-e3bac0fb3a16"). InnerVolumeSpecName "local-storage09-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Oct 14 10:14:28 crc kubenswrapper[4698]: I1014 10:14:28.588700 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/535a7b3a-eda4-4540-b96a-e3bac0fb3a16-kube-api-access-nf42c" (OuterVolumeSpecName: "kube-api-access-nf42c") pod "535a7b3a-eda4-4540-b96a-e3bac0fb3a16" (UID: "535a7b3a-eda4-4540-b96a-e3bac0fb3a16"). InnerVolumeSpecName "kube-api-access-nf42c". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 10:14:28 crc kubenswrapper[4698]: I1014 10:14:28.598475 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/535a7b3a-eda4-4540-b96a-e3bac0fb3a16-scripts" (OuterVolumeSpecName: "scripts") pod "535a7b3a-eda4-4540-b96a-e3bac0fb3a16" (UID: "535a7b3a-eda4-4540-b96a-e3bac0fb3a16"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 10:14:28 crc kubenswrapper[4698]: I1014 10:14:28.622257 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/535a7b3a-eda4-4540-b96a-e3bac0fb3a16-config-data" (OuterVolumeSpecName: "config-data") pod "535a7b3a-eda4-4540-b96a-e3bac0fb3a16" (UID: "535a7b3a-eda4-4540-b96a-e3bac0fb3a16"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 10:14:28 crc kubenswrapper[4698]: I1014 10:14:28.648027 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/535a7b3a-eda4-4540-b96a-e3bac0fb3a16-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "535a7b3a-eda4-4540-b96a-e3bac0fb3a16" (UID: "535a7b3a-eda4-4540-b96a-e3bac0fb3a16"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 10:14:28 crc kubenswrapper[4698]: I1014 10:14:28.680392 4698 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/535a7b3a-eda4-4540-b96a-e3bac0fb3a16-config-data\") on node \"crc\" DevicePath \"\"" Oct 14 10:14:28 crc kubenswrapper[4698]: I1014 10:14:28.680461 4698 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") on node \"crc\" " Oct 14 10:14:28 crc kubenswrapper[4698]: I1014 10:14:28.680478 4698 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/535a7b3a-eda4-4540-b96a-e3bac0fb3a16-logs\") on node \"crc\" DevicePath \"\"" Oct 14 10:14:28 crc kubenswrapper[4698]: I1014 10:14:28.680492 4698 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/535a7b3a-eda4-4540-b96a-e3bac0fb3a16-scripts\") on node \"crc\" DevicePath \"\"" Oct 14 10:14:28 crc kubenswrapper[4698]: I1014 10:14:28.680504 4698 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/535a7b3a-eda4-4540-b96a-e3bac0fb3a16-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 14 10:14:28 crc kubenswrapper[4698]: I1014 10:14:28.680516 4698 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/535a7b3a-eda4-4540-b96a-e3bac0fb3a16-httpd-run\") on node \"crc\" DevicePath \"\"" Oct 14 10:14:28 crc kubenswrapper[4698]: I1014 10:14:28.680525 4698 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/535a7b3a-eda4-4540-b96a-e3bac0fb3a16-ceph\") on node \"crc\" DevicePath \"\"" Oct 14 10:14:28 crc kubenswrapper[4698]: I1014 10:14:28.680536 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nf42c\" 
(UniqueName: \"kubernetes.io/projected/535a7b3a-eda4-4540-b96a-e3bac0fb3a16-kube-api-access-nf42c\") on node \"crc\" DevicePath \"\"" Oct 14 10:14:28 crc kubenswrapper[4698]: I1014 10:14:28.720212 4698 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage09-crc" (UniqueName: "kubernetes.io/local-volume/local-storage09-crc") on node "crc" Oct 14 10:14:28 crc kubenswrapper[4698]: I1014 10:14:28.782160 4698 reconciler_common.go:293] "Volume detached for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") on node \"crc\" DevicePath \"\"" Oct 14 10:14:29 crc kubenswrapper[4698]: I1014 10:14:29.173065 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Oct 14 10:14:29 crc kubenswrapper[4698]: I1014 10:14:29.173105 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"535a7b3a-eda4-4540-b96a-e3bac0fb3a16","Type":"ContainerDied","Data":"6c2445a7eccf6e384123fb4ec6fb8c2c1fce7e6acd1cad128174495a41b4987b"} Oct 14 10:14:29 crc kubenswrapper[4698]: I1014 10:14:29.219229 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Oct 14 10:14:29 crc kubenswrapper[4698]: I1014 10:14:29.236405 4698 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Oct 14 10:14:29 crc kubenswrapper[4698]: I1014 10:14:29.253150 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Oct 14 10:14:29 crc kubenswrapper[4698]: E1014 10:14:29.253894 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ae8ed93f-2876-4954-89f6-e169a445631d" containerName="mariadb-account-create" Oct 14 10:14:29 crc kubenswrapper[4698]: I1014 10:14:29.253908 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="ae8ed93f-2876-4954-89f6-e169a445631d" 
containerName="mariadb-account-create" Oct 14 10:14:29 crc kubenswrapper[4698]: E1014 10:14:29.253926 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="535a7b3a-eda4-4540-b96a-e3bac0fb3a16" containerName="glance-log" Oct 14 10:14:29 crc kubenswrapper[4698]: I1014 10:14:29.253932 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="535a7b3a-eda4-4540-b96a-e3bac0fb3a16" containerName="glance-log" Oct 14 10:14:29 crc kubenswrapper[4698]: E1014 10:14:29.253948 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="535a7b3a-eda4-4540-b96a-e3bac0fb3a16" containerName="glance-httpd" Oct 14 10:14:29 crc kubenswrapper[4698]: I1014 10:14:29.253956 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="535a7b3a-eda4-4540-b96a-e3bac0fb3a16" containerName="glance-httpd" Oct 14 10:14:29 crc kubenswrapper[4698]: I1014 10:14:29.254163 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="535a7b3a-eda4-4540-b96a-e3bac0fb3a16" containerName="glance-httpd" Oct 14 10:14:29 crc kubenswrapper[4698]: I1014 10:14:29.254178 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="ae8ed93f-2876-4954-89f6-e169a445631d" containerName="mariadb-account-create" Oct 14 10:14:29 crc kubenswrapper[4698]: I1014 10:14:29.254186 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="535a7b3a-eda4-4540-b96a-e3bac0fb3a16" containerName="glance-log" Oct 14 10:14:29 crc kubenswrapper[4698]: I1014 10:14:29.256409 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Oct 14 10:14:29 crc kubenswrapper[4698]: I1014 10:14:29.263663 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Oct 14 10:14:29 crc kubenswrapper[4698]: I1014 10:14:29.264082 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Oct 14 10:14:29 crc kubenswrapper[4698]: I1014 10:14:29.264132 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Oct 14 10:14:29 crc kubenswrapper[4698]: I1014 10:14:29.397090 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/fdfbb802-f191-4e92-ac52-de08502a8e80-ceph\") pod \"glance-default-external-api-0\" (UID: \"fdfbb802-f191-4e92-ac52-de08502a8e80\") " pod="openstack/glance-default-external-api-0" Oct 14 10:14:29 crc kubenswrapper[4698]: I1014 10:14:29.397196 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fdfbb802-f191-4e92-ac52-de08502a8e80-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"fdfbb802-f191-4e92-ac52-de08502a8e80\") " pod="openstack/glance-default-external-api-0" Oct 14 10:14:29 crc kubenswrapper[4698]: I1014 10:14:29.397241 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fdfbb802-f191-4e92-ac52-de08502a8e80-scripts\") pod \"glance-default-external-api-0\" (UID: \"fdfbb802-f191-4e92-ac52-de08502a8e80\") " pod="openstack/glance-default-external-api-0" Oct 14 10:14:29 crc kubenswrapper[4698]: I1014 10:14:29.397308 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/fdfbb802-f191-4e92-ac52-de08502a8e80-logs\") pod \"glance-default-external-api-0\" (UID: \"fdfbb802-f191-4e92-ac52-de08502a8e80\") " pod="openstack/glance-default-external-api-0" Oct 14 10:14:29 crc kubenswrapper[4698]: I1014 10:14:29.397329 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-external-api-0\" (UID: \"fdfbb802-f191-4e92-ac52-de08502a8e80\") " pod="openstack/glance-default-external-api-0" Oct 14 10:14:29 crc kubenswrapper[4698]: I1014 10:14:29.397360 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fdfbb802-f191-4e92-ac52-de08502a8e80-config-data\") pod \"glance-default-external-api-0\" (UID: \"fdfbb802-f191-4e92-ac52-de08502a8e80\") " pod="openstack/glance-default-external-api-0" Oct 14 10:14:29 crc kubenswrapper[4698]: I1014 10:14:29.397399 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/fdfbb802-f191-4e92-ac52-de08502a8e80-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"fdfbb802-f191-4e92-ac52-de08502a8e80\") " pod="openstack/glance-default-external-api-0" Oct 14 10:14:29 crc kubenswrapper[4698]: I1014 10:14:29.397445 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ghkzt\" (UniqueName: \"kubernetes.io/projected/fdfbb802-f191-4e92-ac52-de08502a8e80-kube-api-access-ghkzt\") pod \"glance-default-external-api-0\" (UID: \"fdfbb802-f191-4e92-ac52-de08502a8e80\") " pod="openstack/glance-default-external-api-0" Oct 14 10:14:29 crc kubenswrapper[4698]: I1014 10:14:29.397470 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/fdfbb802-f191-4e92-ac52-de08502a8e80-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"fdfbb802-f191-4e92-ac52-de08502a8e80\") " pod="openstack/glance-default-external-api-0" Oct 14 10:14:29 crc kubenswrapper[4698]: I1014 10:14:29.500671 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/fdfbb802-f191-4e92-ac52-de08502a8e80-ceph\") pod \"glance-default-external-api-0\" (UID: \"fdfbb802-f191-4e92-ac52-de08502a8e80\") " pod="openstack/glance-default-external-api-0" Oct 14 10:14:29 crc kubenswrapper[4698]: I1014 10:14:29.500788 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fdfbb802-f191-4e92-ac52-de08502a8e80-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"fdfbb802-f191-4e92-ac52-de08502a8e80\") " pod="openstack/glance-default-external-api-0" Oct 14 10:14:29 crc kubenswrapper[4698]: I1014 10:14:29.500836 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fdfbb802-f191-4e92-ac52-de08502a8e80-scripts\") pod \"glance-default-external-api-0\" (UID: \"fdfbb802-f191-4e92-ac52-de08502a8e80\") " pod="openstack/glance-default-external-api-0" Oct 14 10:14:29 crc kubenswrapper[4698]: I1014 10:14:29.500907 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fdfbb802-f191-4e92-ac52-de08502a8e80-logs\") pod \"glance-default-external-api-0\" (UID: \"fdfbb802-f191-4e92-ac52-de08502a8e80\") " pod="openstack/glance-default-external-api-0" Oct 14 10:14:29 crc kubenswrapper[4698]: I1014 10:14:29.501492 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod 
\"glance-default-external-api-0\" (UID: \"fdfbb802-f191-4e92-ac52-de08502a8e80\") " pod="openstack/glance-default-external-api-0" Oct 14 10:14:29 crc kubenswrapper[4698]: I1014 10:14:29.501789 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fdfbb802-f191-4e92-ac52-de08502a8e80-config-data\") pod \"glance-default-external-api-0\" (UID: \"fdfbb802-f191-4e92-ac52-de08502a8e80\") " pod="openstack/glance-default-external-api-0" Oct 14 10:14:29 crc kubenswrapper[4698]: I1014 10:14:29.501833 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/fdfbb802-f191-4e92-ac52-de08502a8e80-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"fdfbb802-f191-4e92-ac52-de08502a8e80\") " pod="openstack/glance-default-external-api-0" Oct 14 10:14:29 crc kubenswrapper[4698]: I1014 10:14:29.501889 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ghkzt\" (UniqueName: \"kubernetes.io/projected/fdfbb802-f191-4e92-ac52-de08502a8e80-kube-api-access-ghkzt\") pod \"glance-default-external-api-0\" (UID: \"fdfbb802-f191-4e92-ac52-de08502a8e80\") " pod="openstack/glance-default-external-api-0" Oct 14 10:14:29 crc kubenswrapper[4698]: I1014 10:14:29.501919 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/fdfbb802-f191-4e92-ac52-de08502a8e80-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"fdfbb802-f191-4e92-ac52-de08502a8e80\") " pod="openstack/glance-default-external-api-0" Oct 14 10:14:29 crc kubenswrapper[4698]: I1014 10:14:29.501991 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fdfbb802-f191-4e92-ac52-de08502a8e80-logs\") pod \"glance-default-external-api-0\" (UID: 
\"fdfbb802-f191-4e92-ac52-de08502a8e80\") " pod="openstack/glance-default-external-api-0" Oct 14 10:14:29 crc kubenswrapper[4698]: I1014 10:14:29.502014 4698 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-external-api-0\" (UID: \"fdfbb802-f191-4e92-ac52-de08502a8e80\") device mount path \"/mnt/openstack/pv09\"" pod="openstack/glance-default-external-api-0" Oct 14 10:14:29 crc kubenswrapper[4698]: I1014 10:14:29.502255 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/fdfbb802-f191-4e92-ac52-de08502a8e80-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"fdfbb802-f191-4e92-ac52-de08502a8e80\") " pod="openstack/glance-default-external-api-0" Oct 14 10:14:29 crc kubenswrapper[4698]: I1014 10:14:29.511362 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fdfbb802-f191-4e92-ac52-de08502a8e80-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"fdfbb802-f191-4e92-ac52-de08502a8e80\") " pod="openstack/glance-default-external-api-0" Oct 14 10:14:29 crc kubenswrapper[4698]: I1014 10:14:29.513426 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/fdfbb802-f191-4e92-ac52-de08502a8e80-ceph\") pod \"glance-default-external-api-0\" (UID: \"fdfbb802-f191-4e92-ac52-de08502a8e80\") " pod="openstack/glance-default-external-api-0" Oct 14 10:14:29 crc kubenswrapper[4698]: I1014 10:14:29.515737 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fdfbb802-f191-4e92-ac52-de08502a8e80-config-data\") pod \"glance-default-external-api-0\" (UID: \"fdfbb802-f191-4e92-ac52-de08502a8e80\") " pod="openstack/glance-default-external-api-0" Oct 
14 10:14:29 crc kubenswrapper[4698]: I1014 10:14:29.516380 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fdfbb802-f191-4e92-ac52-de08502a8e80-scripts\") pod \"glance-default-external-api-0\" (UID: \"fdfbb802-f191-4e92-ac52-de08502a8e80\") " pod="openstack/glance-default-external-api-0" Oct 14 10:14:29 crc kubenswrapper[4698]: I1014 10:14:29.517517 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/fdfbb802-f191-4e92-ac52-de08502a8e80-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"fdfbb802-f191-4e92-ac52-de08502a8e80\") " pod="openstack/glance-default-external-api-0" Oct 14 10:14:29 crc kubenswrapper[4698]: I1014 10:14:29.525983 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ghkzt\" (UniqueName: \"kubernetes.io/projected/fdfbb802-f191-4e92-ac52-de08502a8e80-kube-api-access-ghkzt\") pod \"glance-default-external-api-0\" (UID: \"fdfbb802-f191-4e92-ac52-de08502a8e80\") " pod="openstack/glance-default-external-api-0" Oct 14 10:14:29 crc kubenswrapper[4698]: I1014 10:14:29.536902 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-8b5c85b87-f5kv7" Oct 14 10:14:29 crc kubenswrapper[4698]: I1014 10:14:29.539827 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-external-api-0\" (UID: \"fdfbb802-f191-4e92-ac52-de08502a8e80\") " pod="openstack/glance-default-external-api-0" Oct 14 10:14:29 crc kubenswrapper[4698]: I1014 10:14:29.594715 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Oct 14 10:14:29 crc kubenswrapper[4698]: I1014 10:14:29.643600 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-698758b865-9n6ld"] Oct 14 10:14:29 crc kubenswrapper[4698]: I1014 10:14:29.643854 4698 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-698758b865-9n6ld" podUID="414ba38b-6cfb-48ae-b818-6f8544558bf1" containerName="dnsmasq-dns" containerID="cri-o://c795647333c10b386e2f260c64304f0cb354b12fd52757a250b2ba2949df1ebf" gracePeriod=10 Oct 14 10:14:29 crc kubenswrapper[4698]: I1014 10:14:29.672385 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-6f9d769b87-j9282"] Oct 14 10:14:29 crc kubenswrapper[4698]: I1014 10:14:29.710946 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-7b567dfd5d-nvwrp"] Oct 14 10:14:29 crc kubenswrapper[4698]: I1014 10:14:29.713274 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-7b567dfd5d-nvwrp" Oct 14 10:14:29 crc kubenswrapper[4698]: I1014 10:14:29.717395 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-7b567dfd5d-nvwrp"] Oct 14 10:14:29 crc kubenswrapper[4698]: I1014 10:14:29.731581 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-horizon-svc" Oct 14 10:14:29 crc kubenswrapper[4698]: I1014 10:14:29.806242 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-58986b5dd5-xvhvn"] Oct 14 10:14:29 crc kubenswrapper[4698]: I1014 10:14:29.848202 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-6cf95ddffb-6h2bm"] Oct 14 10:14:29 crc kubenswrapper[4698]: I1014 10:14:29.850848 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-6cf95ddffb-6h2bm" Oct 14 10:14:29 crc kubenswrapper[4698]: I1014 10:14:29.857968 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-6cf95ddffb-6h2bm"] Oct 14 10:14:29 crc kubenswrapper[4698]: I1014 10:14:29.886948 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Oct 14 10:14:29 crc kubenswrapper[4698]: I1014 10:14:29.912713 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t7f6s\" (UniqueName: \"kubernetes.io/projected/ee140165-8d8d-426c-b33f-5803bb0a7ad1-kube-api-access-t7f6s\") pod \"horizon-7b567dfd5d-nvwrp\" (UID: \"ee140165-8d8d-426c-b33f-5803bb0a7ad1\") " pod="openstack/horizon-7b567dfd5d-nvwrp" Oct 14 10:14:29 crc kubenswrapper[4698]: I1014 10:14:29.912802 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ee140165-8d8d-426c-b33f-5803bb0a7ad1-combined-ca-bundle\") pod \"horizon-7b567dfd5d-nvwrp\" (UID: \"ee140165-8d8d-426c-b33f-5803bb0a7ad1\") " pod="openstack/horizon-7b567dfd5d-nvwrp" Oct 14 10:14:29 crc kubenswrapper[4698]: I1014 10:14:29.912863 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/ee140165-8d8d-426c-b33f-5803bb0a7ad1-horizon-secret-key\") pod \"horizon-7b567dfd5d-nvwrp\" (UID: \"ee140165-8d8d-426c-b33f-5803bb0a7ad1\") " pod="openstack/horizon-7b567dfd5d-nvwrp" Oct 14 10:14:29 crc kubenswrapper[4698]: I1014 10:14:29.912885 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/ee140165-8d8d-426c-b33f-5803bb0a7ad1-config-data\") pod \"horizon-7b567dfd5d-nvwrp\" (UID: \"ee140165-8d8d-426c-b33f-5803bb0a7ad1\") " 
pod="openstack/horizon-7b567dfd5d-nvwrp" Oct 14 10:14:29 crc kubenswrapper[4698]: I1014 10:14:29.912921 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ee140165-8d8d-426c-b33f-5803bb0a7ad1-scripts\") pod \"horizon-7b567dfd5d-nvwrp\" (UID: \"ee140165-8d8d-426c-b33f-5803bb0a7ad1\") " pod="openstack/horizon-7b567dfd5d-nvwrp" Oct 14 10:14:29 crc kubenswrapper[4698]: I1014 10:14:29.914655 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/ee140165-8d8d-426c-b33f-5803bb0a7ad1-horizon-tls-certs\") pod \"horizon-7b567dfd5d-nvwrp\" (UID: \"ee140165-8d8d-426c-b33f-5803bb0a7ad1\") " pod="openstack/horizon-7b567dfd5d-nvwrp" Oct 14 10:14:29 crc kubenswrapper[4698]: I1014 10:14:29.914733 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ee140165-8d8d-426c-b33f-5803bb0a7ad1-logs\") pod \"horizon-7b567dfd5d-nvwrp\" (UID: \"ee140165-8d8d-426c-b33f-5803bb0a7ad1\") " pod="openstack/horizon-7b567dfd5d-nvwrp" Oct 14 10:14:30 crc kubenswrapper[4698]: I1014 10:14:30.016348 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/746d0a6a-4df6-40b6-9600-63ec14336507-config-data\") pod \"horizon-6cf95ddffb-6h2bm\" (UID: \"746d0a6a-4df6-40b6-9600-63ec14336507\") " pod="openstack/horizon-6cf95ddffb-6h2bm" Oct 14 10:14:30 crc kubenswrapper[4698]: I1014 10:14:30.016429 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/ee140165-8d8d-426c-b33f-5803bb0a7ad1-horizon-secret-key\") pod \"horizon-7b567dfd5d-nvwrp\" (UID: \"ee140165-8d8d-426c-b33f-5803bb0a7ad1\") " pod="openstack/horizon-7b567dfd5d-nvwrp" Oct 14 
10:14:30 crc kubenswrapper[4698]: I1014 10:14:30.016452 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/ee140165-8d8d-426c-b33f-5803bb0a7ad1-config-data\") pod \"horizon-7b567dfd5d-nvwrp\" (UID: \"ee140165-8d8d-426c-b33f-5803bb0a7ad1\") " pod="openstack/horizon-7b567dfd5d-nvwrp" Oct 14 10:14:30 crc kubenswrapper[4698]: I1014 10:14:30.016493 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ee140165-8d8d-426c-b33f-5803bb0a7ad1-scripts\") pod \"horizon-7b567dfd5d-nvwrp\" (UID: \"ee140165-8d8d-426c-b33f-5803bb0a7ad1\") " pod="openstack/horizon-7b567dfd5d-nvwrp" Oct 14 10:14:30 crc kubenswrapper[4698]: I1014 10:14:30.016516 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/746d0a6a-4df6-40b6-9600-63ec14336507-combined-ca-bundle\") pod \"horizon-6cf95ddffb-6h2bm\" (UID: \"746d0a6a-4df6-40b6-9600-63ec14336507\") " pod="openstack/horizon-6cf95ddffb-6h2bm" Oct 14 10:14:30 crc kubenswrapper[4698]: I1014 10:14:30.016547 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-64mtn\" (UniqueName: \"kubernetes.io/projected/746d0a6a-4df6-40b6-9600-63ec14336507-kube-api-access-64mtn\") pod \"horizon-6cf95ddffb-6h2bm\" (UID: \"746d0a6a-4df6-40b6-9600-63ec14336507\") " pod="openstack/horizon-6cf95ddffb-6h2bm" Oct 14 10:14:30 crc kubenswrapper[4698]: I1014 10:14:30.016575 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/ee140165-8d8d-426c-b33f-5803bb0a7ad1-horizon-tls-certs\") pod \"horizon-7b567dfd5d-nvwrp\" (UID: \"ee140165-8d8d-426c-b33f-5803bb0a7ad1\") " pod="openstack/horizon-7b567dfd5d-nvwrp" Oct 14 10:14:30 crc kubenswrapper[4698]: I1014 
10:14:30.016590 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/746d0a6a-4df6-40b6-9600-63ec14336507-horizon-tls-certs\") pod \"horizon-6cf95ddffb-6h2bm\" (UID: \"746d0a6a-4df6-40b6-9600-63ec14336507\") " pod="openstack/horizon-6cf95ddffb-6h2bm" Oct 14 10:14:30 crc kubenswrapper[4698]: I1014 10:14:30.016630 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ee140165-8d8d-426c-b33f-5803bb0a7ad1-logs\") pod \"horizon-7b567dfd5d-nvwrp\" (UID: \"ee140165-8d8d-426c-b33f-5803bb0a7ad1\") " pod="openstack/horizon-7b567dfd5d-nvwrp" Oct 14 10:14:30 crc kubenswrapper[4698]: I1014 10:14:30.016649 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/746d0a6a-4df6-40b6-9600-63ec14336507-horizon-secret-key\") pod \"horizon-6cf95ddffb-6h2bm\" (UID: \"746d0a6a-4df6-40b6-9600-63ec14336507\") " pod="openstack/horizon-6cf95ddffb-6h2bm" Oct 14 10:14:30 crc kubenswrapper[4698]: I1014 10:14:30.016672 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/746d0a6a-4df6-40b6-9600-63ec14336507-scripts\") pod \"horizon-6cf95ddffb-6h2bm\" (UID: \"746d0a6a-4df6-40b6-9600-63ec14336507\") " pod="openstack/horizon-6cf95ddffb-6h2bm" Oct 14 10:14:30 crc kubenswrapper[4698]: I1014 10:14:30.016691 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/746d0a6a-4df6-40b6-9600-63ec14336507-logs\") pod \"horizon-6cf95ddffb-6h2bm\" (UID: \"746d0a6a-4df6-40b6-9600-63ec14336507\") " pod="openstack/horizon-6cf95ddffb-6h2bm" Oct 14 10:14:30 crc kubenswrapper[4698]: I1014 10:14:30.016721 4698 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-t7f6s\" (UniqueName: \"kubernetes.io/projected/ee140165-8d8d-426c-b33f-5803bb0a7ad1-kube-api-access-t7f6s\") pod \"horizon-7b567dfd5d-nvwrp\" (UID: \"ee140165-8d8d-426c-b33f-5803bb0a7ad1\") " pod="openstack/horizon-7b567dfd5d-nvwrp" Oct 14 10:14:30 crc kubenswrapper[4698]: I1014 10:14:30.016740 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ee140165-8d8d-426c-b33f-5803bb0a7ad1-combined-ca-bundle\") pod \"horizon-7b567dfd5d-nvwrp\" (UID: \"ee140165-8d8d-426c-b33f-5803bb0a7ad1\") " pod="openstack/horizon-7b567dfd5d-nvwrp" Oct 14 10:14:30 crc kubenswrapper[4698]: I1014 10:14:30.017620 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ee140165-8d8d-426c-b33f-5803bb0a7ad1-logs\") pod \"horizon-7b567dfd5d-nvwrp\" (UID: \"ee140165-8d8d-426c-b33f-5803bb0a7ad1\") " pod="openstack/horizon-7b567dfd5d-nvwrp" Oct 14 10:14:30 crc kubenswrapper[4698]: I1014 10:14:30.018534 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ee140165-8d8d-426c-b33f-5803bb0a7ad1-scripts\") pod \"horizon-7b567dfd5d-nvwrp\" (UID: \"ee140165-8d8d-426c-b33f-5803bb0a7ad1\") " pod="openstack/horizon-7b567dfd5d-nvwrp" Oct 14 10:14:30 crc kubenswrapper[4698]: I1014 10:14:30.018956 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/ee140165-8d8d-426c-b33f-5803bb0a7ad1-config-data\") pod \"horizon-7b567dfd5d-nvwrp\" (UID: \"ee140165-8d8d-426c-b33f-5803bb0a7ad1\") " pod="openstack/horizon-7b567dfd5d-nvwrp" Oct 14 10:14:30 crc kubenswrapper[4698]: I1014 10:14:30.022921 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: 
\"kubernetes.io/secret/ee140165-8d8d-426c-b33f-5803bb0a7ad1-horizon-secret-key\") pod \"horizon-7b567dfd5d-nvwrp\" (UID: \"ee140165-8d8d-426c-b33f-5803bb0a7ad1\") " pod="openstack/horizon-7b567dfd5d-nvwrp" Oct 14 10:14:30 crc kubenswrapper[4698]: I1014 10:14:30.023358 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/ee140165-8d8d-426c-b33f-5803bb0a7ad1-horizon-tls-certs\") pod \"horizon-7b567dfd5d-nvwrp\" (UID: \"ee140165-8d8d-426c-b33f-5803bb0a7ad1\") " pod="openstack/horizon-7b567dfd5d-nvwrp" Oct 14 10:14:30 crc kubenswrapper[4698]: I1014 10:14:30.023430 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ee140165-8d8d-426c-b33f-5803bb0a7ad1-combined-ca-bundle\") pod \"horizon-7b567dfd5d-nvwrp\" (UID: \"ee140165-8d8d-426c-b33f-5803bb0a7ad1\") " pod="openstack/horizon-7b567dfd5d-nvwrp" Oct 14 10:14:30 crc kubenswrapper[4698]: I1014 10:14:30.042097 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t7f6s\" (UniqueName: \"kubernetes.io/projected/ee140165-8d8d-426c-b33f-5803bb0a7ad1-kube-api-access-t7f6s\") pod \"horizon-7b567dfd5d-nvwrp\" (UID: \"ee140165-8d8d-426c-b33f-5803bb0a7ad1\") " pod="openstack/horizon-7b567dfd5d-nvwrp" Oct 14 10:14:30 crc kubenswrapper[4698]: I1014 10:14:30.058178 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-7b567dfd5d-nvwrp" Oct 14 10:14:30 crc kubenswrapper[4698]: I1014 10:14:30.118513 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/746d0a6a-4df6-40b6-9600-63ec14336507-config-data\") pod \"horizon-6cf95ddffb-6h2bm\" (UID: \"746d0a6a-4df6-40b6-9600-63ec14336507\") " pod="openstack/horizon-6cf95ddffb-6h2bm" Oct 14 10:14:30 crc kubenswrapper[4698]: I1014 10:14:30.118638 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/746d0a6a-4df6-40b6-9600-63ec14336507-combined-ca-bundle\") pod \"horizon-6cf95ddffb-6h2bm\" (UID: \"746d0a6a-4df6-40b6-9600-63ec14336507\") " pod="openstack/horizon-6cf95ddffb-6h2bm" Oct 14 10:14:30 crc kubenswrapper[4698]: I1014 10:14:30.118688 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-64mtn\" (UniqueName: \"kubernetes.io/projected/746d0a6a-4df6-40b6-9600-63ec14336507-kube-api-access-64mtn\") pod \"horizon-6cf95ddffb-6h2bm\" (UID: \"746d0a6a-4df6-40b6-9600-63ec14336507\") " pod="openstack/horizon-6cf95ddffb-6h2bm" Oct 14 10:14:30 crc kubenswrapper[4698]: I1014 10:14:30.118725 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/746d0a6a-4df6-40b6-9600-63ec14336507-horizon-tls-certs\") pod \"horizon-6cf95ddffb-6h2bm\" (UID: \"746d0a6a-4df6-40b6-9600-63ec14336507\") " pod="openstack/horizon-6cf95ddffb-6h2bm" Oct 14 10:14:30 crc kubenswrapper[4698]: I1014 10:14:30.118800 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/746d0a6a-4df6-40b6-9600-63ec14336507-horizon-secret-key\") pod \"horizon-6cf95ddffb-6h2bm\" (UID: \"746d0a6a-4df6-40b6-9600-63ec14336507\") " pod="openstack/horizon-6cf95ddffb-6h2bm" Oct 14 
10:14:30 crc kubenswrapper[4698]: I1014 10:14:30.118828 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/746d0a6a-4df6-40b6-9600-63ec14336507-scripts\") pod \"horizon-6cf95ddffb-6h2bm\" (UID: \"746d0a6a-4df6-40b6-9600-63ec14336507\") " pod="openstack/horizon-6cf95ddffb-6h2bm" Oct 14 10:14:30 crc kubenswrapper[4698]: I1014 10:14:30.118865 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/746d0a6a-4df6-40b6-9600-63ec14336507-logs\") pod \"horizon-6cf95ddffb-6h2bm\" (UID: \"746d0a6a-4df6-40b6-9600-63ec14336507\") " pod="openstack/horizon-6cf95ddffb-6h2bm" Oct 14 10:14:30 crc kubenswrapper[4698]: I1014 10:14:30.119457 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/746d0a6a-4df6-40b6-9600-63ec14336507-logs\") pod \"horizon-6cf95ddffb-6h2bm\" (UID: \"746d0a6a-4df6-40b6-9600-63ec14336507\") " pod="openstack/horizon-6cf95ddffb-6h2bm" Oct 14 10:14:30 crc kubenswrapper[4698]: I1014 10:14:30.120562 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/746d0a6a-4df6-40b6-9600-63ec14336507-config-data\") pod \"horizon-6cf95ddffb-6h2bm\" (UID: \"746d0a6a-4df6-40b6-9600-63ec14336507\") " pod="openstack/horizon-6cf95ddffb-6h2bm" Oct 14 10:14:30 crc kubenswrapper[4698]: I1014 10:14:30.127090 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/746d0a6a-4df6-40b6-9600-63ec14336507-scripts\") pod \"horizon-6cf95ddffb-6h2bm\" (UID: \"746d0a6a-4df6-40b6-9600-63ec14336507\") " pod="openstack/horizon-6cf95ddffb-6h2bm" Oct 14 10:14:30 crc kubenswrapper[4698]: I1014 10:14:30.128610 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: 
\"kubernetes.io/secret/746d0a6a-4df6-40b6-9600-63ec14336507-horizon-secret-key\") pod \"horizon-6cf95ddffb-6h2bm\" (UID: \"746d0a6a-4df6-40b6-9600-63ec14336507\") " pod="openstack/horizon-6cf95ddffb-6h2bm" Oct 14 10:14:30 crc kubenswrapper[4698]: I1014 10:14:30.128865 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/746d0a6a-4df6-40b6-9600-63ec14336507-horizon-tls-certs\") pod \"horizon-6cf95ddffb-6h2bm\" (UID: \"746d0a6a-4df6-40b6-9600-63ec14336507\") " pod="openstack/horizon-6cf95ddffb-6h2bm" Oct 14 10:14:30 crc kubenswrapper[4698]: I1014 10:14:30.129164 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/746d0a6a-4df6-40b6-9600-63ec14336507-combined-ca-bundle\") pod \"horizon-6cf95ddffb-6h2bm\" (UID: \"746d0a6a-4df6-40b6-9600-63ec14336507\") " pod="openstack/horizon-6cf95ddffb-6h2bm" Oct 14 10:14:30 crc kubenswrapper[4698]: I1014 10:14:30.138409 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-64mtn\" (UniqueName: \"kubernetes.io/projected/746d0a6a-4df6-40b6-9600-63ec14336507-kube-api-access-64mtn\") pod \"horizon-6cf95ddffb-6h2bm\" (UID: \"746d0a6a-4df6-40b6-9600-63ec14336507\") " pod="openstack/horizon-6cf95ddffb-6h2bm" Oct 14 10:14:30 crc kubenswrapper[4698]: I1014 10:14:30.174411 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-6cf95ddffb-6h2bm" Oct 14 10:14:30 crc kubenswrapper[4698]: I1014 10:14:30.189119 4698 generic.go:334] "Generic (PLEG): container finished" podID="414ba38b-6cfb-48ae-b818-6f8544558bf1" containerID="c795647333c10b386e2f260c64304f0cb354b12fd52757a250b2ba2949df1ebf" exitCode=0 Oct 14 10:14:30 crc kubenswrapper[4698]: I1014 10:14:30.189169 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-9n6ld" event={"ID":"414ba38b-6cfb-48ae-b818-6f8544558bf1","Type":"ContainerDied","Data":"c795647333c10b386e2f260c64304f0cb354b12fd52757a250b2ba2949df1ebf"} Oct 14 10:14:31 crc kubenswrapper[4698]: I1014 10:14:31.031593 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="535a7b3a-eda4-4540-b96a-e3bac0fb3a16" path="/var/lib/kubelet/pods/535a7b3a-eda4-4540-b96a-e3bac0fb3a16/volumes" Oct 14 10:14:31 crc kubenswrapper[4698]: I1014 10:14:31.584572 4698 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-698758b865-9n6ld" podUID="414ba38b-6cfb-48ae-b818-6f8544558bf1" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.114:5353: connect: connection refused" Oct 14 10:14:31 crc kubenswrapper[4698]: I1014 10:14:31.590670 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-sync-thrh8"] Oct 14 10:14:31 crc kubenswrapper[4698]: I1014 10:14:31.597053 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-thrh8" Oct 14 10:14:31 crc kubenswrapper[4698]: I1014 10:14:31.602963 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-thrh8"] Oct 14 10:14:31 crc kubenswrapper[4698]: I1014 10:14:31.606894 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Oct 14 10:14:31 crc kubenswrapper[4698]: I1014 10:14:31.607115 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Oct 14 10:14:31 crc kubenswrapper[4698]: I1014 10:14:31.607594 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-q7jcm" Oct 14 10:14:31 crc kubenswrapper[4698]: I1014 10:14:31.668297 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d90a3be7-6827-427d-9ed1-3aef79542b6d-config-data\") pod \"cinder-db-sync-thrh8\" (UID: \"d90a3be7-6827-427d-9ed1-3aef79542b6d\") " pod="openstack/cinder-db-sync-thrh8" Oct 14 10:14:31 crc kubenswrapper[4698]: I1014 10:14:31.668355 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/d90a3be7-6827-427d-9ed1-3aef79542b6d-etc-machine-id\") pod \"cinder-db-sync-thrh8\" (UID: \"d90a3be7-6827-427d-9ed1-3aef79542b6d\") " pod="openstack/cinder-db-sync-thrh8" Oct 14 10:14:31 crc kubenswrapper[4698]: I1014 10:14:31.668684 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d90a3be7-6827-427d-9ed1-3aef79542b6d-combined-ca-bundle\") pod \"cinder-db-sync-thrh8\" (UID: \"d90a3be7-6827-427d-9ed1-3aef79542b6d\") " pod="openstack/cinder-db-sync-thrh8" Oct 14 10:14:31 crc kubenswrapper[4698]: I1014 10:14:31.668728 4698 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d90a3be7-6827-427d-9ed1-3aef79542b6d-scripts\") pod \"cinder-db-sync-thrh8\" (UID: \"d90a3be7-6827-427d-9ed1-3aef79542b6d\") " pod="openstack/cinder-db-sync-thrh8" Oct 14 10:14:31 crc kubenswrapper[4698]: I1014 10:14:31.668866 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/d90a3be7-6827-427d-9ed1-3aef79542b6d-db-sync-config-data\") pod \"cinder-db-sync-thrh8\" (UID: \"d90a3be7-6827-427d-9ed1-3aef79542b6d\") " pod="openstack/cinder-db-sync-thrh8" Oct 14 10:14:31 crc kubenswrapper[4698]: I1014 10:14:31.668897 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-89d95\" (UniqueName: \"kubernetes.io/projected/d90a3be7-6827-427d-9ed1-3aef79542b6d-kube-api-access-89d95\") pod \"cinder-db-sync-thrh8\" (UID: \"d90a3be7-6827-427d-9ed1-3aef79542b6d\") " pod="openstack/cinder-db-sync-thrh8" Oct 14 10:14:31 crc kubenswrapper[4698]: I1014 10:14:31.747265 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-c711-account-create-sh8sx" Oct 14 10:14:31 crc kubenswrapper[4698]: I1014 10:14:31.756835 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-83ed-account-create-jdm2f" Oct 14 10:14:31 crc kubenswrapper[4698]: I1014 10:14:31.766186 4698 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-c9bp9" Oct 14 10:14:31 crc kubenswrapper[4698]: I1014 10:14:31.770122 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cqwbc\" (UniqueName: \"kubernetes.io/projected/b1dfd964-476d-40ea-942f-0f2ef2a6314f-kube-api-access-cqwbc\") pod \"b1dfd964-476d-40ea-942f-0f2ef2a6314f\" (UID: \"b1dfd964-476d-40ea-942f-0f2ef2a6314f\") " Oct 14 10:14:31 crc kubenswrapper[4698]: I1014 10:14:31.770250 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kvrnr\" (UniqueName: \"kubernetes.io/projected/0ce16763-8bd4-4dfc-a5cb-975622a0bb5e-kube-api-access-kvrnr\") pod \"0ce16763-8bd4-4dfc-a5cb-975622a0bb5e\" (UID: \"0ce16763-8bd4-4dfc-a5cb-975622a0bb5e\") " Oct 14 10:14:31 crc kubenswrapper[4698]: I1014 10:14:31.770691 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d90a3be7-6827-427d-9ed1-3aef79542b6d-combined-ca-bundle\") pod \"cinder-db-sync-thrh8\" (UID: \"d90a3be7-6827-427d-9ed1-3aef79542b6d\") " pod="openstack/cinder-db-sync-thrh8" Oct 14 10:14:31 crc kubenswrapper[4698]: I1014 10:14:31.770723 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d90a3be7-6827-427d-9ed1-3aef79542b6d-scripts\") pod \"cinder-db-sync-thrh8\" (UID: \"d90a3be7-6827-427d-9ed1-3aef79542b6d\") " pod="openstack/cinder-db-sync-thrh8" Oct 14 10:14:31 crc kubenswrapper[4698]: I1014 10:14:31.770774 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/d90a3be7-6827-427d-9ed1-3aef79542b6d-db-sync-config-data\") pod \"cinder-db-sync-thrh8\" (UID: \"d90a3be7-6827-427d-9ed1-3aef79542b6d\") " pod="openstack/cinder-db-sync-thrh8" Oct 14 10:14:31 crc kubenswrapper[4698]: I1014 10:14:31.770796 4698 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-89d95\" (UniqueName: \"kubernetes.io/projected/d90a3be7-6827-427d-9ed1-3aef79542b6d-kube-api-access-89d95\") pod \"cinder-db-sync-thrh8\" (UID: \"d90a3be7-6827-427d-9ed1-3aef79542b6d\") " pod="openstack/cinder-db-sync-thrh8" Oct 14 10:14:31 crc kubenswrapper[4698]: I1014 10:14:31.770851 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d90a3be7-6827-427d-9ed1-3aef79542b6d-config-data\") pod \"cinder-db-sync-thrh8\" (UID: \"d90a3be7-6827-427d-9ed1-3aef79542b6d\") " pod="openstack/cinder-db-sync-thrh8" Oct 14 10:14:31 crc kubenswrapper[4698]: I1014 10:14:31.770889 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/d90a3be7-6827-427d-9ed1-3aef79542b6d-etc-machine-id\") pod \"cinder-db-sync-thrh8\" (UID: \"d90a3be7-6827-427d-9ed1-3aef79542b6d\") " pod="openstack/cinder-db-sync-thrh8" Oct 14 10:14:31 crc kubenswrapper[4698]: I1014 10:14:31.770977 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/d90a3be7-6827-427d-9ed1-3aef79542b6d-etc-machine-id\") pod \"cinder-db-sync-thrh8\" (UID: \"d90a3be7-6827-427d-9ed1-3aef79542b6d\") " pod="openstack/cinder-db-sync-thrh8" Oct 14 10:14:31 crc kubenswrapper[4698]: I1014 10:14:31.780859 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b1dfd964-476d-40ea-942f-0f2ef2a6314f-kube-api-access-cqwbc" (OuterVolumeSpecName: "kube-api-access-cqwbc") pod "b1dfd964-476d-40ea-942f-0f2ef2a6314f" (UID: "b1dfd964-476d-40ea-942f-0f2ef2a6314f"). InnerVolumeSpecName "kube-api-access-cqwbc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 10:14:31 crc kubenswrapper[4698]: I1014 10:14:31.784154 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/manila-5da3-account-create-z7lfc" Oct 14 10:14:31 crc kubenswrapper[4698]: I1014 10:14:31.786005 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/d90a3be7-6827-427d-9ed1-3aef79542b6d-db-sync-config-data\") pod \"cinder-db-sync-thrh8\" (UID: \"d90a3be7-6827-427d-9ed1-3aef79542b6d\") " pod="openstack/cinder-db-sync-thrh8" Oct 14 10:14:31 crc kubenswrapper[4698]: I1014 10:14:31.786151 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d90a3be7-6827-427d-9ed1-3aef79542b6d-config-data\") pod \"cinder-db-sync-thrh8\" (UID: \"d90a3be7-6827-427d-9ed1-3aef79542b6d\") " pod="openstack/cinder-db-sync-thrh8" Oct 14 10:14:31 crc kubenswrapper[4698]: I1014 10:14:31.788800 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d90a3be7-6827-427d-9ed1-3aef79542b6d-scripts\") pod \"cinder-db-sync-thrh8\" (UID: \"d90a3be7-6827-427d-9ed1-3aef79542b6d\") " pod="openstack/cinder-db-sync-thrh8" Oct 14 10:14:31 crc kubenswrapper[4698]: I1014 10:14:31.789884 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d90a3be7-6827-427d-9ed1-3aef79542b6d-combined-ca-bundle\") pod \"cinder-db-sync-thrh8\" (UID: \"d90a3be7-6827-427d-9ed1-3aef79542b6d\") " pod="openstack/cinder-db-sync-thrh8" Oct 14 10:14:31 crc kubenswrapper[4698]: I1014 10:14:31.797381 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-89d95\" (UniqueName: \"kubernetes.io/projected/d90a3be7-6827-427d-9ed1-3aef79542b6d-kube-api-access-89d95\") pod \"cinder-db-sync-thrh8\" (UID: 
\"d90a3be7-6827-427d-9ed1-3aef79542b6d\") " pod="openstack/cinder-db-sync-thrh8" Oct 14 10:14:31 crc kubenswrapper[4698]: I1014 10:14:31.798851 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0ce16763-8bd4-4dfc-a5cb-975622a0bb5e-kube-api-access-kvrnr" (OuterVolumeSpecName: "kube-api-access-kvrnr") pod "0ce16763-8bd4-4dfc-a5cb-975622a0bb5e" (UID: "0ce16763-8bd4-4dfc-a5cb-975622a0bb5e"). InnerVolumeSpecName "kube-api-access-kvrnr". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 10:14:31 crc kubenswrapper[4698]: I1014 10:14:31.872986 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qjc7d\" (UniqueName: \"kubernetes.io/projected/674b9a2a-192d-4f43-b2c8-bfb55a2775fe-kube-api-access-qjc7d\") pod \"674b9a2a-192d-4f43-b2c8-bfb55a2775fe\" (UID: \"674b9a2a-192d-4f43-b2c8-bfb55a2775fe\") " Oct 14 10:14:31 crc kubenswrapper[4698]: I1014 10:14:31.873108 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sx9fb\" (UniqueName: \"kubernetes.io/projected/29fd1594-5c07-4283-bcb2-6ac29907c35c-kube-api-access-sx9fb\") pod \"29fd1594-5c07-4283-bcb2-6ac29907c35c\" (UID: \"29fd1594-5c07-4283-bcb2-6ac29907c35c\") " Oct 14 10:14:31 crc kubenswrapper[4698]: I1014 10:14:31.873168 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/29fd1594-5c07-4283-bcb2-6ac29907c35c-fernet-keys\") pod \"29fd1594-5c07-4283-bcb2-6ac29907c35c\" (UID: \"29fd1594-5c07-4283-bcb2-6ac29907c35c\") " Oct 14 10:14:31 crc kubenswrapper[4698]: I1014 10:14:31.873198 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/29fd1594-5c07-4283-bcb2-6ac29907c35c-config-data\") pod \"29fd1594-5c07-4283-bcb2-6ac29907c35c\" (UID: \"29fd1594-5c07-4283-bcb2-6ac29907c35c\") " Oct 14 10:14:31 crc 
kubenswrapper[4698]: I1014 10:14:31.873243 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/29fd1594-5c07-4283-bcb2-6ac29907c35c-credential-keys\") pod \"29fd1594-5c07-4283-bcb2-6ac29907c35c\" (UID: \"29fd1594-5c07-4283-bcb2-6ac29907c35c\") " Oct 14 10:14:31 crc kubenswrapper[4698]: I1014 10:14:31.873492 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/29fd1594-5c07-4283-bcb2-6ac29907c35c-scripts\") pod \"29fd1594-5c07-4283-bcb2-6ac29907c35c\" (UID: \"29fd1594-5c07-4283-bcb2-6ac29907c35c\") " Oct 14 10:14:31 crc kubenswrapper[4698]: I1014 10:14:31.873532 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/29fd1594-5c07-4283-bcb2-6ac29907c35c-combined-ca-bundle\") pod \"29fd1594-5c07-4283-bcb2-6ac29907c35c\" (UID: \"29fd1594-5c07-4283-bcb2-6ac29907c35c\") " Oct 14 10:14:31 crc kubenswrapper[4698]: I1014 10:14:31.874050 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cqwbc\" (UniqueName: \"kubernetes.io/projected/b1dfd964-476d-40ea-942f-0f2ef2a6314f-kube-api-access-cqwbc\") on node \"crc\" DevicePath \"\"" Oct 14 10:14:31 crc kubenswrapper[4698]: I1014 10:14:31.874068 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kvrnr\" (UniqueName: \"kubernetes.io/projected/0ce16763-8bd4-4dfc-a5cb-975622a0bb5e-kube-api-access-kvrnr\") on node \"crc\" DevicePath \"\"" Oct 14 10:14:31 crc kubenswrapper[4698]: I1014 10:14:31.876791 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/29fd1594-5c07-4283-bcb2-6ac29907c35c-scripts" (OuterVolumeSpecName: "scripts") pod "29fd1594-5c07-4283-bcb2-6ac29907c35c" (UID: "29fd1594-5c07-4283-bcb2-6ac29907c35c"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 10:14:31 crc kubenswrapper[4698]: I1014 10:14:31.878837 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/29fd1594-5c07-4283-bcb2-6ac29907c35c-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "29fd1594-5c07-4283-bcb2-6ac29907c35c" (UID: "29fd1594-5c07-4283-bcb2-6ac29907c35c"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 10:14:31 crc kubenswrapper[4698]: I1014 10:14:31.878902 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/29fd1594-5c07-4283-bcb2-6ac29907c35c-kube-api-access-sx9fb" (OuterVolumeSpecName: "kube-api-access-sx9fb") pod "29fd1594-5c07-4283-bcb2-6ac29907c35c" (UID: "29fd1594-5c07-4283-bcb2-6ac29907c35c"). InnerVolumeSpecName "kube-api-access-sx9fb". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 10:14:31 crc kubenswrapper[4698]: I1014 10:14:31.879362 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/674b9a2a-192d-4f43-b2c8-bfb55a2775fe-kube-api-access-qjc7d" (OuterVolumeSpecName: "kube-api-access-qjc7d") pod "674b9a2a-192d-4f43-b2c8-bfb55a2775fe" (UID: "674b9a2a-192d-4f43-b2c8-bfb55a2775fe"). InnerVolumeSpecName "kube-api-access-qjc7d". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 10:14:31 crc kubenswrapper[4698]: I1014 10:14:31.883309 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/29fd1594-5c07-4283-bcb2-6ac29907c35c-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "29fd1594-5c07-4283-bcb2-6ac29907c35c" (UID: "29fd1594-5c07-4283-bcb2-6ac29907c35c"). InnerVolumeSpecName "credential-keys". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 10:14:31 crc kubenswrapper[4698]: I1014 10:14:31.901630 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/29fd1594-5c07-4283-bcb2-6ac29907c35c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "29fd1594-5c07-4283-bcb2-6ac29907c35c" (UID: "29fd1594-5c07-4283-bcb2-6ac29907c35c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 10:14:31 crc kubenswrapper[4698]: I1014 10:14:31.918355 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/29fd1594-5c07-4283-bcb2-6ac29907c35c-config-data" (OuterVolumeSpecName: "config-data") pod "29fd1594-5c07-4283-bcb2-6ac29907c35c" (UID: "29fd1594-5c07-4283-bcb2-6ac29907c35c"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 10:14:31 crc kubenswrapper[4698]: I1014 10:14:31.921949 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-thrh8" Oct 14 10:14:31 crc kubenswrapper[4698]: I1014 10:14:31.982560 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qjc7d\" (UniqueName: \"kubernetes.io/projected/674b9a2a-192d-4f43-b2c8-bfb55a2775fe-kube-api-access-qjc7d\") on node \"crc\" DevicePath \"\"" Oct 14 10:14:31 crc kubenswrapper[4698]: I1014 10:14:31.982599 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sx9fb\" (UniqueName: \"kubernetes.io/projected/29fd1594-5c07-4283-bcb2-6ac29907c35c-kube-api-access-sx9fb\") on node \"crc\" DevicePath \"\"" Oct 14 10:14:31 crc kubenswrapper[4698]: I1014 10:14:31.982611 4698 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/29fd1594-5c07-4283-bcb2-6ac29907c35c-fernet-keys\") on node \"crc\" DevicePath \"\"" Oct 14 10:14:31 crc kubenswrapper[4698]: I1014 10:14:31.982621 4698 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/29fd1594-5c07-4283-bcb2-6ac29907c35c-config-data\") on node \"crc\" DevicePath \"\"" Oct 14 10:14:31 crc kubenswrapper[4698]: I1014 10:14:31.982631 4698 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/29fd1594-5c07-4283-bcb2-6ac29907c35c-credential-keys\") on node \"crc\" DevicePath \"\"" Oct 14 10:14:31 crc kubenswrapper[4698]: I1014 10:14:31.982640 4698 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/29fd1594-5c07-4283-bcb2-6ac29907c35c-scripts\") on node \"crc\" DevicePath \"\"" Oct 14 10:14:31 crc kubenswrapper[4698]: I1014 10:14:31.982649 4698 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/29fd1594-5c07-4283-bcb2-6ac29907c35c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 14 10:14:32 crc kubenswrapper[4698]: I1014 10:14:32.215910 
4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-c711-account-create-sh8sx" Oct 14 10:14:32 crc kubenswrapper[4698]: I1014 10:14:32.216123 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-c711-account-create-sh8sx" event={"ID":"0ce16763-8bd4-4dfc-a5cb-975622a0bb5e","Type":"ContainerDied","Data":"bf5c410c2a4cacb8cc5f5969cd3047259da0bcd10859fad094c4f90efdbb3f06"} Oct 14 10:14:32 crc kubenswrapper[4698]: I1014 10:14:32.217277 4698 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bf5c410c2a4cacb8cc5f5969cd3047259da0bcd10859fad094c4f90efdbb3f06" Oct 14 10:14:32 crc kubenswrapper[4698]: I1014 10:14:32.223518 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-5da3-account-create-z7lfc" event={"ID":"674b9a2a-192d-4f43-b2c8-bfb55a2775fe","Type":"ContainerDied","Data":"a050f5afc8acaa6a0031debbd8aa8618baebcb4ce6cbb4dfe469b10314784182"} Oct 14 10:14:32 crc kubenswrapper[4698]: I1014 10:14:32.223592 4698 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a050f5afc8acaa6a0031debbd8aa8618baebcb4ce6cbb4dfe469b10314784182" Oct 14 10:14:32 crc kubenswrapper[4698]: I1014 10:14:32.223610 4698 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-5da3-account-create-z7lfc" Oct 14 10:14:32 crc kubenswrapper[4698]: I1014 10:14:32.229526 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-83ed-account-create-jdm2f" event={"ID":"b1dfd964-476d-40ea-942f-0f2ef2a6314f","Type":"ContainerDied","Data":"57aae29392e5e662de5b67c4ed27f02dd9673d62a04b6262e576c3c5bc6625ba"} Oct 14 10:14:32 crc kubenswrapper[4698]: I1014 10:14:32.229567 4698 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="57aae29392e5e662de5b67c4ed27f02dd9673d62a04b6262e576c3c5bc6625ba" Oct 14 10:14:32 crc kubenswrapper[4698]: I1014 10:14:32.229632 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-83ed-account-create-jdm2f" Oct 14 10:14:32 crc kubenswrapper[4698]: I1014 10:14:32.236140 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-c9bp9" event={"ID":"29fd1594-5c07-4283-bcb2-6ac29907c35c","Type":"ContainerDied","Data":"1d3a5cc47518a6febb1b197c93d8d765f4ef40af9ba8b18e30b74dcc297eddfa"} Oct 14 10:14:32 crc kubenswrapper[4698]: I1014 10:14:32.236272 4698 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1d3a5cc47518a6febb1b197c93d8d765f4ef40af9ba8b18e30b74dcc297eddfa" Oct 14 10:14:32 crc kubenswrapper[4698]: I1014 10:14:32.236289 4698 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-c9bp9" Oct 14 10:14:32 crc kubenswrapper[4698]: I1014 10:14:32.878838 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-c9bp9"] Oct 14 10:14:32 crc kubenswrapper[4698]: I1014 10:14:32.885253 4698 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-c9bp9"] Oct 14 10:14:33 crc kubenswrapper[4698]: I1014 10:14:33.133821 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="29fd1594-5c07-4283-bcb2-6ac29907c35c" path="/var/lib/kubelet/pods/29fd1594-5c07-4283-bcb2-6ac29907c35c/volumes" Oct 14 10:14:33 crc kubenswrapper[4698]: I1014 10:14:33.137862 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-xjff6"] Oct 14 10:14:33 crc kubenswrapper[4698]: E1014 10:14:33.160653 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="29fd1594-5c07-4283-bcb2-6ac29907c35c" containerName="keystone-bootstrap" Oct 14 10:14:33 crc kubenswrapper[4698]: I1014 10:14:33.160700 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="29fd1594-5c07-4283-bcb2-6ac29907c35c" containerName="keystone-bootstrap" Oct 14 10:14:33 crc kubenswrapper[4698]: E1014 10:14:33.160721 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0ce16763-8bd4-4dfc-a5cb-975622a0bb5e" containerName="mariadb-account-create" Oct 14 10:14:33 crc kubenswrapper[4698]: I1014 10:14:33.160728 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="0ce16763-8bd4-4dfc-a5cb-975622a0bb5e" containerName="mariadb-account-create" Oct 14 10:14:33 crc kubenswrapper[4698]: E1014 10:14:33.160776 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="674b9a2a-192d-4f43-b2c8-bfb55a2775fe" containerName="mariadb-account-create" Oct 14 10:14:33 crc kubenswrapper[4698]: I1014 10:14:33.160784 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="674b9a2a-192d-4f43-b2c8-bfb55a2775fe" 
containerName="mariadb-account-create" Oct 14 10:14:33 crc kubenswrapper[4698]: E1014 10:14:33.160803 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b1dfd964-476d-40ea-942f-0f2ef2a6314f" containerName="mariadb-account-create" Oct 14 10:14:33 crc kubenswrapper[4698]: I1014 10:14:33.160809 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="b1dfd964-476d-40ea-942f-0f2ef2a6314f" containerName="mariadb-account-create" Oct 14 10:14:33 crc kubenswrapper[4698]: I1014 10:14:33.161054 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="0ce16763-8bd4-4dfc-a5cb-975622a0bb5e" containerName="mariadb-account-create" Oct 14 10:14:33 crc kubenswrapper[4698]: I1014 10:14:33.161077 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="b1dfd964-476d-40ea-942f-0f2ef2a6314f" containerName="mariadb-account-create" Oct 14 10:14:33 crc kubenswrapper[4698]: I1014 10:14:33.161085 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="674b9a2a-192d-4f43-b2c8-bfb55a2775fe" containerName="mariadb-account-create" Oct 14 10:14:33 crc kubenswrapper[4698]: I1014 10:14:33.161096 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="29fd1594-5c07-4283-bcb2-6ac29907c35c" containerName="keystone-bootstrap" Oct 14 10:14:33 crc kubenswrapper[4698]: I1014 10:14:33.162167 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-xjff6"] Oct 14 10:14:33 crc kubenswrapper[4698]: I1014 10:14:33.162257 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-xjff6" Oct 14 10:14:33 crc kubenswrapper[4698]: I1014 10:14:33.167348 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-mrbzf" Oct 14 10:14:33 crc kubenswrapper[4698]: I1014 10:14:33.167372 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Oct 14 10:14:33 crc kubenswrapper[4698]: I1014 10:14:33.167812 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Oct 14 10:14:33 crc kubenswrapper[4698]: I1014 10:14:33.185688 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Oct 14 10:14:33 crc kubenswrapper[4698]: I1014 10:14:33.271902 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/30712ba4-9217-4276-b576-798bfd319b45-scripts\") pod \"keystone-bootstrap-xjff6\" (UID: \"30712ba4-9217-4276-b576-798bfd319b45\") " pod="openstack/keystone-bootstrap-xjff6" Oct 14 10:14:33 crc kubenswrapper[4698]: I1014 10:14:33.271960 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/30712ba4-9217-4276-b576-798bfd319b45-credential-keys\") pod \"keystone-bootstrap-xjff6\" (UID: \"30712ba4-9217-4276-b576-798bfd319b45\") " pod="openstack/keystone-bootstrap-xjff6" Oct 14 10:14:33 crc kubenswrapper[4698]: I1014 10:14:33.272021 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/30712ba4-9217-4276-b576-798bfd319b45-combined-ca-bundle\") pod \"keystone-bootstrap-xjff6\" (UID: \"30712ba4-9217-4276-b576-798bfd319b45\") " pod="openstack/keystone-bootstrap-xjff6" Oct 14 10:14:33 crc kubenswrapper[4698]: I1014 10:14:33.272066 4698 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/30712ba4-9217-4276-b576-798bfd319b45-fernet-keys\") pod \"keystone-bootstrap-xjff6\" (UID: \"30712ba4-9217-4276-b576-798bfd319b45\") " pod="openstack/keystone-bootstrap-xjff6" Oct 14 10:14:33 crc kubenswrapper[4698]: I1014 10:14:33.272096 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/30712ba4-9217-4276-b576-798bfd319b45-config-data\") pod \"keystone-bootstrap-xjff6\" (UID: \"30712ba4-9217-4276-b576-798bfd319b45\") " pod="openstack/keystone-bootstrap-xjff6" Oct 14 10:14:33 crc kubenswrapper[4698]: I1014 10:14:33.272129 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-js74g\" (UniqueName: \"kubernetes.io/projected/30712ba4-9217-4276-b576-798bfd319b45-kube-api-access-js74g\") pod \"keystone-bootstrap-xjff6\" (UID: \"30712ba4-9217-4276-b576-798bfd319b45\") " pod="openstack/keystone-bootstrap-xjff6" Oct 14 10:14:33 crc kubenswrapper[4698]: I1014 10:14:33.373673 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/30712ba4-9217-4276-b576-798bfd319b45-scripts\") pod \"keystone-bootstrap-xjff6\" (UID: \"30712ba4-9217-4276-b576-798bfd319b45\") " pod="openstack/keystone-bootstrap-xjff6" Oct 14 10:14:33 crc kubenswrapper[4698]: I1014 10:14:33.373744 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/30712ba4-9217-4276-b576-798bfd319b45-credential-keys\") pod \"keystone-bootstrap-xjff6\" (UID: \"30712ba4-9217-4276-b576-798bfd319b45\") " pod="openstack/keystone-bootstrap-xjff6" Oct 14 10:14:33 crc kubenswrapper[4698]: I1014 10:14:33.373800 4698 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/30712ba4-9217-4276-b576-798bfd319b45-combined-ca-bundle\") pod \"keystone-bootstrap-xjff6\" (UID: \"30712ba4-9217-4276-b576-798bfd319b45\") " pod="openstack/keystone-bootstrap-xjff6" Oct 14 10:14:33 crc kubenswrapper[4698]: I1014 10:14:33.373836 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/30712ba4-9217-4276-b576-798bfd319b45-fernet-keys\") pod \"keystone-bootstrap-xjff6\" (UID: \"30712ba4-9217-4276-b576-798bfd319b45\") " pod="openstack/keystone-bootstrap-xjff6" Oct 14 10:14:33 crc kubenswrapper[4698]: I1014 10:14:33.373859 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/30712ba4-9217-4276-b576-798bfd319b45-config-data\") pod \"keystone-bootstrap-xjff6\" (UID: \"30712ba4-9217-4276-b576-798bfd319b45\") " pod="openstack/keystone-bootstrap-xjff6" Oct 14 10:14:33 crc kubenswrapper[4698]: I1014 10:14:33.373883 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-js74g\" (UniqueName: \"kubernetes.io/projected/30712ba4-9217-4276-b576-798bfd319b45-kube-api-access-js74g\") pod \"keystone-bootstrap-xjff6\" (UID: \"30712ba4-9217-4276-b576-798bfd319b45\") " pod="openstack/keystone-bootstrap-xjff6" Oct 14 10:14:33 crc kubenswrapper[4698]: I1014 10:14:33.381120 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/30712ba4-9217-4276-b576-798bfd319b45-credential-keys\") pod \"keystone-bootstrap-xjff6\" (UID: \"30712ba4-9217-4276-b576-798bfd319b45\") " pod="openstack/keystone-bootstrap-xjff6" Oct 14 10:14:33 crc kubenswrapper[4698]: I1014 10:14:33.382055 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: 
\"kubernetes.io/secret/30712ba4-9217-4276-b576-798bfd319b45-fernet-keys\") pod \"keystone-bootstrap-xjff6\" (UID: \"30712ba4-9217-4276-b576-798bfd319b45\") " pod="openstack/keystone-bootstrap-xjff6" Oct 14 10:14:33 crc kubenswrapper[4698]: I1014 10:14:33.382096 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/30712ba4-9217-4276-b576-798bfd319b45-scripts\") pod \"keystone-bootstrap-xjff6\" (UID: \"30712ba4-9217-4276-b576-798bfd319b45\") " pod="openstack/keystone-bootstrap-xjff6" Oct 14 10:14:33 crc kubenswrapper[4698]: I1014 10:14:33.382878 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/30712ba4-9217-4276-b576-798bfd319b45-config-data\") pod \"keystone-bootstrap-xjff6\" (UID: \"30712ba4-9217-4276-b576-798bfd319b45\") " pod="openstack/keystone-bootstrap-xjff6" Oct 14 10:14:33 crc kubenswrapper[4698]: I1014 10:14:33.383469 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/30712ba4-9217-4276-b576-798bfd319b45-combined-ca-bundle\") pod \"keystone-bootstrap-xjff6\" (UID: \"30712ba4-9217-4276-b576-798bfd319b45\") " pod="openstack/keystone-bootstrap-xjff6" Oct 14 10:14:33 crc kubenswrapper[4698]: I1014 10:14:33.391132 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-js74g\" (UniqueName: \"kubernetes.io/projected/30712ba4-9217-4276-b576-798bfd319b45-kube-api-access-js74g\") pod \"keystone-bootstrap-xjff6\" (UID: \"30712ba4-9217-4276-b576-798bfd319b45\") " pod="openstack/keystone-bootstrap-xjff6" Oct 14 10:14:33 crc kubenswrapper[4698]: I1014 10:14:33.508739 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-xjff6" Oct 14 10:14:36 crc kubenswrapper[4698]: I1014 10:14:36.450373 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/manila-db-sync-qfgt5"] Oct 14 10:14:36 crc kubenswrapper[4698]: I1014 10:14:36.455664 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-db-sync-qfgt5" Oct 14 10:14:36 crc kubenswrapper[4698]: I1014 10:14:36.463274 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-manila-dockercfg-6b2gn" Oct 14 10:14:36 crc kubenswrapper[4698]: I1014 10:14:36.466854 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-config-data" Oct 14 10:14:36 crc kubenswrapper[4698]: I1014 10:14:36.483927 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-db-sync-qfgt5"] Oct 14 10:14:36 crc kubenswrapper[4698]: I1014 10:14:36.584701 4698 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-698758b865-9n6ld" podUID="414ba38b-6cfb-48ae-b818-6f8544558bf1" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.114:5353: connect: connection refused" Oct 14 10:14:36 crc kubenswrapper[4698]: I1014 10:14:36.656031 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b034a777-04ce-4fe1-baf0-7dd68c64b31f-combined-ca-bundle\") pod \"manila-db-sync-qfgt5\" (UID: \"b034a777-04ce-4fe1-baf0-7dd68c64b31f\") " pod="openstack/manila-db-sync-qfgt5" Oct 14 10:14:36 crc kubenswrapper[4698]: I1014 10:14:36.656242 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b034a777-04ce-4fe1-baf0-7dd68c64b31f-config-data\") pod \"manila-db-sync-qfgt5\" (UID: \"b034a777-04ce-4fe1-baf0-7dd68c64b31f\") " pod="openstack/manila-db-sync-qfgt5" Oct 
14 10:14:36 crc kubenswrapper[4698]: I1014 10:14:36.656270 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bqbtr\" (UniqueName: \"kubernetes.io/projected/b034a777-04ce-4fe1-baf0-7dd68c64b31f-kube-api-access-bqbtr\") pod \"manila-db-sync-qfgt5\" (UID: \"b034a777-04ce-4fe1-baf0-7dd68c64b31f\") " pod="openstack/manila-db-sync-qfgt5" Oct 14 10:14:36 crc kubenswrapper[4698]: I1014 10:14:36.656303 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"job-config-data\" (UniqueName: \"kubernetes.io/secret/b034a777-04ce-4fe1-baf0-7dd68c64b31f-job-config-data\") pod \"manila-db-sync-qfgt5\" (UID: \"b034a777-04ce-4fe1-baf0-7dd68c64b31f\") " pod="openstack/manila-db-sync-qfgt5" Oct 14 10:14:36 crc kubenswrapper[4698]: I1014 10:14:36.758521 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b034a777-04ce-4fe1-baf0-7dd68c64b31f-combined-ca-bundle\") pod \"manila-db-sync-qfgt5\" (UID: \"b034a777-04ce-4fe1-baf0-7dd68c64b31f\") " pod="openstack/manila-db-sync-qfgt5" Oct 14 10:14:36 crc kubenswrapper[4698]: I1014 10:14:36.758651 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b034a777-04ce-4fe1-baf0-7dd68c64b31f-config-data\") pod \"manila-db-sync-qfgt5\" (UID: \"b034a777-04ce-4fe1-baf0-7dd68c64b31f\") " pod="openstack/manila-db-sync-qfgt5" Oct 14 10:14:36 crc kubenswrapper[4698]: I1014 10:14:36.758677 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bqbtr\" (UniqueName: \"kubernetes.io/projected/b034a777-04ce-4fe1-baf0-7dd68c64b31f-kube-api-access-bqbtr\") pod \"manila-db-sync-qfgt5\" (UID: \"b034a777-04ce-4fe1-baf0-7dd68c64b31f\") " pod="openstack/manila-db-sync-qfgt5" Oct 14 10:14:36 crc kubenswrapper[4698]: I1014 10:14:36.758706 4698 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"job-config-data\" (UniqueName: \"kubernetes.io/secret/b034a777-04ce-4fe1-baf0-7dd68c64b31f-job-config-data\") pod \"manila-db-sync-qfgt5\" (UID: \"b034a777-04ce-4fe1-baf0-7dd68c64b31f\") " pod="openstack/manila-db-sync-qfgt5" Oct 14 10:14:36 crc kubenswrapper[4698]: I1014 10:14:36.766899 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b034a777-04ce-4fe1-baf0-7dd68c64b31f-combined-ca-bundle\") pod \"manila-db-sync-qfgt5\" (UID: \"b034a777-04ce-4fe1-baf0-7dd68c64b31f\") " pod="openstack/manila-db-sync-qfgt5" Oct 14 10:14:36 crc kubenswrapper[4698]: I1014 10:14:36.767270 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"job-config-data\" (UniqueName: \"kubernetes.io/secret/b034a777-04ce-4fe1-baf0-7dd68c64b31f-job-config-data\") pod \"manila-db-sync-qfgt5\" (UID: \"b034a777-04ce-4fe1-baf0-7dd68c64b31f\") " pod="openstack/manila-db-sync-qfgt5" Oct 14 10:14:36 crc kubenswrapper[4698]: I1014 10:14:36.768425 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b034a777-04ce-4fe1-baf0-7dd68c64b31f-config-data\") pod \"manila-db-sync-qfgt5\" (UID: \"b034a777-04ce-4fe1-baf0-7dd68c64b31f\") " pod="openstack/manila-db-sync-qfgt5" Oct 14 10:14:36 crc kubenswrapper[4698]: I1014 10:14:36.784015 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bqbtr\" (UniqueName: \"kubernetes.io/projected/b034a777-04ce-4fe1-baf0-7dd68c64b31f-kube-api-access-bqbtr\") pod \"manila-db-sync-qfgt5\" (UID: \"b034a777-04ce-4fe1-baf0-7dd68c64b31f\") " pod="openstack/manila-db-sync-qfgt5" Oct 14 10:14:36 crc kubenswrapper[4698]: I1014 10:14:36.808283 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-db-sync-qfgt5" Oct 14 10:14:36 crc kubenswrapper[4698]: I1014 10:14:36.858032 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-sync-nbmlr"] Oct 14 10:14:36 crc kubenswrapper[4698]: I1014 10:14:36.859571 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-nbmlr" Oct 14 10:14:36 crc kubenswrapper[4698]: I1014 10:14:36.860456 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/94ea6a4b-4e73-4a81-bb25-8ae62bf7daa9-combined-ca-bundle\") pod \"barbican-db-sync-nbmlr\" (UID: \"94ea6a4b-4e73-4a81-bb25-8ae62bf7daa9\") " pod="openstack/barbican-db-sync-nbmlr" Oct 14 10:14:36 crc kubenswrapper[4698]: I1014 10:14:36.860614 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wq6kt\" (UniqueName: \"kubernetes.io/projected/94ea6a4b-4e73-4a81-bb25-8ae62bf7daa9-kube-api-access-wq6kt\") pod \"barbican-db-sync-nbmlr\" (UID: \"94ea6a4b-4e73-4a81-bb25-8ae62bf7daa9\") " pod="openstack/barbican-db-sync-nbmlr" Oct 14 10:14:36 crc kubenswrapper[4698]: I1014 10:14:36.860659 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/94ea6a4b-4e73-4a81-bb25-8ae62bf7daa9-db-sync-config-data\") pod \"barbican-db-sync-nbmlr\" (UID: \"94ea6a4b-4e73-4a81-bb25-8ae62bf7daa9\") " pod="openstack/barbican-db-sync-nbmlr" Oct 14 10:14:36 crc kubenswrapper[4698]: I1014 10:14:36.862699 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-dt4rj" Oct 14 10:14:36 crc kubenswrapper[4698]: I1014 10:14:36.863424 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Oct 14 10:14:36 crc kubenswrapper[4698]: I1014 
10:14:36.889032 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-nbmlr"] Oct 14 10:14:36 crc kubenswrapper[4698]: I1014 10:14:36.972536 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/94ea6a4b-4e73-4a81-bb25-8ae62bf7daa9-combined-ca-bundle\") pod \"barbican-db-sync-nbmlr\" (UID: \"94ea6a4b-4e73-4a81-bb25-8ae62bf7daa9\") " pod="openstack/barbican-db-sync-nbmlr" Oct 14 10:14:36 crc kubenswrapper[4698]: I1014 10:14:36.972649 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wq6kt\" (UniqueName: \"kubernetes.io/projected/94ea6a4b-4e73-4a81-bb25-8ae62bf7daa9-kube-api-access-wq6kt\") pod \"barbican-db-sync-nbmlr\" (UID: \"94ea6a4b-4e73-4a81-bb25-8ae62bf7daa9\") " pod="openstack/barbican-db-sync-nbmlr" Oct 14 10:14:36 crc kubenswrapper[4698]: I1014 10:14:36.972682 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/94ea6a4b-4e73-4a81-bb25-8ae62bf7daa9-db-sync-config-data\") pod \"barbican-db-sync-nbmlr\" (UID: \"94ea6a4b-4e73-4a81-bb25-8ae62bf7daa9\") " pod="openstack/barbican-db-sync-nbmlr" Oct 14 10:14:36 crc kubenswrapper[4698]: I1014 10:14:36.981138 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-sync-9c4js"] Oct 14 10:14:36 crc kubenswrapper[4698]: I1014 10:14:36.982792 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-9c4js" Oct 14 10:14:36 crc kubenswrapper[4698]: I1014 10:14:36.986020 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Oct 14 10:14:36 crc kubenswrapper[4698]: I1014 10:14:36.986252 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/94ea6a4b-4e73-4a81-bb25-8ae62bf7daa9-db-sync-config-data\") pod \"barbican-db-sync-nbmlr\" (UID: \"94ea6a4b-4e73-4a81-bb25-8ae62bf7daa9\") " pod="openstack/barbican-db-sync-nbmlr" Oct 14 10:14:36 crc kubenswrapper[4698]: I1014 10:14:36.986538 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/94ea6a4b-4e73-4a81-bb25-8ae62bf7daa9-combined-ca-bundle\") pod \"barbican-db-sync-nbmlr\" (UID: \"94ea6a4b-4e73-4a81-bb25-8ae62bf7daa9\") " pod="openstack/barbican-db-sync-nbmlr" Oct 14 10:14:36 crc kubenswrapper[4698]: I1014 10:14:36.987204 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-2k2km" Oct 14 10:14:36 crc kubenswrapper[4698]: I1014 10:14:36.987472 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Oct 14 10:14:36 crc kubenswrapper[4698]: I1014 10:14:36.994482 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wq6kt\" (UniqueName: \"kubernetes.io/projected/94ea6a4b-4e73-4a81-bb25-8ae62bf7daa9-kube-api-access-wq6kt\") pod \"barbican-db-sync-nbmlr\" (UID: \"94ea6a4b-4e73-4a81-bb25-8ae62bf7daa9\") " pod="openstack/barbican-db-sync-nbmlr" Oct 14 10:14:37 crc kubenswrapper[4698]: I1014 10:14:37.005033 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-9c4js"] Oct 14 10:14:37 crc kubenswrapper[4698]: I1014 10:14:37.178668 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"config\" (UniqueName: \"kubernetes.io/secret/2abd1f71-b2d4-4c95-898c-bcfe99b2acf5-config\") pod \"neutron-db-sync-9c4js\" (UID: \"2abd1f71-b2d4-4c95-898c-bcfe99b2acf5\") " pod="openstack/neutron-db-sync-9c4js" Oct 14 10:14:37 crc kubenswrapper[4698]: I1014 10:14:37.179501 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2abd1f71-b2d4-4c95-898c-bcfe99b2acf5-combined-ca-bundle\") pod \"neutron-db-sync-9c4js\" (UID: \"2abd1f71-b2d4-4c95-898c-bcfe99b2acf5\") " pod="openstack/neutron-db-sync-9c4js" Oct 14 10:14:37 crc kubenswrapper[4698]: I1014 10:14:37.179852 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dxb2b\" (UniqueName: \"kubernetes.io/projected/2abd1f71-b2d4-4c95-898c-bcfe99b2acf5-kube-api-access-dxb2b\") pod \"neutron-db-sync-9c4js\" (UID: \"2abd1f71-b2d4-4c95-898c-bcfe99b2acf5\") " pod="openstack/neutron-db-sync-9c4js" Oct 14 10:14:37 crc kubenswrapper[4698]: I1014 10:14:37.189951 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-nbmlr" Oct 14 10:14:37 crc kubenswrapper[4698]: E1014 10:14:37.269320 4698 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-horizon:current-podified" Oct 14 10:14:37 crc kubenswrapper[4698]: E1014 10:14:37.269520 4698 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:horizon-log,Image:quay.io/podified-antelope-centos9/openstack-horizon:current-podified,Command:[/bin/bash],Args:[-c tail -n+1 -F /var/log/horizon/horizon.log],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n646h598hb5h95hfdh659h65hcfh68dhf4h8fh687h686h5d6hbbhb5hcfh5b6h55bhc4h657h598h66dh584h695h64h586h646h5b4h6fh55hb6q,ValueFrom:nil,},EnvVar{Name:ENABLE_DESIGNATE,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_HEAT,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_IRONIC,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_MANILA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_OCTAVIA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_WATCHER,Value:no,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},EnvVar{Name:UNPACK_THEME,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/horizon,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wwxf4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*48,RunAsNonRoot:*true,ReadO
nlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*42400,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod horizon-6f9d769b87-j9282_openstack(f4f888e1-886b-4ab1-8c5d-e0894bf1e065): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Oct 14 10:14:37 crc kubenswrapper[4698]: E1014 10:14:37.281140 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"horizon-log\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"horizon\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-horizon:current-podified\\\"\"]" pod="openstack/horizon-6f9d769b87-j9282" podUID="f4f888e1-886b-4ab1-8c5d-e0894bf1e065" Oct 14 10:14:37 crc kubenswrapper[4698]: I1014 10:14:37.282692 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2abd1f71-b2d4-4c95-898c-bcfe99b2acf5-combined-ca-bundle\") pod \"neutron-db-sync-9c4js\" (UID: \"2abd1f71-b2d4-4c95-898c-bcfe99b2acf5\") " pod="openstack/neutron-db-sync-9c4js" Oct 14 10:14:37 crc kubenswrapper[4698]: I1014 10:14:37.282846 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dxb2b\" (UniqueName: \"kubernetes.io/projected/2abd1f71-b2d4-4c95-898c-bcfe99b2acf5-kube-api-access-dxb2b\") pod \"neutron-db-sync-9c4js\" (UID: \"2abd1f71-b2d4-4c95-898c-bcfe99b2acf5\") " pod="openstack/neutron-db-sync-9c4js" Oct 14 10:14:37 crc kubenswrapper[4698]: I1014 10:14:37.283090 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" 
(UniqueName: \"kubernetes.io/secret/2abd1f71-b2d4-4c95-898c-bcfe99b2acf5-config\") pod \"neutron-db-sync-9c4js\" (UID: \"2abd1f71-b2d4-4c95-898c-bcfe99b2acf5\") " pod="openstack/neutron-db-sync-9c4js" Oct 14 10:14:37 crc kubenswrapper[4698]: I1014 10:14:37.289995 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2abd1f71-b2d4-4c95-898c-bcfe99b2acf5-combined-ca-bundle\") pod \"neutron-db-sync-9c4js\" (UID: \"2abd1f71-b2d4-4c95-898c-bcfe99b2acf5\") " pod="openstack/neutron-db-sync-9c4js" Oct 14 10:14:37 crc kubenswrapper[4698]: I1014 10:14:37.290419 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/2abd1f71-b2d4-4c95-898c-bcfe99b2acf5-config\") pod \"neutron-db-sync-9c4js\" (UID: \"2abd1f71-b2d4-4c95-898c-bcfe99b2acf5\") " pod="openstack/neutron-db-sync-9c4js" Oct 14 10:14:37 crc kubenswrapper[4698]: I1014 10:14:37.307242 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dxb2b\" (UniqueName: \"kubernetes.io/projected/2abd1f71-b2d4-4c95-898c-bcfe99b2acf5-kube-api-access-dxb2b\") pod \"neutron-db-sync-9c4js\" (UID: \"2abd1f71-b2d4-4c95-898c-bcfe99b2acf5\") " pod="openstack/neutron-db-sync-9c4js" Oct 14 10:14:37 crc kubenswrapper[4698]: E1014 10:14:37.332493 4698 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-horizon:current-podified" Oct 14 10:14:37 crc kubenswrapper[4698]: E1014 10:14:37.332745 4698 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:horizon-log,Image:quay.io/podified-antelope-centos9/openstack-horizon:current-podified,Command:[/bin/bash],Args:[-c tail -n+1 -F 
/var/log/horizon/horizon.log],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nddhbbh5f6h5f7h59bh5dh688h74hf6h88h648hfdh59ch5d9h67dh68fh9bhbdhfbh5bbh5bdh5dbh585h658h67ch5dfhbdh64bh5b5h5b8h5f5h685q,ValueFrom:nil,},EnvVar{Name:ENABLE_DESIGNATE,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_HEAT,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_IRONIC,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_MANILA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_OCTAVIA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_WATCHER,Value:no,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},EnvVar{Name:UNPACK_THEME,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/horizon,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-tgzl9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*48,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*42400,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod horizon-67d675854f-5dgkt_openstack(367b799b-362f-491f-8bb4-58d617a09769): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Oct 14 10:14:37 crc kubenswrapper[4698]: I1014 
10:14:37.341431 4698 scope.go:117] "RemoveContainer" containerID="9191bed1aa7b8af4aa629794ea040c7cb50ef49572e2a2f671a22fc6da780602" Oct 14 10:14:37 crc kubenswrapper[4698]: E1014 10:14:37.341432 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"horizon-log\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"horizon\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-horizon:current-podified\\\"\"]" pod="openstack/horizon-67d675854f-5dgkt" podUID="367b799b-362f-491f-8bb4-58d617a09769" Oct 14 10:14:37 crc kubenswrapper[4698]: E1014 10:14:37.342213 4698 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9191bed1aa7b8af4aa629794ea040c7cb50ef49572e2a2f671a22fc6da780602\": container with ID starting with 9191bed1aa7b8af4aa629794ea040c7cb50ef49572e2a2f671a22fc6da780602 not found: ID does not exist" containerID="9191bed1aa7b8af4aa629794ea040c7cb50ef49572e2a2f671a22fc6da780602" Oct 14 10:14:37 crc kubenswrapper[4698]: I1014 10:14:37.342241 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9191bed1aa7b8af4aa629794ea040c7cb50ef49572e2a2f671a22fc6da780602"} err="failed to get container status \"9191bed1aa7b8af4aa629794ea040c7cb50ef49572e2a2f671a22fc6da780602\": rpc error: code = NotFound desc = could not find container \"9191bed1aa7b8af4aa629794ea040c7cb50ef49572e2a2f671a22fc6da780602\": container with ID starting with 9191bed1aa7b8af4aa629794ea040c7cb50ef49572e2a2f671a22fc6da780602 not found: ID does not exist" Oct 14 10:14:37 crc kubenswrapper[4698]: I1014 10:14:37.342260 4698 scope.go:117] "RemoveContainer" containerID="ae0390bf5e25ddaab490c245cb31d134dcbcfa295390eb1373a9591ff99c8bdd" Oct 14 10:14:37 crc kubenswrapper[4698]: E1014 10:14:37.342867 4698 log.go:32] "ContainerStatus from 
runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ae0390bf5e25ddaab490c245cb31d134dcbcfa295390eb1373a9591ff99c8bdd\": container with ID starting with ae0390bf5e25ddaab490c245cb31d134dcbcfa295390eb1373a9591ff99c8bdd not found: ID does not exist" containerID="ae0390bf5e25ddaab490c245cb31d134dcbcfa295390eb1373a9591ff99c8bdd" Oct 14 10:14:37 crc kubenswrapper[4698]: I1014 10:14:37.342912 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ae0390bf5e25ddaab490c245cb31d134dcbcfa295390eb1373a9591ff99c8bdd"} err="failed to get container status \"ae0390bf5e25ddaab490c245cb31d134dcbcfa295390eb1373a9591ff99c8bdd\": rpc error: code = NotFound desc = could not find container \"ae0390bf5e25ddaab490c245cb31d134dcbcfa295390eb1373a9591ff99c8bdd\": container with ID starting with ae0390bf5e25ddaab490c245cb31d134dcbcfa295390eb1373a9591ff99c8bdd not found: ID does not exist" Oct 14 10:14:37 crc kubenswrapper[4698]: I1014 10:14:37.342948 4698 scope.go:117] "RemoveContainer" containerID="9191bed1aa7b8af4aa629794ea040c7cb50ef49572e2a2f671a22fc6da780602" Oct 14 10:14:37 crc kubenswrapper[4698]: I1014 10:14:37.343383 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9191bed1aa7b8af4aa629794ea040c7cb50ef49572e2a2f671a22fc6da780602"} err="failed to get container status \"9191bed1aa7b8af4aa629794ea040c7cb50ef49572e2a2f671a22fc6da780602\": rpc error: code = NotFound desc = could not find container \"9191bed1aa7b8af4aa629794ea040c7cb50ef49572e2a2f671a22fc6da780602\": container with ID starting with 9191bed1aa7b8af4aa629794ea040c7cb50ef49572e2a2f671a22fc6da780602 not found: ID does not exist" Oct 14 10:14:37 crc kubenswrapper[4698]: I1014 10:14:37.343404 4698 scope.go:117] "RemoveContainer" containerID="ae0390bf5e25ddaab490c245cb31d134dcbcfa295390eb1373a9591ff99c8bdd" Oct 14 10:14:37 crc kubenswrapper[4698]: I1014 10:14:37.343674 4698 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ae0390bf5e25ddaab490c245cb31d134dcbcfa295390eb1373a9591ff99c8bdd"} err="failed to get container status \"ae0390bf5e25ddaab490c245cb31d134dcbcfa295390eb1373a9591ff99c8bdd\": rpc error: code = NotFound desc = could not find container \"ae0390bf5e25ddaab490c245cb31d134dcbcfa295390eb1373a9591ff99c8bdd\": container with ID starting with ae0390bf5e25ddaab490c245cb31d134dcbcfa295390eb1373a9591ff99c8bdd not found: ID does not exist" Oct 14 10:14:37 crc kubenswrapper[4698]: I1014 10:14:37.343696 4698 scope.go:117] "RemoveContainer" containerID="27d3071cf41f37dfef03099440b31ebbd8a582899e065896b23df38bc756e8d1" Oct 14 10:14:37 crc kubenswrapper[4698]: I1014 10:14:37.391482 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-9c4js" Oct 14 10:14:37 crc kubenswrapper[4698]: I1014 10:14:37.530085 4698 scope.go:117] "RemoveContainer" containerID="9355c8664fa4675a54ec1986e00d34ed899603233a4c6f4c4bd3ee8f8a7ac681" Oct 14 10:14:37 crc kubenswrapper[4698]: I1014 10:14:37.648118 4698 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-9n6ld" Oct 14 10:14:37 crc kubenswrapper[4698]: I1014 10:14:37.792620 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kn546\" (UniqueName: \"kubernetes.io/projected/414ba38b-6cfb-48ae-b818-6f8544558bf1-kube-api-access-kn546\") pod \"414ba38b-6cfb-48ae-b818-6f8544558bf1\" (UID: \"414ba38b-6cfb-48ae-b818-6f8544558bf1\") " Oct 14 10:14:37 crc kubenswrapper[4698]: I1014 10:14:37.792711 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/414ba38b-6cfb-48ae-b818-6f8544558bf1-ovsdbserver-sb\") pod \"414ba38b-6cfb-48ae-b818-6f8544558bf1\" (UID: \"414ba38b-6cfb-48ae-b818-6f8544558bf1\") " Oct 14 10:14:37 crc kubenswrapper[4698]: I1014 10:14:37.792745 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/414ba38b-6cfb-48ae-b818-6f8544558bf1-ovsdbserver-nb\") pod \"414ba38b-6cfb-48ae-b818-6f8544558bf1\" (UID: \"414ba38b-6cfb-48ae-b818-6f8544558bf1\") " Oct 14 10:14:37 crc kubenswrapper[4698]: I1014 10:14:37.792786 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/414ba38b-6cfb-48ae-b818-6f8544558bf1-dns-svc\") pod \"414ba38b-6cfb-48ae-b818-6f8544558bf1\" (UID: \"414ba38b-6cfb-48ae-b818-6f8544558bf1\") " Oct 14 10:14:37 crc kubenswrapper[4698]: I1014 10:14:37.792870 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/414ba38b-6cfb-48ae-b818-6f8544558bf1-config\") pod \"414ba38b-6cfb-48ae-b818-6f8544558bf1\" (UID: \"414ba38b-6cfb-48ae-b818-6f8544558bf1\") " Oct 14 10:14:37 crc kubenswrapper[4698]: I1014 10:14:37.815041 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/414ba38b-6cfb-48ae-b818-6f8544558bf1-kube-api-access-kn546" (OuterVolumeSpecName: "kube-api-access-kn546") pod "414ba38b-6cfb-48ae-b818-6f8544558bf1" (UID: "414ba38b-6cfb-48ae-b818-6f8544558bf1"). InnerVolumeSpecName "kube-api-access-kn546". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 10:14:37 crc kubenswrapper[4698]: I1014 10:14:37.856788 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/414ba38b-6cfb-48ae-b818-6f8544558bf1-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "414ba38b-6cfb-48ae-b818-6f8544558bf1" (UID: "414ba38b-6cfb-48ae-b818-6f8544558bf1"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 10:14:37 crc kubenswrapper[4698]: I1014 10:14:37.884330 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/414ba38b-6cfb-48ae-b818-6f8544558bf1-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "414ba38b-6cfb-48ae-b818-6f8544558bf1" (UID: "414ba38b-6cfb-48ae-b818-6f8544558bf1"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 10:14:37 crc kubenswrapper[4698]: I1014 10:14:37.898858 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kn546\" (UniqueName: \"kubernetes.io/projected/414ba38b-6cfb-48ae-b818-6f8544558bf1-kube-api-access-kn546\") on node \"crc\" DevicePath \"\"" Oct 14 10:14:37 crc kubenswrapper[4698]: I1014 10:14:37.898883 4698 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/414ba38b-6cfb-48ae-b818-6f8544558bf1-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Oct 14 10:14:37 crc kubenswrapper[4698]: I1014 10:14:37.898896 4698 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/414ba38b-6cfb-48ae-b818-6f8544558bf1-dns-svc\") on node \"crc\" DevicePath \"\"" Oct 14 10:14:37 crc kubenswrapper[4698]: I1014 10:14:37.900182 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/414ba38b-6cfb-48ae-b818-6f8544558bf1-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "414ba38b-6cfb-48ae-b818-6f8544558bf1" (UID: "414ba38b-6cfb-48ae-b818-6f8544558bf1"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 10:14:37 crc kubenswrapper[4698]: I1014 10:14:37.920933 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/414ba38b-6cfb-48ae-b818-6f8544558bf1-config" (OuterVolumeSpecName: "config") pod "414ba38b-6cfb-48ae-b818-6f8544558bf1" (UID: "414ba38b-6cfb-48ae-b818-6f8544558bf1"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 10:14:38 crc kubenswrapper[4698]: I1014 10:14:38.000465 4698 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/414ba38b-6cfb-48ae-b818-6f8544558bf1-config\") on node \"crc\" DevicePath \"\"" Oct 14 10:14:38 crc kubenswrapper[4698]: I1014 10:14:38.000574 4698 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/414ba38b-6cfb-48ae-b818-6f8544558bf1-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Oct 14 10:14:38 crc kubenswrapper[4698]: I1014 10:14:38.151948 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Oct 14 10:14:38 crc kubenswrapper[4698]: I1014 10:14:38.292014 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-7b567dfd5d-nvwrp"] Oct 14 10:14:38 crc kubenswrapper[4698]: W1014 10:14:38.296971 4698 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podee140165_8d8d_426c_b33f_5803bb0a7ad1.slice/crio-901781c92196a0d233259efb4996688607e9e48dc8deda94312e848ee2b7b961 WatchSource:0}: Error finding container 901781c92196a0d233259efb4996688607e9e48dc8deda94312e848ee2b7b961: Status 404 returned error can't find the container with id 901781c92196a0d233259efb4996688607e9e48dc8deda94312e848ee2b7b961 Oct 14 10:14:38 crc kubenswrapper[4698]: I1014 10:14:38.305397 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-xjff6"] Oct 14 10:14:38 crc kubenswrapper[4698]: I1014 10:14:38.313317 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9d9ba4f9-bf43-4076-ab3d-9ca34bcbed4e","Type":"ContainerStarted","Data":"5d738c6fa016f4c382ae548edd0c96d7f3832727c5cba761ed5948bcbcebfdc2"} Oct 14 10:14:38 crc kubenswrapper[4698]: I1014 10:14:38.339575 4698 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack/horizon-58986b5dd5-xvhvn" event={"ID":"2d11f65a-1351-4490-842c-259c6611ed6f","Type":"ContainerStarted","Data":"72d5c35add8dacbd016dbaa9022b4a42eb5852c123f9e5005edf0df08b10bc79"} Oct 14 10:14:38 crc kubenswrapper[4698]: I1014 10:14:38.350093 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-9n6ld" event={"ID":"414ba38b-6cfb-48ae-b818-6f8544558bf1","Type":"ContainerDied","Data":"c67d968e5fc4171575050fd317cbe40aa82715a02641d30ef7cefa664e8368b5"} Oct 14 10:14:38 crc kubenswrapper[4698]: I1014 10:14:38.350148 4698 scope.go:117] "RemoveContainer" containerID="c795647333c10b386e2f260c64304f0cb354b12fd52757a250b2ba2949df1ebf" Oct 14 10:14:38 crc kubenswrapper[4698]: I1014 10:14:38.350258 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-9n6ld" Oct 14 10:14:38 crc kubenswrapper[4698]: I1014 10:14:38.361192 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-2qdqz" event={"ID":"19d4fb38-f09b-4383-adfc-12bb06107bfb","Type":"ContainerStarted","Data":"42bf3c73ba3327efa826fffd6a5a8544f7e45d4ea8dd0cddfc97faf1402c257d"} Oct 14 10:14:38 crc kubenswrapper[4698]: W1014 10:14:38.367110 4698 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod30712ba4_9217_4276_b576_798bfd319b45.slice/crio-863e0f34d4b0f296bc08a72ead4f9b60354e43f61853b45ebca5fc06e959aee9 WatchSource:0}: Error finding container 863e0f34d4b0f296bc08a72ead4f9b60354e43f61853b45ebca5fc06e959aee9: Status 404 returned error can't find the container with id 863e0f34d4b0f296bc08a72ead4f9b60354e43f61853b45ebca5fc06e959aee9 Oct 14 10:14:38 crc kubenswrapper[4698]: I1014 10:14:38.367997 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" 
event={"ID":"df1e3a4b-9dec-4b59-80a8-071b9dd62651","Type":"ContainerStarted","Data":"6fd7fbdc25686461684050be990ac23c6ba734aef7ef4a8e8e8c71372471edcf"} Oct 14 10:14:38 crc kubenswrapper[4698]: I1014 10:14:38.416103 4698 scope.go:117] "RemoveContainer" containerID="3b367c288626c156bc47c704b646c9112c0b0e5a79bb11340b8697a51beb034c" Oct 14 10:14:38 crc kubenswrapper[4698]: I1014 10:14:38.427345 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-sync-2qdqz" podStartSLOduration=3.6526796 podStartE2EDuration="20.427320819s" podCreationTimestamp="2025-10-14 10:14:18 +0000 UTC" firstStartedPulling="2025-10-14 10:14:20.63258838 +0000 UTC m=+1042.329887796" lastFinishedPulling="2025-10-14 10:14:37.407229599 +0000 UTC m=+1059.104529015" observedRunningTime="2025-10-14 10:14:38.379846164 +0000 UTC m=+1060.077145610" watchObservedRunningTime="2025-10-14 10:14:38.427320819 +0000 UTC m=+1060.124620255" Oct 14 10:14:38 crc kubenswrapper[4698]: I1014 10:14:38.467860 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Oct 14 10:14:38 crc kubenswrapper[4698]: I1014 10:14:38.515126 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-698758b865-9n6ld"] Oct 14 10:14:38 crc kubenswrapper[4698]: I1014 10:14:38.533606 4698 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-698758b865-9n6ld"] Oct 14 10:14:38 crc kubenswrapper[4698]: W1014 10:14:38.553014 4698 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2abd1f71_b2d4_4c95_898c_bcfe99b2acf5.slice/crio-b7d3372179873e3bd9c8927e594245887627490358c87789ceb23188b7967d8a WatchSource:0}: Error finding container b7d3372179873e3bd9c8927e594245887627490358c87789ceb23188b7967d8a: Status 404 returned error can't find the container with id b7d3372179873e3bd9c8927e594245887627490358c87789ceb23188b7967d8a Oct 14 10:14:38 
crc kubenswrapper[4698]: W1014 10:14:38.632709 4698 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod94ea6a4b_4e73_4a81_bb25_8ae62bf7daa9.slice/crio-f7717ce2966c0b444bd0e92482630cf47f885c4647a52fa5a5c3cd73b133e335 WatchSource:0}: Error finding container f7717ce2966c0b444bd0e92482630cf47f885c4647a52fa5a5c3cd73b133e335: Status 404 returned error can't find the container with id f7717ce2966c0b444bd0e92482630cf47f885c4647a52fa5a5c3cd73b133e335 Oct 14 10:14:38 crc kubenswrapper[4698]: I1014 10:14:38.656090 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-thrh8"] Oct 14 10:14:38 crc kubenswrapper[4698]: W1014 10:14:38.671266 4698 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod746d0a6a_4df6_40b6_9600_63ec14336507.slice/crio-ad948d24f36d9d15803356f4b2cad2c5b9397a0e653b3f070fc6857e113779a0 WatchSource:0}: Error finding container ad948d24f36d9d15803356f4b2cad2c5b9397a0e653b3f070fc6857e113779a0: Status 404 returned error can't find the container with id ad948d24f36d9d15803356f4b2cad2c5b9397a0e653b3f070fc6857e113779a0 Oct 14 10:14:38 crc kubenswrapper[4698]: I1014 10:14:38.676258 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-9c4js"] Oct 14 10:14:38 crc kubenswrapper[4698]: I1014 10:14:38.714818 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-nbmlr"] Oct 14 10:14:38 crc kubenswrapper[4698]: I1014 10:14:38.738549 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-6cf95ddffb-6h2bm"] Oct 14 10:14:38 crc kubenswrapper[4698]: I1014 10:14:38.752889 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-db-sync-qfgt5"] Oct 14 10:14:39 crc kubenswrapper[4698]: I1014 10:14:39.036951 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="414ba38b-6cfb-48ae-b818-6f8544558bf1" path="/var/lib/kubelet/pods/414ba38b-6cfb-48ae-b818-6f8544558bf1/volumes" Oct 14 10:14:39 crc kubenswrapper[4698]: I1014 10:14:39.122077 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-6f9d769b87-j9282" Oct 14 10:14:39 crc kubenswrapper[4698]: I1014 10:14:39.156215 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-67d675854f-5dgkt" Oct 14 10:14:39 crc kubenswrapper[4698]: I1014 10:14:39.244624 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wwxf4\" (UniqueName: \"kubernetes.io/projected/f4f888e1-886b-4ab1-8c5d-e0894bf1e065-kube-api-access-wwxf4\") pod \"f4f888e1-886b-4ab1-8c5d-e0894bf1e065\" (UID: \"f4f888e1-886b-4ab1-8c5d-e0894bf1e065\") " Oct 14 10:14:39 crc kubenswrapper[4698]: I1014 10:14:39.244684 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/f4f888e1-886b-4ab1-8c5d-e0894bf1e065-horizon-secret-key\") pod \"f4f888e1-886b-4ab1-8c5d-e0894bf1e065\" (UID: \"f4f888e1-886b-4ab1-8c5d-e0894bf1e065\") " Oct 14 10:14:39 crc kubenswrapper[4698]: I1014 10:14:39.244889 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f4f888e1-886b-4ab1-8c5d-e0894bf1e065-scripts\") pod \"f4f888e1-886b-4ab1-8c5d-e0894bf1e065\" (UID: \"f4f888e1-886b-4ab1-8c5d-e0894bf1e065\") " Oct 14 10:14:39 crc kubenswrapper[4698]: I1014 10:14:39.244980 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f4f888e1-886b-4ab1-8c5d-e0894bf1e065-config-data\") pod \"f4f888e1-886b-4ab1-8c5d-e0894bf1e065\" (UID: \"f4f888e1-886b-4ab1-8c5d-e0894bf1e065\") " Oct 14 10:14:39 crc kubenswrapper[4698]: I1014 10:14:39.245069 4698 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f4f888e1-886b-4ab1-8c5d-e0894bf1e065-logs\") pod \"f4f888e1-886b-4ab1-8c5d-e0894bf1e065\" (UID: \"f4f888e1-886b-4ab1-8c5d-e0894bf1e065\") " Oct 14 10:14:39 crc kubenswrapper[4698]: I1014 10:14:39.246330 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f4f888e1-886b-4ab1-8c5d-e0894bf1e065-scripts" (OuterVolumeSpecName: "scripts") pod "f4f888e1-886b-4ab1-8c5d-e0894bf1e065" (UID: "f4f888e1-886b-4ab1-8c5d-e0894bf1e065"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 10:14:39 crc kubenswrapper[4698]: I1014 10:14:39.247266 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f4f888e1-886b-4ab1-8c5d-e0894bf1e065-config-data" (OuterVolumeSpecName: "config-data") pod "f4f888e1-886b-4ab1-8c5d-e0894bf1e065" (UID: "f4f888e1-886b-4ab1-8c5d-e0894bf1e065"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 10:14:39 crc kubenswrapper[4698]: I1014 10:14:39.247364 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f4f888e1-886b-4ab1-8c5d-e0894bf1e065-logs" (OuterVolumeSpecName: "logs") pod "f4f888e1-886b-4ab1-8c5d-e0894bf1e065" (UID: "f4f888e1-886b-4ab1-8c5d-e0894bf1e065"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 10:14:39 crc kubenswrapper[4698]: I1014 10:14:39.266065 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f4f888e1-886b-4ab1-8c5d-e0894bf1e065-kube-api-access-wwxf4" (OuterVolumeSpecName: "kube-api-access-wwxf4") pod "f4f888e1-886b-4ab1-8c5d-e0894bf1e065" (UID: "f4f888e1-886b-4ab1-8c5d-e0894bf1e065"). InnerVolumeSpecName "kube-api-access-wwxf4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 10:14:39 crc kubenswrapper[4698]: I1014 10:14:39.336040 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f4f888e1-886b-4ab1-8c5d-e0894bf1e065-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "f4f888e1-886b-4ab1-8c5d-e0894bf1e065" (UID: "f4f888e1-886b-4ab1-8c5d-e0894bf1e065"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 10:14:39 crc kubenswrapper[4698]: I1014 10:14:39.346474 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/367b799b-362f-491f-8bb4-58d617a09769-scripts\") pod \"367b799b-362f-491f-8bb4-58d617a09769\" (UID: \"367b799b-362f-491f-8bb4-58d617a09769\") " Oct 14 10:14:39 crc kubenswrapper[4698]: I1014 10:14:39.346601 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/367b799b-362f-491f-8bb4-58d617a09769-config-data\") pod \"367b799b-362f-491f-8bb4-58d617a09769\" (UID: \"367b799b-362f-491f-8bb4-58d617a09769\") " Oct 14 10:14:39 crc kubenswrapper[4698]: I1014 10:14:39.346639 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/367b799b-362f-491f-8bb4-58d617a09769-logs\") pod \"367b799b-362f-491f-8bb4-58d617a09769\" (UID: \"367b799b-362f-491f-8bb4-58d617a09769\") " Oct 14 10:14:39 crc kubenswrapper[4698]: I1014 10:14:39.346804 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/367b799b-362f-491f-8bb4-58d617a09769-horizon-secret-key\") pod \"367b799b-362f-491f-8bb4-58d617a09769\" (UID: \"367b799b-362f-491f-8bb4-58d617a09769\") " Oct 14 10:14:39 crc kubenswrapper[4698]: I1014 10:14:39.346997 4698 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-tgzl9\" (UniqueName: \"kubernetes.io/projected/367b799b-362f-491f-8bb4-58d617a09769-kube-api-access-tgzl9\") pod \"367b799b-362f-491f-8bb4-58d617a09769\" (UID: \"367b799b-362f-491f-8bb4-58d617a09769\") " Oct 14 10:14:39 crc kubenswrapper[4698]: I1014 10:14:39.346979 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/367b799b-362f-491f-8bb4-58d617a09769-logs" (OuterVolumeSpecName: "logs") pod "367b799b-362f-491f-8bb4-58d617a09769" (UID: "367b799b-362f-491f-8bb4-58d617a09769"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 10:14:39 crc kubenswrapper[4698]: I1014 10:14:39.347217 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/367b799b-362f-491f-8bb4-58d617a09769-config-data" (OuterVolumeSpecName: "config-data") pod "367b799b-362f-491f-8bb4-58d617a09769" (UID: "367b799b-362f-491f-8bb4-58d617a09769"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 10:14:39 crc kubenswrapper[4698]: I1014 10:14:39.347491 4698 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f4f888e1-886b-4ab1-8c5d-e0894bf1e065-scripts\") on node \"crc\" DevicePath \"\"" Oct 14 10:14:39 crc kubenswrapper[4698]: I1014 10:14:39.347509 4698 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/367b799b-362f-491f-8bb4-58d617a09769-config-data\") on node \"crc\" DevicePath \"\"" Oct 14 10:14:39 crc kubenswrapper[4698]: I1014 10:14:39.347522 4698 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/367b799b-362f-491f-8bb4-58d617a09769-logs\") on node \"crc\" DevicePath \"\"" Oct 14 10:14:39 crc kubenswrapper[4698]: I1014 10:14:39.347533 4698 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f4f888e1-886b-4ab1-8c5d-e0894bf1e065-config-data\") on node \"crc\" DevicePath \"\"" Oct 14 10:14:39 crc kubenswrapper[4698]: I1014 10:14:39.347544 4698 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f4f888e1-886b-4ab1-8c5d-e0894bf1e065-logs\") on node \"crc\" DevicePath \"\"" Oct 14 10:14:39 crc kubenswrapper[4698]: I1014 10:14:39.347558 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wwxf4\" (UniqueName: \"kubernetes.io/projected/f4f888e1-886b-4ab1-8c5d-e0894bf1e065-kube-api-access-wwxf4\") on node \"crc\" DevicePath \"\"" Oct 14 10:14:39 crc kubenswrapper[4698]: I1014 10:14:39.347573 4698 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/f4f888e1-886b-4ab1-8c5d-e0894bf1e065-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Oct 14 10:14:39 crc kubenswrapper[4698]: I1014 10:14:39.348381 4698 operation_generator.go:803] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/367b799b-362f-491f-8bb4-58d617a09769-scripts" (OuterVolumeSpecName: "scripts") pod "367b799b-362f-491f-8bb4-58d617a09769" (UID: "367b799b-362f-491f-8bb4-58d617a09769"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 10:14:39 crc kubenswrapper[4698]: I1014 10:14:39.367032 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/367b799b-362f-491f-8bb4-58d617a09769-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "367b799b-362f-491f-8bb4-58d617a09769" (UID: "367b799b-362f-491f-8bb4-58d617a09769"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 10:14:39 crc kubenswrapper[4698]: I1014 10:14:39.380690 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7b567dfd5d-nvwrp" event={"ID":"ee140165-8d8d-426c-b33f-5803bb0a7ad1","Type":"ContainerStarted","Data":"393471ee803b2f6bdb94dbb502c32fa759670f44814d5f995e9836fa400b1b05"} Oct 14 10:14:39 crc kubenswrapper[4698]: I1014 10:14:39.380741 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7b567dfd5d-nvwrp" event={"ID":"ee140165-8d8d-426c-b33f-5803bb0a7ad1","Type":"ContainerStarted","Data":"901781c92196a0d233259efb4996688607e9e48dc8deda94312e848ee2b7b961"} Oct 14 10:14:39 crc kubenswrapper[4698]: I1014 10:14:39.382833 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-67d675854f-5dgkt" event={"ID":"367b799b-362f-491f-8bb4-58d617a09769","Type":"ContainerDied","Data":"42f07e78e6918310dce107ccb32db631ecf2ab5609354216158939ee6bf33541"} Oct 14 10:14:39 crc kubenswrapper[4698]: I1014 10:14:39.382960 4698 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-67d675854f-5dgkt" Oct 14 10:14:39 crc kubenswrapper[4698]: I1014 10:14:39.384876 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/367b799b-362f-491f-8bb4-58d617a09769-kube-api-access-tgzl9" (OuterVolumeSpecName: "kube-api-access-tgzl9") pod "367b799b-362f-491f-8bb4-58d617a09769" (UID: "367b799b-362f-491f-8bb4-58d617a09769"). InnerVolumeSpecName "kube-api-access-tgzl9". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 10:14:39 crc kubenswrapper[4698]: I1014 10:14:39.402747 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-nbmlr" event={"ID":"94ea6a4b-4e73-4a81-bb25-8ae62bf7daa9","Type":"ContainerStarted","Data":"f7717ce2966c0b444bd0e92482630cf47f885c4647a52fa5a5c3cd73b133e335"} Oct 14 10:14:39 crc kubenswrapper[4698]: I1014 10:14:39.406093 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-thrh8" event={"ID":"d90a3be7-6827-427d-9ed1-3aef79542b6d","Type":"ContainerStarted","Data":"24165701a8b96f6ae9ba2d2394d8e77ae275b7d55dcb3a16d3845f01c767eaec"} Oct 14 10:14:39 crc kubenswrapper[4698]: I1014 10:14:39.408632 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-58986b5dd5-xvhvn" event={"ID":"2d11f65a-1351-4490-842c-259c6611ed6f","Type":"ContainerStarted","Data":"d17e9d19220636464e26901b2afb9198d920c057b8c93058739a04d660e37984"} Oct 14 10:14:39 crc kubenswrapper[4698]: I1014 10:14:39.408789 4698 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-58986b5dd5-xvhvn" podUID="2d11f65a-1351-4490-842c-259c6611ed6f" containerName="horizon-log" containerID="cri-o://72d5c35add8dacbd016dbaa9022b4a42eb5852c123f9e5005edf0df08b10bc79" gracePeriod=30 Oct 14 10:14:39 crc kubenswrapper[4698]: I1014 10:14:39.409273 4698 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-58986b5dd5-xvhvn" 
podUID="2d11f65a-1351-4490-842c-259c6611ed6f" containerName="horizon" containerID="cri-o://d17e9d19220636464e26901b2afb9198d920c057b8c93058739a04d660e37984" gracePeriod=30 Oct 14 10:14:39 crc kubenswrapper[4698]: I1014 10:14:39.415034 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-db-sync-qfgt5" event={"ID":"b034a777-04ce-4fe1-baf0-7dd68c64b31f","Type":"ContainerStarted","Data":"b49f9ffc38306b334d4626286a925805e245df79ea946cab8f17f14be3848562"} Oct 14 10:14:39 crc kubenswrapper[4698]: I1014 10:14:39.418425 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6f9d769b87-j9282" event={"ID":"f4f888e1-886b-4ab1-8c5d-e0894bf1e065","Type":"ContainerDied","Data":"9040d27862c07cb326367535ab9a0c5edd8d0f76cc5bf4b0321ba2e9e6021ed6"} Oct 14 10:14:39 crc kubenswrapper[4698]: I1014 10:14:39.418509 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-6f9d769b87-j9282" Oct 14 10:14:39 crc kubenswrapper[4698]: I1014 10:14:39.449911 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tgzl9\" (UniqueName: \"kubernetes.io/projected/367b799b-362f-491f-8bb4-58d617a09769-kube-api-access-tgzl9\") on node \"crc\" DevicePath \"\"" Oct 14 10:14:39 crc kubenswrapper[4698]: I1014 10:14:39.449949 4698 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/367b799b-362f-491f-8bb4-58d617a09769-scripts\") on node \"crc\" DevicePath \"\"" Oct 14 10:14:39 crc kubenswrapper[4698]: I1014 10:14:39.449962 4698 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/367b799b-362f-491f-8bb4-58d617a09769-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Oct 14 10:14:39 crc kubenswrapper[4698]: I1014 10:14:39.472561 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-9c4js" 
event={"ID":"2abd1f71-b2d4-4c95-898c-bcfe99b2acf5","Type":"ContainerStarted","Data":"b7d3372179873e3bd9c8927e594245887627490358c87789ceb23188b7967d8a"} Oct 14 10:14:39 crc kubenswrapper[4698]: I1014 10:14:39.482662 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-xjff6" event={"ID":"30712ba4-9217-4276-b576-798bfd319b45","Type":"ContainerStarted","Data":"b6109c6401d02c7b196afcba8f862f1ca5650abd7cccb405945e2a4e1737d4d0"} Oct 14 10:14:39 crc kubenswrapper[4698]: I1014 10:14:39.482725 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-xjff6" event={"ID":"30712ba4-9217-4276-b576-798bfd319b45","Type":"ContainerStarted","Data":"863e0f34d4b0f296bc08a72ead4f9b60354e43f61853b45ebca5fc06e959aee9"} Oct 14 10:14:39 crc kubenswrapper[4698]: I1014 10:14:39.508386 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-58986b5dd5-xvhvn" podStartSLOduration=4.063638803 podStartE2EDuration="18.508366162s" podCreationTimestamp="2025-10-14 10:14:21 +0000 UTC" firstStartedPulling="2025-10-14 10:14:23.033963144 +0000 UTC m=+1044.731262560" lastFinishedPulling="2025-10-14 10:14:37.478690503 +0000 UTC m=+1059.175989919" observedRunningTime="2025-10-14 10:14:39.445376252 +0000 UTC m=+1061.142675678" watchObservedRunningTime="2025-10-14 10:14:39.508366162 +0000 UTC m=+1061.205665578" Oct 14 10:14:39 crc kubenswrapper[4698]: I1014 10:14:39.531421 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6cf95ddffb-6h2bm" event={"ID":"746d0a6a-4df6-40b6-9600-63ec14336507","Type":"ContainerStarted","Data":"ad948d24f36d9d15803356f4b2cad2c5b9397a0e653b3f070fc6857e113779a0"} Oct 14 10:14:39 crc kubenswrapper[4698]: I1014 10:14:39.535505 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" 
event={"ID":"fdfbb802-f191-4e92-ac52-de08502a8e80","Type":"ContainerStarted","Data":"ce70c6d45b8b909887cca5b88c011657b145aa8cb1ebe5aa58ab2a65db61755c"} Oct 14 10:14:39 crc kubenswrapper[4698]: I1014 10:14:39.561460 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-6f9d769b87-j9282"] Oct 14 10:14:39 crc kubenswrapper[4698]: I1014 10:14:39.585810 4698 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-6f9d769b87-j9282"] Oct 14 10:14:39 crc kubenswrapper[4698]: I1014 10:14:39.589985 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-xjff6" podStartSLOduration=7.589970618 podStartE2EDuration="7.589970618s" podCreationTimestamp="2025-10-14 10:14:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 10:14:39.507910949 +0000 UTC m=+1061.205210365" watchObservedRunningTime="2025-10-14 10:14:39.589970618 +0000 UTC m=+1061.287270034" Oct 14 10:14:39 crc kubenswrapper[4698]: I1014 10:14:39.803474 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-67d675854f-5dgkt"] Oct 14 10:14:39 crc kubenswrapper[4698]: I1014 10:14:39.854751 4698 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-67d675854f-5dgkt"] Oct 14 10:14:40 crc kubenswrapper[4698]: I1014 10:14:40.563148 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7b567dfd5d-nvwrp" event={"ID":"ee140165-8d8d-426c-b33f-5803bb0a7ad1","Type":"ContainerStarted","Data":"e594988c974ff543f9c2d584ca251398c27ef372098425e5b40ffa91a25c8a5c"} Oct 14 10:14:40 crc kubenswrapper[4698]: I1014 10:14:40.582979 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-9c4js" event={"ID":"2abd1f71-b2d4-4c95-898c-bcfe99b2acf5","Type":"ContainerStarted","Data":"51f6b710c2bc8b09c94662c8c7962dc21b14702f93fbe6d5919e832065135ca3"} Oct 14 10:14:40 crc 
kubenswrapper[4698]: I1014 10:14:40.590471 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-7b567dfd5d-nvwrp" podStartSLOduration=11.590453145 podStartE2EDuration="11.590453145s" podCreationTimestamp="2025-10-14 10:14:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 10:14:40.589793116 +0000 UTC m=+1062.287092542" watchObservedRunningTime="2025-10-14 10:14:40.590453145 +0000 UTC m=+1062.287752561" Oct 14 10:14:40 crc kubenswrapper[4698]: I1014 10:14:40.611018 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6cf95ddffb-6h2bm" event={"ID":"746d0a6a-4df6-40b6-9600-63ec14336507","Type":"ContainerStarted","Data":"94d1c47f482ead204bdf333ca476158d3ee3d343dca54af522867afe33aa9437"} Oct 14 10:14:40 crc kubenswrapper[4698]: I1014 10:14:40.611464 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6cf95ddffb-6h2bm" event={"ID":"746d0a6a-4df6-40b6-9600-63ec14336507","Type":"ContainerStarted","Data":"05a52f202a1cf906340c4829bea7f2dffb79a0630606691e8ba7fbd330058b52"} Oct 14 10:14:40 crc kubenswrapper[4698]: I1014 10:14:40.620536 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"fdfbb802-f191-4e92-ac52-de08502a8e80","Type":"ContainerStarted","Data":"802c966ea9003518ff0b700d3ed2eace9d37edc139d0e113d6f0cb3488a38068"} Oct 14 10:14:40 crc kubenswrapper[4698]: I1014 10:14:40.620589 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-sync-9c4js" podStartSLOduration=4.620572291 podStartE2EDuration="4.620572291s" podCreationTimestamp="2025-10-14 10:14:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 10:14:40.616087912 +0000 UTC m=+1062.313387328" 
watchObservedRunningTime="2025-10-14 10:14:40.620572291 +0000 UTC m=+1062.317871697" Oct 14 10:14:40 crc kubenswrapper[4698]: I1014 10:14:40.628077 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"df1e3a4b-9dec-4b59-80a8-071b9dd62651","Type":"ContainerStarted","Data":"33bcea4e62e5687e7cdcc4ec2ce354b2de45eab1b9bd9e38951c8779fe0b630b"} Oct 14 10:14:40 crc kubenswrapper[4698]: I1014 10:14:40.652712 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-6cf95ddffb-6h2bm" podStartSLOduration=11.652692544 podStartE2EDuration="11.652692544s" podCreationTimestamp="2025-10-14 10:14:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 10:14:40.638598849 +0000 UTC m=+1062.335898275" watchObservedRunningTime="2025-10-14 10:14:40.652692544 +0000 UTC m=+1062.349991970" Oct 14 10:14:41 crc kubenswrapper[4698]: I1014 10:14:41.033991 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="367b799b-362f-491f-8bb4-58d617a09769" path="/var/lib/kubelet/pods/367b799b-362f-491f-8bb4-58d617a09769/volumes" Oct 14 10:14:41 crc kubenswrapper[4698]: I1014 10:14:41.034774 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4f888e1-886b-4ab1-8c5d-e0894bf1e065" path="/var/lib/kubelet/pods/f4f888e1-886b-4ab1-8c5d-e0894bf1e065/volumes" Oct 14 10:14:41 crc kubenswrapper[4698]: I1014 10:14:41.638809 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"df1e3a4b-9dec-4b59-80a8-071b9dd62651","Type":"ContainerStarted","Data":"8a3fe7e2a56d3f651dbffe3945a6dad4881a777d15ff39ca945e69518dd1413f"} Oct 14 10:14:41 crc kubenswrapper[4698]: I1014 10:14:41.639309 4698 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" 
podUID="df1e3a4b-9dec-4b59-80a8-071b9dd62651" containerName="glance-log" containerID="cri-o://33bcea4e62e5687e7cdcc4ec2ce354b2de45eab1b9bd9e38951c8779fe0b630b" gracePeriod=30 Oct 14 10:14:41 crc kubenswrapper[4698]: I1014 10:14:41.639818 4698 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="df1e3a4b-9dec-4b59-80a8-071b9dd62651" containerName="glance-httpd" containerID="cri-o://8a3fe7e2a56d3f651dbffe3945a6dad4881a777d15ff39ca945e69518dd1413f" gracePeriod=30 Oct 14 10:14:41 crc kubenswrapper[4698]: I1014 10:14:41.679591 4698 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="fdfbb802-f191-4e92-ac52-de08502a8e80" containerName="glance-log" containerID="cri-o://802c966ea9003518ff0b700d3ed2eace9d37edc139d0e113d6f0cb3488a38068" gracePeriod=30 Oct 14 10:14:41 crc kubenswrapper[4698]: I1014 10:14:41.680299 4698 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="fdfbb802-f191-4e92-ac52-de08502a8e80" containerName="glance-httpd" containerID="cri-o://21622bcbfff0972645debe264df8e8319a21925e069d8100037cce0298888ba2" gracePeriod=30 Oct 14 10:14:41 crc kubenswrapper[4698]: I1014 10:14:41.680325 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"fdfbb802-f191-4e92-ac52-de08502a8e80","Type":"ContainerStarted","Data":"21622bcbfff0972645debe264df8e8319a21925e069d8100037cce0298888ba2"} Oct 14 10:14:41 crc kubenswrapper[4698]: I1014 10:14:41.682120 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=16.682104183 podStartE2EDuration="16.682104183s" podCreationTimestamp="2025-10-14 10:14:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2025-10-14 10:14:41.67920532 +0000 UTC m=+1063.376504756" watchObservedRunningTime="2025-10-14 10:14:41.682104183 +0000 UTC m=+1063.379403599" Oct 14 10:14:41 crc kubenswrapper[4698]: I1014 10:14:41.684429 4698 generic.go:334] "Generic (PLEG): container finished" podID="19d4fb38-f09b-4383-adfc-12bb06107bfb" containerID="42bf3c73ba3327efa826fffd6a5a8544f7e45d4ea8dd0cddfc97faf1402c257d" exitCode=0 Oct 14 10:14:41 crc kubenswrapper[4698]: I1014 10:14:41.684584 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-2qdqz" event={"ID":"19d4fb38-f09b-4383-adfc-12bb06107bfb","Type":"ContainerDied","Data":"42bf3c73ba3327efa826fffd6a5a8544f7e45d4ea8dd0cddfc97faf1402c257d"} Oct 14 10:14:41 crc kubenswrapper[4698]: I1014 10:14:41.716833 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=12.716816951 podStartE2EDuration="12.716816951s" podCreationTimestamp="2025-10-14 10:14:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 10:14:41.716182783 +0000 UTC m=+1063.413482199" watchObservedRunningTime="2025-10-14 10:14:41.716816951 +0000 UTC m=+1063.414116367" Oct 14 10:14:41 crc kubenswrapper[4698]: I1014 10:14:41.995305 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-58986b5dd5-xvhvn" Oct 14 10:14:42 crc kubenswrapper[4698]: I1014 10:14:42.413424 4698 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Oct 14 10:14:42 crc kubenswrapper[4698]: I1014 10:14:42.468759 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/df1e3a4b-9dec-4b59-80a8-071b9dd62651-combined-ca-bundle\") pod \"df1e3a4b-9dec-4b59-80a8-071b9dd62651\" (UID: \"df1e3a4b-9dec-4b59-80a8-071b9dd62651\") " Oct 14 10:14:42 crc kubenswrapper[4698]: I1014 10:14:42.469030 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/df1e3a4b-9dec-4b59-80a8-071b9dd62651-config-data\") pod \"df1e3a4b-9dec-4b59-80a8-071b9dd62651\" (UID: \"df1e3a4b-9dec-4b59-80a8-071b9dd62651\") " Oct 14 10:14:42 crc kubenswrapper[4698]: I1014 10:14:42.469094 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/df1e3a4b-9dec-4b59-80a8-071b9dd62651-scripts\") pod \"df1e3a4b-9dec-4b59-80a8-071b9dd62651\" (UID: \"df1e3a4b-9dec-4b59-80a8-071b9dd62651\") " Oct 14 10:14:42 crc kubenswrapper[4698]: I1014 10:14:42.469174 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-85s2q\" (UniqueName: \"kubernetes.io/projected/df1e3a4b-9dec-4b59-80a8-071b9dd62651-kube-api-access-85s2q\") pod \"df1e3a4b-9dec-4b59-80a8-071b9dd62651\" (UID: \"df1e3a4b-9dec-4b59-80a8-071b9dd62651\") " Oct 14 10:14:42 crc kubenswrapper[4698]: I1014 10:14:42.469247 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/df1e3a4b-9dec-4b59-80a8-071b9dd62651-ceph\") pod \"df1e3a4b-9dec-4b59-80a8-071b9dd62651\" (UID: \"df1e3a4b-9dec-4b59-80a8-071b9dd62651\") " Oct 14 10:14:42 crc kubenswrapper[4698]: I1014 10:14:42.469292 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: 
\"kubernetes.io/empty-dir/df1e3a4b-9dec-4b59-80a8-071b9dd62651-httpd-run\") pod \"df1e3a4b-9dec-4b59-80a8-071b9dd62651\" (UID: \"df1e3a4b-9dec-4b59-80a8-071b9dd62651\") " Oct 14 10:14:42 crc kubenswrapper[4698]: I1014 10:14:42.469335 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"df1e3a4b-9dec-4b59-80a8-071b9dd62651\" (UID: \"df1e3a4b-9dec-4b59-80a8-071b9dd62651\") " Oct 14 10:14:42 crc kubenswrapper[4698]: I1014 10:14:42.469429 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/df1e3a4b-9dec-4b59-80a8-071b9dd62651-logs\") pod \"df1e3a4b-9dec-4b59-80a8-071b9dd62651\" (UID: \"df1e3a4b-9dec-4b59-80a8-071b9dd62651\") " Oct 14 10:14:42 crc kubenswrapper[4698]: I1014 10:14:42.471399 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/df1e3a4b-9dec-4b59-80a8-071b9dd62651-logs" (OuterVolumeSpecName: "logs") pod "df1e3a4b-9dec-4b59-80a8-071b9dd62651" (UID: "df1e3a4b-9dec-4b59-80a8-071b9dd62651"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 10:14:42 crc kubenswrapper[4698]: I1014 10:14:42.471589 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Oct 14 10:14:42 crc kubenswrapper[4698]: I1014 10:14:42.474316 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/df1e3a4b-9dec-4b59-80a8-071b9dd62651-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "df1e3a4b-9dec-4b59-80a8-071b9dd62651" (UID: "df1e3a4b-9dec-4b59-80a8-071b9dd62651"). InnerVolumeSpecName "httpd-run". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 10:14:42 crc kubenswrapper[4698]: I1014 10:14:42.482874 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage01-crc" (OuterVolumeSpecName: "glance") pod "df1e3a4b-9dec-4b59-80a8-071b9dd62651" (UID: "df1e3a4b-9dec-4b59-80a8-071b9dd62651"). InnerVolumeSpecName "local-storage01-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Oct 14 10:14:42 crc kubenswrapper[4698]: I1014 10:14:42.498078 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/df1e3a4b-9dec-4b59-80a8-071b9dd62651-kube-api-access-85s2q" (OuterVolumeSpecName: "kube-api-access-85s2q") pod "df1e3a4b-9dec-4b59-80a8-071b9dd62651" (UID: "df1e3a4b-9dec-4b59-80a8-071b9dd62651"). InnerVolumeSpecName "kube-api-access-85s2q". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 10:14:42 crc kubenswrapper[4698]: I1014 10:14:42.500867 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/df1e3a4b-9dec-4b59-80a8-071b9dd62651-scripts" (OuterVolumeSpecName: "scripts") pod "df1e3a4b-9dec-4b59-80a8-071b9dd62651" (UID: "df1e3a4b-9dec-4b59-80a8-071b9dd62651"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 10:14:42 crc kubenswrapper[4698]: I1014 10:14:42.502107 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/df1e3a4b-9dec-4b59-80a8-071b9dd62651-ceph" (OuterVolumeSpecName: "ceph") pod "df1e3a4b-9dec-4b59-80a8-071b9dd62651" (UID: "df1e3a4b-9dec-4b59-80a8-071b9dd62651"). InnerVolumeSpecName "ceph". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 10:14:42 crc kubenswrapper[4698]: I1014 10:14:42.512081 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/df1e3a4b-9dec-4b59-80a8-071b9dd62651-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "df1e3a4b-9dec-4b59-80a8-071b9dd62651" (UID: "df1e3a4b-9dec-4b59-80a8-071b9dd62651"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 10:14:42 crc kubenswrapper[4698]: I1014 10:14:42.541928 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/df1e3a4b-9dec-4b59-80a8-071b9dd62651-config-data" (OuterVolumeSpecName: "config-data") pod "df1e3a4b-9dec-4b59-80a8-071b9dd62651" (UID: "df1e3a4b-9dec-4b59-80a8-071b9dd62651"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 10:14:42 crc kubenswrapper[4698]: I1014 10:14:42.571520 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fdfbb802-f191-4e92-ac52-de08502a8e80-logs\") pod \"fdfbb802-f191-4e92-ac52-de08502a8e80\" (UID: \"fdfbb802-f191-4e92-ac52-de08502a8e80\") " Oct 14 10:14:42 crc kubenswrapper[4698]: I1014 10:14:42.571987 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"fdfbb802-f191-4e92-ac52-de08502a8e80\" (UID: \"fdfbb802-f191-4e92-ac52-de08502a8e80\") " Oct 14 10:14:42 crc kubenswrapper[4698]: I1014 10:14:42.572039 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ghkzt\" (UniqueName: \"kubernetes.io/projected/fdfbb802-f191-4e92-ac52-de08502a8e80-kube-api-access-ghkzt\") pod \"fdfbb802-f191-4e92-ac52-de08502a8e80\" (UID: \"fdfbb802-f191-4e92-ac52-de08502a8e80\") " Oct 14 10:14:42 crc kubenswrapper[4698]: I1014 
10:14:42.572073 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fdfbb802-f191-4e92-ac52-de08502a8e80-config-data\") pod \"fdfbb802-f191-4e92-ac52-de08502a8e80\" (UID: \"fdfbb802-f191-4e92-ac52-de08502a8e80\") " Oct 14 10:14:42 crc kubenswrapper[4698]: I1014 10:14:42.572111 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/fdfbb802-f191-4e92-ac52-de08502a8e80-ceph\") pod \"fdfbb802-f191-4e92-ac52-de08502a8e80\" (UID: \"fdfbb802-f191-4e92-ac52-de08502a8e80\") " Oct 14 10:14:42 crc kubenswrapper[4698]: I1014 10:14:42.572133 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/fdfbb802-f191-4e92-ac52-de08502a8e80-public-tls-certs\") pod \"fdfbb802-f191-4e92-ac52-de08502a8e80\" (UID: \"fdfbb802-f191-4e92-ac52-de08502a8e80\") " Oct 14 10:14:42 crc kubenswrapper[4698]: I1014 10:14:42.572174 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/fdfbb802-f191-4e92-ac52-de08502a8e80-httpd-run\") pod \"fdfbb802-f191-4e92-ac52-de08502a8e80\" (UID: \"fdfbb802-f191-4e92-ac52-de08502a8e80\") " Oct 14 10:14:42 crc kubenswrapper[4698]: I1014 10:14:42.572230 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fdfbb802-f191-4e92-ac52-de08502a8e80-combined-ca-bundle\") pod \"fdfbb802-f191-4e92-ac52-de08502a8e80\" (UID: \"fdfbb802-f191-4e92-ac52-de08502a8e80\") " Oct 14 10:14:42 crc kubenswrapper[4698]: I1014 10:14:42.572274 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fdfbb802-f191-4e92-ac52-de08502a8e80-scripts\") pod \"fdfbb802-f191-4e92-ac52-de08502a8e80\" (UID: 
\"fdfbb802-f191-4e92-ac52-de08502a8e80\") " Oct 14 10:14:42 crc kubenswrapper[4698]: I1014 10:14:42.572262 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fdfbb802-f191-4e92-ac52-de08502a8e80-logs" (OuterVolumeSpecName: "logs") pod "fdfbb802-f191-4e92-ac52-de08502a8e80" (UID: "fdfbb802-f191-4e92-ac52-de08502a8e80"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 10:14:42 crc kubenswrapper[4698]: I1014 10:14:42.572648 4698 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/df1e3a4b-9dec-4b59-80a8-071b9dd62651-logs\") on node \"crc\" DevicePath \"\"" Oct 14 10:14:42 crc kubenswrapper[4698]: I1014 10:14:42.572675 4698 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/df1e3a4b-9dec-4b59-80a8-071b9dd62651-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 14 10:14:42 crc kubenswrapper[4698]: I1014 10:14:42.572685 4698 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/df1e3a4b-9dec-4b59-80a8-071b9dd62651-config-data\") on node \"crc\" DevicePath \"\"" Oct 14 10:14:42 crc kubenswrapper[4698]: I1014 10:14:42.572695 4698 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/df1e3a4b-9dec-4b59-80a8-071b9dd62651-scripts\") on node \"crc\" DevicePath \"\"" Oct 14 10:14:42 crc kubenswrapper[4698]: I1014 10:14:42.572711 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-85s2q\" (UniqueName: \"kubernetes.io/projected/df1e3a4b-9dec-4b59-80a8-071b9dd62651-kube-api-access-85s2q\") on node \"crc\" DevicePath \"\"" Oct 14 10:14:42 crc kubenswrapper[4698]: I1014 10:14:42.572720 4698 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/df1e3a4b-9dec-4b59-80a8-071b9dd62651-ceph\") on node \"crc\" 
DevicePath \"\"" Oct 14 10:14:42 crc kubenswrapper[4698]: I1014 10:14:42.572728 4698 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/df1e3a4b-9dec-4b59-80a8-071b9dd62651-httpd-run\") on node \"crc\" DevicePath \"\"" Oct 14 10:14:42 crc kubenswrapper[4698]: I1014 10:14:42.572738 4698 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fdfbb802-f191-4e92-ac52-de08502a8e80-logs\") on node \"crc\" DevicePath \"\"" Oct 14 10:14:42 crc kubenswrapper[4698]: I1014 10:14:42.573517 4698 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") on node \"crc\" " Oct 14 10:14:42 crc kubenswrapper[4698]: I1014 10:14:42.577081 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fdfbb802-f191-4e92-ac52-de08502a8e80-ceph" (OuterVolumeSpecName: "ceph") pod "fdfbb802-f191-4e92-ac52-de08502a8e80" (UID: "fdfbb802-f191-4e92-ac52-de08502a8e80"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 10:14:42 crc kubenswrapper[4698]: I1014 10:14:42.578858 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fdfbb802-f191-4e92-ac52-de08502a8e80-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "fdfbb802-f191-4e92-ac52-de08502a8e80" (UID: "fdfbb802-f191-4e92-ac52-de08502a8e80"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 10:14:42 crc kubenswrapper[4698]: I1014 10:14:42.579858 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage09-crc" (OuterVolumeSpecName: "glance") pod "fdfbb802-f191-4e92-ac52-de08502a8e80" (UID: "fdfbb802-f191-4e92-ac52-de08502a8e80"). InnerVolumeSpecName "local-storage09-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Oct 14 10:14:42 crc kubenswrapper[4698]: I1014 10:14:42.589948 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fdfbb802-f191-4e92-ac52-de08502a8e80-scripts" (OuterVolumeSpecName: "scripts") pod "fdfbb802-f191-4e92-ac52-de08502a8e80" (UID: "fdfbb802-f191-4e92-ac52-de08502a8e80"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 10:14:42 crc kubenswrapper[4698]: I1014 10:14:42.591226 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fdfbb802-f191-4e92-ac52-de08502a8e80-kube-api-access-ghkzt" (OuterVolumeSpecName: "kube-api-access-ghkzt") pod "fdfbb802-f191-4e92-ac52-de08502a8e80" (UID: "fdfbb802-f191-4e92-ac52-de08502a8e80"). InnerVolumeSpecName "kube-api-access-ghkzt". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 10:14:42 crc kubenswrapper[4698]: I1014 10:14:42.599568 4698 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage01-crc" (UniqueName: "kubernetes.io/local-volume/local-storage01-crc") on node "crc" Oct 14 10:14:42 crc kubenswrapper[4698]: I1014 10:14:42.627978 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fdfbb802-f191-4e92-ac52-de08502a8e80-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "fdfbb802-f191-4e92-ac52-de08502a8e80" (UID: "fdfbb802-f191-4e92-ac52-de08502a8e80"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 10:14:42 crc kubenswrapper[4698]: I1014 10:14:42.646559 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fdfbb802-f191-4e92-ac52-de08502a8e80-config-data" (OuterVolumeSpecName: "config-data") pod "fdfbb802-f191-4e92-ac52-de08502a8e80" (UID: "fdfbb802-f191-4e92-ac52-de08502a8e80"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 10:14:42 crc kubenswrapper[4698]: I1014 10:14:42.651619 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fdfbb802-f191-4e92-ac52-de08502a8e80-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "fdfbb802-f191-4e92-ac52-de08502a8e80" (UID: "fdfbb802-f191-4e92-ac52-de08502a8e80"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 10:14:42 crc kubenswrapper[4698]: I1014 10:14:42.675320 4698 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/fdfbb802-f191-4e92-ac52-de08502a8e80-ceph\") on node \"crc\" DevicePath \"\"" Oct 14 10:14:42 crc kubenswrapper[4698]: I1014 10:14:42.675356 4698 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/fdfbb802-f191-4e92-ac52-de08502a8e80-public-tls-certs\") on node \"crc\" DevicePath \"\"" Oct 14 10:14:42 crc kubenswrapper[4698]: I1014 10:14:42.675368 4698 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/fdfbb802-f191-4e92-ac52-de08502a8e80-httpd-run\") on node \"crc\" DevicePath \"\"" Oct 14 10:14:42 crc kubenswrapper[4698]: I1014 10:14:42.675377 4698 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fdfbb802-f191-4e92-ac52-de08502a8e80-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 14 10:14:42 crc kubenswrapper[4698]: I1014 10:14:42.675389 4698 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fdfbb802-f191-4e92-ac52-de08502a8e80-scripts\") on node \"crc\" DevicePath \"\"" Oct 14 10:14:42 crc kubenswrapper[4698]: I1014 10:14:42.675398 4698 reconciler_common.go:293] "Volume detached for volume \"local-storage01-crc\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage01-crc\") on node \"crc\" DevicePath \"\"" Oct 14 10:14:42 crc kubenswrapper[4698]: I1014 10:14:42.675434 4698 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") on node \"crc\" " Oct 14 10:14:42 crc kubenswrapper[4698]: I1014 10:14:42.675445 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ghkzt\" (UniqueName: \"kubernetes.io/projected/fdfbb802-f191-4e92-ac52-de08502a8e80-kube-api-access-ghkzt\") on node \"crc\" DevicePath \"\"" Oct 14 10:14:42 crc kubenswrapper[4698]: I1014 10:14:42.675455 4698 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fdfbb802-f191-4e92-ac52-de08502a8e80-config-data\") on node \"crc\" DevicePath \"\"" Oct 14 10:14:42 crc kubenswrapper[4698]: I1014 10:14:42.700342 4698 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage09-crc" (UniqueName: "kubernetes.io/local-volume/local-storage09-crc") on node "crc" Oct 14 10:14:42 crc kubenswrapper[4698]: I1014 10:14:42.733216 4698 generic.go:334] "Generic (PLEG): container finished" podID="fdfbb802-f191-4e92-ac52-de08502a8e80" containerID="21622bcbfff0972645debe264df8e8319a21925e069d8100037cce0298888ba2" exitCode=0 Oct 14 10:14:42 crc kubenswrapper[4698]: I1014 10:14:42.733258 4698 generic.go:334] "Generic (PLEG): container finished" podID="fdfbb802-f191-4e92-ac52-de08502a8e80" containerID="802c966ea9003518ff0b700d3ed2eace9d37edc139d0e113d6f0cb3488a38068" exitCode=143 Oct 14 10:14:42 crc kubenswrapper[4698]: I1014 10:14:42.733326 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"fdfbb802-f191-4e92-ac52-de08502a8e80","Type":"ContainerDied","Data":"21622bcbfff0972645debe264df8e8319a21925e069d8100037cce0298888ba2"} Oct 14 10:14:42 crc kubenswrapper[4698]: I1014 10:14:42.733361 
4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"fdfbb802-f191-4e92-ac52-de08502a8e80","Type":"ContainerDied","Data":"802c966ea9003518ff0b700d3ed2eace9d37edc139d0e113d6f0cb3488a38068"} Oct 14 10:14:42 crc kubenswrapper[4698]: I1014 10:14:42.733371 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"fdfbb802-f191-4e92-ac52-de08502a8e80","Type":"ContainerDied","Data":"ce70c6d45b8b909887cca5b88c011657b145aa8cb1ebe5aa58ab2a65db61755c"} Oct 14 10:14:42 crc kubenswrapper[4698]: I1014 10:14:42.733402 4698 scope.go:117] "RemoveContainer" containerID="21622bcbfff0972645debe264df8e8319a21925e069d8100037cce0298888ba2" Oct 14 10:14:42 crc kubenswrapper[4698]: I1014 10:14:42.733413 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Oct 14 10:14:42 crc kubenswrapper[4698]: I1014 10:14:42.738457 4698 generic.go:334] "Generic (PLEG): container finished" podID="df1e3a4b-9dec-4b59-80a8-071b9dd62651" containerID="8a3fe7e2a56d3f651dbffe3945a6dad4881a777d15ff39ca945e69518dd1413f" exitCode=0 Oct 14 10:14:42 crc kubenswrapper[4698]: I1014 10:14:42.738490 4698 generic.go:334] "Generic (PLEG): container finished" podID="df1e3a4b-9dec-4b59-80a8-071b9dd62651" containerID="33bcea4e62e5687e7cdcc4ec2ce354b2de45eab1b9bd9e38951c8779fe0b630b" exitCode=143 Oct 14 10:14:42 crc kubenswrapper[4698]: I1014 10:14:42.738531 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"df1e3a4b-9dec-4b59-80a8-071b9dd62651","Type":"ContainerDied","Data":"8a3fe7e2a56d3f651dbffe3945a6dad4881a777d15ff39ca945e69518dd1413f"} Oct 14 10:14:42 crc kubenswrapper[4698]: I1014 10:14:42.738556 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" 
event={"ID":"df1e3a4b-9dec-4b59-80a8-071b9dd62651","Type":"ContainerDied","Data":"33bcea4e62e5687e7cdcc4ec2ce354b2de45eab1b9bd9e38951c8779fe0b630b"} Oct 14 10:14:42 crc kubenswrapper[4698]: I1014 10:14:42.738567 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"df1e3a4b-9dec-4b59-80a8-071b9dd62651","Type":"ContainerDied","Data":"6fd7fbdc25686461684050be990ac23c6ba734aef7ef4a8e8e8c71372471edcf"} Oct 14 10:14:42 crc kubenswrapper[4698]: I1014 10:14:42.738626 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Oct 14 10:14:42 crc kubenswrapper[4698]: I1014 10:14:42.742895 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9d9ba4f9-bf43-4076-ab3d-9ca34bcbed4e","Type":"ContainerStarted","Data":"1afee696e504730f0049af836091b4f76382bfbb64f7e1b9e1e6d76a7978bdc7"} Oct 14 10:14:42 crc kubenswrapper[4698]: I1014 10:14:42.771263 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Oct 14 10:14:42 crc kubenswrapper[4698]: I1014 10:14:42.778191 4698 reconciler_common.go:293] "Volume detached for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") on node \"crc\" DevicePath \"\"" Oct 14 10:14:42 crc kubenswrapper[4698]: I1014 10:14:42.791670 4698 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Oct 14 10:14:42 crc kubenswrapper[4698]: I1014 10:14:42.794602 4698 scope.go:117] "RemoveContainer" containerID="802c966ea9003518ff0b700d3ed2eace9d37edc139d0e113d6f0cb3488a38068" Oct 14 10:14:42 crc kubenswrapper[4698]: I1014 10:14:42.801593 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Oct 14 10:14:42 crc kubenswrapper[4698]: E1014 10:14:42.806670 4698 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="fdfbb802-f191-4e92-ac52-de08502a8e80" containerName="glance-log" Oct 14 10:14:42 crc kubenswrapper[4698]: I1014 10:14:42.806704 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="fdfbb802-f191-4e92-ac52-de08502a8e80" containerName="glance-log" Oct 14 10:14:42 crc kubenswrapper[4698]: E1014 10:14:42.806727 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fdfbb802-f191-4e92-ac52-de08502a8e80" containerName="glance-httpd" Oct 14 10:14:42 crc kubenswrapper[4698]: I1014 10:14:42.806734 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="fdfbb802-f191-4e92-ac52-de08502a8e80" containerName="glance-httpd" Oct 14 10:14:42 crc kubenswrapper[4698]: E1014 10:14:42.806756 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="df1e3a4b-9dec-4b59-80a8-071b9dd62651" containerName="glance-httpd" Oct 14 10:14:42 crc kubenswrapper[4698]: I1014 10:14:42.806785 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="df1e3a4b-9dec-4b59-80a8-071b9dd62651" containerName="glance-httpd" Oct 14 10:14:42 crc kubenswrapper[4698]: E1014 10:14:42.806798 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="414ba38b-6cfb-48ae-b818-6f8544558bf1" containerName="dnsmasq-dns" Oct 14 10:14:42 crc kubenswrapper[4698]: I1014 10:14:42.806803 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="414ba38b-6cfb-48ae-b818-6f8544558bf1" containerName="dnsmasq-dns" Oct 14 10:14:42 crc kubenswrapper[4698]: E1014 10:14:42.806817 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="df1e3a4b-9dec-4b59-80a8-071b9dd62651" containerName="glance-log" Oct 14 10:14:42 crc kubenswrapper[4698]: I1014 10:14:42.806825 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="df1e3a4b-9dec-4b59-80a8-071b9dd62651" containerName="glance-log" Oct 14 10:14:42 crc kubenswrapper[4698]: E1014 10:14:42.806837 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="414ba38b-6cfb-48ae-b818-6f8544558bf1" 
containerName="init" Oct 14 10:14:42 crc kubenswrapper[4698]: I1014 10:14:42.806843 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="414ba38b-6cfb-48ae-b818-6f8544558bf1" containerName="init" Oct 14 10:14:42 crc kubenswrapper[4698]: I1014 10:14:42.807903 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="df1e3a4b-9dec-4b59-80a8-071b9dd62651" containerName="glance-httpd" Oct 14 10:14:42 crc kubenswrapper[4698]: I1014 10:14:42.807927 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="df1e3a4b-9dec-4b59-80a8-071b9dd62651" containerName="glance-log" Oct 14 10:14:42 crc kubenswrapper[4698]: I1014 10:14:42.807937 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="414ba38b-6cfb-48ae-b818-6f8544558bf1" containerName="dnsmasq-dns" Oct 14 10:14:42 crc kubenswrapper[4698]: I1014 10:14:42.807948 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="fdfbb802-f191-4e92-ac52-de08502a8e80" containerName="glance-httpd" Oct 14 10:14:42 crc kubenswrapper[4698]: I1014 10:14:42.807959 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="fdfbb802-f191-4e92-ac52-de08502a8e80" containerName="glance-log" Oct 14 10:14:42 crc kubenswrapper[4698]: I1014 10:14:42.809539 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Oct 14 10:14:42 crc kubenswrapper[4698]: I1014 10:14:42.812561 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-x9n8v" Oct 14 10:14:42 crc kubenswrapper[4698]: I1014 10:14:42.813151 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Oct 14 10:14:42 crc kubenswrapper[4698]: I1014 10:14:42.813317 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Oct 14 10:14:42 crc kubenswrapper[4698]: I1014 10:14:42.813532 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Oct 14 10:14:42 crc kubenswrapper[4698]: I1014 10:14:42.813673 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Oct 14 10:14:42 crc kubenswrapper[4698]: I1014 10:14:42.834429 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Oct 14 10:14:42 crc kubenswrapper[4698]: I1014 10:14:42.841803 4698 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Oct 14 10:14:42 crc kubenswrapper[4698]: I1014 10:14:42.852177 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Oct 14 10:14:42 crc kubenswrapper[4698]: I1014 10:14:42.864835 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Oct 14 10:14:42 crc kubenswrapper[4698]: I1014 10:14:42.866482 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Oct 14 10:14:42 crc kubenswrapper[4698]: I1014 10:14:42.872354 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Oct 14 10:14:42 crc kubenswrapper[4698]: I1014 10:14:42.874055 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Oct 14 10:14:42 crc kubenswrapper[4698]: I1014 10:14:42.893873 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Oct 14 10:14:42 crc kubenswrapper[4698]: I1014 10:14:42.895011 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/339c0475-be6d-48a1-af88-8c3f55eaf50a-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"339c0475-be6d-48a1-af88-8c3f55eaf50a\") " pod="openstack/glance-default-external-api-0" Oct 14 10:14:42 crc kubenswrapper[4698]: I1014 10:14:42.895061 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/339c0475-be6d-48a1-af88-8c3f55eaf50a-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"339c0475-be6d-48a1-af88-8c3f55eaf50a\") " pod="openstack/glance-default-external-api-0" Oct 14 10:14:42 crc kubenswrapper[4698]: I1014 10:14:42.895141 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z2m55\" (UniqueName: \"kubernetes.io/projected/339c0475-be6d-48a1-af88-8c3f55eaf50a-kube-api-access-z2m55\") pod \"glance-default-external-api-0\" (UID: \"339c0475-be6d-48a1-af88-8c3f55eaf50a\") " pod="openstack/glance-default-external-api-0" Oct 14 10:14:42 crc kubenswrapper[4698]: I1014 10:14:42.895162 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"config-data\" (UniqueName: \"kubernetes.io/secret/339c0475-be6d-48a1-af88-8c3f55eaf50a-config-data\") pod \"glance-default-external-api-0\" (UID: \"339c0475-be6d-48a1-af88-8c3f55eaf50a\") " pod="openstack/glance-default-external-api-0" Oct 14 10:14:42 crc kubenswrapper[4698]: I1014 10:14:42.895182 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/339c0475-be6d-48a1-af88-8c3f55eaf50a-ceph\") pod \"glance-default-external-api-0\" (UID: \"339c0475-be6d-48a1-af88-8c3f55eaf50a\") " pod="openstack/glance-default-external-api-0" Oct 14 10:14:42 crc kubenswrapper[4698]: I1014 10:14:42.895202 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/339c0475-be6d-48a1-af88-8c3f55eaf50a-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"339c0475-be6d-48a1-af88-8c3f55eaf50a\") " pod="openstack/glance-default-external-api-0" Oct 14 10:14:42 crc kubenswrapper[4698]: I1014 10:14:42.895247 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-external-api-0\" (UID: \"339c0475-be6d-48a1-af88-8c3f55eaf50a\") " pod="openstack/glance-default-external-api-0" Oct 14 10:14:42 crc kubenswrapper[4698]: I1014 10:14:42.895263 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/339c0475-be6d-48a1-af88-8c3f55eaf50a-logs\") pod \"glance-default-external-api-0\" (UID: \"339c0475-be6d-48a1-af88-8c3f55eaf50a\") " pod="openstack/glance-default-external-api-0" Oct 14 10:14:42 crc kubenswrapper[4698]: I1014 10:14:42.895282 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/339c0475-be6d-48a1-af88-8c3f55eaf50a-scripts\") pod \"glance-default-external-api-0\" (UID: \"339c0475-be6d-48a1-af88-8c3f55eaf50a\") " pod="openstack/glance-default-external-api-0" Oct 14 10:14:42 crc kubenswrapper[4698]: I1014 10:14:42.948215 4698 scope.go:117] "RemoveContainer" containerID="21622bcbfff0972645debe264df8e8319a21925e069d8100037cce0298888ba2" Oct 14 10:14:42 crc kubenswrapper[4698]: E1014 10:14:42.949571 4698 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"21622bcbfff0972645debe264df8e8319a21925e069d8100037cce0298888ba2\": container with ID starting with 21622bcbfff0972645debe264df8e8319a21925e069d8100037cce0298888ba2 not found: ID does not exist" containerID="21622bcbfff0972645debe264df8e8319a21925e069d8100037cce0298888ba2" Oct 14 10:14:42 crc kubenswrapper[4698]: I1014 10:14:42.949615 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"21622bcbfff0972645debe264df8e8319a21925e069d8100037cce0298888ba2"} err="failed to get container status \"21622bcbfff0972645debe264df8e8319a21925e069d8100037cce0298888ba2\": rpc error: code = NotFound desc = could not find container \"21622bcbfff0972645debe264df8e8319a21925e069d8100037cce0298888ba2\": container with ID starting with 21622bcbfff0972645debe264df8e8319a21925e069d8100037cce0298888ba2 not found: ID does not exist" Oct 14 10:14:42 crc kubenswrapper[4698]: I1014 10:14:42.949650 4698 scope.go:117] "RemoveContainer" containerID="802c966ea9003518ff0b700d3ed2eace9d37edc139d0e113d6f0cb3488a38068" Oct 14 10:14:42 crc kubenswrapper[4698]: E1014 10:14:42.952040 4698 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"802c966ea9003518ff0b700d3ed2eace9d37edc139d0e113d6f0cb3488a38068\": container with ID starting with 802c966ea9003518ff0b700d3ed2eace9d37edc139d0e113d6f0cb3488a38068 not found: ID does not 
exist" containerID="802c966ea9003518ff0b700d3ed2eace9d37edc139d0e113d6f0cb3488a38068" Oct 14 10:14:42 crc kubenswrapper[4698]: I1014 10:14:42.952078 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"802c966ea9003518ff0b700d3ed2eace9d37edc139d0e113d6f0cb3488a38068"} err="failed to get container status \"802c966ea9003518ff0b700d3ed2eace9d37edc139d0e113d6f0cb3488a38068\": rpc error: code = NotFound desc = could not find container \"802c966ea9003518ff0b700d3ed2eace9d37edc139d0e113d6f0cb3488a38068\": container with ID starting with 802c966ea9003518ff0b700d3ed2eace9d37edc139d0e113d6f0cb3488a38068 not found: ID does not exist" Oct 14 10:14:42 crc kubenswrapper[4698]: I1014 10:14:42.952099 4698 scope.go:117] "RemoveContainer" containerID="21622bcbfff0972645debe264df8e8319a21925e069d8100037cce0298888ba2" Oct 14 10:14:42 crc kubenswrapper[4698]: I1014 10:14:42.952822 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"21622bcbfff0972645debe264df8e8319a21925e069d8100037cce0298888ba2"} err="failed to get container status \"21622bcbfff0972645debe264df8e8319a21925e069d8100037cce0298888ba2\": rpc error: code = NotFound desc = could not find container \"21622bcbfff0972645debe264df8e8319a21925e069d8100037cce0298888ba2\": container with ID starting with 21622bcbfff0972645debe264df8e8319a21925e069d8100037cce0298888ba2 not found: ID does not exist" Oct 14 10:14:42 crc kubenswrapper[4698]: I1014 10:14:42.952858 4698 scope.go:117] "RemoveContainer" containerID="802c966ea9003518ff0b700d3ed2eace9d37edc139d0e113d6f0cb3488a38068" Oct 14 10:14:42 crc kubenswrapper[4698]: I1014 10:14:42.953887 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"802c966ea9003518ff0b700d3ed2eace9d37edc139d0e113d6f0cb3488a38068"} err="failed to get container status \"802c966ea9003518ff0b700d3ed2eace9d37edc139d0e113d6f0cb3488a38068\": rpc error: code = NotFound desc 
= could not find container \"802c966ea9003518ff0b700d3ed2eace9d37edc139d0e113d6f0cb3488a38068\": container with ID starting with 802c966ea9003518ff0b700d3ed2eace9d37edc139d0e113d6f0cb3488a38068 not found: ID does not exist" Oct 14 10:14:42 crc kubenswrapper[4698]: I1014 10:14:42.953928 4698 scope.go:117] "RemoveContainer" containerID="8a3fe7e2a56d3f651dbffe3945a6dad4881a777d15ff39ca945e69518dd1413f" Oct 14 10:14:42 crc kubenswrapper[4698]: I1014 10:14:42.985465 4698 scope.go:117] "RemoveContainer" containerID="33bcea4e62e5687e7cdcc4ec2ce354b2de45eab1b9bd9e38951c8779fe0b630b" Oct 14 10:14:42 crc kubenswrapper[4698]: I1014 10:14:42.997814 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/339c0475-be6d-48a1-af88-8c3f55eaf50a-ceph\") pod \"glance-default-external-api-0\" (UID: \"339c0475-be6d-48a1-af88-8c3f55eaf50a\") " pod="openstack/glance-default-external-api-0" Oct 14 10:14:42 crc kubenswrapper[4698]: I1014 10:14:42.997865 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5ac7519d-b7d5-428c-9b04-b507987f26b0-config-data\") pod \"glance-default-internal-api-0\" (UID: \"5ac7519d-b7d5-428c-9b04-b507987f26b0\") " pod="openstack/glance-default-internal-api-0" Oct 14 10:14:42 crc kubenswrapper[4698]: I1014 10:14:42.997904 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/339c0475-be6d-48a1-af88-8c3f55eaf50a-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"339c0475-be6d-48a1-af88-8c3f55eaf50a\") " pod="openstack/glance-default-external-api-0" Oct 14 10:14:42 crc kubenswrapper[4698]: I1014 10:14:42.997943 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5ac7519d-b7d5-428c-9b04-b507987f26b0-scripts\") pod 
\"glance-default-internal-api-0\" (UID: \"5ac7519d-b7d5-428c-9b04-b507987f26b0\") " pod="openstack/glance-default-internal-api-0" Oct 14 10:14:42 crc kubenswrapper[4698]: I1014 10:14:42.997983 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-internal-api-0\" (UID: \"5ac7519d-b7d5-428c-9b04-b507987f26b0\") " pod="openstack/glance-default-internal-api-0" Oct 14 10:14:42 crc kubenswrapper[4698]: I1014 10:14:42.998031 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-external-api-0\" (UID: \"339c0475-be6d-48a1-af88-8c3f55eaf50a\") " pod="openstack/glance-default-external-api-0" Oct 14 10:14:42 crc kubenswrapper[4698]: I1014 10:14:42.998057 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/339c0475-be6d-48a1-af88-8c3f55eaf50a-logs\") pod \"glance-default-external-api-0\" (UID: \"339c0475-be6d-48a1-af88-8c3f55eaf50a\") " pod="openstack/glance-default-external-api-0" Oct 14 10:14:42 crc kubenswrapper[4698]: I1014 10:14:42.998079 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/339c0475-be6d-48a1-af88-8c3f55eaf50a-scripts\") pod \"glance-default-external-api-0\" (UID: \"339c0475-be6d-48a1-af88-8c3f55eaf50a\") " pod="openstack/glance-default-external-api-0" Oct 14 10:14:42 crc kubenswrapper[4698]: I1014 10:14:42.998113 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/5ac7519d-b7d5-428c-9b04-b507987f26b0-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"5ac7519d-b7d5-428c-9b04-b507987f26b0\") " 
pod="openstack/glance-default-internal-api-0" Oct 14 10:14:42 crc kubenswrapper[4698]: I1014 10:14:42.998202 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/339c0475-be6d-48a1-af88-8c3f55eaf50a-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"339c0475-be6d-48a1-af88-8c3f55eaf50a\") " pod="openstack/glance-default-external-api-0" Oct 14 10:14:42 crc kubenswrapper[4698]: I1014 10:14:42.998234 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/339c0475-be6d-48a1-af88-8c3f55eaf50a-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"339c0475-be6d-48a1-af88-8c3f55eaf50a\") " pod="openstack/glance-default-external-api-0" Oct 14 10:14:42 crc kubenswrapper[4698]: I1014 10:14:42.998260 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7frzb\" (UniqueName: \"kubernetes.io/projected/5ac7519d-b7d5-428c-9b04-b507987f26b0-kube-api-access-7frzb\") pod \"glance-default-internal-api-0\" (UID: \"5ac7519d-b7d5-428c-9b04-b507987f26b0\") " pod="openstack/glance-default-internal-api-0" Oct 14 10:14:42 crc kubenswrapper[4698]: I1014 10:14:42.998279 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5ac7519d-b7d5-428c-9b04-b507987f26b0-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"5ac7519d-b7d5-428c-9b04-b507987f26b0\") " pod="openstack/glance-default-internal-api-0" Oct 14 10:14:42 crc kubenswrapper[4698]: I1014 10:14:42.998315 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/5ac7519d-b7d5-428c-9b04-b507987f26b0-ceph\") pod \"glance-default-internal-api-0\" (UID: 
\"5ac7519d-b7d5-428c-9b04-b507987f26b0\") " pod="openstack/glance-default-internal-api-0" Oct 14 10:14:42 crc kubenswrapper[4698]: I1014 10:14:42.998359 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5ac7519d-b7d5-428c-9b04-b507987f26b0-logs\") pod \"glance-default-internal-api-0\" (UID: \"5ac7519d-b7d5-428c-9b04-b507987f26b0\") " pod="openstack/glance-default-internal-api-0" Oct 14 10:14:42 crc kubenswrapper[4698]: I1014 10:14:42.998383 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5ac7519d-b7d5-428c-9b04-b507987f26b0-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"5ac7519d-b7d5-428c-9b04-b507987f26b0\") " pod="openstack/glance-default-internal-api-0" Oct 14 10:14:42 crc kubenswrapper[4698]: I1014 10:14:42.998406 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z2m55\" (UniqueName: \"kubernetes.io/projected/339c0475-be6d-48a1-af88-8c3f55eaf50a-kube-api-access-z2m55\") pod \"glance-default-external-api-0\" (UID: \"339c0475-be6d-48a1-af88-8c3f55eaf50a\") " pod="openstack/glance-default-external-api-0" Oct 14 10:14:42 crc kubenswrapper[4698]: I1014 10:14:42.998431 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/339c0475-be6d-48a1-af88-8c3f55eaf50a-config-data\") pod \"glance-default-external-api-0\" (UID: \"339c0475-be6d-48a1-af88-8c3f55eaf50a\") " pod="openstack/glance-default-external-api-0" Oct 14 10:14:42 crc kubenswrapper[4698]: I1014 10:14:42.999445 4698 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-external-api-0\" (UID: \"339c0475-be6d-48a1-af88-8c3f55eaf50a\") 
device mount path \"/mnt/openstack/pv09\"" pod="openstack/glance-default-external-api-0" Oct 14 10:14:43 crc kubenswrapper[4698]: I1014 10:14:43.001903 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/339c0475-be6d-48a1-af88-8c3f55eaf50a-logs\") pod \"glance-default-external-api-0\" (UID: \"339c0475-be6d-48a1-af88-8c3f55eaf50a\") " pod="openstack/glance-default-external-api-0" Oct 14 10:14:43 crc kubenswrapper[4698]: I1014 10:14:43.002156 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/339c0475-be6d-48a1-af88-8c3f55eaf50a-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"339c0475-be6d-48a1-af88-8c3f55eaf50a\") " pod="openstack/glance-default-external-api-0" Oct 14 10:14:43 crc kubenswrapper[4698]: I1014 10:14:43.004694 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/339c0475-be6d-48a1-af88-8c3f55eaf50a-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"339c0475-be6d-48a1-af88-8c3f55eaf50a\") " pod="openstack/glance-default-external-api-0" Oct 14 10:14:43 crc kubenswrapper[4698]: I1014 10:14:43.013179 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/339c0475-be6d-48a1-af88-8c3f55eaf50a-config-data\") pod \"glance-default-external-api-0\" (UID: \"339c0475-be6d-48a1-af88-8c3f55eaf50a\") " pod="openstack/glance-default-external-api-0" Oct 14 10:14:43 crc kubenswrapper[4698]: I1014 10:14:43.031055 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/339c0475-be6d-48a1-af88-8c3f55eaf50a-ceph\") pod \"glance-default-external-api-0\" (UID: \"339c0475-be6d-48a1-af88-8c3f55eaf50a\") " pod="openstack/glance-default-external-api-0" Oct 14 10:14:43 crc kubenswrapper[4698]: I1014 
10:14:43.035446 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/339c0475-be6d-48a1-af88-8c3f55eaf50a-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"339c0475-be6d-48a1-af88-8c3f55eaf50a\") " pod="openstack/glance-default-external-api-0" Oct 14 10:14:43 crc kubenswrapper[4698]: I1014 10:14:43.050250 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z2m55\" (UniqueName: \"kubernetes.io/projected/339c0475-be6d-48a1-af88-8c3f55eaf50a-kube-api-access-z2m55\") pod \"glance-default-external-api-0\" (UID: \"339c0475-be6d-48a1-af88-8c3f55eaf50a\") " pod="openstack/glance-default-external-api-0" Oct 14 10:14:43 crc kubenswrapper[4698]: I1014 10:14:43.063484 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/339c0475-be6d-48a1-af88-8c3f55eaf50a-scripts\") pod \"glance-default-external-api-0\" (UID: \"339c0475-be6d-48a1-af88-8c3f55eaf50a\") " pod="openstack/glance-default-external-api-0" Oct 14 10:14:43 crc kubenswrapper[4698]: I1014 10:14:43.064516 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="df1e3a4b-9dec-4b59-80a8-071b9dd62651" path="/var/lib/kubelet/pods/df1e3a4b-9dec-4b59-80a8-071b9dd62651/volumes" Oct 14 10:14:43 crc kubenswrapper[4698]: I1014 10:14:43.065431 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fdfbb802-f191-4e92-ac52-de08502a8e80" path="/var/lib/kubelet/pods/fdfbb802-f191-4e92-ac52-de08502a8e80/volumes" Oct 14 10:14:43 crc kubenswrapper[4698]: I1014 10:14:43.073304 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-external-api-0\" (UID: \"339c0475-be6d-48a1-af88-8c3f55eaf50a\") " pod="openstack/glance-default-external-api-0" Oct 14 10:14:43 crc 
kubenswrapper[4698]: I1014 10:14:43.114117 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5ac7519d-b7d5-428c-9b04-b507987f26b0-logs\") pod \"glance-default-internal-api-0\" (UID: \"5ac7519d-b7d5-428c-9b04-b507987f26b0\") " pod="openstack/glance-default-internal-api-0" Oct 14 10:14:43 crc kubenswrapper[4698]: I1014 10:14:43.114164 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5ac7519d-b7d5-428c-9b04-b507987f26b0-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"5ac7519d-b7d5-428c-9b04-b507987f26b0\") " pod="openstack/glance-default-internal-api-0" Oct 14 10:14:43 crc kubenswrapper[4698]: I1014 10:14:43.114208 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5ac7519d-b7d5-428c-9b04-b507987f26b0-config-data\") pod \"glance-default-internal-api-0\" (UID: \"5ac7519d-b7d5-428c-9b04-b507987f26b0\") " pod="openstack/glance-default-internal-api-0" Oct 14 10:14:43 crc kubenswrapper[4698]: I1014 10:14:43.114248 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5ac7519d-b7d5-428c-9b04-b507987f26b0-scripts\") pod \"glance-default-internal-api-0\" (UID: \"5ac7519d-b7d5-428c-9b04-b507987f26b0\") " pod="openstack/glance-default-internal-api-0" Oct 14 10:14:43 crc kubenswrapper[4698]: I1014 10:14:43.114275 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-internal-api-0\" (UID: \"5ac7519d-b7d5-428c-9b04-b507987f26b0\") " pod="openstack/glance-default-internal-api-0" Oct 14 10:14:43 crc kubenswrapper[4698]: I1014 10:14:43.114331 4698 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/5ac7519d-b7d5-428c-9b04-b507987f26b0-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"5ac7519d-b7d5-428c-9b04-b507987f26b0\") " pod="openstack/glance-default-internal-api-0" Oct 14 10:14:43 crc kubenswrapper[4698]: I1014 10:14:43.114382 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7frzb\" (UniqueName: \"kubernetes.io/projected/5ac7519d-b7d5-428c-9b04-b507987f26b0-kube-api-access-7frzb\") pod \"glance-default-internal-api-0\" (UID: \"5ac7519d-b7d5-428c-9b04-b507987f26b0\") " pod="openstack/glance-default-internal-api-0" Oct 14 10:14:43 crc kubenswrapper[4698]: I1014 10:14:43.114400 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5ac7519d-b7d5-428c-9b04-b507987f26b0-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"5ac7519d-b7d5-428c-9b04-b507987f26b0\") " pod="openstack/glance-default-internal-api-0" Oct 14 10:14:43 crc kubenswrapper[4698]: I1014 10:14:43.114429 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/5ac7519d-b7d5-428c-9b04-b507987f26b0-ceph\") pod \"glance-default-internal-api-0\" (UID: \"5ac7519d-b7d5-428c-9b04-b507987f26b0\") " pod="openstack/glance-default-internal-api-0" Oct 14 10:14:43 crc kubenswrapper[4698]: I1014 10:14:43.114804 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5ac7519d-b7d5-428c-9b04-b507987f26b0-logs\") pod \"glance-default-internal-api-0\" (UID: \"5ac7519d-b7d5-428c-9b04-b507987f26b0\") " pod="openstack/glance-default-internal-api-0" Oct 14 10:14:43 crc kubenswrapper[4698]: I1014 10:14:43.115244 4698 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage01-crc\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-internal-api-0\" (UID: \"5ac7519d-b7d5-428c-9b04-b507987f26b0\") device mount path \"/mnt/openstack/pv01\"" pod="openstack/glance-default-internal-api-0" Oct 14 10:14:43 crc kubenswrapper[4698]: I1014 10:14:43.116497 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/5ac7519d-b7d5-428c-9b04-b507987f26b0-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"5ac7519d-b7d5-428c-9b04-b507987f26b0\") " pod="openstack/glance-default-internal-api-0" Oct 14 10:14:43 crc kubenswrapper[4698]: I1014 10:14:43.130168 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/5ac7519d-b7d5-428c-9b04-b507987f26b0-ceph\") pod \"glance-default-internal-api-0\" (UID: \"5ac7519d-b7d5-428c-9b04-b507987f26b0\") " pod="openstack/glance-default-internal-api-0" Oct 14 10:14:43 crc kubenswrapper[4698]: I1014 10:14:43.134048 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5ac7519d-b7d5-428c-9b04-b507987f26b0-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"5ac7519d-b7d5-428c-9b04-b507987f26b0\") " pod="openstack/glance-default-internal-api-0" Oct 14 10:14:43 crc kubenswrapper[4698]: I1014 10:14:43.136726 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5ac7519d-b7d5-428c-9b04-b507987f26b0-config-data\") pod \"glance-default-internal-api-0\" (UID: \"5ac7519d-b7d5-428c-9b04-b507987f26b0\") " pod="openstack/glance-default-internal-api-0" Oct 14 10:14:43 crc kubenswrapper[4698]: I1014 10:14:43.137186 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5ac7519d-b7d5-428c-9b04-b507987f26b0-scripts\") pod \"glance-default-internal-api-0\" 
(UID: \"5ac7519d-b7d5-428c-9b04-b507987f26b0\") " pod="openstack/glance-default-internal-api-0" Oct 14 10:14:43 crc kubenswrapper[4698]: I1014 10:14:43.137712 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5ac7519d-b7d5-428c-9b04-b507987f26b0-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"5ac7519d-b7d5-428c-9b04-b507987f26b0\") " pod="openstack/glance-default-internal-api-0" Oct 14 10:14:43 crc kubenswrapper[4698]: I1014 10:14:43.160565 4698 scope.go:117] "RemoveContainer" containerID="8a3fe7e2a56d3f651dbffe3945a6dad4881a777d15ff39ca945e69518dd1413f" Oct 14 10:14:43 crc kubenswrapper[4698]: I1014 10:14:43.166890 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7frzb\" (UniqueName: \"kubernetes.io/projected/5ac7519d-b7d5-428c-9b04-b507987f26b0-kube-api-access-7frzb\") pod \"glance-default-internal-api-0\" (UID: \"5ac7519d-b7d5-428c-9b04-b507987f26b0\") " pod="openstack/glance-default-internal-api-0" Oct 14 10:14:43 crc kubenswrapper[4698]: E1014 10:14:43.170602 4698 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8a3fe7e2a56d3f651dbffe3945a6dad4881a777d15ff39ca945e69518dd1413f\": container with ID starting with 8a3fe7e2a56d3f651dbffe3945a6dad4881a777d15ff39ca945e69518dd1413f not found: ID does not exist" containerID="8a3fe7e2a56d3f651dbffe3945a6dad4881a777d15ff39ca945e69518dd1413f" Oct 14 10:14:43 crc kubenswrapper[4698]: I1014 10:14:43.170656 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8a3fe7e2a56d3f651dbffe3945a6dad4881a777d15ff39ca945e69518dd1413f"} err="failed to get container status \"8a3fe7e2a56d3f651dbffe3945a6dad4881a777d15ff39ca945e69518dd1413f\": rpc error: code = NotFound desc = could not find container \"8a3fe7e2a56d3f651dbffe3945a6dad4881a777d15ff39ca945e69518dd1413f\": 
container with ID starting with 8a3fe7e2a56d3f651dbffe3945a6dad4881a777d15ff39ca945e69518dd1413f not found: ID does not exist" Oct 14 10:14:43 crc kubenswrapper[4698]: I1014 10:14:43.170685 4698 scope.go:117] "RemoveContainer" containerID="33bcea4e62e5687e7cdcc4ec2ce354b2de45eab1b9bd9e38951c8779fe0b630b" Oct 14 10:14:43 crc kubenswrapper[4698]: E1014 10:14:43.171479 4698 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"33bcea4e62e5687e7cdcc4ec2ce354b2de45eab1b9bd9e38951c8779fe0b630b\": container with ID starting with 33bcea4e62e5687e7cdcc4ec2ce354b2de45eab1b9bd9e38951c8779fe0b630b not found: ID does not exist" containerID="33bcea4e62e5687e7cdcc4ec2ce354b2de45eab1b9bd9e38951c8779fe0b630b" Oct 14 10:14:43 crc kubenswrapper[4698]: I1014 10:14:43.171509 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"33bcea4e62e5687e7cdcc4ec2ce354b2de45eab1b9bd9e38951c8779fe0b630b"} err="failed to get container status \"33bcea4e62e5687e7cdcc4ec2ce354b2de45eab1b9bd9e38951c8779fe0b630b\": rpc error: code = NotFound desc = could not find container \"33bcea4e62e5687e7cdcc4ec2ce354b2de45eab1b9bd9e38951c8779fe0b630b\": container with ID starting with 33bcea4e62e5687e7cdcc4ec2ce354b2de45eab1b9bd9e38951c8779fe0b630b not found: ID does not exist" Oct 14 10:14:43 crc kubenswrapper[4698]: I1014 10:14:43.171526 4698 scope.go:117] "RemoveContainer" containerID="8a3fe7e2a56d3f651dbffe3945a6dad4881a777d15ff39ca945e69518dd1413f" Oct 14 10:14:43 crc kubenswrapper[4698]: I1014 10:14:43.179148 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8a3fe7e2a56d3f651dbffe3945a6dad4881a777d15ff39ca945e69518dd1413f"} err="failed to get container status \"8a3fe7e2a56d3f651dbffe3945a6dad4881a777d15ff39ca945e69518dd1413f\": rpc error: code = NotFound desc = could not find container 
\"8a3fe7e2a56d3f651dbffe3945a6dad4881a777d15ff39ca945e69518dd1413f\": container with ID starting with 8a3fe7e2a56d3f651dbffe3945a6dad4881a777d15ff39ca945e69518dd1413f not found: ID does not exist" Oct 14 10:14:43 crc kubenswrapper[4698]: I1014 10:14:43.179228 4698 scope.go:117] "RemoveContainer" containerID="33bcea4e62e5687e7cdcc4ec2ce354b2de45eab1b9bd9e38951c8779fe0b630b" Oct 14 10:14:43 crc kubenswrapper[4698]: I1014 10:14:43.183049 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"33bcea4e62e5687e7cdcc4ec2ce354b2de45eab1b9bd9e38951c8779fe0b630b"} err="failed to get container status \"33bcea4e62e5687e7cdcc4ec2ce354b2de45eab1b9bd9e38951c8779fe0b630b\": rpc error: code = NotFound desc = could not find container \"33bcea4e62e5687e7cdcc4ec2ce354b2de45eab1b9bd9e38951c8779fe0b630b\": container with ID starting with 33bcea4e62e5687e7cdcc4ec2ce354b2de45eab1b9bd9e38951c8779fe0b630b not found: ID does not exist" Oct 14 10:14:43 crc kubenswrapper[4698]: I1014 10:14:43.195001 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-internal-api-0\" (UID: \"5ac7519d-b7d5-428c-9b04-b507987f26b0\") " pod="openstack/glance-default-internal-api-0" Oct 14 10:14:43 crc kubenswrapper[4698]: I1014 10:14:43.219915 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-2qdqz" Oct 14 10:14:43 crc kubenswrapper[4698]: I1014 10:14:43.253475 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Oct 14 10:14:43 crc kubenswrapper[4698]: I1014 10:14:43.258086 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Oct 14 10:14:43 crc kubenswrapper[4698]: I1014 10:14:43.317961 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/19d4fb38-f09b-4383-adfc-12bb06107bfb-scripts\") pod \"19d4fb38-f09b-4383-adfc-12bb06107bfb\" (UID: \"19d4fb38-f09b-4383-adfc-12bb06107bfb\") " Oct 14 10:14:43 crc kubenswrapper[4698]: I1014 10:14:43.318143 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/19d4fb38-f09b-4383-adfc-12bb06107bfb-config-data\") pod \"19d4fb38-f09b-4383-adfc-12bb06107bfb\" (UID: \"19d4fb38-f09b-4383-adfc-12bb06107bfb\") " Oct 14 10:14:43 crc kubenswrapper[4698]: I1014 10:14:43.318198 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9jgp\" (UniqueName: \"kubernetes.io/projected/19d4fb38-f09b-4383-adfc-12bb06107bfb-kube-api-access-w9jgp\") pod \"19d4fb38-f09b-4383-adfc-12bb06107bfb\" (UID: \"19d4fb38-f09b-4383-adfc-12bb06107bfb\") " Oct 14 10:14:43 crc kubenswrapper[4698]: I1014 10:14:43.318264 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/19d4fb38-f09b-4383-adfc-12bb06107bfb-combined-ca-bundle\") pod \"19d4fb38-f09b-4383-adfc-12bb06107bfb\" (UID: \"19d4fb38-f09b-4383-adfc-12bb06107bfb\") " Oct 14 10:14:43 crc kubenswrapper[4698]: I1014 10:14:43.318327 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/19d4fb38-f09b-4383-adfc-12bb06107bfb-logs\") pod \"19d4fb38-f09b-4383-adfc-12bb06107bfb\" (UID: \"19d4fb38-f09b-4383-adfc-12bb06107bfb\") " Oct 14 10:14:43 crc kubenswrapper[4698]: I1014 10:14:43.319130 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/19d4fb38-f09b-4383-adfc-12bb06107bfb-logs" (OuterVolumeSpecName: "logs") pod "19d4fb38-f09b-4383-adfc-12bb06107bfb" (UID: "19d4fb38-f09b-4383-adfc-12bb06107bfb"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 10:14:43 crc kubenswrapper[4698]: I1014 10:14:43.322455 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/19d4fb38-f09b-4383-adfc-12bb06107bfb-scripts" (OuterVolumeSpecName: "scripts") pod "19d4fb38-f09b-4383-adfc-12bb06107bfb" (UID: "19d4fb38-f09b-4383-adfc-12bb06107bfb"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 10:14:43 crc kubenswrapper[4698]: I1014 10:14:43.323269 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/19d4fb38-f09b-4383-adfc-12bb06107bfb-kube-api-access-w9jgp" (OuterVolumeSpecName: "kube-api-access-w9jgp") pod "19d4fb38-f09b-4383-adfc-12bb06107bfb" (UID: "19d4fb38-f09b-4383-adfc-12bb06107bfb"). InnerVolumeSpecName "kube-api-access-w9jgp". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 10:14:43 crc kubenswrapper[4698]: I1014 10:14:43.349683 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/19d4fb38-f09b-4383-adfc-12bb06107bfb-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "19d4fb38-f09b-4383-adfc-12bb06107bfb" (UID: "19d4fb38-f09b-4383-adfc-12bb06107bfb"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 10:14:43 crc kubenswrapper[4698]: I1014 10:14:43.378732 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/19d4fb38-f09b-4383-adfc-12bb06107bfb-config-data" (OuterVolumeSpecName: "config-data") pod "19d4fb38-f09b-4383-adfc-12bb06107bfb" (UID: "19d4fb38-f09b-4383-adfc-12bb06107bfb"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 10:14:43 crc kubenswrapper[4698]: I1014 10:14:43.426974 4698 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/19d4fb38-f09b-4383-adfc-12bb06107bfb-config-data\") on node \"crc\" DevicePath \"\"" Oct 14 10:14:43 crc kubenswrapper[4698]: I1014 10:14:43.427008 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w9jgp\" (UniqueName: \"kubernetes.io/projected/19d4fb38-f09b-4383-adfc-12bb06107bfb-kube-api-access-w9jgp\") on node \"crc\" DevicePath \"\"" Oct 14 10:14:43 crc kubenswrapper[4698]: I1014 10:14:43.427018 4698 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/19d4fb38-f09b-4383-adfc-12bb06107bfb-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 14 10:14:43 crc kubenswrapper[4698]: I1014 10:14:43.427027 4698 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/19d4fb38-f09b-4383-adfc-12bb06107bfb-logs\") on node \"crc\" DevicePath \"\"" Oct 14 10:14:43 crc kubenswrapper[4698]: I1014 10:14:43.427034 4698 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/19d4fb38-f09b-4383-adfc-12bb06107bfb-scripts\") on node \"crc\" DevicePath \"\"" Oct 14 10:14:43 crc kubenswrapper[4698]: I1014 10:14:43.757172 4698 generic.go:334] "Generic (PLEG): container finished" podID="30712ba4-9217-4276-b576-798bfd319b45" containerID="b6109c6401d02c7b196afcba8f862f1ca5650abd7cccb405945e2a4e1737d4d0" exitCode=0 Oct 14 10:14:43 crc kubenswrapper[4698]: I1014 10:14:43.757244 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-xjff6" event={"ID":"30712ba4-9217-4276-b576-798bfd319b45","Type":"ContainerDied","Data":"b6109c6401d02c7b196afcba8f862f1ca5650abd7cccb405945e2a4e1737d4d0"} Oct 14 10:14:43 crc kubenswrapper[4698]: I1014 10:14:43.772866 4698 
util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-2qdqz" Oct 14 10:14:43 crc kubenswrapper[4698]: I1014 10:14:43.772987 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-2qdqz" event={"ID":"19d4fb38-f09b-4383-adfc-12bb06107bfb","Type":"ContainerDied","Data":"9b2fd36f3a5c712dac7eb00e41120b0e29f8e750ed08898aeef71d2b3fe2296d"} Oct 14 10:14:43 crc kubenswrapper[4698]: I1014 10:14:43.773035 4698 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9b2fd36f3a5c712dac7eb00e41120b0e29f8e750ed08898aeef71d2b3fe2296d" Oct 14 10:14:43 crc kubenswrapper[4698]: I1014 10:14:43.906156 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-5dd765df5b-xsd5h"] Oct 14 10:14:43 crc kubenswrapper[4698]: E1014 10:14:43.907082 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="19d4fb38-f09b-4383-adfc-12bb06107bfb" containerName="placement-db-sync" Oct 14 10:14:43 crc kubenswrapper[4698]: I1014 10:14:43.907107 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="19d4fb38-f09b-4383-adfc-12bb06107bfb" containerName="placement-db-sync" Oct 14 10:14:43 crc kubenswrapper[4698]: I1014 10:14:43.907346 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="19d4fb38-f09b-4383-adfc-12bb06107bfb" containerName="placement-db-sync" Oct 14 10:14:43 crc kubenswrapper[4698]: I1014 10:14:43.908554 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-5dd765df5b-xsd5h" Oct 14 10:14:43 crc kubenswrapper[4698]: I1014 10:14:43.916224 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-nfpmv" Oct 14 10:14:43 crc kubenswrapper[4698]: I1014 10:14:43.916269 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-public-svc" Oct 14 10:14:43 crc kubenswrapper[4698]: I1014 10:14:43.916400 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Oct 14 10:14:43 crc kubenswrapper[4698]: I1014 10:14:43.916601 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Oct 14 10:14:43 crc kubenswrapper[4698]: I1014 10:14:43.916896 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-internal-svc" Oct 14 10:14:43 crc kubenswrapper[4698]: I1014 10:14:43.921467 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-5dd765df5b-xsd5h"] Oct 14 10:14:44 crc kubenswrapper[4698]: I1014 10:14:44.030788 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Oct 14 10:14:44 crc kubenswrapper[4698]: I1014 10:14:44.043995 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/25021023-544e-4b23-947b-66102dcf790e-config-data\") pod \"placement-5dd765df5b-xsd5h\" (UID: \"25021023-544e-4b23-947b-66102dcf790e\") " pod="openstack/placement-5dd765df5b-xsd5h" Oct 14 10:14:44 crc kubenswrapper[4698]: I1014 10:14:44.044042 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/25021023-544e-4b23-947b-66102dcf790e-scripts\") pod \"placement-5dd765df5b-xsd5h\" (UID: \"25021023-544e-4b23-947b-66102dcf790e\") " 
pod="openstack/placement-5dd765df5b-xsd5h" Oct 14 10:14:44 crc kubenswrapper[4698]: I1014 10:14:44.044065 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/25021023-544e-4b23-947b-66102dcf790e-combined-ca-bundle\") pod \"placement-5dd765df5b-xsd5h\" (UID: \"25021023-544e-4b23-947b-66102dcf790e\") " pod="openstack/placement-5dd765df5b-xsd5h" Oct 14 10:14:44 crc kubenswrapper[4698]: I1014 10:14:44.044091 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/25021023-544e-4b23-947b-66102dcf790e-public-tls-certs\") pod \"placement-5dd765df5b-xsd5h\" (UID: \"25021023-544e-4b23-947b-66102dcf790e\") " pod="openstack/placement-5dd765df5b-xsd5h" Oct 14 10:14:44 crc kubenswrapper[4698]: I1014 10:14:44.044147 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/25021023-544e-4b23-947b-66102dcf790e-logs\") pod \"placement-5dd765df5b-xsd5h\" (UID: \"25021023-544e-4b23-947b-66102dcf790e\") " pod="openstack/placement-5dd765df5b-xsd5h" Oct 14 10:14:44 crc kubenswrapper[4698]: I1014 10:14:44.044181 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gb5t7\" (UniqueName: \"kubernetes.io/projected/25021023-544e-4b23-947b-66102dcf790e-kube-api-access-gb5t7\") pod \"placement-5dd765df5b-xsd5h\" (UID: \"25021023-544e-4b23-947b-66102dcf790e\") " pod="openstack/placement-5dd765df5b-xsd5h" Oct 14 10:14:44 crc kubenswrapper[4698]: I1014 10:14:44.044240 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/25021023-544e-4b23-947b-66102dcf790e-internal-tls-certs\") pod \"placement-5dd765df5b-xsd5h\" (UID: 
\"25021023-544e-4b23-947b-66102dcf790e\") " pod="openstack/placement-5dd765df5b-xsd5h" Oct 14 10:14:44 crc kubenswrapper[4698]: I1014 10:14:44.146026 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/25021023-544e-4b23-947b-66102dcf790e-scripts\") pod \"placement-5dd765df5b-xsd5h\" (UID: \"25021023-544e-4b23-947b-66102dcf790e\") " pod="openstack/placement-5dd765df5b-xsd5h" Oct 14 10:14:44 crc kubenswrapper[4698]: I1014 10:14:44.146073 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/25021023-544e-4b23-947b-66102dcf790e-combined-ca-bundle\") pod \"placement-5dd765df5b-xsd5h\" (UID: \"25021023-544e-4b23-947b-66102dcf790e\") " pod="openstack/placement-5dd765df5b-xsd5h" Oct 14 10:14:44 crc kubenswrapper[4698]: I1014 10:14:44.146126 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/25021023-544e-4b23-947b-66102dcf790e-public-tls-certs\") pod \"placement-5dd765df5b-xsd5h\" (UID: \"25021023-544e-4b23-947b-66102dcf790e\") " pod="openstack/placement-5dd765df5b-xsd5h" Oct 14 10:14:44 crc kubenswrapper[4698]: I1014 10:14:44.146218 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/25021023-544e-4b23-947b-66102dcf790e-logs\") pod \"placement-5dd765df5b-xsd5h\" (UID: \"25021023-544e-4b23-947b-66102dcf790e\") " pod="openstack/placement-5dd765df5b-xsd5h" Oct 14 10:14:44 crc kubenswrapper[4698]: I1014 10:14:44.146269 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gb5t7\" (UniqueName: \"kubernetes.io/projected/25021023-544e-4b23-947b-66102dcf790e-kube-api-access-gb5t7\") pod \"placement-5dd765df5b-xsd5h\" (UID: \"25021023-544e-4b23-947b-66102dcf790e\") " pod="openstack/placement-5dd765df5b-xsd5h" Oct 
14 10:14:44 crc kubenswrapper[4698]: I1014 10:14:44.146356 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/25021023-544e-4b23-947b-66102dcf790e-internal-tls-certs\") pod \"placement-5dd765df5b-xsd5h\" (UID: \"25021023-544e-4b23-947b-66102dcf790e\") " pod="openstack/placement-5dd765df5b-xsd5h" Oct 14 10:14:44 crc kubenswrapper[4698]: I1014 10:14:44.146439 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/25021023-544e-4b23-947b-66102dcf790e-config-data\") pod \"placement-5dd765df5b-xsd5h\" (UID: \"25021023-544e-4b23-947b-66102dcf790e\") " pod="openstack/placement-5dd765df5b-xsd5h" Oct 14 10:14:44 crc kubenswrapper[4698]: I1014 10:14:44.149189 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/25021023-544e-4b23-947b-66102dcf790e-logs\") pod \"placement-5dd765df5b-xsd5h\" (UID: \"25021023-544e-4b23-947b-66102dcf790e\") " pod="openstack/placement-5dd765df5b-xsd5h" Oct 14 10:14:44 crc kubenswrapper[4698]: I1014 10:14:44.157332 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/25021023-544e-4b23-947b-66102dcf790e-public-tls-certs\") pod \"placement-5dd765df5b-xsd5h\" (UID: \"25021023-544e-4b23-947b-66102dcf790e\") " pod="openstack/placement-5dd765df5b-xsd5h" Oct 14 10:14:44 crc kubenswrapper[4698]: I1014 10:14:44.156364 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/25021023-544e-4b23-947b-66102dcf790e-combined-ca-bundle\") pod \"placement-5dd765df5b-xsd5h\" (UID: \"25021023-544e-4b23-947b-66102dcf790e\") " pod="openstack/placement-5dd765df5b-xsd5h" Oct 14 10:14:44 crc kubenswrapper[4698]: I1014 10:14:44.158624 4698 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/25021023-544e-4b23-947b-66102dcf790e-config-data\") pod \"placement-5dd765df5b-xsd5h\" (UID: \"25021023-544e-4b23-947b-66102dcf790e\") " pod="openstack/placement-5dd765df5b-xsd5h" Oct 14 10:14:44 crc kubenswrapper[4698]: I1014 10:14:44.176917 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/25021023-544e-4b23-947b-66102dcf790e-internal-tls-certs\") pod \"placement-5dd765df5b-xsd5h\" (UID: \"25021023-544e-4b23-947b-66102dcf790e\") " pod="openstack/placement-5dd765df5b-xsd5h" Oct 14 10:14:44 crc kubenswrapper[4698]: I1014 10:14:44.178431 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/25021023-544e-4b23-947b-66102dcf790e-scripts\") pod \"placement-5dd765df5b-xsd5h\" (UID: \"25021023-544e-4b23-947b-66102dcf790e\") " pod="openstack/placement-5dd765df5b-xsd5h" Oct 14 10:14:44 crc kubenswrapper[4698]: I1014 10:14:44.183358 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gb5t7\" (UniqueName: \"kubernetes.io/projected/25021023-544e-4b23-947b-66102dcf790e-kube-api-access-gb5t7\") pod \"placement-5dd765df5b-xsd5h\" (UID: \"25021023-544e-4b23-947b-66102dcf790e\") " pod="openstack/placement-5dd765df5b-xsd5h" Oct 14 10:14:44 crc kubenswrapper[4698]: I1014 10:14:44.258235 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-5dd765df5b-xsd5h" Oct 14 10:14:44 crc kubenswrapper[4698]: I1014 10:14:44.267051 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Oct 14 10:14:50 crc kubenswrapper[4698]: I1014 10:14:50.058436 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-7b567dfd5d-nvwrp" Oct 14 10:14:50 crc kubenswrapper[4698]: I1014 10:14:50.059555 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-7b567dfd5d-nvwrp" Oct 14 10:14:50 crc kubenswrapper[4698]: I1014 10:14:50.060583 4698 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-7b567dfd5d-nvwrp" podUID="ee140165-8d8d-426c-b33f-5803bb0a7ad1" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.149:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.149:8443: connect: connection refused" Oct 14 10:14:50 crc kubenswrapper[4698]: I1014 10:14:50.176577 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-6cf95ddffb-6h2bm" Oct 14 10:14:50 crc kubenswrapper[4698]: I1014 10:14:50.176642 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-6cf95ddffb-6h2bm" Oct 14 10:14:50 crc kubenswrapper[4698]: I1014 10:14:50.178913 4698 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-6cf95ddffb-6h2bm" podUID="746d0a6a-4df6-40b6-9600-63ec14336507" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.150:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.150:8443: connect: connection refused" Oct 14 10:14:51 crc kubenswrapper[4698]: W1014 10:14:51.366676 4698 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod339c0475_be6d_48a1_af88_8c3f55eaf50a.slice/crio-f82096f2afa7fa447fd3eea1f2424dcd37f28912333e488151a58dd0b0343cc3 WatchSource:0}: Error finding container f82096f2afa7fa447fd3eea1f2424dcd37f28912333e488151a58dd0b0343cc3: Status 404 returned error can't find the container with id f82096f2afa7fa447fd3eea1f2424dcd37f28912333e488151a58dd0b0343cc3 Oct 14 10:14:51 crc kubenswrapper[4698]: I1014 10:14:51.476101 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-xjff6" Oct 14 10:14:51 crc kubenswrapper[4698]: I1014 10:14:51.623358 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/30712ba4-9217-4276-b576-798bfd319b45-combined-ca-bundle\") pod \"30712ba4-9217-4276-b576-798bfd319b45\" (UID: \"30712ba4-9217-4276-b576-798bfd319b45\") " Oct 14 10:14:51 crc kubenswrapper[4698]: I1014 10:14:51.623956 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/30712ba4-9217-4276-b576-798bfd319b45-credential-keys\") pod \"30712ba4-9217-4276-b576-798bfd319b45\" (UID: \"30712ba4-9217-4276-b576-798bfd319b45\") " Oct 14 10:14:51 crc kubenswrapper[4698]: I1014 10:14:51.624065 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/30712ba4-9217-4276-b576-798bfd319b45-fernet-keys\") pod \"30712ba4-9217-4276-b576-798bfd319b45\" (UID: \"30712ba4-9217-4276-b576-798bfd319b45\") " Oct 14 10:14:51 crc kubenswrapper[4698]: I1014 10:14:51.624094 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/30712ba4-9217-4276-b576-798bfd319b45-config-data\") pod \"30712ba4-9217-4276-b576-798bfd319b45\" (UID: \"30712ba4-9217-4276-b576-798bfd319b45\") 
" Oct 14 10:14:51 crc kubenswrapper[4698]: I1014 10:14:51.624158 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/30712ba4-9217-4276-b576-798bfd319b45-scripts\") pod \"30712ba4-9217-4276-b576-798bfd319b45\" (UID: \"30712ba4-9217-4276-b576-798bfd319b45\") " Oct 14 10:14:51 crc kubenswrapper[4698]: I1014 10:14:51.624243 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-js74g\" (UniqueName: \"kubernetes.io/projected/30712ba4-9217-4276-b576-798bfd319b45-kube-api-access-js74g\") pod \"30712ba4-9217-4276-b576-798bfd319b45\" (UID: \"30712ba4-9217-4276-b576-798bfd319b45\") " Oct 14 10:14:51 crc kubenswrapper[4698]: I1014 10:14:51.629978 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/30712ba4-9217-4276-b576-798bfd319b45-scripts" (OuterVolumeSpecName: "scripts") pod "30712ba4-9217-4276-b576-798bfd319b45" (UID: "30712ba4-9217-4276-b576-798bfd319b45"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 10:14:51 crc kubenswrapper[4698]: I1014 10:14:51.633140 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/30712ba4-9217-4276-b576-798bfd319b45-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "30712ba4-9217-4276-b576-798bfd319b45" (UID: "30712ba4-9217-4276-b576-798bfd319b45"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 10:14:51 crc kubenswrapper[4698]: I1014 10:14:51.633755 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/30712ba4-9217-4276-b576-798bfd319b45-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "30712ba4-9217-4276-b576-798bfd319b45" (UID: "30712ba4-9217-4276-b576-798bfd319b45"). InnerVolumeSpecName "fernet-keys". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 10:14:51 crc kubenswrapper[4698]: I1014 10:14:51.642980 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/30712ba4-9217-4276-b576-798bfd319b45-kube-api-access-js74g" (OuterVolumeSpecName: "kube-api-access-js74g") pod "30712ba4-9217-4276-b576-798bfd319b45" (UID: "30712ba4-9217-4276-b576-798bfd319b45"). InnerVolumeSpecName "kube-api-access-js74g". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 10:14:51 crc kubenswrapper[4698]: I1014 10:14:51.652401 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/30712ba4-9217-4276-b576-798bfd319b45-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "30712ba4-9217-4276-b576-798bfd319b45" (UID: "30712ba4-9217-4276-b576-798bfd319b45"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 10:14:51 crc kubenswrapper[4698]: I1014 10:14:51.665076 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/30712ba4-9217-4276-b576-798bfd319b45-config-data" (OuterVolumeSpecName: "config-data") pod "30712ba4-9217-4276-b576-798bfd319b45" (UID: "30712ba4-9217-4276-b576-798bfd319b45"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 10:14:51 crc kubenswrapper[4698]: I1014 10:14:51.726235 4698 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/30712ba4-9217-4276-b576-798bfd319b45-scripts\") on node \"crc\" DevicePath \"\"" Oct 14 10:14:51 crc kubenswrapper[4698]: I1014 10:14:51.726283 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-js74g\" (UniqueName: \"kubernetes.io/projected/30712ba4-9217-4276-b576-798bfd319b45-kube-api-access-js74g\") on node \"crc\" DevicePath \"\"" Oct 14 10:14:51 crc kubenswrapper[4698]: I1014 10:14:51.726296 4698 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/30712ba4-9217-4276-b576-798bfd319b45-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 14 10:14:51 crc kubenswrapper[4698]: I1014 10:14:51.726309 4698 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/30712ba4-9217-4276-b576-798bfd319b45-credential-keys\") on node \"crc\" DevicePath \"\"" Oct 14 10:14:51 crc kubenswrapper[4698]: I1014 10:14:51.726322 4698 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/30712ba4-9217-4276-b576-798bfd319b45-fernet-keys\") on node \"crc\" DevicePath \"\"" Oct 14 10:14:51 crc kubenswrapper[4698]: I1014 10:14:51.726331 4698 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/30712ba4-9217-4276-b576-798bfd319b45-config-data\") on node \"crc\" DevicePath \"\"" Oct 14 10:14:51 crc kubenswrapper[4698]: I1014 10:14:51.870306 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"5ac7519d-b7d5-428c-9b04-b507987f26b0","Type":"ContainerStarted","Data":"933fc9083ede68c66aae0877f6b77c000e45be70b927bf91c3c8a452afb50155"} Oct 14 10:14:51 crc 
kubenswrapper[4698]: I1014 10:14:51.871993 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-xjff6" event={"ID":"30712ba4-9217-4276-b576-798bfd319b45","Type":"ContainerDied","Data":"863e0f34d4b0f296bc08a72ead4f9b60354e43f61853b45ebca5fc06e959aee9"} Oct 14 10:14:51 crc kubenswrapper[4698]: I1014 10:14:51.872036 4698 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="863e0f34d4b0f296bc08a72ead4f9b60354e43f61853b45ebca5fc06e959aee9" Oct 14 10:14:51 crc kubenswrapper[4698]: I1014 10:14:51.872094 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-xjff6" Oct 14 10:14:51 crc kubenswrapper[4698]: I1014 10:14:51.873659 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"339c0475-be6d-48a1-af88-8c3f55eaf50a","Type":"ContainerStarted","Data":"f82096f2afa7fa447fd3eea1f2424dcd37f28912333e488151a58dd0b0343cc3"} Oct 14 10:14:52 crc kubenswrapper[4698]: I1014 10:14:52.672405 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-75d9cb9c4-g8g58"] Oct 14 10:14:52 crc kubenswrapper[4698]: E1014 10:14:52.673063 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="30712ba4-9217-4276-b576-798bfd319b45" containerName="keystone-bootstrap" Oct 14 10:14:52 crc kubenswrapper[4698]: I1014 10:14:52.673088 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="30712ba4-9217-4276-b576-798bfd319b45" containerName="keystone-bootstrap" Oct 14 10:14:52 crc kubenswrapper[4698]: I1014 10:14:52.673465 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="30712ba4-9217-4276-b576-798bfd319b45" containerName="keystone-bootstrap" Oct 14 10:14:52 crc kubenswrapper[4698]: I1014 10:14:52.674728 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-75d9cb9c4-g8g58" Oct 14 10:14:52 crc kubenswrapper[4698]: I1014 10:14:52.680383 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Oct 14 10:14:52 crc kubenswrapper[4698]: I1014 10:14:52.681200 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Oct 14 10:14:52 crc kubenswrapper[4698]: I1014 10:14:52.681680 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-internal-svc" Oct 14 10:14:52 crc kubenswrapper[4698]: I1014 10:14:52.681926 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-mrbzf" Oct 14 10:14:52 crc kubenswrapper[4698]: I1014 10:14:52.681685 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Oct 14 10:14:52 crc kubenswrapper[4698]: I1014 10:14:52.683521 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-public-svc" Oct 14 10:14:52 crc kubenswrapper[4698]: I1014 10:14:52.685111 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-75d9cb9c4-g8g58"] Oct 14 10:14:52 crc kubenswrapper[4698]: I1014 10:14:52.746064 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fddbac4f-ca34-45b0-913b-21e399aab117-scripts\") pod \"keystone-75d9cb9c4-g8g58\" (UID: \"fddbac4f-ca34-45b0-913b-21e399aab117\") " pod="openstack/keystone-75d9cb9c4-g8g58" Oct 14 10:14:52 crc kubenswrapper[4698]: I1014 10:14:52.746179 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/fddbac4f-ca34-45b0-913b-21e399aab117-internal-tls-certs\") pod \"keystone-75d9cb9c4-g8g58\" (UID: \"fddbac4f-ca34-45b0-913b-21e399aab117\") " 
pod="openstack/keystone-75d9cb9c4-g8g58" Oct 14 10:14:52 crc kubenswrapper[4698]: I1014 10:14:52.746217 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/fddbac4f-ca34-45b0-913b-21e399aab117-fernet-keys\") pod \"keystone-75d9cb9c4-g8g58\" (UID: \"fddbac4f-ca34-45b0-913b-21e399aab117\") " pod="openstack/keystone-75d9cb9c4-g8g58" Oct 14 10:14:52 crc kubenswrapper[4698]: I1014 10:14:52.746268 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/fddbac4f-ca34-45b0-913b-21e399aab117-credential-keys\") pod \"keystone-75d9cb9c4-g8g58\" (UID: \"fddbac4f-ca34-45b0-913b-21e399aab117\") " pod="openstack/keystone-75d9cb9c4-g8g58" Oct 14 10:14:52 crc kubenswrapper[4698]: I1014 10:14:52.746303 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fddbac4f-ca34-45b0-913b-21e399aab117-config-data\") pod \"keystone-75d9cb9c4-g8g58\" (UID: \"fddbac4f-ca34-45b0-913b-21e399aab117\") " pod="openstack/keystone-75d9cb9c4-g8g58" Oct 14 10:14:52 crc kubenswrapper[4698]: I1014 10:14:52.746356 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fddbac4f-ca34-45b0-913b-21e399aab117-combined-ca-bundle\") pod \"keystone-75d9cb9c4-g8g58\" (UID: \"fddbac4f-ca34-45b0-913b-21e399aab117\") " pod="openstack/keystone-75d9cb9c4-g8g58" Oct 14 10:14:52 crc kubenswrapper[4698]: I1014 10:14:52.746435 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/fddbac4f-ca34-45b0-913b-21e399aab117-public-tls-certs\") pod \"keystone-75d9cb9c4-g8g58\" (UID: \"fddbac4f-ca34-45b0-913b-21e399aab117\") " 
pod="openstack/keystone-75d9cb9c4-g8g58" Oct 14 10:14:52 crc kubenswrapper[4698]: I1014 10:14:52.746468 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pkhqb\" (UniqueName: \"kubernetes.io/projected/fddbac4f-ca34-45b0-913b-21e399aab117-kube-api-access-pkhqb\") pod \"keystone-75d9cb9c4-g8g58\" (UID: \"fddbac4f-ca34-45b0-913b-21e399aab117\") " pod="openstack/keystone-75d9cb9c4-g8g58" Oct 14 10:14:52 crc kubenswrapper[4698]: I1014 10:14:52.848238 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/fddbac4f-ca34-45b0-913b-21e399aab117-public-tls-certs\") pod \"keystone-75d9cb9c4-g8g58\" (UID: \"fddbac4f-ca34-45b0-913b-21e399aab117\") " pod="openstack/keystone-75d9cb9c4-g8g58" Oct 14 10:14:52 crc kubenswrapper[4698]: I1014 10:14:52.848291 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pkhqb\" (UniqueName: \"kubernetes.io/projected/fddbac4f-ca34-45b0-913b-21e399aab117-kube-api-access-pkhqb\") pod \"keystone-75d9cb9c4-g8g58\" (UID: \"fddbac4f-ca34-45b0-913b-21e399aab117\") " pod="openstack/keystone-75d9cb9c4-g8g58" Oct 14 10:14:52 crc kubenswrapper[4698]: I1014 10:14:52.848340 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fddbac4f-ca34-45b0-913b-21e399aab117-scripts\") pod \"keystone-75d9cb9c4-g8g58\" (UID: \"fddbac4f-ca34-45b0-913b-21e399aab117\") " pod="openstack/keystone-75d9cb9c4-g8g58" Oct 14 10:14:52 crc kubenswrapper[4698]: I1014 10:14:52.848416 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/fddbac4f-ca34-45b0-913b-21e399aab117-internal-tls-certs\") pod \"keystone-75d9cb9c4-g8g58\" (UID: \"fddbac4f-ca34-45b0-913b-21e399aab117\") " pod="openstack/keystone-75d9cb9c4-g8g58" Oct 14 10:14:52 
crc kubenswrapper[4698]: I1014 10:14:52.848442 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/fddbac4f-ca34-45b0-913b-21e399aab117-fernet-keys\") pod \"keystone-75d9cb9c4-g8g58\" (UID: \"fddbac4f-ca34-45b0-913b-21e399aab117\") " pod="openstack/keystone-75d9cb9c4-g8g58" Oct 14 10:14:52 crc kubenswrapper[4698]: I1014 10:14:52.848486 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/fddbac4f-ca34-45b0-913b-21e399aab117-credential-keys\") pod \"keystone-75d9cb9c4-g8g58\" (UID: \"fddbac4f-ca34-45b0-913b-21e399aab117\") " pod="openstack/keystone-75d9cb9c4-g8g58" Oct 14 10:14:52 crc kubenswrapper[4698]: I1014 10:14:52.848515 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fddbac4f-ca34-45b0-913b-21e399aab117-config-data\") pod \"keystone-75d9cb9c4-g8g58\" (UID: \"fddbac4f-ca34-45b0-913b-21e399aab117\") " pod="openstack/keystone-75d9cb9c4-g8g58" Oct 14 10:14:52 crc kubenswrapper[4698]: I1014 10:14:52.848560 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fddbac4f-ca34-45b0-913b-21e399aab117-combined-ca-bundle\") pod \"keystone-75d9cb9c4-g8g58\" (UID: \"fddbac4f-ca34-45b0-913b-21e399aab117\") " pod="openstack/keystone-75d9cb9c4-g8g58" Oct 14 10:14:52 crc kubenswrapper[4698]: I1014 10:14:52.853396 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/fddbac4f-ca34-45b0-913b-21e399aab117-fernet-keys\") pod \"keystone-75d9cb9c4-g8g58\" (UID: \"fddbac4f-ca34-45b0-913b-21e399aab117\") " pod="openstack/keystone-75d9cb9c4-g8g58" Oct 14 10:14:52 crc kubenswrapper[4698]: I1014 10:14:52.853593 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"credential-keys\" (UniqueName: \"kubernetes.io/secret/fddbac4f-ca34-45b0-913b-21e399aab117-credential-keys\") pod \"keystone-75d9cb9c4-g8g58\" (UID: \"fddbac4f-ca34-45b0-913b-21e399aab117\") " pod="openstack/keystone-75d9cb9c4-g8g58" Oct 14 10:14:52 crc kubenswrapper[4698]: I1014 10:14:52.854159 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fddbac4f-ca34-45b0-913b-21e399aab117-combined-ca-bundle\") pod \"keystone-75d9cb9c4-g8g58\" (UID: \"fddbac4f-ca34-45b0-913b-21e399aab117\") " pod="openstack/keystone-75d9cb9c4-g8g58" Oct 14 10:14:52 crc kubenswrapper[4698]: I1014 10:14:52.854314 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fddbac4f-ca34-45b0-913b-21e399aab117-config-data\") pod \"keystone-75d9cb9c4-g8g58\" (UID: \"fddbac4f-ca34-45b0-913b-21e399aab117\") " pod="openstack/keystone-75d9cb9c4-g8g58" Oct 14 10:14:52 crc kubenswrapper[4698]: I1014 10:14:52.854590 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/fddbac4f-ca34-45b0-913b-21e399aab117-public-tls-certs\") pod \"keystone-75d9cb9c4-g8g58\" (UID: \"fddbac4f-ca34-45b0-913b-21e399aab117\") " pod="openstack/keystone-75d9cb9c4-g8g58" Oct 14 10:14:52 crc kubenswrapper[4698]: I1014 10:14:52.855601 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/fddbac4f-ca34-45b0-913b-21e399aab117-internal-tls-certs\") pod \"keystone-75d9cb9c4-g8g58\" (UID: \"fddbac4f-ca34-45b0-913b-21e399aab117\") " pod="openstack/keystone-75d9cb9c4-g8g58" Oct 14 10:14:52 crc kubenswrapper[4698]: I1014 10:14:52.856162 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fddbac4f-ca34-45b0-913b-21e399aab117-scripts\") pod \"keystone-75d9cb9c4-g8g58\" 
(UID: \"fddbac4f-ca34-45b0-913b-21e399aab117\") " pod="openstack/keystone-75d9cb9c4-g8g58" Oct 14 10:14:52 crc kubenswrapper[4698]: I1014 10:14:52.869475 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pkhqb\" (UniqueName: \"kubernetes.io/projected/fddbac4f-ca34-45b0-913b-21e399aab117-kube-api-access-pkhqb\") pod \"keystone-75d9cb9c4-g8g58\" (UID: \"fddbac4f-ca34-45b0-913b-21e399aab117\") " pod="openstack/keystone-75d9cb9c4-g8g58" Oct 14 10:14:53 crc kubenswrapper[4698]: I1014 10:14:53.008264 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-75d9cb9c4-g8g58" Oct 14 10:15:00 crc kubenswrapper[4698]: I1014 10:15:00.058614 4698 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-7b567dfd5d-nvwrp" podUID="ee140165-8d8d-426c-b33f-5803bb0a7ad1" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.149:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.149:8443: connect: connection refused" Oct 14 10:15:00 crc kubenswrapper[4698]: I1014 10:15:00.136547 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29340615-82wst"] Oct 14 10:15:00 crc kubenswrapper[4698]: I1014 10:15:00.138115 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29340615-82wst" Oct 14 10:15:00 crc kubenswrapper[4698]: I1014 10:15:00.140422 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Oct 14 10:15:00 crc kubenswrapper[4698]: I1014 10:15:00.143041 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Oct 14 10:15:00 crc kubenswrapper[4698]: I1014 10:15:00.160255 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29340615-82wst"] Oct 14 10:15:00 crc kubenswrapper[4698]: I1014 10:15:00.175541 4698 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-6cf95ddffb-6h2bm" podUID="746d0a6a-4df6-40b6-9600-63ec14336507" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.150:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.150:8443: connect: connection refused" Oct 14 10:15:00 crc kubenswrapper[4698]: I1014 10:15:00.292079 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/dacf27c8-3dc7-4f98-ac16-80138c8dbbac-config-volume\") pod \"collect-profiles-29340615-82wst\" (UID: \"dacf27c8-3dc7-4f98-ac16-80138c8dbbac\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29340615-82wst" Oct 14 10:15:00 crc kubenswrapper[4698]: I1014 10:15:00.292248 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/dacf27c8-3dc7-4f98-ac16-80138c8dbbac-secret-volume\") pod \"collect-profiles-29340615-82wst\" (UID: \"dacf27c8-3dc7-4f98-ac16-80138c8dbbac\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29340615-82wst" Oct 14 10:15:00 crc 
kubenswrapper[4698]: I1014 10:15:00.292269 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5gpkn\" (UniqueName: \"kubernetes.io/projected/dacf27c8-3dc7-4f98-ac16-80138c8dbbac-kube-api-access-5gpkn\") pod \"collect-profiles-29340615-82wst\" (UID: \"dacf27c8-3dc7-4f98-ac16-80138c8dbbac\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29340615-82wst" Oct 14 10:15:00 crc kubenswrapper[4698]: I1014 10:15:00.395450 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/dacf27c8-3dc7-4f98-ac16-80138c8dbbac-config-volume\") pod \"collect-profiles-29340615-82wst\" (UID: \"dacf27c8-3dc7-4f98-ac16-80138c8dbbac\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29340615-82wst" Oct 14 10:15:00 crc kubenswrapper[4698]: I1014 10:15:00.395527 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/dacf27c8-3dc7-4f98-ac16-80138c8dbbac-config-volume\") pod \"collect-profiles-29340615-82wst\" (UID: \"dacf27c8-3dc7-4f98-ac16-80138c8dbbac\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29340615-82wst" Oct 14 10:15:00 crc kubenswrapper[4698]: I1014 10:15:00.395783 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/dacf27c8-3dc7-4f98-ac16-80138c8dbbac-secret-volume\") pod \"collect-profiles-29340615-82wst\" (UID: \"dacf27c8-3dc7-4f98-ac16-80138c8dbbac\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29340615-82wst" Oct 14 10:15:00 crc kubenswrapper[4698]: I1014 10:15:00.395815 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5gpkn\" (UniqueName: \"kubernetes.io/projected/dacf27c8-3dc7-4f98-ac16-80138c8dbbac-kube-api-access-5gpkn\") pod \"collect-profiles-29340615-82wst\" 
(UID: \"dacf27c8-3dc7-4f98-ac16-80138c8dbbac\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29340615-82wst" Oct 14 10:15:00 crc kubenswrapper[4698]: I1014 10:15:00.404275 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/dacf27c8-3dc7-4f98-ac16-80138c8dbbac-secret-volume\") pod \"collect-profiles-29340615-82wst\" (UID: \"dacf27c8-3dc7-4f98-ac16-80138c8dbbac\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29340615-82wst" Oct 14 10:15:00 crc kubenswrapper[4698]: I1014 10:15:00.417235 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5gpkn\" (UniqueName: \"kubernetes.io/projected/dacf27c8-3dc7-4f98-ac16-80138c8dbbac-kube-api-access-5gpkn\") pod \"collect-profiles-29340615-82wst\" (UID: \"dacf27c8-3dc7-4f98-ac16-80138c8dbbac\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29340615-82wst" Oct 14 10:15:00 crc kubenswrapper[4698]: I1014 10:15:00.469739 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29340615-82wst" Oct 14 10:15:00 crc kubenswrapper[4698]: I1014 10:15:00.961835 4698 generic.go:334] "Generic (PLEG): container finished" podID="2abd1f71-b2d4-4c95-898c-bcfe99b2acf5" containerID="51f6b710c2bc8b09c94662c8c7962dc21b14702f93fbe6d5919e832065135ca3" exitCode=0 Oct 14 10:15:00 crc kubenswrapper[4698]: I1014 10:15:00.962072 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-9c4js" event={"ID":"2abd1f71-b2d4-4c95-898c-bcfe99b2acf5","Type":"ContainerDied","Data":"51f6b710c2bc8b09c94662c8c7962dc21b14702f93fbe6d5919e832065135ca3"} Oct 14 10:15:02 crc kubenswrapper[4698]: E1014 10:15:02.474557 4698 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified" Oct 14 10:15:02 crc kubenswrapper[4698]: E1014 10:15:02.475001 4698 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:barbican-db-sync,Image:quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified,Command:[/bin/bash],Args:[-c barbican-manage db 
upgrade],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/barbican/barbican.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wq6kt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42403,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42403,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod barbican-db-sync-nbmlr_openstack(94ea6a4b-4e73-4a81-bb25-8ae62bf7daa9): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Oct 14 10:15:02 crc kubenswrapper[4698]: E1014 10:15:02.476407 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/barbican-db-sync-nbmlr" 
podUID="94ea6a4b-4e73-4a81-bb25-8ae62bf7daa9" Oct 14 10:15:02 crc kubenswrapper[4698]: E1014 10:15:02.995500 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified\\\"\"" pod="openstack/barbican-db-sync-nbmlr" podUID="94ea6a4b-4e73-4a81-bb25-8ae62bf7daa9" Oct 14 10:15:03 crc kubenswrapper[4698]: E1014 10:15:03.451473 4698 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/sg-core:latest" Oct 14 10:15:03 crc kubenswrapper[4698]: E1014 10:15:03.451663 4698 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:sg-core,Image:quay.io/openstack-k8s-operators/sg-core:latest,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:sg-core-conf-yaml,ReadOnly:false,MountPath:/etc/sg-core.conf.yaml,SubPath:sg-core.conf.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2klf5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePol
icy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(9d9ba4f9-bf43-4076-ab3d-9ca34bcbed4e): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Oct 14 10:15:04 crc kubenswrapper[4698]: E1014 10:15:04.503655 4698 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified" Oct 14 10:15:04 crc kubenswrapper[4698]: E1014 10:15:04.504216 4698 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cinder-db-sync,Image:quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_set_configs && /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-machine-id,ReadOnly:true,MountPath:/etc/machine-id,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/config-data/merged,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/cinder/cinder.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:db-sync-config.json,MountP
ropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-89d95,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cinder-db-sync-thrh8_openstack(d90a3be7-6827-427d-9ed1-3aef79542b6d): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Oct 14 10:15:04 crc kubenswrapper[4698]: E1014 10:15:04.505403 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/cinder-db-sync-thrh8" podUID="d90a3be7-6827-427d-9ed1-3aef79542b6d" Oct 14 10:15:04 crc kubenswrapper[4698]: I1014 10:15:04.647535 4698 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-9c4js" Oct 14 10:15:04 crc kubenswrapper[4698]: I1014 10:15:04.785659 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/2abd1f71-b2d4-4c95-898c-bcfe99b2acf5-config\") pod \"2abd1f71-b2d4-4c95-898c-bcfe99b2acf5\" (UID: \"2abd1f71-b2d4-4c95-898c-bcfe99b2acf5\") " Oct 14 10:15:04 crc kubenswrapper[4698]: I1014 10:15:04.785976 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2abd1f71-b2d4-4c95-898c-bcfe99b2acf5-combined-ca-bundle\") pod \"2abd1f71-b2d4-4c95-898c-bcfe99b2acf5\" (UID: \"2abd1f71-b2d4-4c95-898c-bcfe99b2acf5\") " Oct 14 10:15:04 crc kubenswrapper[4698]: I1014 10:15:04.786132 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dxb2b\" (UniqueName: \"kubernetes.io/projected/2abd1f71-b2d4-4c95-898c-bcfe99b2acf5-kube-api-access-dxb2b\") pod \"2abd1f71-b2d4-4c95-898c-bcfe99b2acf5\" (UID: \"2abd1f71-b2d4-4c95-898c-bcfe99b2acf5\") " Oct 14 10:15:04 crc kubenswrapper[4698]: I1014 10:15:04.795442 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2abd1f71-b2d4-4c95-898c-bcfe99b2acf5-kube-api-access-dxb2b" (OuterVolumeSpecName: "kube-api-access-dxb2b") pod "2abd1f71-b2d4-4c95-898c-bcfe99b2acf5" (UID: "2abd1f71-b2d4-4c95-898c-bcfe99b2acf5"). InnerVolumeSpecName "kube-api-access-dxb2b". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 10:15:04 crc kubenswrapper[4698]: I1014 10:15:04.823496 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2abd1f71-b2d4-4c95-898c-bcfe99b2acf5-config" (OuterVolumeSpecName: "config") pod "2abd1f71-b2d4-4c95-898c-bcfe99b2acf5" (UID: "2abd1f71-b2d4-4c95-898c-bcfe99b2acf5"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 10:15:04 crc kubenswrapper[4698]: I1014 10:15:04.829160 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2abd1f71-b2d4-4c95-898c-bcfe99b2acf5-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2abd1f71-b2d4-4c95-898c-bcfe99b2acf5" (UID: "2abd1f71-b2d4-4c95-898c-bcfe99b2acf5"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 10:15:04 crc kubenswrapper[4698]: I1014 10:15:04.889213 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dxb2b\" (UniqueName: \"kubernetes.io/projected/2abd1f71-b2d4-4c95-898c-bcfe99b2acf5-kube-api-access-dxb2b\") on node \"crc\" DevicePath \"\"" Oct 14 10:15:04 crc kubenswrapper[4698]: I1014 10:15:04.889240 4698 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/2abd1f71-b2d4-4c95-898c-bcfe99b2acf5-config\") on node \"crc\" DevicePath \"\"" Oct 14 10:15:04 crc kubenswrapper[4698]: I1014 10:15:04.889252 4698 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2abd1f71-b2d4-4c95-898c-bcfe99b2acf5-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 14 10:15:04 crc kubenswrapper[4698]: I1014 10:15:04.939623 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-5dd765df5b-xsd5h"] Oct 14 10:15:04 crc kubenswrapper[4698]: W1014 10:15:04.947724 4698 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod25021023_544e_4b23_947b_66102dcf790e.slice/crio-90e9a1dbfb2bb6eae3284f17b61629e83f1b871c51a4708233303ca6e7a2448c WatchSource:0}: Error finding container 90e9a1dbfb2bb6eae3284f17b61629e83f1b871c51a4708233303ca6e7a2448c: Status 404 returned error can't find the container with id 
90e9a1dbfb2bb6eae3284f17b61629e83f1b871c51a4708233303ca6e7a2448c Oct 14 10:15:05 crc kubenswrapper[4698]: I1014 10:15:05.010271 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-5dd765df5b-xsd5h" event={"ID":"25021023-544e-4b23-947b-66102dcf790e","Type":"ContainerStarted","Data":"90e9a1dbfb2bb6eae3284f17b61629e83f1b871c51a4708233303ca6e7a2448c"} Oct 14 10:15:05 crc kubenswrapper[4698]: I1014 10:15:05.015308 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-9c4js" event={"ID":"2abd1f71-b2d4-4c95-898c-bcfe99b2acf5","Type":"ContainerDied","Data":"b7d3372179873e3bd9c8927e594245887627490358c87789ceb23188b7967d8a"} Oct 14 10:15:05 crc kubenswrapper[4698]: I1014 10:15:05.015366 4698 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b7d3372179873e3bd9c8927e594245887627490358c87789ceb23188b7967d8a" Oct 14 10:15:05 crc kubenswrapper[4698]: I1014 10:15:05.015337 4698 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-9c4js" Oct 14 10:15:05 crc kubenswrapper[4698]: E1014 10:15:05.021695 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified\\\"\"" pod="openstack/cinder-db-sync-thrh8" podUID="d90a3be7-6827-427d-9ed1-3aef79542b6d" Oct 14 10:15:05 crc kubenswrapper[4698]: I1014 10:15:05.079715 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-75d9cb9c4-g8g58"] Oct 14 10:15:05 crc kubenswrapper[4698]: I1014 10:15:05.086655 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29340615-82wst"] Oct 14 10:15:05 crc kubenswrapper[4698]: W1014 10:15:05.109937 4698 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfddbac4f_ca34_45b0_913b_21e399aab117.slice/crio-cd9a0e92720c9ecc3e74cccd0d2db526a45ad6c7cbea8600b90113cde79a1322 WatchSource:0}: Error finding container cd9a0e92720c9ecc3e74cccd0d2db526a45ad6c7cbea8600b90113cde79a1322: Status 404 returned error can't find the container with id cd9a0e92720c9ecc3e74cccd0d2db526a45ad6c7cbea8600b90113cde79a1322 Oct 14 10:15:05 crc kubenswrapper[4698]: I1014 10:15:05.934425 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-84b966f6c9-wwxgc"] Oct 14 10:15:05 crc kubenswrapper[4698]: E1014 10:15:05.935337 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2abd1f71-b2d4-4c95-898c-bcfe99b2acf5" containerName="neutron-db-sync" Oct 14 10:15:05 crc kubenswrapper[4698]: I1014 10:15:05.935350 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="2abd1f71-b2d4-4c95-898c-bcfe99b2acf5" containerName="neutron-db-sync" Oct 14 10:15:05 crc kubenswrapper[4698]: I1014 10:15:05.935547 4698 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="2abd1f71-b2d4-4c95-898c-bcfe99b2acf5" containerName="neutron-db-sync" Oct 14 10:15:05 crc kubenswrapper[4698]: I1014 10:15:05.942862 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-84b966f6c9-wwxgc" Oct 14 10:15:05 crc kubenswrapper[4698]: I1014 10:15:05.964261 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-84b966f6c9-wwxgc"] Oct 14 10:15:06 crc kubenswrapper[4698]: I1014 10:15:06.013229 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a9c6fefd-814f-4f20-8a30-b76d3b6a43ba-ovsdbserver-sb\") pod \"dnsmasq-dns-84b966f6c9-wwxgc\" (UID: \"a9c6fefd-814f-4f20-8a30-b76d3b6a43ba\") " pod="openstack/dnsmasq-dns-84b966f6c9-wwxgc" Oct 14 10:15:06 crc kubenswrapper[4698]: I1014 10:15:06.013277 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a9c6fefd-814f-4f20-8a30-b76d3b6a43ba-ovsdbserver-nb\") pod \"dnsmasq-dns-84b966f6c9-wwxgc\" (UID: \"a9c6fefd-814f-4f20-8a30-b76d3b6a43ba\") " pod="openstack/dnsmasq-dns-84b966f6c9-wwxgc" Oct 14 10:15:06 crc kubenswrapper[4698]: I1014 10:15:06.013333 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a9c6fefd-814f-4f20-8a30-b76d3b6a43ba-config\") pod \"dnsmasq-dns-84b966f6c9-wwxgc\" (UID: \"a9c6fefd-814f-4f20-8a30-b76d3b6a43ba\") " pod="openstack/dnsmasq-dns-84b966f6c9-wwxgc" Oct 14 10:15:06 crc kubenswrapper[4698]: I1014 10:15:06.013412 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a9c6fefd-814f-4f20-8a30-b76d3b6a43ba-dns-svc\") pod \"dnsmasq-dns-84b966f6c9-wwxgc\" (UID: 
\"a9c6fefd-814f-4f20-8a30-b76d3b6a43ba\") " pod="openstack/dnsmasq-dns-84b966f6c9-wwxgc" Oct 14 10:15:06 crc kubenswrapper[4698]: I1014 10:15:06.013548 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mplh7\" (UniqueName: \"kubernetes.io/projected/a9c6fefd-814f-4f20-8a30-b76d3b6a43ba-kube-api-access-mplh7\") pod \"dnsmasq-dns-84b966f6c9-wwxgc\" (UID: \"a9c6fefd-814f-4f20-8a30-b76d3b6a43ba\") " pod="openstack/dnsmasq-dns-84b966f6c9-wwxgc" Oct 14 10:15:06 crc kubenswrapper[4698]: I1014 10:15:06.013592 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a9c6fefd-814f-4f20-8a30-b76d3b6a43ba-dns-swift-storage-0\") pod \"dnsmasq-dns-84b966f6c9-wwxgc\" (UID: \"a9c6fefd-814f-4f20-8a30-b76d3b6a43ba\") " pod="openstack/dnsmasq-dns-84b966f6c9-wwxgc" Oct 14 10:15:06 crc kubenswrapper[4698]: I1014 10:15:06.077755 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-df4467494-hnvp2"] Oct 14 10:15:06 crc kubenswrapper[4698]: I1014 10:15:06.079672 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-df4467494-hnvp2" Oct 14 10:15:06 crc kubenswrapper[4698]: I1014 10:15:06.083436 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Oct 14 10:15:06 crc kubenswrapper[4698]: I1014 10:15:06.083495 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-ovndbs" Oct 14 10:15:06 crc kubenswrapper[4698]: I1014 10:15:06.084788 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Oct 14 10:15:06 crc kubenswrapper[4698]: I1014 10:15:06.084978 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-2k2km" Oct 14 10:15:06 crc kubenswrapper[4698]: I1014 10:15:06.092061 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-df4467494-hnvp2"] Oct 14 10:15:06 crc kubenswrapper[4698]: I1014 10:15:06.099671 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-5dd765df5b-xsd5h" event={"ID":"25021023-544e-4b23-947b-66102dcf790e","Type":"ContainerStarted","Data":"a2b41b012c1ede9c56708a19584d1cf78d9ead8e85eec976266e8b4da5177ee6"} Oct 14 10:15:06 crc kubenswrapper[4698]: I1014 10:15:06.099714 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-5dd765df5b-xsd5h" event={"ID":"25021023-544e-4b23-947b-66102dcf790e","Type":"ContainerStarted","Data":"14960cd99e08595695ced0442ebc56cfdda09c5e972766ce3705a6599f0d4f43"} Oct 14 10:15:06 crc kubenswrapper[4698]: I1014 10:15:06.099754 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-5dd765df5b-xsd5h" Oct 14 10:15:06 crc kubenswrapper[4698]: I1014 10:15:06.099863 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-5dd765df5b-xsd5h" Oct 14 10:15:06 crc kubenswrapper[4698]: I1014 10:15:06.108563 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/keystone-75d9cb9c4-g8g58" event={"ID":"fddbac4f-ca34-45b0-913b-21e399aab117","Type":"ContainerStarted","Data":"0852d9ad12e8ef41e9fa412f455aebe77dbaf5fc23facd7e7d16bbd32289264d"} Oct 14 10:15:06 crc kubenswrapper[4698]: I1014 10:15:06.108599 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-75d9cb9c4-g8g58" event={"ID":"fddbac4f-ca34-45b0-913b-21e399aab117","Type":"ContainerStarted","Data":"cd9a0e92720c9ecc3e74cccd0d2db526a45ad6c7cbea8600b90113cde79a1322"} Oct 14 10:15:06 crc kubenswrapper[4698]: I1014 10:15:06.108647 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/keystone-75d9cb9c4-g8g58" Oct 14 10:15:06 crc kubenswrapper[4698]: I1014 10:15:06.115841 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a9c6fefd-814f-4f20-8a30-b76d3b6a43ba-dns-swift-storage-0\") pod \"dnsmasq-dns-84b966f6c9-wwxgc\" (UID: \"a9c6fefd-814f-4f20-8a30-b76d3b6a43ba\") " pod="openstack/dnsmasq-dns-84b966f6c9-wwxgc" Oct 14 10:15:06 crc kubenswrapper[4698]: I1014 10:15:06.115902 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a9c6fefd-814f-4f20-8a30-b76d3b6a43ba-ovsdbserver-sb\") pod \"dnsmasq-dns-84b966f6c9-wwxgc\" (UID: \"a9c6fefd-814f-4f20-8a30-b76d3b6a43ba\") " pod="openstack/dnsmasq-dns-84b966f6c9-wwxgc" Oct 14 10:15:06 crc kubenswrapper[4698]: I1014 10:15:06.115925 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a9c6fefd-814f-4f20-8a30-b76d3b6a43ba-ovsdbserver-nb\") pod \"dnsmasq-dns-84b966f6c9-wwxgc\" (UID: \"a9c6fefd-814f-4f20-8a30-b76d3b6a43ba\") " pod="openstack/dnsmasq-dns-84b966f6c9-wwxgc" Oct 14 10:15:06 crc kubenswrapper[4698]: I1014 10:15:06.115962 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"config\" (UniqueName: \"kubernetes.io/configmap/a9c6fefd-814f-4f20-8a30-b76d3b6a43ba-config\") pod \"dnsmasq-dns-84b966f6c9-wwxgc\" (UID: \"a9c6fefd-814f-4f20-8a30-b76d3b6a43ba\") " pod="openstack/dnsmasq-dns-84b966f6c9-wwxgc" Oct 14 10:15:06 crc kubenswrapper[4698]: I1014 10:15:06.116704 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a9c6fefd-814f-4f20-8a30-b76d3b6a43ba-dns-swift-storage-0\") pod \"dnsmasq-dns-84b966f6c9-wwxgc\" (UID: \"a9c6fefd-814f-4f20-8a30-b76d3b6a43ba\") " pod="openstack/dnsmasq-dns-84b966f6c9-wwxgc" Oct 14 10:15:06 crc kubenswrapper[4698]: I1014 10:15:06.116708 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a9c6fefd-814f-4f20-8a30-b76d3b6a43ba-ovsdbserver-sb\") pod \"dnsmasq-dns-84b966f6c9-wwxgc\" (UID: \"a9c6fefd-814f-4f20-8a30-b76d3b6a43ba\") " pod="openstack/dnsmasq-dns-84b966f6c9-wwxgc" Oct 14 10:15:06 crc kubenswrapper[4698]: I1014 10:15:06.117211 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a9c6fefd-814f-4f20-8a30-b76d3b6a43ba-ovsdbserver-nb\") pod \"dnsmasq-dns-84b966f6c9-wwxgc\" (UID: \"a9c6fefd-814f-4f20-8a30-b76d3b6a43ba\") " pod="openstack/dnsmasq-dns-84b966f6c9-wwxgc" Oct 14 10:15:06 crc kubenswrapper[4698]: I1014 10:15:06.118962 4698 generic.go:334] "Generic (PLEG): container finished" podID="dacf27c8-3dc7-4f98-ac16-80138c8dbbac" containerID="3d7fccb445739e35282b73006542b2a768ec71ad150eafa560f6d34eed0aa4f6" exitCode=0 Oct 14 10:15:06 crc kubenswrapper[4698]: I1014 10:15:06.119130 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29340615-82wst" event={"ID":"dacf27c8-3dc7-4f98-ac16-80138c8dbbac","Type":"ContainerDied","Data":"3d7fccb445739e35282b73006542b2a768ec71ad150eafa560f6d34eed0aa4f6"} Oct 14 10:15:06 crc 
kubenswrapper[4698]: I1014 10:15:06.119158 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29340615-82wst" event={"ID":"dacf27c8-3dc7-4f98-ac16-80138c8dbbac","Type":"ContainerStarted","Data":"ab42e647fef4ed445178c090552743bc5050565292c5dacad8c0e873096726bb"} Oct 14 10:15:06 crc kubenswrapper[4698]: I1014 10:15:06.119243 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a9c6fefd-814f-4f20-8a30-b76d3b6a43ba-dns-svc\") pod \"dnsmasq-dns-84b966f6c9-wwxgc\" (UID: \"a9c6fefd-814f-4f20-8a30-b76d3b6a43ba\") " pod="openstack/dnsmasq-dns-84b966f6c9-wwxgc" Oct 14 10:15:06 crc kubenswrapper[4698]: I1014 10:15:06.119414 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mplh7\" (UniqueName: \"kubernetes.io/projected/a9c6fefd-814f-4f20-8a30-b76d3b6a43ba-kube-api-access-mplh7\") pod \"dnsmasq-dns-84b966f6c9-wwxgc\" (UID: \"a9c6fefd-814f-4f20-8a30-b76d3b6a43ba\") " pod="openstack/dnsmasq-dns-84b966f6c9-wwxgc" Oct 14 10:15:06 crc kubenswrapper[4698]: I1014 10:15:06.119448 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a9c6fefd-814f-4f20-8a30-b76d3b6a43ba-config\") pod \"dnsmasq-dns-84b966f6c9-wwxgc\" (UID: \"a9c6fefd-814f-4f20-8a30-b76d3b6a43ba\") " pod="openstack/dnsmasq-dns-84b966f6c9-wwxgc" Oct 14 10:15:06 crc kubenswrapper[4698]: I1014 10:15:06.120046 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a9c6fefd-814f-4f20-8a30-b76d3b6a43ba-dns-svc\") pod \"dnsmasq-dns-84b966f6c9-wwxgc\" (UID: \"a9c6fefd-814f-4f20-8a30-b76d3b6a43ba\") " pod="openstack/dnsmasq-dns-84b966f6c9-wwxgc" Oct 14 10:15:06 crc kubenswrapper[4698]: I1014 10:15:06.149012 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" 
event={"ID":"339c0475-be6d-48a1-af88-8c3f55eaf50a","Type":"ContainerStarted","Data":"386102c65388fad9deb6f3de090829e956a0f05bec6707acaabf79a9eb363e43"} Oct 14 10:15:06 crc kubenswrapper[4698]: I1014 10:15:06.149054 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"339c0475-be6d-48a1-af88-8c3f55eaf50a","Type":"ContainerStarted","Data":"fe1ae4f98d73812377657f7d3d462daa5749aeb44eca817da5e339d837c16a39"} Oct 14 10:15:06 crc kubenswrapper[4698]: I1014 10:15:06.156291 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mplh7\" (UniqueName: \"kubernetes.io/projected/a9c6fefd-814f-4f20-8a30-b76d3b6a43ba-kube-api-access-mplh7\") pod \"dnsmasq-dns-84b966f6c9-wwxgc\" (UID: \"a9c6fefd-814f-4f20-8a30-b76d3b6a43ba\") " pod="openstack/dnsmasq-dns-84b966f6c9-wwxgc" Oct 14 10:15:06 crc kubenswrapper[4698]: I1014 10:15:06.156644 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-db-sync-qfgt5" event={"ID":"b034a777-04ce-4fe1-baf0-7dd68c64b31f","Type":"ContainerStarted","Data":"6fd413d50ddc394ba6745ff4d2dadf33650b7992176640e1fd078ba7836add91"} Oct 14 10:15:06 crc kubenswrapper[4698]: I1014 10:15:06.160413 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-5dd765df5b-xsd5h" podStartSLOduration=23.160393159 podStartE2EDuration="23.160393159s" podCreationTimestamp="2025-10-14 10:14:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 10:15:06.13535988 +0000 UTC m=+1087.832659306" watchObservedRunningTime="2025-10-14 10:15:06.160393159 +0000 UTC m=+1087.857692575" Oct 14 10:15:06 crc kubenswrapper[4698]: I1014 10:15:06.161174 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" 
event={"ID":"5ac7519d-b7d5-428c-9b04-b507987f26b0","Type":"ContainerStarted","Data":"522d707122dbb2fd6b11a022f50d6516464ce5ef5721a0e5154d38e70c41fe5c"} Oct 14 10:15:06 crc kubenswrapper[4698]: I1014 10:15:06.161210 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"5ac7519d-b7d5-428c-9b04-b507987f26b0","Type":"ContainerStarted","Data":"197a37b83b760b3c1bc4bd3b84cb3209d4404582091523121089ecab6d4a1d16"} Oct 14 10:15:06 crc kubenswrapper[4698]: I1014 10:15:06.173997 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-75d9cb9c4-g8g58" podStartSLOduration=14.17397799 podStartE2EDuration="14.17397799s" podCreationTimestamp="2025-10-14 10:14:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 10:15:06.169845891 +0000 UTC m=+1087.867145317" watchObservedRunningTime="2025-10-14 10:15:06.17397799 +0000 UTC m=+1087.871277406" Oct 14 10:15:06 crc kubenswrapper[4698]: I1014 10:15:06.220868 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x2gnz\" (UniqueName: \"kubernetes.io/projected/a64a5c90-1d3c-47da-9d3c-1ca749c00bad-kube-api-access-x2gnz\") pod \"neutron-df4467494-hnvp2\" (UID: \"a64a5c90-1d3c-47da-9d3c-1ca749c00bad\") " pod="openstack/neutron-df4467494-hnvp2" Oct 14 10:15:06 crc kubenswrapper[4698]: I1014 10:15:06.223111 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/a64a5c90-1d3c-47da-9d3c-1ca749c00bad-ovndb-tls-certs\") pod \"neutron-df4467494-hnvp2\" (UID: \"a64a5c90-1d3c-47da-9d3c-1ca749c00bad\") " pod="openstack/neutron-df4467494-hnvp2" Oct 14 10:15:06 crc kubenswrapper[4698]: I1014 10:15:06.223244 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"config\" (UniqueName: \"kubernetes.io/secret/a64a5c90-1d3c-47da-9d3c-1ca749c00bad-config\") pod \"neutron-df4467494-hnvp2\" (UID: \"a64a5c90-1d3c-47da-9d3c-1ca749c00bad\") " pod="openstack/neutron-df4467494-hnvp2" Oct 14 10:15:06 crc kubenswrapper[4698]: I1014 10:15:06.223498 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a64a5c90-1d3c-47da-9d3c-1ca749c00bad-combined-ca-bundle\") pod \"neutron-df4467494-hnvp2\" (UID: \"a64a5c90-1d3c-47da-9d3c-1ca749c00bad\") " pod="openstack/neutron-df4467494-hnvp2" Oct 14 10:15:06 crc kubenswrapper[4698]: I1014 10:15:06.223609 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/a64a5c90-1d3c-47da-9d3c-1ca749c00bad-httpd-config\") pod \"neutron-df4467494-hnvp2\" (UID: \"a64a5c90-1d3c-47da-9d3c-1ca749c00bad\") " pod="openstack/neutron-df4467494-hnvp2" Oct 14 10:15:06 crc kubenswrapper[4698]: I1014 10:15:06.273580 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/manila-db-sync-qfgt5" podStartSLOduration=4.472269799 podStartE2EDuration="30.273544052s" podCreationTimestamp="2025-10-14 10:14:36 +0000 UTC" firstStartedPulling="2025-10-14 10:14:38.681347681 +0000 UTC m=+1060.378647097" lastFinishedPulling="2025-10-14 10:15:04.482621934 +0000 UTC m=+1086.179921350" observedRunningTime="2025-10-14 10:15:06.208840892 +0000 UTC m=+1087.906140308" watchObservedRunningTime="2025-10-14 10:15:06.273544052 +0000 UTC m=+1087.970843468" Oct 14 10:15:06 crc kubenswrapper[4698]: I1014 10:15:06.286692 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-84b966f6c9-wwxgc" Oct 14 10:15:06 crc kubenswrapper[4698]: I1014 10:15:06.306757 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=24.306738016 podStartE2EDuration="24.306738016s" podCreationTimestamp="2025-10-14 10:14:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 10:15:06.246163275 +0000 UTC m=+1087.943462691" watchObservedRunningTime="2025-10-14 10:15:06.306738016 +0000 UTC m=+1088.004037442" Oct 14 10:15:06 crc kubenswrapper[4698]: I1014 10:15:06.326041 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a64a5c90-1d3c-47da-9d3c-1ca749c00bad-combined-ca-bundle\") pod \"neutron-df4467494-hnvp2\" (UID: \"a64a5c90-1d3c-47da-9d3c-1ca749c00bad\") " pod="openstack/neutron-df4467494-hnvp2" Oct 14 10:15:06 crc kubenswrapper[4698]: I1014 10:15:06.326113 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/a64a5c90-1d3c-47da-9d3c-1ca749c00bad-httpd-config\") pod \"neutron-df4467494-hnvp2\" (UID: \"a64a5c90-1d3c-47da-9d3c-1ca749c00bad\") " pod="openstack/neutron-df4467494-hnvp2" Oct 14 10:15:06 crc kubenswrapper[4698]: I1014 10:15:06.326169 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x2gnz\" (UniqueName: \"kubernetes.io/projected/a64a5c90-1d3c-47da-9d3c-1ca749c00bad-kube-api-access-x2gnz\") pod \"neutron-df4467494-hnvp2\" (UID: \"a64a5c90-1d3c-47da-9d3c-1ca749c00bad\") " pod="openstack/neutron-df4467494-hnvp2" Oct 14 10:15:06 crc kubenswrapper[4698]: I1014 10:15:06.326191 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/a64a5c90-1d3c-47da-9d3c-1ca749c00bad-ovndb-tls-certs\") pod \"neutron-df4467494-hnvp2\" (UID: \"a64a5c90-1d3c-47da-9d3c-1ca749c00bad\") " pod="openstack/neutron-df4467494-hnvp2" Oct 14 10:15:06 crc kubenswrapper[4698]: I1014 10:15:06.326228 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/a64a5c90-1d3c-47da-9d3c-1ca749c00bad-config\") pod \"neutron-df4467494-hnvp2\" (UID: \"a64a5c90-1d3c-47da-9d3c-1ca749c00bad\") " pod="openstack/neutron-df4467494-hnvp2" Oct 14 10:15:06 crc kubenswrapper[4698]: I1014 10:15:06.349329 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a64a5c90-1d3c-47da-9d3c-1ca749c00bad-combined-ca-bundle\") pod \"neutron-df4467494-hnvp2\" (UID: \"a64a5c90-1d3c-47da-9d3c-1ca749c00bad\") " pod="openstack/neutron-df4467494-hnvp2" Oct 14 10:15:06 crc kubenswrapper[4698]: I1014 10:15:06.353688 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/a64a5c90-1d3c-47da-9d3c-1ca749c00bad-config\") pod \"neutron-df4467494-hnvp2\" (UID: \"a64a5c90-1d3c-47da-9d3c-1ca749c00bad\") " pod="openstack/neutron-df4467494-hnvp2" Oct 14 10:15:06 crc kubenswrapper[4698]: I1014 10:15:06.354457 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/a64a5c90-1d3c-47da-9d3c-1ca749c00bad-httpd-config\") pod \"neutron-df4467494-hnvp2\" (UID: \"a64a5c90-1d3c-47da-9d3c-1ca749c00bad\") " pod="openstack/neutron-df4467494-hnvp2" Oct 14 10:15:06 crc kubenswrapper[4698]: I1014 10:15:06.355455 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/a64a5c90-1d3c-47da-9d3c-1ca749c00bad-ovndb-tls-certs\") pod \"neutron-df4467494-hnvp2\" (UID: \"a64a5c90-1d3c-47da-9d3c-1ca749c00bad\") " 
pod="openstack/neutron-df4467494-hnvp2" Oct 14 10:15:06 crc kubenswrapper[4698]: I1014 10:15:06.358554 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=24.358540735 podStartE2EDuration="24.358540735s" podCreationTimestamp="2025-10-14 10:14:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 10:15:06.285269909 +0000 UTC m=+1087.982569345" watchObservedRunningTime="2025-10-14 10:15:06.358540735 +0000 UTC m=+1088.055840151" Oct 14 10:15:06 crc kubenswrapper[4698]: I1014 10:15:06.383374 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x2gnz\" (UniqueName: \"kubernetes.io/projected/a64a5c90-1d3c-47da-9d3c-1ca749c00bad-kube-api-access-x2gnz\") pod \"neutron-df4467494-hnvp2\" (UID: \"a64a5c90-1d3c-47da-9d3c-1ca749c00bad\") " pod="openstack/neutron-df4467494-hnvp2" Oct 14 10:15:06 crc kubenswrapper[4698]: I1014 10:15:06.417910 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-df4467494-hnvp2" Oct 14 10:15:06 crc kubenswrapper[4698]: W1014 10:15:06.932603 4698 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda9c6fefd_814f_4f20_8a30_b76d3b6a43ba.slice/crio-2468705987feb483bae67e5bb36fb52afe03b8dcc426aff31a791892f71fa81e WatchSource:0}: Error finding container 2468705987feb483bae67e5bb36fb52afe03b8dcc426aff31a791892f71fa81e: Status 404 returned error can't find the container with id 2468705987feb483bae67e5bb36fb52afe03b8dcc426aff31a791892f71fa81e Oct 14 10:15:06 crc kubenswrapper[4698]: I1014 10:15:06.942111 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-84b966f6c9-wwxgc"] Oct 14 10:15:07 crc kubenswrapper[4698]: I1014 10:15:07.185256 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-84b966f6c9-wwxgc" event={"ID":"a9c6fefd-814f-4f20-8a30-b76d3b6a43ba","Type":"ContainerStarted","Data":"2468705987feb483bae67e5bb36fb52afe03b8dcc426aff31a791892f71fa81e"} Oct 14 10:15:07 crc kubenswrapper[4698]: I1014 10:15:07.305310 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-df4467494-hnvp2"] Oct 14 10:15:07 crc kubenswrapper[4698]: W1014 10:15:07.326729 4698 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda64a5c90_1d3c_47da_9d3c_1ca749c00bad.slice/crio-0a7b5a4abf79bb93718884b24053d9f3fae69e57b70eb7e96042bc93a19c4c77 WatchSource:0}: Error finding container 0a7b5a4abf79bb93718884b24053d9f3fae69e57b70eb7e96042bc93a19c4c77: Status 404 returned error can't find the container with id 0a7b5a4abf79bb93718884b24053d9f3fae69e57b70eb7e96042bc93a19c4c77 Oct 14 10:15:07 crc kubenswrapper[4698]: I1014 10:15:07.472508 4698 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29340615-82wst" Oct 14 10:15:07 crc kubenswrapper[4698]: I1014 10:15:07.566077 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/dacf27c8-3dc7-4f98-ac16-80138c8dbbac-config-volume\") pod \"dacf27c8-3dc7-4f98-ac16-80138c8dbbac\" (UID: \"dacf27c8-3dc7-4f98-ac16-80138c8dbbac\") " Oct 14 10:15:07 crc kubenswrapper[4698]: I1014 10:15:07.566898 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/dacf27c8-3dc7-4f98-ac16-80138c8dbbac-secret-volume\") pod \"dacf27c8-3dc7-4f98-ac16-80138c8dbbac\" (UID: \"dacf27c8-3dc7-4f98-ac16-80138c8dbbac\") " Oct 14 10:15:07 crc kubenswrapper[4698]: I1014 10:15:07.567136 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5gpkn\" (UniqueName: \"kubernetes.io/projected/dacf27c8-3dc7-4f98-ac16-80138c8dbbac-kube-api-access-5gpkn\") pod \"dacf27c8-3dc7-4f98-ac16-80138c8dbbac\" (UID: \"dacf27c8-3dc7-4f98-ac16-80138c8dbbac\") " Oct 14 10:15:07 crc kubenswrapper[4698]: I1014 10:15:07.566924 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dacf27c8-3dc7-4f98-ac16-80138c8dbbac-config-volume" (OuterVolumeSpecName: "config-volume") pod "dacf27c8-3dc7-4f98-ac16-80138c8dbbac" (UID: "dacf27c8-3dc7-4f98-ac16-80138c8dbbac"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 10:15:07 crc kubenswrapper[4698]: I1014 10:15:07.568066 4698 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/dacf27c8-3dc7-4f98-ac16-80138c8dbbac-config-volume\") on node \"crc\" DevicePath \"\"" Oct 14 10:15:07 crc kubenswrapper[4698]: I1014 10:15:07.571573 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dacf27c8-3dc7-4f98-ac16-80138c8dbbac-kube-api-access-5gpkn" (OuterVolumeSpecName: "kube-api-access-5gpkn") pod "dacf27c8-3dc7-4f98-ac16-80138c8dbbac" (UID: "dacf27c8-3dc7-4f98-ac16-80138c8dbbac"). InnerVolumeSpecName "kube-api-access-5gpkn". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 10:15:07 crc kubenswrapper[4698]: I1014 10:15:07.571714 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dacf27c8-3dc7-4f98-ac16-80138c8dbbac-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "dacf27c8-3dc7-4f98-ac16-80138c8dbbac" (UID: "dacf27c8-3dc7-4f98-ac16-80138c8dbbac"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 10:15:07 crc kubenswrapper[4698]: I1014 10:15:07.670112 4698 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/dacf27c8-3dc7-4f98-ac16-80138c8dbbac-secret-volume\") on node \"crc\" DevicePath \"\"" Oct 14 10:15:07 crc kubenswrapper[4698]: I1014 10:15:07.670154 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5gpkn\" (UniqueName: \"kubernetes.io/projected/dacf27c8-3dc7-4f98-ac16-80138c8dbbac-kube-api-access-5gpkn\") on node \"crc\" DevicePath \"\"" Oct 14 10:15:08 crc kubenswrapper[4698]: I1014 10:15:08.200280 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-5cf664b6c9-t6wfc"] Oct 14 10:15:08 crc kubenswrapper[4698]: E1014 10:15:08.200942 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dacf27c8-3dc7-4f98-ac16-80138c8dbbac" containerName="collect-profiles" Oct 14 10:15:08 crc kubenswrapper[4698]: I1014 10:15:08.200956 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="dacf27c8-3dc7-4f98-ac16-80138c8dbbac" containerName="collect-profiles" Oct 14 10:15:08 crc kubenswrapper[4698]: I1014 10:15:08.201151 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="dacf27c8-3dc7-4f98-ac16-80138c8dbbac" containerName="collect-profiles" Oct 14 10:15:08 crc kubenswrapper[4698]: I1014 10:15:08.206625 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-df4467494-hnvp2" Oct 14 10:15:08 crc kubenswrapper[4698]: I1014 10:15:08.206675 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-df4467494-hnvp2" event={"ID":"a64a5c90-1d3c-47da-9d3c-1ca749c00bad","Type":"ContainerStarted","Data":"135ee1279718ac6f4b4c9e89f3787f41db1c20d9dd50fe00fff08e50d0bc18e0"} Oct 14 10:15:08 crc kubenswrapper[4698]: I1014 10:15:08.206702 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-df4467494-hnvp2" 
event={"ID":"a64a5c90-1d3c-47da-9d3c-1ca749c00bad","Type":"ContainerStarted","Data":"1cef027ac4a153809efa7b4630e617ad142f7242e6bd7a57ec9bf78b8aa9f9c9"} Oct 14 10:15:08 crc kubenswrapper[4698]: I1014 10:15:08.206718 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-df4467494-hnvp2" event={"ID":"a64a5c90-1d3c-47da-9d3c-1ca749c00bad","Type":"ContainerStarted","Data":"0a7b5a4abf79bb93718884b24053d9f3fae69e57b70eb7e96042bc93a19c4c77"} Oct 14 10:15:08 crc kubenswrapper[4698]: I1014 10:15:08.206756 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-5cf664b6c9-t6wfc" Oct 14 10:15:08 crc kubenswrapper[4698]: I1014 10:15:08.207001 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29340615-82wst" event={"ID":"dacf27c8-3dc7-4f98-ac16-80138c8dbbac","Type":"ContainerDied","Data":"ab42e647fef4ed445178c090552743bc5050565292c5dacad8c0e873096726bb"} Oct 14 10:15:08 crc kubenswrapper[4698]: I1014 10:15:08.207033 4698 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ab42e647fef4ed445178c090552743bc5050565292c5dacad8c0e873096726bb" Oct 14 10:15:08 crc kubenswrapper[4698]: I1014 10:15:08.207556 4698 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29340615-82wst" Oct 14 10:15:08 crc kubenswrapper[4698]: I1014 10:15:08.212861 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-public-svc" Oct 14 10:15:08 crc kubenswrapper[4698]: I1014 10:15:08.213081 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-internal-svc" Oct 14 10:15:08 crc kubenswrapper[4698]: I1014 10:15:08.222564 4698 generic.go:334] "Generic (PLEG): container finished" podID="a9c6fefd-814f-4f20-8a30-b76d3b6a43ba" containerID="a3e7f024917d736a019536c08327e3cfcb12b76ce64cd1734049107a960bd061" exitCode=0 Oct 14 10:15:08 crc kubenswrapper[4698]: I1014 10:15:08.222611 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-84b966f6c9-wwxgc" event={"ID":"a9c6fefd-814f-4f20-8a30-b76d3b6a43ba","Type":"ContainerDied","Data":"a3e7f024917d736a019536c08327e3cfcb12b76ce64cd1734049107a960bd061"} Oct 14 10:15:08 crc kubenswrapper[4698]: I1014 10:15:08.226611 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-5cf664b6c9-t6wfc"] Oct 14 10:15:08 crc kubenswrapper[4698]: I1014 10:15:08.233898 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-df4467494-hnvp2" podStartSLOduration=2.233885198 podStartE2EDuration="2.233885198s" podCreationTimestamp="2025-10-14 10:15:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 10:15:08.23362081 +0000 UTC m=+1089.930920226" watchObservedRunningTime="2025-10-14 10:15:08.233885198 +0000 UTC m=+1089.931184614" Oct 14 10:15:08 crc kubenswrapper[4698]: I1014 10:15:08.282806 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/0082817f-4bcf-434b-8fb7-1e8ae2acf058-public-tls-certs\") pod \"neutron-5cf664b6c9-t6wfc\" (UID: \"0082817f-4bcf-434b-8fb7-1e8ae2acf058\") " pod="openstack/neutron-5cf664b6c9-t6wfc" Oct 14 10:15:08 crc kubenswrapper[4698]: I1014 10:15:08.282925 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4tg4z\" (UniqueName: \"kubernetes.io/projected/0082817f-4bcf-434b-8fb7-1e8ae2acf058-kube-api-access-4tg4z\") pod \"neutron-5cf664b6c9-t6wfc\" (UID: \"0082817f-4bcf-434b-8fb7-1e8ae2acf058\") " pod="openstack/neutron-5cf664b6c9-t6wfc" Oct 14 10:15:08 crc kubenswrapper[4698]: I1014 10:15:08.282973 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/0082817f-4bcf-434b-8fb7-1e8ae2acf058-httpd-config\") pod \"neutron-5cf664b6c9-t6wfc\" (UID: \"0082817f-4bcf-434b-8fb7-1e8ae2acf058\") " pod="openstack/neutron-5cf664b6c9-t6wfc" Oct 14 10:15:08 crc kubenswrapper[4698]: I1014 10:15:08.283093 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/0082817f-4bcf-434b-8fb7-1e8ae2acf058-internal-tls-certs\") pod \"neutron-5cf664b6c9-t6wfc\" (UID: \"0082817f-4bcf-434b-8fb7-1e8ae2acf058\") " pod="openstack/neutron-5cf664b6c9-t6wfc" Oct 14 10:15:08 crc kubenswrapper[4698]: I1014 10:15:08.283367 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/0082817f-4bcf-434b-8fb7-1e8ae2acf058-ovndb-tls-certs\") pod \"neutron-5cf664b6c9-t6wfc\" (UID: \"0082817f-4bcf-434b-8fb7-1e8ae2acf058\") " pod="openstack/neutron-5cf664b6c9-t6wfc" Oct 14 10:15:08 crc kubenswrapper[4698]: I1014 10:15:08.283443 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0082817f-4bcf-434b-8fb7-1e8ae2acf058-combined-ca-bundle\") pod \"neutron-5cf664b6c9-t6wfc\" (UID: \"0082817f-4bcf-434b-8fb7-1e8ae2acf058\") " pod="openstack/neutron-5cf664b6c9-t6wfc" Oct 14 10:15:08 crc kubenswrapper[4698]: I1014 10:15:08.283470 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/0082817f-4bcf-434b-8fb7-1e8ae2acf058-config\") pod \"neutron-5cf664b6c9-t6wfc\" (UID: \"0082817f-4bcf-434b-8fb7-1e8ae2acf058\") " pod="openstack/neutron-5cf664b6c9-t6wfc" Oct 14 10:15:08 crc kubenswrapper[4698]: I1014 10:15:08.385226 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/0082817f-4bcf-434b-8fb7-1e8ae2acf058-ovndb-tls-certs\") pod \"neutron-5cf664b6c9-t6wfc\" (UID: \"0082817f-4bcf-434b-8fb7-1e8ae2acf058\") " pod="openstack/neutron-5cf664b6c9-t6wfc" Oct 14 10:15:08 crc kubenswrapper[4698]: I1014 10:15:08.385280 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0082817f-4bcf-434b-8fb7-1e8ae2acf058-combined-ca-bundle\") pod \"neutron-5cf664b6c9-t6wfc\" (UID: \"0082817f-4bcf-434b-8fb7-1e8ae2acf058\") " pod="openstack/neutron-5cf664b6c9-t6wfc" Oct 14 10:15:08 crc kubenswrapper[4698]: I1014 10:15:08.385301 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/0082817f-4bcf-434b-8fb7-1e8ae2acf058-config\") pod \"neutron-5cf664b6c9-t6wfc\" (UID: \"0082817f-4bcf-434b-8fb7-1e8ae2acf058\") " pod="openstack/neutron-5cf664b6c9-t6wfc" Oct 14 10:15:08 crc kubenswrapper[4698]: I1014 10:15:08.385360 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/0082817f-4bcf-434b-8fb7-1e8ae2acf058-public-tls-certs\") pod \"neutron-5cf664b6c9-t6wfc\" (UID: \"0082817f-4bcf-434b-8fb7-1e8ae2acf058\") " pod="openstack/neutron-5cf664b6c9-t6wfc" Oct 14 10:15:08 crc kubenswrapper[4698]: I1014 10:15:08.385384 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4tg4z\" (UniqueName: \"kubernetes.io/projected/0082817f-4bcf-434b-8fb7-1e8ae2acf058-kube-api-access-4tg4z\") pod \"neutron-5cf664b6c9-t6wfc\" (UID: \"0082817f-4bcf-434b-8fb7-1e8ae2acf058\") " pod="openstack/neutron-5cf664b6c9-t6wfc" Oct 14 10:15:08 crc kubenswrapper[4698]: I1014 10:15:08.385402 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/0082817f-4bcf-434b-8fb7-1e8ae2acf058-httpd-config\") pod \"neutron-5cf664b6c9-t6wfc\" (UID: \"0082817f-4bcf-434b-8fb7-1e8ae2acf058\") " pod="openstack/neutron-5cf664b6c9-t6wfc" Oct 14 10:15:08 crc kubenswrapper[4698]: I1014 10:15:08.385423 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/0082817f-4bcf-434b-8fb7-1e8ae2acf058-internal-tls-certs\") pod \"neutron-5cf664b6c9-t6wfc\" (UID: \"0082817f-4bcf-434b-8fb7-1e8ae2acf058\") " pod="openstack/neutron-5cf664b6c9-t6wfc" Oct 14 10:15:08 crc kubenswrapper[4698]: I1014 10:15:08.391425 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/0082817f-4bcf-434b-8fb7-1e8ae2acf058-httpd-config\") pod \"neutron-5cf664b6c9-t6wfc\" (UID: \"0082817f-4bcf-434b-8fb7-1e8ae2acf058\") " pod="openstack/neutron-5cf664b6c9-t6wfc" Oct 14 10:15:08 crc kubenswrapper[4698]: I1014 10:15:08.391445 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/0082817f-4bcf-434b-8fb7-1e8ae2acf058-ovndb-tls-certs\") pod 
\"neutron-5cf664b6c9-t6wfc\" (UID: \"0082817f-4bcf-434b-8fb7-1e8ae2acf058\") " pod="openstack/neutron-5cf664b6c9-t6wfc" Oct 14 10:15:08 crc kubenswrapper[4698]: I1014 10:15:08.391945 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/0082817f-4bcf-434b-8fb7-1e8ae2acf058-public-tls-certs\") pod \"neutron-5cf664b6c9-t6wfc\" (UID: \"0082817f-4bcf-434b-8fb7-1e8ae2acf058\") " pod="openstack/neutron-5cf664b6c9-t6wfc" Oct 14 10:15:08 crc kubenswrapper[4698]: I1014 10:15:08.392616 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0082817f-4bcf-434b-8fb7-1e8ae2acf058-combined-ca-bundle\") pod \"neutron-5cf664b6c9-t6wfc\" (UID: \"0082817f-4bcf-434b-8fb7-1e8ae2acf058\") " pod="openstack/neutron-5cf664b6c9-t6wfc" Oct 14 10:15:08 crc kubenswrapper[4698]: I1014 10:15:08.393968 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/0082817f-4bcf-434b-8fb7-1e8ae2acf058-config\") pod \"neutron-5cf664b6c9-t6wfc\" (UID: \"0082817f-4bcf-434b-8fb7-1e8ae2acf058\") " pod="openstack/neutron-5cf664b6c9-t6wfc" Oct 14 10:15:08 crc kubenswrapper[4698]: I1014 10:15:08.397078 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/0082817f-4bcf-434b-8fb7-1e8ae2acf058-internal-tls-certs\") pod \"neutron-5cf664b6c9-t6wfc\" (UID: \"0082817f-4bcf-434b-8fb7-1e8ae2acf058\") " pod="openstack/neutron-5cf664b6c9-t6wfc" Oct 14 10:15:08 crc kubenswrapper[4698]: I1014 10:15:08.412880 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4tg4z\" (UniqueName: \"kubernetes.io/projected/0082817f-4bcf-434b-8fb7-1e8ae2acf058-kube-api-access-4tg4z\") pod \"neutron-5cf664b6c9-t6wfc\" (UID: \"0082817f-4bcf-434b-8fb7-1e8ae2acf058\") " pod="openstack/neutron-5cf664b6c9-t6wfc" Oct 14 
10:15:08 crc kubenswrapper[4698]: I1014 10:15:08.529533 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-5cf664b6c9-t6wfc" Oct 14 10:15:09 crc kubenswrapper[4698]: I1014 10:15:09.240928 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-84b966f6c9-wwxgc" event={"ID":"a9c6fefd-814f-4f20-8a30-b76d3b6a43ba","Type":"ContainerStarted","Data":"50816bb6c791d0061cb027708547cabdce9ae96816de3a6d22de87d758cdf8fd"} Oct 14 10:15:09 crc kubenswrapper[4698]: I1014 10:15:09.241324 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-84b966f6c9-wwxgc" Oct 14 10:15:09 crc kubenswrapper[4698]: I1014 10:15:09.266226 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-84b966f6c9-wwxgc" podStartSLOduration=4.266202261 podStartE2EDuration="4.266202261s" podCreationTimestamp="2025-10-14 10:15:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 10:15:09.25851541 +0000 UTC m=+1090.955814836" watchObservedRunningTime="2025-10-14 10:15:09.266202261 +0000 UTC m=+1090.963501667" Oct 14 10:15:10 crc kubenswrapper[4698]: I1014 10:15:10.253960 4698 generic.go:334] "Generic (PLEG): container finished" podID="2d11f65a-1351-4490-842c-259c6611ed6f" containerID="d17e9d19220636464e26901b2afb9198d920c057b8c93058739a04d660e37984" exitCode=137 Oct 14 10:15:10 crc kubenswrapper[4698]: I1014 10:15:10.254275 4698 generic.go:334] "Generic (PLEG): container finished" podID="2d11f65a-1351-4490-842c-259c6611ed6f" containerID="72d5c35add8dacbd016dbaa9022b4a42eb5852c123f9e5005edf0df08b10bc79" exitCode=137 Oct 14 10:15:10 crc kubenswrapper[4698]: I1014 10:15:10.254049 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-58986b5dd5-xvhvn" 
event={"ID":"2d11f65a-1351-4490-842c-259c6611ed6f","Type":"ContainerDied","Data":"d17e9d19220636464e26901b2afb9198d920c057b8c93058739a04d660e37984"} Oct 14 10:15:10 crc kubenswrapper[4698]: I1014 10:15:10.254415 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-58986b5dd5-xvhvn" event={"ID":"2d11f65a-1351-4490-842c-259c6611ed6f","Type":"ContainerDied","Data":"72d5c35add8dacbd016dbaa9022b4a42eb5852c123f9e5005edf0df08b10bc79"} Oct 14 10:15:11 crc kubenswrapper[4698]: I1014 10:15:11.963821 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/horizon-7b567dfd5d-nvwrp" Oct 14 10:15:12 crc kubenswrapper[4698]: I1014 10:15:12.117735 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/horizon-6cf95ddffb-6h2bm" Oct 14 10:15:13 crc kubenswrapper[4698]: I1014 10:15:13.254561 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Oct 14 10:15:13 crc kubenswrapper[4698]: I1014 10:15:13.254719 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Oct 14 10:15:13 crc kubenswrapper[4698]: I1014 10:15:13.254835 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Oct 14 10:15:13 crc kubenswrapper[4698]: I1014 10:15:13.255096 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Oct 14 10:15:13 crc kubenswrapper[4698]: I1014 10:15:13.258616 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Oct 14 10:15:13 crc kubenswrapper[4698]: I1014 10:15:13.258679 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Oct 14 10:15:13 crc kubenswrapper[4698]: I1014 10:15:13.258691 4698 kubelet.go:2542] 
"SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Oct 14 10:15:13 crc kubenswrapper[4698]: I1014 10:15:13.258700 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Oct 14 10:15:13 crc kubenswrapper[4698]: I1014 10:15:13.304030 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Oct 14 10:15:13 crc kubenswrapper[4698]: I1014 10:15:13.306065 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Oct 14 10:15:13 crc kubenswrapper[4698]: I1014 10:15:13.326930 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Oct 14 10:15:13 crc kubenswrapper[4698]: I1014 10:15:13.328487 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Oct 14 10:15:13 crc kubenswrapper[4698]: I1014 10:15:13.665802 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/horizon-7b567dfd5d-nvwrp" Oct 14 10:15:13 crc kubenswrapper[4698]: I1014 10:15:13.798632 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/horizon-6cf95ddffb-6h2bm" Oct 14 10:15:13 crc kubenswrapper[4698]: I1014 10:15:13.854705 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-7b567dfd5d-nvwrp"] Oct 14 10:15:14 crc kubenswrapper[4698]: I1014 10:15:14.305815 4698 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-7b567dfd5d-nvwrp" podUID="ee140165-8d8d-426c-b33f-5803bb0a7ad1" containerName="horizon-log" containerID="cri-o://393471ee803b2f6bdb94dbb502c32fa759670f44814d5f995e9836fa400b1b05" gracePeriod=30 Oct 14 10:15:14 crc kubenswrapper[4698]: I1014 10:15:14.306232 4698 kuberuntime_container.go:808] "Killing container with a 
grace period" pod="openstack/horizon-7b567dfd5d-nvwrp" podUID="ee140165-8d8d-426c-b33f-5803bb0a7ad1" containerName="horizon" containerID="cri-o://e594988c974ff543f9c2d584ca251398c27ef372098425e5b40ffa91a25c8a5c" gracePeriod=30 Oct 14 10:15:15 crc kubenswrapper[4698]: I1014 10:15:15.700857 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-5dd765df5b-xsd5h" Oct 14 10:15:15 crc kubenswrapper[4698]: I1014 10:15:15.761335 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-5dd765df5b-xsd5h" Oct 14 10:15:16 crc kubenswrapper[4698]: I1014 10:15:16.177279 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Oct 14 10:15:16 crc kubenswrapper[4698]: I1014 10:15:16.288899 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-84b966f6c9-wwxgc" Oct 14 10:15:16 crc kubenswrapper[4698]: I1014 10:15:16.341155 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-8b5c85b87-f5kv7"] Oct 14 10:15:16 crc kubenswrapper[4698]: I1014 10:15:16.341474 4698 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-8b5c85b87-f5kv7" podUID="ca4258ec-6a3b-414c-9556-4ce7c99349bd" containerName="dnsmasq-dns" containerID="cri-o://01151292314f1cceb68d555254fd933620a67e54f6178273863d8cf097cd36fa" gracePeriod=10 Oct 14 10:15:16 crc kubenswrapper[4698]: I1014 10:15:16.765947 4698 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-58986b5dd5-xvhvn" Oct 14 10:15:16 crc kubenswrapper[4698]: I1014 10:15:16.880387 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2d11f65a-1351-4490-842c-259c6611ed6f-scripts\") pod \"2d11f65a-1351-4490-842c-259c6611ed6f\" (UID: \"2d11f65a-1351-4490-842c-259c6611ed6f\") " Oct 14 10:15:16 crc kubenswrapper[4698]: I1014 10:15:16.881203 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/2d11f65a-1351-4490-842c-259c6611ed6f-config-data\") pod \"2d11f65a-1351-4490-842c-259c6611ed6f\" (UID: \"2d11f65a-1351-4490-842c-259c6611ed6f\") " Oct 14 10:15:16 crc kubenswrapper[4698]: I1014 10:15:16.881402 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qhqzf\" (UniqueName: \"kubernetes.io/projected/2d11f65a-1351-4490-842c-259c6611ed6f-kube-api-access-qhqzf\") pod \"2d11f65a-1351-4490-842c-259c6611ed6f\" (UID: \"2d11f65a-1351-4490-842c-259c6611ed6f\") " Oct 14 10:15:16 crc kubenswrapper[4698]: I1014 10:15:16.881509 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2d11f65a-1351-4490-842c-259c6611ed6f-logs\") pod \"2d11f65a-1351-4490-842c-259c6611ed6f\" (UID: \"2d11f65a-1351-4490-842c-259c6611ed6f\") " Oct 14 10:15:16 crc kubenswrapper[4698]: I1014 10:15:16.881857 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/2d11f65a-1351-4490-842c-259c6611ed6f-horizon-secret-key\") pod \"2d11f65a-1351-4490-842c-259c6611ed6f\" (UID: \"2d11f65a-1351-4490-842c-259c6611ed6f\") " Oct 14 10:15:16 crc kubenswrapper[4698]: I1014 10:15:16.883645 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/2d11f65a-1351-4490-842c-259c6611ed6f-logs" (OuterVolumeSpecName: "logs") pod "2d11f65a-1351-4490-842c-259c6611ed6f" (UID: "2d11f65a-1351-4490-842c-259c6611ed6f"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 10:15:16 crc kubenswrapper[4698]: I1014 10:15:16.898714 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2d11f65a-1351-4490-842c-259c6611ed6f-kube-api-access-qhqzf" (OuterVolumeSpecName: "kube-api-access-qhqzf") pod "2d11f65a-1351-4490-842c-259c6611ed6f" (UID: "2d11f65a-1351-4490-842c-259c6611ed6f"). InnerVolumeSpecName "kube-api-access-qhqzf". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 10:15:16 crc kubenswrapper[4698]: I1014 10:15:16.909927 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2d11f65a-1351-4490-842c-259c6611ed6f-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "2d11f65a-1351-4490-842c-259c6611ed6f" (UID: "2d11f65a-1351-4490-842c-259c6611ed6f"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 10:15:16 crc kubenswrapper[4698]: I1014 10:15:16.942711 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2d11f65a-1351-4490-842c-259c6611ed6f-scripts" (OuterVolumeSpecName: "scripts") pod "2d11f65a-1351-4490-842c-259c6611ed6f" (UID: "2d11f65a-1351-4490-842c-259c6611ed6f"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 10:15:16 crc kubenswrapper[4698]: I1014 10:15:16.945601 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2d11f65a-1351-4490-842c-259c6611ed6f-config-data" (OuterVolumeSpecName: "config-data") pod "2d11f65a-1351-4490-842c-259c6611ed6f" (UID: "2d11f65a-1351-4490-842c-259c6611ed6f"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 10:15:16 crc kubenswrapper[4698]: I1014 10:15:16.984882 4698 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2d11f65a-1351-4490-842c-259c6611ed6f-scripts\") on node \"crc\" DevicePath \"\"" Oct 14 10:15:16 crc kubenswrapper[4698]: I1014 10:15:16.984913 4698 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/2d11f65a-1351-4490-842c-259c6611ed6f-config-data\") on node \"crc\" DevicePath \"\"" Oct 14 10:15:16 crc kubenswrapper[4698]: I1014 10:15:16.984926 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qhqzf\" (UniqueName: \"kubernetes.io/projected/2d11f65a-1351-4490-842c-259c6611ed6f-kube-api-access-qhqzf\") on node \"crc\" DevicePath \"\"" Oct 14 10:15:16 crc kubenswrapper[4698]: I1014 10:15:16.984960 4698 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2d11f65a-1351-4490-842c-259c6611ed6f-logs\") on node \"crc\" DevicePath \"\"" Oct 14 10:15:16 crc kubenswrapper[4698]: I1014 10:15:16.984972 4698 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/2d11f65a-1351-4490-842c-259c6611ed6f-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Oct 14 10:15:17 crc kubenswrapper[4698]: I1014 10:15:17.112423 4698 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-8b5c85b87-f5kv7" Oct 14 10:15:17 crc kubenswrapper[4698]: I1014 10:15:17.193343 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Oct 14 10:15:17 crc kubenswrapper[4698]: I1014 10:15:17.193454 4698 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Oct 14 10:15:17 crc kubenswrapper[4698]: I1014 10:15:17.194104 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7bmbk\" (UniqueName: \"kubernetes.io/projected/ca4258ec-6a3b-414c-9556-4ce7c99349bd-kube-api-access-7bmbk\") pod \"ca4258ec-6a3b-414c-9556-4ce7c99349bd\" (UID: \"ca4258ec-6a3b-414c-9556-4ce7c99349bd\") " Oct 14 10:15:17 crc kubenswrapper[4698]: I1014 10:15:17.194184 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ca4258ec-6a3b-414c-9556-4ce7c99349bd-ovsdbserver-nb\") pod \"ca4258ec-6a3b-414c-9556-4ce7c99349bd\" (UID: \"ca4258ec-6a3b-414c-9556-4ce7c99349bd\") " Oct 14 10:15:17 crc kubenswrapper[4698]: I1014 10:15:17.194255 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ca4258ec-6a3b-414c-9556-4ce7c99349bd-ovsdbserver-sb\") pod \"ca4258ec-6a3b-414c-9556-4ce7c99349bd\" (UID: \"ca4258ec-6a3b-414c-9556-4ce7c99349bd\") " Oct 14 10:15:17 crc kubenswrapper[4698]: I1014 10:15:17.194520 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ca4258ec-6a3b-414c-9556-4ce7c99349bd-dns-svc\") pod \"ca4258ec-6a3b-414c-9556-4ce7c99349bd\" (UID: \"ca4258ec-6a3b-414c-9556-4ce7c99349bd\") " Oct 14 10:15:17 crc kubenswrapper[4698]: I1014 10:15:17.194569 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/ca4258ec-6a3b-414c-9556-4ce7c99349bd-config\") pod \"ca4258ec-6a3b-414c-9556-4ce7c99349bd\" (UID: \"ca4258ec-6a3b-414c-9556-4ce7c99349bd\") " Oct 14 10:15:17 crc kubenswrapper[4698]: I1014 10:15:17.194592 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ca4258ec-6a3b-414c-9556-4ce7c99349bd-dns-swift-storage-0\") pod \"ca4258ec-6a3b-414c-9556-4ce7c99349bd\" (UID: \"ca4258ec-6a3b-414c-9556-4ce7c99349bd\") " Oct 14 10:15:17 crc kubenswrapper[4698]: I1014 10:15:17.240690 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ca4258ec-6a3b-414c-9556-4ce7c99349bd-kube-api-access-7bmbk" (OuterVolumeSpecName: "kube-api-access-7bmbk") pod "ca4258ec-6a3b-414c-9556-4ce7c99349bd" (UID: "ca4258ec-6a3b-414c-9556-4ce7c99349bd"). InnerVolumeSpecName "kube-api-access-7bmbk". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 10:15:17 crc kubenswrapper[4698]: I1014 10:15:17.281355 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ca4258ec-6a3b-414c-9556-4ce7c99349bd-config" (OuterVolumeSpecName: "config") pod "ca4258ec-6a3b-414c-9556-4ce7c99349bd" (UID: "ca4258ec-6a3b-414c-9556-4ce7c99349bd"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 10:15:17 crc kubenswrapper[4698]: I1014 10:15:17.297661 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7bmbk\" (UniqueName: \"kubernetes.io/projected/ca4258ec-6a3b-414c-9556-4ce7c99349bd-kube-api-access-7bmbk\") on node \"crc\" DevicePath \"\"" Oct 14 10:15:17 crc kubenswrapper[4698]: I1014 10:15:17.297762 4698 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ca4258ec-6a3b-414c-9556-4ce7c99349bd-config\") on node \"crc\" DevicePath \"\"" Oct 14 10:15:17 crc kubenswrapper[4698]: I1014 10:15:17.308099 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ca4258ec-6a3b-414c-9556-4ce7c99349bd-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "ca4258ec-6a3b-414c-9556-4ce7c99349bd" (UID: "ca4258ec-6a3b-414c-9556-4ce7c99349bd"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 10:15:17 crc kubenswrapper[4698]: I1014 10:15:17.310689 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ca4258ec-6a3b-414c-9556-4ce7c99349bd-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "ca4258ec-6a3b-414c-9556-4ce7c99349bd" (UID: "ca4258ec-6a3b-414c-9556-4ce7c99349bd"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 10:15:17 crc kubenswrapper[4698]: I1014 10:15:17.312061 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ca4258ec-6a3b-414c-9556-4ce7c99349bd-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "ca4258ec-6a3b-414c-9556-4ce7c99349bd" (UID: "ca4258ec-6a3b-414c-9556-4ce7c99349bd"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 10:15:17 crc kubenswrapper[4698]: E1014 10:15:17.322640 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"sg-core\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/ceilometer-0" podUID="9d9ba4f9-bf43-4076-ab3d-9ca34bcbed4e" Oct 14 10:15:17 crc kubenswrapper[4698]: I1014 10:15:17.343466 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-58986b5dd5-xvhvn" event={"ID":"2d11f65a-1351-4490-842c-259c6611ed6f","Type":"ContainerDied","Data":"4835b9008550f1a016dd3c5100a7df4337deffad22e763ed4a322110e155d6ff"} Oct 14 10:15:17 crc kubenswrapper[4698]: I1014 10:15:17.343512 4698 scope.go:117] "RemoveContainer" containerID="d17e9d19220636464e26901b2afb9198d920c057b8c93058739a04d660e37984" Oct 14 10:15:17 crc kubenswrapper[4698]: I1014 10:15:17.343634 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-58986b5dd5-xvhvn" Oct 14 10:15:17 crc kubenswrapper[4698]: I1014 10:15:17.345254 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ca4258ec-6a3b-414c-9556-4ce7c99349bd-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "ca4258ec-6a3b-414c-9556-4ce7c99349bd" (UID: "ca4258ec-6a3b-414c-9556-4ce7c99349bd"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 10:15:17 crc kubenswrapper[4698]: I1014 10:15:17.346980 4698 generic.go:334] "Generic (PLEG): container finished" podID="ca4258ec-6a3b-414c-9556-4ce7c99349bd" containerID="01151292314f1cceb68d555254fd933620a67e54f6178273863d8cf097cd36fa" exitCode=0 Oct 14 10:15:17 crc kubenswrapper[4698]: I1014 10:15:17.347025 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8b5c85b87-f5kv7" event={"ID":"ca4258ec-6a3b-414c-9556-4ce7c99349bd","Type":"ContainerDied","Data":"01151292314f1cceb68d555254fd933620a67e54f6178273863d8cf097cd36fa"} Oct 14 10:15:17 crc kubenswrapper[4698]: I1014 10:15:17.347041 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8b5c85b87-f5kv7" event={"ID":"ca4258ec-6a3b-414c-9556-4ce7c99349bd","Type":"ContainerDied","Data":"89c98fb9abe660157d5943b56afe4d5d61b541a11daaeb1a0c3327451bff110a"} Oct 14 10:15:17 crc kubenswrapper[4698]: I1014 10:15:17.347095 4698 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-8b5c85b87-f5kv7" Oct 14 10:15:17 crc kubenswrapper[4698]: I1014 10:15:17.358389 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9d9ba4f9-bf43-4076-ab3d-9ca34bcbed4e","Type":"ContainerStarted","Data":"7744bc3ece636640cf932f302f2388ac604aab827b60125c6bf6f774e96d49a8"} Oct 14 10:15:17 crc kubenswrapper[4698]: I1014 10:15:17.358607 4698 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="9d9ba4f9-bf43-4076-ab3d-9ca34bcbed4e" containerName="ceilometer-central-agent" containerID="cri-o://5d738c6fa016f4c382ae548edd0c96d7f3832727c5cba761ed5948bcbcebfdc2" gracePeriod=30 Oct 14 10:15:17 crc kubenswrapper[4698]: I1014 10:15:17.358966 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Oct 14 10:15:17 crc kubenswrapper[4698]: I1014 10:15:17.359033 4698 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="9d9ba4f9-bf43-4076-ab3d-9ca34bcbed4e" containerName="proxy-httpd" containerID="cri-o://7744bc3ece636640cf932f302f2388ac604aab827b60125c6bf6f774e96d49a8" gracePeriod=30 Oct 14 10:15:17 crc kubenswrapper[4698]: I1014 10:15:17.359084 4698 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="9d9ba4f9-bf43-4076-ab3d-9ca34bcbed4e" containerName="ceilometer-notification-agent" containerID="cri-o://1afee696e504730f0049af836091b4f76382bfbb64f7e1b9e1e6d76a7978bdc7" gracePeriod=30 Oct 14 10:15:17 crc kubenswrapper[4698]: I1014 10:15:17.374629 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Oct 14 10:15:17 crc kubenswrapper[4698]: I1014 10:15:17.394237 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-58986b5dd5-xvhvn"] Oct 14 10:15:17 crc kubenswrapper[4698]: I1014 10:15:17.399454 4698 
reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ca4258ec-6a3b-414c-9556-4ce7c99349bd-dns-svc\") on node \"crc\" DevicePath \"\"" Oct 14 10:15:17 crc kubenswrapper[4698]: I1014 10:15:17.399679 4698 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ca4258ec-6a3b-414c-9556-4ce7c99349bd-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Oct 14 10:15:17 crc kubenswrapper[4698]: I1014 10:15:17.399688 4698 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ca4258ec-6a3b-414c-9556-4ce7c99349bd-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Oct 14 10:15:17 crc kubenswrapper[4698]: I1014 10:15:17.399696 4698 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ca4258ec-6a3b-414c-9556-4ce7c99349bd-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Oct 14 10:15:17 crc kubenswrapper[4698]: I1014 10:15:17.418344 4698 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-58986b5dd5-xvhvn"] Oct 14 10:15:17 crc kubenswrapper[4698]: I1014 10:15:17.434572 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-8b5c85b87-f5kv7"] Oct 14 10:15:17 crc kubenswrapper[4698]: I1014 10:15:17.447345 4698 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-8b5c85b87-f5kv7"] Oct 14 10:15:17 crc kubenswrapper[4698]: I1014 10:15:17.533436 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-5cf664b6c9-t6wfc"] Oct 14 10:15:17 crc kubenswrapper[4698]: I1014 10:15:17.565665 4698 scope.go:117] "RemoveContainer" containerID="72d5c35add8dacbd016dbaa9022b4a42eb5852c123f9e5005edf0df08b10bc79" Oct 14 10:15:17 crc kubenswrapper[4698]: W1014 10:15:17.606646 4698 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0082817f_4bcf_434b_8fb7_1e8ae2acf058.slice/crio-2b478c716c32d65735d8a4e9faaa722666dadf3186353664a241ffc11da3d04e WatchSource:0}: Error finding container 2b478c716c32d65735d8a4e9faaa722666dadf3186353664a241ffc11da3d04e: Status 404 returned error can't find the container with id 2b478c716c32d65735d8a4e9faaa722666dadf3186353664a241ffc11da3d04e Oct 14 10:15:17 crc kubenswrapper[4698]: I1014 10:15:17.621380 4698 scope.go:117] "RemoveContainer" containerID="01151292314f1cceb68d555254fd933620a67e54f6178273863d8cf097cd36fa" Oct 14 10:15:17 crc kubenswrapper[4698]: I1014 10:15:17.632701 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Oct 14 10:15:17 crc kubenswrapper[4698]: I1014 10:15:17.800851 4698 scope.go:117] "RemoveContainer" containerID="a8e8be342f14796918ca627b4fefb824e1335aba0789f699207169607765aec5" Oct 14 10:15:17 crc kubenswrapper[4698]: I1014 10:15:17.844449 4698 scope.go:117] "RemoveContainer" containerID="01151292314f1cceb68d555254fd933620a67e54f6178273863d8cf097cd36fa" Oct 14 10:15:17 crc kubenswrapper[4698]: E1014 10:15:17.845311 4698 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"01151292314f1cceb68d555254fd933620a67e54f6178273863d8cf097cd36fa\": container with ID starting with 01151292314f1cceb68d555254fd933620a67e54f6178273863d8cf097cd36fa not found: ID does not exist" containerID="01151292314f1cceb68d555254fd933620a67e54f6178273863d8cf097cd36fa" Oct 14 10:15:17 crc kubenswrapper[4698]: I1014 10:15:17.845355 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"01151292314f1cceb68d555254fd933620a67e54f6178273863d8cf097cd36fa"} err="failed to get container status \"01151292314f1cceb68d555254fd933620a67e54f6178273863d8cf097cd36fa\": rpc error: code = NotFound desc = could not find container 
\"01151292314f1cceb68d555254fd933620a67e54f6178273863d8cf097cd36fa\": container with ID starting with 01151292314f1cceb68d555254fd933620a67e54f6178273863d8cf097cd36fa not found: ID does not exist" Oct 14 10:15:17 crc kubenswrapper[4698]: I1014 10:15:17.845411 4698 scope.go:117] "RemoveContainer" containerID="a8e8be342f14796918ca627b4fefb824e1335aba0789f699207169607765aec5" Oct 14 10:15:17 crc kubenswrapper[4698]: E1014 10:15:17.845755 4698 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a8e8be342f14796918ca627b4fefb824e1335aba0789f699207169607765aec5\": container with ID starting with a8e8be342f14796918ca627b4fefb824e1335aba0789f699207169607765aec5 not found: ID does not exist" containerID="a8e8be342f14796918ca627b4fefb824e1335aba0789f699207169607765aec5" Oct 14 10:15:17 crc kubenswrapper[4698]: I1014 10:15:17.845821 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a8e8be342f14796918ca627b4fefb824e1335aba0789f699207169607765aec5"} err="failed to get container status \"a8e8be342f14796918ca627b4fefb824e1335aba0789f699207169607765aec5\": rpc error: code = NotFound desc = could not find container \"a8e8be342f14796918ca627b4fefb824e1335aba0789f699207169607765aec5\": container with ID starting with a8e8be342f14796918ca627b4fefb824e1335aba0789f699207169607765aec5 not found: ID does not exist" Oct 14 10:15:18 crc kubenswrapper[4698]: I1014 10:15:18.370662 4698 generic.go:334] "Generic (PLEG): container finished" podID="9d9ba4f9-bf43-4076-ab3d-9ca34bcbed4e" containerID="5d738c6fa016f4c382ae548edd0c96d7f3832727c5cba761ed5948bcbcebfdc2" exitCode=0 Oct 14 10:15:18 crc kubenswrapper[4698]: I1014 10:15:18.370734 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9d9ba4f9-bf43-4076-ab3d-9ca34bcbed4e","Type":"ContainerDied","Data":"5d738c6fa016f4c382ae548edd0c96d7f3832727c5cba761ed5948bcbcebfdc2"} Oct 14 
10:15:18 crc kubenswrapper[4698]: I1014 10:15:18.375648 4698 generic.go:334] "Generic (PLEG): container finished" podID="ee140165-8d8d-426c-b33f-5803bb0a7ad1" containerID="e594988c974ff543f9c2d584ca251398c27ef372098425e5b40ffa91a25c8a5c" exitCode=0 Oct 14 10:15:18 crc kubenswrapper[4698]: I1014 10:15:18.375727 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7b567dfd5d-nvwrp" event={"ID":"ee140165-8d8d-426c-b33f-5803bb0a7ad1","Type":"ContainerDied","Data":"e594988c974ff543f9c2d584ca251398c27ef372098425e5b40ffa91a25c8a5c"} Oct 14 10:15:18 crc kubenswrapper[4698]: I1014 10:15:18.377569 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5cf664b6c9-t6wfc" event={"ID":"0082817f-4bcf-434b-8fb7-1e8ae2acf058","Type":"ContainerStarted","Data":"fb18a33baed5d0cf957dc99d47ee52d1c0ad4fea2d10345a3b6725ee5aeaf31d"} Oct 14 10:15:18 crc kubenswrapper[4698]: I1014 10:15:18.377605 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5cf664b6c9-t6wfc" event={"ID":"0082817f-4bcf-434b-8fb7-1e8ae2acf058","Type":"ContainerStarted","Data":"7825097aaba1b11ea9450578055bf1fc7b32db54aa84180499650be94466be32"} Oct 14 10:15:18 crc kubenswrapper[4698]: I1014 10:15:18.377615 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5cf664b6c9-t6wfc" event={"ID":"0082817f-4bcf-434b-8fb7-1e8ae2acf058","Type":"ContainerStarted","Data":"2b478c716c32d65735d8a4e9faaa722666dadf3186353664a241ffc11da3d04e"} Oct 14 10:15:18 crc kubenswrapper[4698]: I1014 10:15:18.377747 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-5cf664b6c9-t6wfc" Oct 14 10:15:18 crc kubenswrapper[4698]: I1014 10:15:18.401575 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-5cf664b6c9-t6wfc" podStartSLOduration=10.40154979 podStartE2EDuration="10.40154979s" podCreationTimestamp="2025-10-14 10:15:08 +0000 UTC" firstStartedPulling="0001-01-01 
00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 10:15:18.396275688 +0000 UTC m=+1100.093575134" watchObservedRunningTime="2025-10-14 10:15:18.40154979 +0000 UTC m=+1100.098849226" Oct 14 10:15:19 crc kubenswrapper[4698]: I1014 10:15:19.033860 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2d11f65a-1351-4490-842c-259c6611ed6f" path="/var/lib/kubelet/pods/2d11f65a-1351-4490-842c-259c6611ed6f/volumes" Oct 14 10:15:19 crc kubenswrapper[4698]: I1014 10:15:19.035041 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ca4258ec-6a3b-414c-9556-4ce7c99349bd" path="/var/lib/kubelet/pods/ca4258ec-6a3b-414c-9556-4ce7c99349bd/volumes" Oct 14 10:15:19 crc kubenswrapper[4698]: I1014 10:15:19.387378 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-nbmlr" event={"ID":"94ea6a4b-4e73-4a81-bb25-8ae62bf7daa9","Type":"ContainerStarted","Data":"516748dc080d0d3ca18d60b1377deb6295eabfe3258bff566e3488259b4398e8"} Oct 14 10:15:19 crc kubenswrapper[4698]: I1014 10:15:19.389002 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-thrh8" event={"ID":"d90a3be7-6827-427d-9ed1-3aef79542b6d","Type":"ContainerStarted","Data":"0b854e12c3d3df6ea816a435c6f8734727bf7005c4f78c7015d0ffd8ea25cf1a"} Oct 14 10:15:19 crc kubenswrapper[4698]: I1014 10:15:19.408884 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-sync-nbmlr" podStartSLOduration=3.396328642 podStartE2EDuration="43.408871024s" podCreationTimestamp="2025-10-14 10:14:36 +0000 UTC" firstStartedPulling="2025-10-14 10:14:38.654729466 +0000 UTC m=+1060.352028882" lastFinishedPulling="2025-10-14 10:15:18.667271828 +0000 UTC m=+1100.364571264" observedRunningTime="2025-10-14 10:15:19.405974311 +0000 UTC m=+1101.103273727" watchObservedRunningTime="2025-10-14 10:15:19.408871024 +0000 UTC m=+1101.106170440" Oct 14 10:15:19 crc 
kubenswrapper[4698]: I1014 10:15:19.429554 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-db-sync-thrh8" podStartSLOduration=8.48185885 podStartE2EDuration="48.429542178s" podCreationTimestamp="2025-10-14 10:14:31 +0000 UTC" firstStartedPulling="2025-10-14 10:14:38.628015608 +0000 UTC m=+1060.325315024" lastFinishedPulling="2025-10-14 10:15:18.575698926 +0000 UTC m=+1100.272998352" observedRunningTime="2025-10-14 10:15:19.423881515 +0000 UTC m=+1101.121180931" watchObservedRunningTime="2025-10-14 10:15:19.429542178 +0000 UTC m=+1101.126841594" Oct 14 10:15:20 crc kubenswrapper[4698]: I1014 10:15:20.058740 4698 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-7b567dfd5d-nvwrp" podUID="ee140165-8d8d-426c-b33f-5803bb0a7ad1" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.149:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.149:8443: connect: connection refused" Oct 14 10:15:20 crc kubenswrapper[4698]: I1014 10:15:20.402988 4698 generic.go:334] "Generic (PLEG): container finished" podID="b034a777-04ce-4fe1-baf0-7dd68c64b31f" containerID="6fd413d50ddc394ba6745ff4d2dadf33650b7992176640e1fd078ba7836add91" exitCode=0 Oct 14 10:15:20 crc kubenswrapper[4698]: I1014 10:15:20.403100 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-db-sync-qfgt5" event={"ID":"b034a777-04ce-4fe1-baf0-7dd68c64b31f","Type":"ContainerDied","Data":"6fd413d50ddc394ba6745ff4d2dadf33650b7992176640e1fd078ba7836add91"} Oct 14 10:15:21 crc kubenswrapper[4698]: I1014 10:15:21.417277 4698 generic.go:334] "Generic (PLEG): container finished" podID="94ea6a4b-4e73-4a81-bb25-8ae62bf7daa9" containerID="516748dc080d0d3ca18d60b1377deb6295eabfe3258bff566e3488259b4398e8" exitCode=0 Oct 14 10:15:21 crc kubenswrapper[4698]: I1014 10:15:21.417593 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-nbmlr" 
event={"ID":"94ea6a4b-4e73-4a81-bb25-8ae62bf7daa9","Type":"ContainerDied","Data":"516748dc080d0d3ca18d60b1377deb6295eabfe3258bff566e3488259b4398e8"} Oct 14 10:15:21 crc kubenswrapper[4698]: I1014 10:15:21.869842 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/manila-db-sync-qfgt5" Oct 14 10:15:21 crc kubenswrapper[4698]: I1014 10:15:21.892456 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b034a777-04ce-4fe1-baf0-7dd68c64b31f-config-data\") pod \"b034a777-04ce-4fe1-baf0-7dd68c64b31f\" (UID: \"b034a777-04ce-4fe1-baf0-7dd68c64b31f\") " Oct 14 10:15:21 crc kubenswrapper[4698]: I1014 10:15:21.892566 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"job-config-data\" (UniqueName: \"kubernetes.io/secret/b034a777-04ce-4fe1-baf0-7dd68c64b31f-job-config-data\") pod \"b034a777-04ce-4fe1-baf0-7dd68c64b31f\" (UID: \"b034a777-04ce-4fe1-baf0-7dd68c64b31f\") " Oct 14 10:15:21 crc kubenswrapper[4698]: I1014 10:15:21.892597 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bqbtr\" (UniqueName: \"kubernetes.io/projected/b034a777-04ce-4fe1-baf0-7dd68c64b31f-kube-api-access-bqbtr\") pod \"b034a777-04ce-4fe1-baf0-7dd68c64b31f\" (UID: \"b034a777-04ce-4fe1-baf0-7dd68c64b31f\") " Oct 14 10:15:21 crc kubenswrapper[4698]: I1014 10:15:21.892642 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b034a777-04ce-4fe1-baf0-7dd68c64b31f-combined-ca-bundle\") pod \"b034a777-04ce-4fe1-baf0-7dd68c64b31f\" (UID: \"b034a777-04ce-4fe1-baf0-7dd68c64b31f\") " Oct 14 10:15:21 crc kubenswrapper[4698]: I1014 10:15:21.920028 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b034a777-04ce-4fe1-baf0-7dd68c64b31f-job-config-data" (OuterVolumeSpecName: 
"job-config-data") pod "b034a777-04ce-4fe1-baf0-7dd68c64b31f" (UID: "b034a777-04ce-4fe1-baf0-7dd68c64b31f"). InnerVolumeSpecName "job-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 10:15:21 crc kubenswrapper[4698]: I1014 10:15:21.921727 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b034a777-04ce-4fe1-baf0-7dd68c64b31f-kube-api-access-bqbtr" (OuterVolumeSpecName: "kube-api-access-bqbtr") pod "b034a777-04ce-4fe1-baf0-7dd68c64b31f" (UID: "b034a777-04ce-4fe1-baf0-7dd68c64b31f"). InnerVolumeSpecName "kube-api-access-bqbtr". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 10:15:21 crc kubenswrapper[4698]: I1014 10:15:21.924781 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b034a777-04ce-4fe1-baf0-7dd68c64b31f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b034a777-04ce-4fe1-baf0-7dd68c64b31f" (UID: "b034a777-04ce-4fe1-baf0-7dd68c64b31f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 10:15:21 crc kubenswrapper[4698]: I1014 10:15:21.925312 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b034a777-04ce-4fe1-baf0-7dd68c64b31f-config-data" (OuterVolumeSpecName: "config-data") pod "b034a777-04ce-4fe1-baf0-7dd68c64b31f" (UID: "b034a777-04ce-4fe1-baf0-7dd68c64b31f"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 10:15:21 crc kubenswrapper[4698]: I1014 10:15:21.994323 4698 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b034a777-04ce-4fe1-baf0-7dd68c64b31f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 14 10:15:21 crc kubenswrapper[4698]: I1014 10:15:21.994358 4698 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b034a777-04ce-4fe1-baf0-7dd68c64b31f-config-data\") on node \"crc\" DevicePath \"\"" Oct 14 10:15:21 crc kubenswrapper[4698]: I1014 10:15:21.994369 4698 reconciler_common.go:293] "Volume detached for volume \"job-config-data\" (UniqueName: \"kubernetes.io/secret/b034a777-04ce-4fe1-baf0-7dd68c64b31f-job-config-data\") on node \"crc\" DevicePath \"\"" Oct 14 10:15:21 crc kubenswrapper[4698]: I1014 10:15:21.994385 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bqbtr\" (UniqueName: \"kubernetes.io/projected/b034a777-04ce-4fe1-baf0-7dd68c64b31f-kube-api-access-bqbtr\") on node \"crc\" DevicePath \"\"" Oct 14 10:15:22 crc kubenswrapper[4698]: I1014 10:15:22.434712 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-db-sync-qfgt5" event={"ID":"b034a777-04ce-4fe1-baf0-7dd68c64b31f","Type":"ContainerDied","Data":"b49f9ffc38306b334d4626286a925805e245df79ea946cab8f17f14be3848562"} Oct 14 10:15:22 crc kubenswrapper[4698]: I1014 10:15:22.434789 4698 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b49f9ffc38306b334d4626286a925805e245df79ea946cab8f17f14be3848562" Oct 14 10:15:22 crc kubenswrapper[4698]: I1014 10:15:22.434798 4698 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-db-sync-qfgt5" Oct 14 10:15:22 crc kubenswrapper[4698]: I1014 10:15:22.782180 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/manila-scheduler-0"] Oct 14 10:15:22 crc kubenswrapper[4698]: E1014 10:15:22.783373 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ca4258ec-6a3b-414c-9556-4ce7c99349bd" containerName="init" Oct 14 10:15:22 crc kubenswrapper[4698]: I1014 10:15:22.786453 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="ca4258ec-6a3b-414c-9556-4ce7c99349bd" containerName="init" Oct 14 10:15:22 crc kubenswrapper[4698]: E1014 10:15:22.786566 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ca4258ec-6a3b-414c-9556-4ce7c99349bd" containerName="dnsmasq-dns" Oct 14 10:15:22 crc kubenswrapper[4698]: I1014 10:15:22.786654 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="ca4258ec-6a3b-414c-9556-4ce7c99349bd" containerName="dnsmasq-dns" Oct 14 10:15:22 crc kubenswrapper[4698]: E1014 10:15:22.786815 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2d11f65a-1351-4490-842c-259c6611ed6f" containerName="horizon" Oct 14 10:15:22 crc kubenswrapper[4698]: I1014 10:15:22.786894 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="2d11f65a-1351-4490-842c-259c6611ed6f" containerName="horizon" Oct 14 10:15:22 crc kubenswrapper[4698]: E1014 10:15:22.787038 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b034a777-04ce-4fe1-baf0-7dd68c64b31f" containerName="manila-db-sync" Oct 14 10:15:22 crc kubenswrapper[4698]: I1014 10:15:22.787136 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="b034a777-04ce-4fe1-baf0-7dd68c64b31f" containerName="manila-db-sync" Oct 14 10:15:22 crc kubenswrapper[4698]: E1014 10:15:22.787238 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2d11f65a-1351-4490-842c-259c6611ed6f" containerName="horizon-log" Oct 14 10:15:22 crc kubenswrapper[4698]: 
I1014 10:15:22.787316 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="2d11f65a-1351-4490-842c-259c6611ed6f" containerName="horizon-log" Oct 14 10:15:22 crc kubenswrapper[4698]: I1014 10:15:22.787762 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="ca4258ec-6a3b-414c-9556-4ce7c99349bd" containerName="dnsmasq-dns" Oct 14 10:15:22 crc kubenswrapper[4698]: I1014 10:15:22.788013 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="b034a777-04ce-4fe1-baf0-7dd68c64b31f" containerName="manila-db-sync" Oct 14 10:15:22 crc kubenswrapper[4698]: I1014 10:15:22.789644 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="2d11f65a-1351-4490-842c-259c6611ed6f" containerName="horizon-log" Oct 14 10:15:22 crc kubenswrapper[4698]: I1014 10:15:22.789747 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="2d11f65a-1351-4490-842c-259c6611ed6f" containerName="horizon" Oct 14 10:15:22 crc kubenswrapper[4698]: I1014 10:15:22.793415 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-scheduler-0" Oct 14 10:15:22 crc kubenswrapper[4698]: I1014 10:15:22.807664 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-scripts" Oct 14 10:15:22 crc kubenswrapper[4698]: I1014 10:15:22.808004 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-manila-dockercfg-6b2gn" Oct 14 10:15:22 crc kubenswrapper[4698]: I1014 10:15:22.808194 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-scheduler-config-data" Oct 14 10:15:22 crc kubenswrapper[4698]: I1014 10:15:22.808329 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-config-data" Oct 14 10:15:22 crc kubenswrapper[4698]: I1014 10:15:22.814960 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-scheduler-0"] Oct 14 10:15:22 crc kubenswrapper[4698]: I1014 10:15:22.875993 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-nbmlr" Oct 14 10:15:22 crc kubenswrapper[4698]: I1014 10:15:22.882236 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/manila-share-share1-0"] Oct 14 10:15:22 crc kubenswrapper[4698]: E1014 10:15:22.882722 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="94ea6a4b-4e73-4a81-bb25-8ae62bf7daa9" containerName="barbican-db-sync" Oct 14 10:15:22 crc kubenswrapper[4698]: I1014 10:15:22.882739 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="94ea6a4b-4e73-4a81-bb25-8ae62bf7daa9" containerName="barbican-db-sync" Oct 14 10:15:22 crc kubenswrapper[4698]: I1014 10:15:22.882953 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="94ea6a4b-4e73-4a81-bb25-8ae62bf7daa9" containerName="barbican-db-sync" Oct 14 10:15:22 crc kubenswrapper[4698]: I1014 10:15:22.883934 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-share-share1-0" Oct 14 10:15:22 crc kubenswrapper[4698]: I1014 10:15:22.886890 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-share-share1-config-data" Oct 14 10:15:22 crc kubenswrapper[4698]: I1014 10:15:22.914036 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e28cf5cd-644d-4f5f-8db7-421fbe745ac2-config-data\") pod \"manila-scheduler-0\" (UID: \"e28cf5cd-644d-4f5f-8db7-421fbe745ac2\") " pod="openstack/manila-scheduler-0" Oct 14 10:15:22 crc kubenswrapper[4698]: I1014 10:15:22.914072 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e28cf5cd-644d-4f5f-8db7-421fbe745ac2-combined-ca-bundle\") pod \"manila-scheduler-0\" (UID: \"e28cf5cd-644d-4f5f-8db7-421fbe745ac2\") " pod="openstack/manila-scheduler-0" Oct 14 10:15:22 crc kubenswrapper[4698]: I1014 10:15:22.914122 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e28cf5cd-644d-4f5f-8db7-421fbe745ac2-config-data-custom\") pod \"manila-scheduler-0\" (UID: \"e28cf5cd-644d-4f5f-8db7-421fbe745ac2\") " pod="openstack/manila-scheduler-0" Oct 14 10:15:22 crc kubenswrapper[4698]: I1014 10:15:22.914154 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bqzjg\" (UniqueName: \"kubernetes.io/projected/e28cf5cd-644d-4f5f-8db7-421fbe745ac2-kube-api-access-bqzjg\") pod \"manila-scheduler-0\" (UID: \"e28cf5cd-644d-4f5f-8db7-421fbe745ac2\") " pod="openstack/manila-scheduler-0" Oct 14 10:15:22 crc kubenswrapper[4698]: I1014 10:15:22.914184 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/e28cf5cd-644d-4f5f-8db7-421fbe745ac2-scripts\") pod \"manila-scheduler-0\" (UID: \"e28cf5cd-644d-4f5f-8db7-421fbe745ac2\") " pod="openstack/manila-scheduler-0" Oct 14 10:15:22 crc kubenswrapper[4698]: I1014 10:15:22.914208 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/e28cf5cd-644d-4f5f-8db7-421fbe745ac2-etc-machine-id\") pod \"manila-scheduler-0\" (UID: \"e28cf5cd-644d-4f5f-8db7-421fbe745ac2\") " pod="openstack/manila-scheduler-0" Oct 14 10:15:22 crc kubenswrapper[4698]: I1014 10:15:22.918592 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-share-share1-0"] Oct 14 10:15:22 crc kubenswrapper[4698]: I1014 10:15:22.938851 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-dc6b4865f-29kvv"] Oct 14 10:15:22 crc kubenswrapper[4698]: I1014 10:15:22.940515 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-dc6b4865f-29kvv" Oct 14 10:15:22 crc kubenswrapper[4698]: I1014 10:15:22.968758 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-dc6b4865f-29kvv"] Oct 14 10:15:23 crc kubenswrapper[4698]: I1014 10:15:23.016009 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/94ea6a4b-4e73-4a81-bb25-8ae62bf7daa9-combined-ca-bundle\") pod \"94ea6a4b-4e73-4a81-bb25-8ae62bf7daa9\" (UID: \"94ea6a4b-4e73-4a81-bb25-8ae62bf7daa9\") " Oct 14 10:15:23 crc kubenswrapper[4698]: I1014 10:15:23.016396 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wq6kt\" (UniqueName: \"kubernetes.io/projected/94ea6a4b-4e73-4a81-bb25-8ae62bf7daa9-kube-api-access-wq6kt\") pod \"94ea6a4b-4e73-4a81-bb25-8ae62bf7daa9\" (UID: \"94ea6a4b-4e73-4a81-bb25-8ae62bf7daa9\") " Oct 14 10:15:23 crc kubenswrapper[4698]: I1014 10:15:23.016482 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/94ea6a4b-4e73-4a81-bb25-8ae62bf7daa9-db-sync-config-data\") pod \"94ea6a4b-4e73-4a81-bb25-8ae62bf7daa9\" (UID: \"94ea6a4b-4e73-4a81-bb25-8ae62bf7daa9\") " Oct 14 10:15:23 crc kubenswrapper[4698]: I1014 10:15:23.016850 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/e1230245-6b92-4e01-bc07-043a24a9edd3-etc-machine-id\") pod \"manila-share-share1-0\" (UID: \"e1230245-6b92-4e01-bc07-043a24a9edd3\") " pod="openstack/manila-share-share1-0" Oct 14 10:15:23 crc kubenswrapper[4698]: I1014 10:15:23.016925 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e1230245-6b92-4e01-bc07-043a24a9edd3-config-data-custom\") pod 
\"manila-share-share1-0\" (UID: \"e1230245-6b92-4e01-bc07-043a24a9edd3\") " pod="openstack/manila-share-share1-0" Oct 14 10:15:23 crc kubenswrapper[4698]: I1014 10:15:23.017015 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e28cf5cd-644d-4f5f-8db7-421fbe745ac2-config-data\") pod \"manila-scheduler-0\" (UID: \"e28cf5cd-644d-4f5f-8db7-421fbe745ac2\") " pod="openstack/manila-scheduler-0" Oct 14 10:15:23 crc kubenswrapper[4698]: I1014 10:15:23.017085 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e28cf5cd-644d-4f5f-8db7-421fbe745ac2-combined-ca-bundle\") pod \"manila-scheduler-0\" (UID: \"e28cf5cd-644d-4f5f-8db7-421fbe745ac2\") " pod="openstack/manila-scheduler-0" Oct 14 10:15:23 crc kubenswrapper[4698]: I1014 10:15:23.017160 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-manila\" (UniqueName: \"kubernetes.io/host-path/e1230245-6b92-4e01-bc07-043a24a9edd3-var-lib-manila\") pod \"manila-share-share1-0\" (UID: \"e1230245-6b92-4e01-bc07-043a24a9edd3\") " pod="openstack/manila-share-share1-0" Oct 14 10:15:23 crc kubenswrapper[4698]: I1014 10:15:23.017252 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e28cf5cd-644d-4f5f-8db7-421fbe745ac2-config-data-custom\") pod \"manila-scheduler-0\" (UID: \"e28cf5cd-644d-4f5f-8db7-421fbe745ac2\") " pod="openstack/manila-scheduler-0" Oct 14 10:15:23 crc kubenswrapper[4698]: I1014 10:15:23.017323 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e1230245-6b92-4e01-bc07-043a24a9edd3-config-data\") pod \"manila-share-share1-0\" (UID: \"e1230245-6b92-4e01-bc07-043a24a9edd3\") " 
pod="openstack/manila-share-share1-0" Oct 14 10:15:23 crc kubenswrapper[4698]: I1014 10:15:23.017405 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/e1230245-6b92-4e01-bc07-043a24a9edd3-ceph\") pod \"manila-share-share1-0\" (UID: \"e1230245-6b92-4e01-bc07-043a24a9edd3\") " pod="openstack/manila-share-share1-0" Oct 14 10:15:23 crc kubenswrapper[4698]: I1014 10:15:23.017479 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bqzjg\" (UniqueName: \"kubernetes.io/projected/e28cf5cd-644d-4f5f-8db7-421fbe745ac2-kube-api-access-bqzjg\") pod \"manila-scheduler-0\" (UID: \"e28cf5cd-644d-4f5f-8db7-421fbe745ac2\") " pod="openstack/manila-scheduler-0" Oct 14 10:15:23 crc kubenswrapper[4698]: I1014 10:15:23.017563 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e28cf5cd-644d-4f5f-8db7-421fbe745ac2-scripts\") pod \"manila-scheduler-0\" (UID: \"e28cf5cd-644d-4f5f-8db7-421fbe745ac2\") " pod="openstack/manila-scheduler-0" Oct 14 10:15:23 crc kubenswrapper[4698]: I1014 10:15:23.017649 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/e28cf5cd-644d-4f5f-8db7-421fbe745ac2-etc-machine-id\") pod \"manila-scheduler-0\" (UID: \"e28cf5cd-644d-4f5f-8db7-421fbe745ac2\") " pod="openstack/manila-scheduler-0" Oct 14 10:15:23 crc kubenswrapper[4698]: I1014 10:15:23.017749 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b2d8p\" (UniqueName: \"kubernetes.io/projected/e1230245-6b92-4e01-bc07-043a24a9edd3-kube-api-access-b2d8p\") pod \"manila-share-share1-0\" (UID: \"e1230245-6b92-4e01-bc07-043a24a9edd3\") " pod="openstack/manila-share-share1-0" Oct 14 10:15:23 crc kubenswrapper[4698]: I1014 10:15:23.017871 4698 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e1230245-6b92-4e01-bc07-043a24a9edd3-scripts\") pod \"manila-share-share1-0\" (UID: \"e1230245-6b92-4e01-bc07-043a24a9edd3\") " pod="openstack/manila-share-share1-0" Oct 14 10:15:23 crc kubenswrapper[4698]: I1014 10:15:23.017955 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e1230245-6b92-4e01-bc07-043a24a9edd3-combined-ca-bundle\") pod \"manila-share-share1-0\" (UID: \"e1230245-6b92-4e01-bc07-043a24a9edd3\") " pod="openstack/manila-share-share1-0" Oct 14 10:15:23 crc kubenswrapper[4698]: I1014 10:15:23.020024 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/e28cf5cd-644d-4f5f-8db7-421fbe745ac2-etc-machine-id\") pod \"manila-scheduler-0\" (UID: \"e28cf5cd-644d-4f5f-8db7-421fbe745ac2\") " pod="openstack/manila-scheduler-0" Oct 14 10:15:23 crc kubenswrapper[4698]: I1014 10:15:23.027021 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/94ea6a4b-4e73-4a81-bb25-8ae62bf7daa9-kube-api-access-wq6kt" (OuterVolumeSpecName: "kube-api-access-wq6kt") pod "94ea6a4b-4e73-4a81-bb25-8ae62bf7daa9" (UID: "94ea6a4b-4e73-4a81-bb25-8ae62bf7daa9"). InnerVolumeSpecName "kube-api-access-wq6kt". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 10:15:23 crc kubenswrapper[4698]: I1014 10:15:23.027265 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/94ea6a4b-4e73-4a81-bb25-8ae62bf7daa9-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "94ea6a4b-4e73-4a81-bb25-8ae62bf7daa9" (UID: "94ea6a4b-4e73-4a81-bb25-8ae62bf7daa9"). InnerVolumeSpecName "db-sync-config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 10:15:23 crc kubenswrapper[4698]: I1014 10:15:23.027686 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e28cf5cd-644d-4f5f-8db7-421fbe745ac2-combined-ca-bundle\") pod \"manila-scheduler-0\" (UID: \"e28cf5cd-644d-4f5f-8db7-421fbe745ac2\") " pod="openstack/manila-scheduler-0" Oct 14 10:15:23 crc kubenswrapper[4698]: I1014 10:15:23.031121 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e28cf5cd-644d-4f5f-8db7-421fbe745ac2-scripts\") pod \"manila-scheduler-0\" (UID: \"e28cf5cd-644d-4f5f-8db7-421fbe745ac2\") " pod="openstack/manila-scheduler-0" Oct 14 10:15:23 crc kubenswrapper[4698]: I1014 10:15:23.032066 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e28cf5cd-644d-4f5f-8db7-421fbe745ac2-config-data\") pod \"manila-scheduler-0\" (UID: \"e28cf5cd-644d-4f5f-8db7-421fbe745ac2\") " pod="openstack/manila-scheduler-0" Oct 14 10:15:23 crc kubenswrapper[4698]: I1014 10:15:23.035673 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e28cf5cd-644d-4f5f-8db7-421fbe745ac2-config-data-custom\") pod \"manila-scheduler-0\" (UID: \"e28cf5cd-644d-4f5f-8db7-421fbe745ac2\") " pod="openstack/manila-scheduler-0" Oct 14 10:15:23 crc kubenswrapper[4698]: I1014 10:15:23.039000 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bqzjg\" (UniqueName: \"kubernetes.io/projected/e28cf5cd-644d-4f5f-8db7-421fbe745ac2-kube-api-access-bqzjg\") pod \"manila-scheduler-0\" (UID: \"e28cf5cd-644d-4f5f-8db7-421fbe745ac2\") " pod="openstack/manila-scheduler-0" Oct 14 10:15:23 crc kubenswrapper[4698]: I1014 10:15:23.093311 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/secret/94ea6a4b-4e73-4a81-bb25-8ae62bf7daa9-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "94ea6a4b-4e73-4a81-bb25-8ae62bf7daa9" (UID: "94ea6a4b-4e73-4a81-bb25-8ae62bf7daa9"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 10:15:23 crc kubenswrapper[4698]: I1014 10:15:23.121996 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e1230245-6b92-4e01-bc07-043a24a9edd3-combined-ca-bundle\") pod \"manila-share-share1-0\" (UID: \"e1230245-6b92-4e01-bc07-043a24a9edd3\") " pod="openstack/manila-share-share1-0" Oct 14 10:15:23 crc kubenswrapper[4698]: I1014 10:15:23.122096 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/e1230245-6b92-4e01-bc07-043a24a9edd3-etc-machine-id\") pod \"manila-share-share1-0\" (UID: \"e1230245-6b92-4e01-bc07-043a24a9edd3\") " pod="openstack/manila-share-share1-0" Oct 14 10:15:23 crc kubenswrapper[4698]: I1014 10:15:23.122129 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-trcjd\" (UniqueName: \"kubernetes.io/projected/0f82463f-d211-4e22-8742-570c0293871c-kube-api-access-trcjd\") pod \"dnsmasq-dns-dc6b4865f-29kvv\" (UID: \"0f82463f-d211-4e22-8742-570c0293871c\") " pod="openstack/dnsmasq-dns-dc6b4865f-29kvv" Oct 14 10:15:23 crc kubenswrapper[4698]: I1014 10:15:23.122156 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e1230245-6b92-4e01-bc07-043a24a9edd3-config-data-custom\") pod \"manila-share-share1-0\" (UID: \"e1230245-6b92-4e01-bc07-043a24a9edd3\") " pod="openstack/manila-share-share1-0" Oct 14 10:15:23 crc kubenswrapper[4698]: I1014 10:15:23.122207 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"var-lib-manila\" (UniqueName: \"kubernetes.io/host-path/e1230245-6b92-4e01-bc07-043a24a9edd3-var-lib-manila\") pod \"manila-share-share1-0\" (UID: \"e1230245-6b92-4e01-bc07-043a24a9edd3\") " pod="openstack/manila-share-share1-0" Oct 14 10:15:23 crc kubenswrapper[4698]: I1014 10:15:23.122261 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e1230245-6b92-4e01-bc07-043a24a9edd3-config-data\") pod \"manila-share-share1-0\" (UID: \"e1230245-6b92-4e01-bc07-043a24a9edd3\") " pod="openstack/manila-share-share1-0" Oct 14 10:15:23 crc kubenswrapper[4698]: I1014 10:15:23.122278 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/0f82463f-d211-4e22-8742-570c0293871c-dns-swift-storage-0\") pod \"dnsmasq-dns-dc6b4865f-29kvv\" (UID: \"0f82463f-d211-4e22-8742-570c0293871c\") " pod="openstack/dnsmasq-dns-dc6b4865f-29kvv" Oct 14 10:15:23 crc kubenswrapper[4698]: I1014 10:15:23.122299 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0f82463f-d211-4e22-8742-570c0293871c-config\") pod \"dnsmasq-dns-dc6b4865f-29kvv\" (UID: \"0f82463f-d211-4e22-8742-570c0293871c\") " pod="openstack/dnsmasq-dns-dc6b4865f-29kvv" Oct 14 10:15:23 crc kubenswrapper[4698]: I1014 10:15:23.122344 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0f82463f-d211-4e22-8742-570c0293871c-ovsdbserver-nb\") pod \"dnsmasq-dns-dc6b4865f-29kvv\" (UID: \"0f82463f-d211-4e22-8742-570c0293871c\") " pod="openstack/dnsmasq-dns-dc6b4865f-29kvv" Oct 14 10:15:23 crc kubenswrapper[4698]: I1014 10:15:23.122364 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: 
\"kubernetes.io/projected/e1230245-6b92-4e01-bc07-043a24a9edd3-ceph\") pod \"manila-share-share1-0\" (UID: \"e1230245-6b92-4e01-bc07-043a24a9edd3\") " pod="openstack/manila-share-share1-0" Oct 14 10:15:23 crc kubenswrapper[4698]: I1014 10:15:23.122550 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b2d8p\" (UniqueName: \"kubernetes.io/projected/e1230245-6b92-4e01-bc07-043a24a9edd3-kube-api-access-b2d8p\") pod \"manila-share-share1-0\" (UID: \"e1230245-6b92-4e01-bc07-043a24a9edd3\") " pod="openstack/manila-share-share1-0" Oct 14 10:15:23 crc kubenswrapper[4698]: I1014 10:15:23.122582 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0f82463f-d211-4e22-8742-570c0293871c-ovsdbserver-sb\") pod \"dnsmasq-dns-dc6b4865f-29kvv\" (UID: \"0f82463f-d211-4e22-8742-570c0293871c\") " pod="openstack/dnsmasq-dns-dc6b4865f-29kvv" Oct 14 10:15:23 crc kubenswrapper[4698]: I1014 10:15:23.122611 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0f82463f-d211-4e22-8742-570c0293871c-dns-svc\") pod \"dnsmasq-dns-dc6b4865f-29kvv\" (UID: \"0f82463f-d211-4e22-8742-570c0293871c\") " pod="openstack/dnsmasq-dns-dc6b4865f-29kvv" Oct 14 10:15:23 crc kubenswrapper[4698]: I1014 10:15:23.122644 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e1230245-6b92-4e01-bc07-043a24a9edd3-scripts\") pod \"manila-share-share1-0\" (UID: \"e1230245-6b92-4e01-bc07-043a24a9edd3\") " pod="openstack/manila-share-share1-0" Oct 14 10:15:23 crc kubenswrapper[4698]: I1014 10:15:23.122689 4698 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/94ea6a4b-4e73-4a81-bb25-8ae62bf7daa9-combined-ca-bundle\") on node \"crc\" 
DevicePath \"\"" Oct 14 10:15:23 crc kubenswrapper[4698]: I1014 10:15:23.122700 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wq6kt\" (UniqueName: \"kubernetes.io/projected/94ea6a4b-4e73-4a81-bb25-8ae62bf7daa9-kube-api-access-wq6kt\") on node \"crc\" DevicePath \"\"" Oct 14 10:15:23 crc kubenswrapper[4698]: I1014 10:15:23.122711 4698 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/94ea6a4b-4e73-4a81-bb25-8ae62bf7daa9-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Oct 14 10:15:23 crc kubenswrapper[4698]: I1014 10:15:23.128588 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e1230245-6b92-4e01-bc07-043a24a9edd3-config-data\") pod \"manila-share-share1-0\" (UID: \"e1230245-6b92-4e01-bc07-043a24a9edd3\") " pod="openstack/manila-share-share1-0" Oct 14 10:15:23 crc kubenswrapper[4698]: I1014 10:15:23.128729 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-manila\" (UniqueName: \"kubernetes.io/host-path/e1230245-6b92-4e01-bc07-043a24a9edd3-var-lib-manila\") pod \"manila-share-share1-0\" (UID: \"e1230245-6b92-4e01-bc07-043a24a9edd3\") " pod="openstack/manila-share-share1-0" Oct 14 10:15:23 crc kubenswrapper[4698]: I1014 10:15:23.129542 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/e1230245-6b92-4e01-bc07-043a24a9edd3-etc-machine-id\") pod \"manila-share-share1-0\" (UID: \"e1230245-6b92-4e01-bc07-043a24a9edd3\") " pod="openstack/manila-share-share1-0" Oct 14 10:15:23 crc kubenswrapper[4698]: I1014 10:15:23.130198 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e1230245-6b92-4e01-bc07-043a24a9edd3-scripts\") pod \"manila-share-share1-0\" (UID: \"e1230245-6b92-4e01-bc07-043a24a9edd3\") " 
pod="openstack/manila-share-share1-0" Oct 14 10:15:23 crc kubenswrapper[4698]: I1014 10:15:23.136255 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/e1230245-6b92-4e01-bc07-043a24a9edd3-ceph\") pod \"manila-share-share1-0\" (UID: \"e1230245-6b92-4e01-bc07-043a24a9edd3\") " pod="openstack/manila-share-share1-0" Oct 14 10:15:23 crc kubenswrapper[4698]: I1014 10:15:23.137314 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e1230245-6b92-4e01-bc07-043a24a9edd3-config-data-custom\") pod \"manila-share-share1-0\" (UID: \"e1230245-6b92-4e01-bc07-043a24a9edd3\") " pod="openstack/manila-share-share1-0" Oct 14 10:15:23 crc kubenswrapper[4698]: I1014 10:15:23.137430 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e1230245-6b92-4e01-bc07-043a24a9edd3-combined-ca-bundle\") pod \"manila-share-share1-0\" (UID: \"e1230245-6b92-4e01-bc07-043a24a9edd3\") " pod="openstack/manila-share-share1-0" Oct 14 10:15:23 crc kubenswrapper[4698]: I1014 10:15:23.170739 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b2d8p\" (UniqueName: \"kubernetes.io/projected/e1230245-6b92-4e01-bc07-043a24a9edd3-kube-api-access-b2d8p\") pod \"manila-share-share1-0\" (UID: \"e1230245-6b92-4e01-bc07-043a24a9edd3\") " pod="openstack/manila-share-share1-0" Oct 14 10:15:23 crc kubenswrapper[4698]: I1014 10:15:23.204868 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-scheduler-0" Oct 14 10:15:23 crc kubenswrapper[4698]: I1014 10:15:23.221532 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-share-share1-0" Oct 14 10:15:23 crc kubenswrapper[4698]: I1014 10:15:23.224225 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0f82463f-d211-4e22-8742-570c0293871c-ovsdbserver-sb\") pod \"dnsmasq-dns-dc6b4865f-29kvv\" (UID: \"0f82463f-d211-4e22-8742-570c0293871c\") " pod="openstack/dnsmasq-dns-dc6b4865f-29kvv" Oct 14 10:15:23 crc kubenswrapper[4698]: I1014 10:15:23.224289 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0f82463f-d211-4e22-8742-570c0293871c-dns-svc\") pod \"dnsmasq-dns-dc6b4865f-29kvv\" (UID: \"0f82463f-d211-4e22-8742-570c0293871c\") " pod="openstack/dnsmasq-dns-dc6b4865f-29kvv" Oct 14 10:15:23 crc kubenswrapper[4698]: I1014 10:15:23.224359 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-trcjd\" (UniqueName: \"kubernetes.io/projected/0f82463f-d211-4e22-8742-570c0293871c-kube-api-access-trcjd\") pod \"dnsmasq-dns-dc6b4865f-29kvv\" (UID: \"0f82463f-d211-4e22-8742-570c0293871c\") " pod="openstack/dnsmasq-dns-dc6b4865f-29kvv" Oct 14 10:15:23 crc kubenswrapper[4698]: I1014 10:15:23.224458 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/0f82463f-d211-4e22-8742-570c0293871c-dns-swift-storage-0\") pod \"dnsmasq-dns-dc6b4865f-29kvv\" (UID: \"0f82463f-d211-4e22-8742-570c0293871c\") " pod="openstack/dnsmasq-dns-dc6b4865f-29kvv" Oct 14 10:15:23 crc kubenswrapper[4698]: I1014 10:15:23.224487 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0f82463f-d211-4e22-8742-570c0293871c-config\") pod \"dnsmasq-dns-dc6b4865f-29kvv\" (UID: \"0f82463f-d211-4e22-8742-570c0293871c\") " pod="openstack/dnsmasq-dns-dc6b4865f-29kvv" Oct 14 
10:15:23 crc kubenswrapper[4698]: I1014 10:15:23.224523 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0f82463f-d211-4e22-8742-570c0293871c-ovsdbserver-nb\") pod \"dnsmasq-dns-dc6b4865f-29kvv\" (UID: \"0f82463f-d211-4e22-8742-570c0293871c\") " pod="openstack/dnsmasq-dns-dc6b4865f-29kvv" Oct 14 10:15:23 crc kubenswrapper[4698]: I1014 10:15:23.225064 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0f82463f-d211-4e22-8742-570c0293871c-ovsdbserver-sb\") pod \"dnsmasq-dns-dc6b4865f-29kvv\" (UID: \"0f82463f-d211-4e22-8742-570c0293871c\") " pod="openstack/dnsmasq-dns-dc6b4865f-29kvv" Oct 14 10:15:23 crc kubenswrapper[4698]: I1014 10:15:23.225455 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0f82463f-d211-4e22-8742-570c0293871c-ovsdbserver-nb\") pod \"dnsmasq-dns-dc6b4865f-29kvv\" (UID: \"0f82463f-d211-4e22-8742-570c0293871c\") " pod="openstack/dnsmasq-dns-dc6b4865f-29kvv" Oct 14 10:15:23 crc kubenswrapper[4698]: I1014 10:15:23.225941 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0f82463f-d211-4e22-8742-570c0293871c-dns-svc\") pod \"dnsmasq-dns-dc6b4865f-29kvv\" (UID: \"0f82463f-d211-4e22-8742-570c0293871c\") " pod="openstack/dnsmasq-dns-dc6b4865f-29kvv" Oct 14 10:15:23 crc kubenswrapper[4698]: I1014 10:15:23.228288 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/0f82463f-d211-4e22-8742-570c0293871c-dns-swift-storage-0\") pod \"dnsmasq-dns-dc6b4865f-29kvv\" (UID: \"0f82463f-d211-4e22-8742-570c0293871c\") " pod="openstack/dnsmasq-dns-dc6b4865f-29kvv" Oct 14 10:15:23 crc kubenswrapper[4698]: I1014 10:15:23.229743 4698 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0f82463f-d211-4e22-8742-570c0293871c-config\") pod \"dnsmasq-dns-dc6b4865f-29kvv\" (UID: \"0f82463f-d211-4e22-8742-570c0293871c\") " pod="openstack/dnsmasq-dns-dc6b4865f-29kvv" Oct 14 10:15:23 crc kubenswrapper[4698]: I1014 10:15:23.241320 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-trcjd\" (UniqueName: \"kubernetes.io/projected/0f82463f-d211-4e22-8742-570c0293871c-kube-api-access-trcjd\") pod \"dnsmasq-dns-dc6b4865f-29kvv\" (UID: \"0f82463f-d211-4e22-8742-570c0293871c\") " pod="openstack/dnsmasq-dns-dc6b4865f-29kvv" Oct 14 10:15:23 crc kubenswrapper[4698]: I1014 10:15:23.261382 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-dc6b4865f-29kvv" Oct 14 10:15:23 crc kubenswrapper[4698]: I1014 10:15:23.278737 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/manila-api-0"] Oct 14 10:15:23 crc kubenswrapper[4698]: I1014 10:15:23.280968 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-api-0"] Oct 14 10:15:23 crc kubenswrapper[4698]: I1014 10:15:23.281151 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-api-0" Oct 14 10:15:23 crc kubenswrapper[4698]: I1014 10:15:23.283466 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-api-config-data" Oct 14 10:15:23 crc kubenswrapper[4698]: I1014 10:15:23.428603 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/600d9dbf-48aa-4c54-9d47-9ffe8d383d6e-config-data-custom\") pod \"manila-api-0\" (UID: \"600d9dbf-48aa-4c54-9d47-9ffe8d383d6e\") " pod="openstack/manila-api-0" Oct 14 10:15:23 crc kubenswrapper[4698]: I1014 10:15:23.428668 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/600d9dbf-48aa-4c54-9d47-9ffe8d383d6e-logs\") pod \"manila-api-0\" (UID: \"600d9dbf-48aa-4c54-9d47-9ffe8d383d6e\") " pod="openstack/manila-api-0" Oct 14 10:15:23 crc kubenswrapper[4698]: I1014 10:15:23.428704 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/600d9dbf-48aa-4c54-9d47-9ffe8d383d6e-etc-machine-id\") pod \"manila-api-0\" (UID: \"600d9dbf-48aa-4c54-9d47-9ffe8d383d6e\") " pod="openstack/manila-api-0" Oct 14 10:15:23 crc kubenswrapper[4698]: I1014 10:15:23.428731 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/600d9dbf-48aa-4c54-9d47-9ffe8d383d6e-combined-ca-bundle\") pod \"manila-api-0\" (UID: \"600d9dbf-48aa-4c54-9d47-9ffe8d383d6e\") " pod="openstack/manila-api-0" Oct 14 10:15:23 crc kubenswrapper[4698]: I1014 10:15:23.428778 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/600d9dbf-48aa-4c54-9d47-9ffe8d383d6e-config-data\") pod 
\"manila-api-0\" (UID: \"600d9dbf-48aa-4c54-9d47-9ffe8d383d6e\") " pod="openstack/manila-api-0" Oct 14 10:15:23 crc kubenswrapper[4698]: I1014 10:15:23.428817 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/600d9dbf-48aa-4c54-9d47-9ffe8d383d6e-scripts\") pod \"manila-api-0\" (UID: \"600d9dbf-48aa-4c54-9d47-9ffe8d383d6e\") " pod="openstack/manila-api-0" Oct 14 10:15:23 crc kubenswrapper[4698]: I1014 10:15:23.428839 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-99q9p\" (UniqueName: \"kubernetes.io/projected/600d9dbf-48aa-4c54-9d47-9ffe8d383d6e-kube-api-access-99q9p\") pod \"manila-api-0\" (UID: \"600d9dbf-48aa-4c54-9d47-9ffe8d383d6e\") " pod="openstack/manila-api-0" Oct 14 10:15:23 crc kubenswrapper[4698]: I1014 10:15:23.464917 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-nbmlr" event={"ID":"94ea6a4b-4e73-4a81-bb25-8ae62bf7daa9","Type":"ContainerDied","Data":"f7717ce2966c0b444bd0e92482630cf47f885c4647a52fa5a5c3cd73b133e335"} Oct 14 10:15:23 crc kubenswrapper[4698]: I1014 10:15:23.464957 4698 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f7717ce2966c0b444bd0e92482630cf47f885c4647a52fa5a5c3cd73b133e335" Oct 14 10:15:23 crc kubenswrapper[4698]: I1014 10:15:23.465038 4698 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-nbmlr" Oct 14 10:15:23 crc kubenswrapper[4698]: I1014 10:15:23.469086 4698 generic.go:334] "Generic (PLEG): container finished" podID="d90a3be7-6827-427d-9ed1-3aef79542b6d" containerID="0b854e12c3d3df6ea816a435c6f8734727bf7005c4f78c7015d0ffd8ea25cf1a" exitCode=0 Oct 14 10:15:23 crc kubenswrapper[4698]: I1014 10:15:23.469139 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-thrh8" event={"ID":"d90a3be7-6827-427d-9ed1-3aef79542b6d","Type":"ContainerDied","Data":"0b854e12c3d3df6ea816a435c6f8734727bf7005c4f78c7015d0ffd8ea25cf1a"} Oct 14 10:15:23 crc kubenswrapper[4698]: I1014 10:15:23.530079 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/600d9dbf-48aa-4c54-9d47-9ffe8d383d6e-scripts\") pod \"manila-api-0\" (UID: \"600d9dbf-48aa-4c54-9d47-9ffe8d383d6e\") " pod="openstack/manila-api-0" Oct 14 10:15:23 crc kubenswrapper[4698]: I1014 10:15:23.530432 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-99q9p\" (UniqueName: \"kubernetes.io/projected/600d9dbf-48aa-4c54-9d47-9ffe8d383d6e-kube-api-access-99q9p\") pod \"manila-api-0\" (UID: \"600d9dbf-48aa-4c54-9d47-9ffe8d383d6e\") " pod="openstack/manila-api-0" Oct 14 10:15:23 crc kubenswrapper[4698]: I1014 10:15:23.530491 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/600d9dbf-48aa-4c54-9d47-9ffe8d383d6e-config-data-custom\") pod \"manila-api-0\" (UID: \"600d9dbf-48aa-4c54-9d47-9ffe8d383d6e\") " pod="openstack/manila-api-0" Oct 14 10:15:23 crc kubenswrapper[4698]: I1014 10:15:23.530543 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/600d9dbf-48aa-4c54-9d47-9ffe8d383d6e-logs\") pod \"manila-api-0\" (UID: 
\"600d9dbf-48aa-4c54-9d47-9ffe8d383d6e\") " pod="openstack/manila-api-0" Oct 14 10:15:23 crc kubenswrapper[4698]: I1014 10:15:23.530581 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/600d9dbf-48aa-4c54-9d47-9ffe8d383d6e-etc-machine-id\") pod \"manila-api-0\" (UID: \"600d9dbf-48aa-4c54-9d47-9ffe8d383d6e\") " pod="openstack/manila-api-0" Oct 14 10:15:23 crc kubenswrapper[4698]: I1014 10:15:23.530609 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/600d9dbf-48aa-4c54-9d47-9ffe8d383d6e-combined-ca-bundle\") pod \"manila-api-0\" (UID: \"600d9dbf-48aa-4c54-9d47-9ffe8d383d6e\") " pod="openstack/manila-api-0" Oct 14 10:15:23 crc kubenswrapper[4698]: I1014 10:15:23.530647 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/600d9dbf-48aa-4c54-9d47-9ffe8d383d6e-config-data\") pod \"manila-api-0\" (UID: \"600d9dbf-48aa-4c54-9d47-9ffe8d383d6e\") " pod="openstack/manila-api-0" Oct 14 10:15:23 crc kubenswrapper[4698]: I1014 10:15:23.531388 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/600d9dbf-48aa-4c54-9d47-9ffe8d383d6e-logs\") pod \"manila-api-0\" (UID: \"600d9dbf-48aa-4c54-9d47-9ffe8d383d6e\") " pod="openstack/manila-api-0" Oct 14 10:15:23 crc kubenswrapper[4698]: I1014 10:15:23.532029 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/600d9dbf-48aa-4c54-9d47-9ffe8d383d6e-etc-machine-id\") pod \"manila-api-0\" (UID: \"600d9dbf-48aa-4c54-9d47-9ffe8d383d6e\") " pod="openstack/manila-api-0" Oct 14 10:15:23 crc kubenswrapper[4698]: I1014 10:15:23.549396 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/600d9dbf-48aa-4c54-9d47-9ffe8d383d6e-combined-ca-bundle\") pod \"manila-api-0\" (UID: \"600d9dbf-48aa-4c54-9d47-9ffe8d383d6e\") " pod="openstack/manila-api-0" Oct 14 10:15:23 crc kubenswrapper[4698]: I1014 10:15:23.552396 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/600d9dbf-48aa-4c54-9d47-9ffe8d383d6e-scripts\") pod \"manila-api-0\" (UID: \"600d9dbf-48aa-4c54-9d47-9ffe8d383d6e\") " pod="openstack/manila-api-0" Oct 14 10:15:23 crc kubenswrapper[4698]: I1014 10:15:23.557979 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-99q9p\" (UniqueName: \"kubernetes.io/projected/600d9dbf-48aa-4c54-9d47-9ffe8d383d6e-kube-api-access-99q9p\") pod \"manila-api-0\" (UID: \"600d9dbf-48aa-4c54-9d47-9ffe8d383d6e\") " pod="openstack/manila-api-0" Oct 14 10:15:23 crc kubenswrapper[4698]: I1014 10:15:23.559534 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/600d9dbf-48aa-4c54-9d47-9ffe8d383d6e-config-data-custom\") pod \"manila-api-0\" (UID: \"600d9dbf-48aa-4c54-9d47-9ffe8d383d6e\") " pod="openstack/manila-api-0" Oct 14 10:15:23 crc kubenswrapper[4698]: I1014 10:15:23.559830 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/600d9dbf-48aa-4c54-9d47-9ffe8d383d6e-config-data\") pod \"manila-api-0\" (UID: \"600d9dbf-48aa-4c54-9d47-9ffe8d383d6e\") " pod="openstack/manila-api-0" Oct 14 10:15:23 crc kubenswrapper[4698]: I1014 10:15:23.604288 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-api-0" Oct 14 10:15:23 crc kubenswrapper[4698]: I1014 10:15:23.609229 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-keystone-listener-77cb48f668-xz2r9"] Oct 14 10:15:23 crc kubenswrapper[4698]: I1014 10:15:23.616466 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-77cb48f668-xz2r9" Oct 14 10:15:23 crc kubenswrapper[4698]: I1014 10:15:23.620157 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-keystone-listener-config-data" Oct 14 10:15:23 crc kubenswrapper[4698]: I1014 10:15:23.620241 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Oct 14 10:15:23 crc kubenswrapper[4698]: I1014 10:15:23.620486 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-dt4rj" Oct 14 10:15:23 crc kubenswrapper[4698]: I1014 10:15:23.631327 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-77cb48f668-xz2r9"] Oct 14 10:15:23 crc kubenswrapper[4698]: I1014 10:15:23.719932 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-worker-74bfd556cc-6z8fb"] Oct 14 10:15:23 crc kubenswrapper[4698]: I1014 10:15:23.721668 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-worker-74bfd556cc-6z8fb" Oct 14 10:15:23 crc kubenswrapper[4698]: I1014 10:15:23.728047 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-worker-config-data" Oct 14 10:15:23 crc kubenswrapper[4698]: I1014 10:15:23.734909 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3a9f4039-e577-4dc0-b1f9-ecdf1e7a9606-combined-ca-bundle\") pod \"barbican-keystone-listener-77cb48f668-xz2r9\" (UID: \"3a9f4039-e577-4dc0-b1f9-ecdf1e7a9606\") " pod="openstack/barbican-keystone-listener-77cb48f668-xz2r9" Oct 14 10:15:23 crc kubenswrapper[4698]: I1014 10:15:23.735007 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3a9f4039-e577-4dc0-b1f9-ecdf1e7a9606-config-data\") pod \"barbican-keystone-listener-77cb48f668-xz2r9\" (UID: \"3a9f4039-e577-4dc0-b1f9-ecdf1e7a9606\") " pod="openstack/barbican-keystone-listener-77cb48f668-xz2r9" Oct 14 10:15:23 crc kubenswrapper[4698]: I1014 10:15:23.735056 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3a9f4039-e577-4dc0-b1f9-ecdf1e7a9606-logs\") pod \"barbican-keystone-listener-77cb48f668-xz2r9\" (UID: \"3a9f4039-e577-4dc0-b1f9-ecdf1e7a9606\") " pod="openstack/barbican-keystone-listener-77cb48f668-xz2r9" Oct 14 10:15:23 crc kubenswrapper[4698]: I1014 10:15:23.735124 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bwwpd\" (UniqueName: \"kubernetes.io/projected/3a9f4039-e577-4dc0-b1f9-ecdf1e7a9606-kube-api-access-bwwpd\") pod \"barbican-keystone-listener-77cb48f668-xz2r9\" (UID: \"3a9f4039-e577-4dc0-b1f9-ecdf1e7a9606\") " pod="openstack/barbican-keystone-listener-77cb48f668-xz2r9" Oct 14 
10:15:23 crc kubenswrapper[4698]: I1014 10:15:23.735154 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3a9f4039-e577-4dc0-b1f9-ecdf1e7a9606-config-data-custom\") pod \"barbican-keystone-listener-77cb48f668-xz2r9\" (UID: \"3a9f4039-e577-4dc0-b1f9-ecdf1e7a9606\") " pod="openstack/barbican-keystone-listener-77cb48f668-xz2r9" Oct 14 10:15:23 crc kubenswrapper[4698]: I1014 10:15:23.757874 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-74bfd556cc-6z8fb"] Oct 14 10:15:23 crc kubenswrapper[4698]: I1014 10:15:23.773971 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-dc6b4865f-29kvv"] Oct 14 10:15:23 crc kubenswrapper[4698]: I1014 10:15:23.834515 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-scheduler-0"] Oct 14 10:15:23 crc kubenswrapper[4698]: I1014 10:15:23.837316 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bwwpd\" (UniqueName: \"kubernetes.io/projected/3a9f4039-e577-4dc0-b1f9-ecdf1e7a9606-kube-api-access-bwwpd\") pod \"barbican-keystone-listener-77cb48f668-xz2r9\" (UID: \"3a9f4039-e577-4dc0-b1f9-ecdf1e7a9606\") " pod="openstack/barbican-keystone-listener-77cb48f668-xz2r9" Oct 14 10:15:23 crc kubenswrapper[4698]: I1014 10:15:23.837361 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3a9f4039-e577-4dc0-b1f9-ecdf1e7a9606-config-data-custom\") pod \"barbican-keystone-listener-77cb48f668-xz2r9\" (UID: \"3a9f4039-e577-4dc0-b1f9-ecdf1e7a9606\") " pod="openstack/barbican-keystone-listener-77cb48f668-xz2r9" Oct 14 10:15:23 crc kubenswrapper[4698]: I1014 10:15:23.837439 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/3a9f4039-e577-4dc0-b1f9-ecdf1e7a9606-combined-ca-bundle\") pod \"barbican-keystone-listener-77cb48f668-xz2r9\" (UID: \"3a9f4039-e577-4dc0-b1f9-ecdf1e7a9606\") " pod="openstack/barbican-keystone-listener-77cb48f668-xz2r9" Oct 14 10:15:23 crc kubenswrapper[4698]: I1014 10:15:23.837475 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/27f5b9bc-1a92-40a7-b615-7c8a726cd2e8-config-data\") pod \"barbican-worker-74bfd556cc-6z8fb\" (UID: \"27f5b9bc-1a92-40a7-b615-7c8a726cd2e8\") " pod="openstack/barbican-worker-74bfd556cc-6z8fb" Oct 14 10:15:23 crc kubenswrapper[4698]: I1014 10:15:23.837497 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/27f5b9bc-1a92-40a7-b615-7c8a726cd2e8-config-data-custom\") pod \"barbican-worker-74bfd556cc-6z8fb\" (UID: \"27f5b9bc-1a92-40a7-b615-7c8a726cd2e8\") " pod="openstack/barbican-worker-74bfd556cc-6z8fb" Oct 14 10:15:23 crc kubenswrapper[4698]: I1014 10:15:23.837521 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/27f5b9bc-1a92-40a7-b615-7c8a726cd2e8-logs\") pod \"barbican-worker-74bfd556cc-6z8fb\" (UID: \"27f5b9bc-1a92-40a7-b615-7c8a726cd2e8\") " pod="openstack/barbican-worker-74bfd556cc-6z8fb" Oct 14 10:15:23 crc kubenswrapper[4698]: I1014 10:15:23.837539 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kwcmm\" (UniqueName: \"kubernetes.io/projected/27f5b9bc-1a92-40a7-b615-7c8a726cd2e8-kube-api-access-kwcmm\") pod \"barbican-worker-74bfd556cc-6z8fb\" (UID: \"27f5b9bc-1a92-40a7-b615-7c8a726cd2e8\") " pod="openstack/barbican-worker-74bfd556cc-6z8fb" Oct 14 10:15:23 crc kubenswrapper[4698]: I1014 10:15:23.837561 4698 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3a9f4039-e577-4dc0-b1f9-ecdf1e7a9606-config-data\") pod \"barbican-keystone-listener-77cb48f668-xz2r9\" (UID: \"3a9f4039-e577-4dc0-b1f9-ecdf1e7a9606\") " pod="openstack/barbican-keystone-listener-77cb48f668-xz2r9" Oct 14 10:15:23 crc kubenswrapper[4698]: I1014 10:15:23.837591 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3a9f4039-e577-4dc0-b1f9-ecdf1e7a9606-logs\") pod \"barbican-keystone-listener-77cb48f668-xz2r9\" (UID: \"3a9f4039-e577-4dc0-b1f9-ecdf1e7a9606\") " pod="openstack/barbican-keystone-listener-77cb48f668-xz2r9" Oct 14 10:15:23 crc kubenswrapper[4698]: I1014 10:15:23.837635 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/27f5b9bc-1a92-40a7-b615-7c8a726cd2e8-combined-ca-bundle\") pod \"barbican-worker-74bfd556cc-6z8fb\" (UID: \"27f5b9bc-1a92-40a7-b615-7c8a726cd2e8\") " pod="openstack/barbican-worker-74bfd556cc-6z8fb" Oct 14 10:15:23 crc kubenswrapper[4698]: I1014 10:15:23.846580 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3a9f4039-e577-4dc0-b1f9-ecdf1e7a9606-logs\") pod \"barbican-keystone-listener-77cb48f668-xz2r9\" (UID: \"3a9f4039-e577-4dc0-b1f9-ecdf1e7a9606\") " pod="openstack/barbican-keystone-listener-77cb48f668-xz2r9" Oct 14 10:15:23 crc kubenswrapper[4698]: I1014 10:15:23.849931 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3a9f4039-e577-4dc0-b1f9-ecdf1e7a9606-config-data-custom\") pod \"barbican-keystone-listener-77cb48f668-xz2r9\" (UID: \"3a9f4039-e577-4dc0-b1f9-ecdf1e7a9606\") " pod="openstack/barbican-keystone-listener-77cb48f668-xz2r9" Oct 14 10:15:23 crc kubenswrapper[4698]: I1014 
10:15:23.851578 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3a9f4039-e577-4dc0-b1f9-ecdf1e7a9606-config-data\") pod \"barbican-keystone-listener-77cb48f668-xz2r9\" (UID: \"3a9f4039-e577-4dc0-b1f9-ecdf1e7a9606\") " pod="openstack/barbican-keystone-listener-77cb48f668-xz2r9" Oct 14 10:15:23 crc kubenswrapper[4698]: I1014 10:15:23.852175 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3a9f4039-e577-4dc0-b1f9-ecdf1e7a9606-combined-ca-bundle\") pod \"barbican-keystone-listener-77cb48f668-xz2r9\" (UID: \"3a9f4039-e577-4dc0-b1f9-ecdf1e7a9606\") " pod="openstack/barbican-keystone-listener-77cb48f668-xz2r9" Oct 14 10:15:23 crc kubenswrapper[4698]: I1014 10:15:23.862883 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-84449c85c5-m7bl7"] Oct 14 10:15:23 crc kubenswrapper[4698]: I1014 10:15:23.864744 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-84449c85c5-m7bl7" Oct 14 10:15:23 crc kubenswrapper[4698]: I1014 10:15:23.883848 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-84449c85c5-m7bl7"] Oct 14 10:15:23 crc kubenswrapper[4698]: I1014 10:15:23.908796 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bwwpd\" (UniqueName: \"kubernetes.io/projected/3a9f4039-e577-4dc0-b1f9-ecdf1e7a9606-kube-api-access-bwwpd\") pod \"barbican-keystone-listener-77cb48f668-xz2r9\" (UID: \"3a9f4039-e577-4dc0-b1f9-ecdf1e7a9606\") " pod="openstack/barbican-keystone-listener-77cb48f668-xz2r9" Oct 14 10:15:23 crc kubenswrapper[4698]: I1014 10:15:23.919127 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-6cc5fb5d9d-rbb5x"] Oct 14 10:15:23 crc kubenswrapper[4698]: I1014 10:15:23.920851 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-6cc5fb5d9d-rbb5x" Oct 14 10:15:23 crc kubenswrapper[4698]: I1014 10:15:23.924075 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-api-config-data" Oct 14 10:15:23 crc kubenswrapper[4698]: I1014 10:15:23.932849 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-6cc5fb5d9d-rbb5x"] Oct 14 10:15:23 crc kubenswrapper[4698]: I1014 10:15:23.939393 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/27f5b9bc-1a92-40a7-b615-7c8a726cd2e8-config-data\") pod \"barbican-worker-74bfd556cc-6z8fb\" (UID: \"27f5b9bc-1a92-40a7-b615-7c8a726cd2e8\") " pod="openstack/barbican-worker-74bfd556cc-6z8fb" Oct 14 10:15:23 crc kubenswrapper[4698]: I1014 10:15:23.939524 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/27f5b9bc-1a92-40a7-b615-7c8a726cd2e8-config-data-custom\") pod \"barbican-worker-74bfd556cc-6z8fb\" (UID: \"27f5b9bc-1a92-40a7-b615-7c8a726cd2e8\") " pod="openstack/barbican-worker-74bfd556cc-6z8fb" Oct 14 10:15:23 crc kubenswrapper[4698]: I1014 10:15:23.939626 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/27f5b9bc-1a92-40a7-b615-7c8a726cd2e8-logs\") pod \"barbican-worker-74bfd556cc-6z8fb\" (UID: \"27f5b9bc-1a92-40a7-b615-7c8a726cd2e8\") " pod="openstack/barbican-worker-74bfd556cc-6z8fb" Oct 14 10:15:23 crc kubenswrapper[4698]: I1014 10:15:23.939704 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kwcmm\" (UniqueName: \"kubernetes.io/projected/27f5b9bc-1a92-40a7-b615-7c8a726cd2e8-kube-api-access-kwcmm\") pod \"barbican-worker-74bfd556cc-6z8fb\" (UID: \"27f5b9bc-1a92-40a7-b615-7c8a726cd2e8\") " pod="openstack/barbican-worker-74bfd556cc-6z8fb" Oct 14 10:15:23 
crc kubenswrapper[4698]: I1014 10:15:23.939853 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/27f5b9bc-1a92-40a7-b615-7c8a726cd2e8-combined-ca-bundle\") pod \"barbican-worker-74bfd556cc-6z8fb\" (UID: \"27f5b9bc-1a92-40a7-b615-7c8a726cd2e8\") " pod="openstack/barbican-worker-74bfd556cc-6z8fb" Oct 14 10:15:23 crc kubenswrapper[4698]: I1014 10:15:23.940458 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/27f5b9bc-1a92-40a7-b615-7c8a726cd2e8-logs\") pod \"barbican-worker-74bfd556cc-6z8fb\" (UID: \"27f5b9bc-1a92-40a7-b615-7c8a726cd2e8\") " pod="openstack/barbican-worker-74bfd556cc-6z8fb" Oct 14 10:15:23 crc kubenswrapper[4698]: I1014 10:15:23.958499 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/27f5b9bc-1a92-40a7-b615-7c8a726cd2e8-config-data-custom\") pod \"barbican-worker-74bfd556cc-6z8fb\" (UID: \"27f5b9bc-1a92-40a7-b615-7c8a726cd2e8\") " pod="openstack/barbican-worker-74bfd556cc-6z8fb" Oct 14 10:15:23 crc kubenswrapper[4698]: I1014 10:15:23.962397 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/27f5b9bc-1a92-40a7-b615-7c8a726cd2e8-config-data\") pod \"barbican-worker-74bfd556cc-6z8fb\" (UID: \"27f5b9bc-1a92-40a7-b615-7c8a726cd2e8\") " pod="openstack/barbican-worker-74bfd556cc-6z8fb" Oct 14 10:15:23 crc kubenswrapper[4698]: I1014 10:15:23.964137 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/27f5b9bc-1a92-40a7-b615-7c8a726cd2e8-combined-ca-bundle\") pod \"barbican-worker-74bfd556cc-6z8fb\" (UID: \"27f5b9bc-1a92-40a7-b615-7c8a726cd2e8\") " pod="openstack/barbican-worker-74bfd556cc-6z8fb" Oct 14 10:15:23 crc kubenswrapper[4698]: I1014 10:15:23.969702 4698 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kwcmm\" (UniqueName: \"kubernetes.io/projected/27f5b9bc-1a92-40a7-b615-7c8a726cd2e8-kube-api-access-kwcmm\") pod \"barbican-worker-74bfd556cc-6z8fb\" (UID: \"27f5b9bc-1a92-40a7-b615-7c8a726cd2e8\") " pod="openstack/barbican-worker-74bfd556cc-6z8fb" Oct 14 10:15:23 crc kubenswrapper[4698]: I1014 10:15:23.994366 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-77cb48f668-xz2r9" Oct 14 10:15:24 crc kubenswrapper[4698]: I1014 10:15:24.012415 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-dc6b4865f-29kvv"] Oct 14 10:15:24 crc kubenswrapper[4698]: I1014 10:15:24.041732 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c2fa04f1-6caf-4c05-b1fe-93b63ef9f908-dns-svc\") pod \"dnsmasq-dns-84449c85c5-m7bl7\" (UID: \"c2fa04f1-6caf-4c05-b1fe-93b63ef9f908\") " pod="openstack/dnsmasq-dns-84449c85c5-m7bl7" Oct 14 10:15:24 crc kubenswrapper[4698]: I1014 10:15:24.042045 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/53ced7bd-2ae6-4e55-8ea2-395d6aebf185-logs\") pod \"barbican-api-6cc5fb5d9d-rbb5x\" (UID: \"53ced7bd-2ae6-4e55-8ea2-395d6aebf185\") " pod="openstack/barbican-api-6cc5fb5d9d-rbb5x" Oct 14 10:15:24 crc kubenswrapper[4698]: I1014 10:15:24.042074 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/53ced7bd-2ae6-4e55-8ea2-395d6aebf185-config-data-custom\") pod \"barbican-api-6cc5fb5d9d-rbb5x\" (UID: \"53ced7bd-2ae6-4e55-8ea2-395d6aebf185\") " pod="openstack/barbican-api-6cc5fb5d9d-rbb5x" Oct 14 10:15:24 crc kubenswrapper[4698]: I1014 10:15:24.042106 4698 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c2fa04f1-6caf-4c05-b1fe-93b63ef9f908-ovsdbserver-sb\") pod \"dnsmasq-dns-84449c85c5-m7bl7\" (UID: \"c2fa04f1-6caf-4c05-b1fe-93b63ef9f908\") " pod="openstack/dnsmasq-dns-84449c85c5-m7bl7" Oct 14 10:15:24 crc kubenswrapper[4698]: I1014 10:15:24.042152 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c2fa04f1-6caf-4c05-b1fe-93b63ef9f908-dns-swift-storage-0\") pod \"dnsmasq-dns-84449c85c5-m7bl7\" (UID: \"c2fa04f1-6caf-4c05-b1fe-93b63ef9f908\") " pod="openstack/dnsmasq-dns-84449c85c5-m7bl7" Oct 14 10:15:24 crc kubenswrapper[4698]: I1014 10:15:24.042186 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c2fa04f1-6caf-4c05-b1fe-93b63ef9f908-config\") pod \"dnsmasq-dns-84449c85c5-m7bl7\" (UID: \"c2fa04f1-6caf-4c05-b1fe-93b63ef9f908\") " pod="openstack/dnsmasq-dns-84449c85c5-m7bl7" Oct 14 10:15:24 crc kubenswrapper[4698]: I1014 10:15:24.042205 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c2fa04f1-6caf-4c05-b1fe-93b63ef9f908-ovsdbserver-nb\") pod \"dnsmasq-dns-84449c85c5-m7bl7\" (UID: \"c2fa04f1-6caf-4c05-b1fe-93b63ef9f908\") " pod="openstack/dnsmasq-dns-84449c85c5-m7bl7" Oct 14 10:15:24 crc kubenswrapper[4698]: I1014 10:15:24.042237 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s6p5b\" (UniqueName: \"kubernetes.io/projected/53ced7bd-2ae6-4e55-8ea2-395d6aebf185-kube-api-access-s6p5b\") pod \"barbican-api-6cc5fb5d9d-rbb5x\" (UID: \"53ced7bd-2ae6-4e55-8ea2-395d6aebf185\") " pod="openstack/barbican-api-6cc5fb5d9d-rbb5x" Oct 14 10:15:24 crc kubenswrapper[4698]: 
I1014 10:15:24.042274 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qn48w\" (UniqueName: \"kubernetes.io/projected/c2fa04f1-6caf-4c05-b1fe-93b63ef9f908-kube-api-access-qn48w\") pod \"dnsmasq-dns-84449c85c5-m7bl7\" (UID: \"c2fa04f1-6caf-4c05-b1fe-93b63ef9f908\") " pod="openstack/dnsmasq-dns-84449c85c5-m7bl7" Oct 14 10:15:24 crc kubenswrapper[4698]: I1014 10:15:24.042301 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/53ced7bd-2ae6-4e55-8ea2-395d6aebf185-config-data\") pod \"barbican-api-6cc5fb5d9d-rbb5x\" (UID: \"53ced7bd-2ae6-4e55-8ea2-395d6aebf185\") " pod="openstack/barbican-api-6cc5fb5d9d-rbb5x" Oct 14 10:15:24 crc kubenswrapper[4698]: I1014 10:15:24.042319 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/53ced7bd-2ae6-4e55-8ea2-395d6aebf185-combined-ca-bundle\") pod \"barbican-api-6cc5fb5d9d-rbb5x\" (UID: \"53ced7bd-2ae6-4e55-8ea2-395d6aebf185\") " pod="openstack/barbican-api-6cc5fb5d9d-rbb5x" Oct 14 10:15:24 crc kubenswrapper[4698]: I1014 10:15:24.044127 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-share-share1-0"] Oct 14 10:15:24 crc kubenswrapper[4698]: I1014 10:15:24.049026 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-worker-74bfd556cc-6z8fb" Oct 14 10:15:24 crc kubenswrapper[4698]: I1014 10:15:24.146912 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s6p5b\" (UniqueName: \"kubernetes.io/projected/53ced7bd-2ae6-4e55-8ea2-395d6aebf185-kube-api-access-s6p5b\") pod \"barbican-api-6cc5fb5d9d-rbb5x\" (UID: \"53ced7bd-2ae6-4e55-8ea2-395d6aebf185\") " pod="openstack/barbican-api-6cc5fb5d9d-rbb5x" Oct 14 10:15:24 crc kubenswrapper[4698]: I1014 10:15:24.146988 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qn48w\" (UniqueName: \"kubernetes.io/projected/c2fa04f1-6caf-4c05-b1fe-93b63ef9f908-kube-api-access-qn48w\") pod \"dnsmasq-dns-84449c85c5-m7bl7\" (UID: \"c2fa04f1-6caf-4c05-b1fe-93b63ef9f908\") " pod="openstack/dnsmasq-dns-84449c85c5-m7bl7" Oct 14 10:15:24 crc kubenswrapper[4698]: I1014 10:15:24.147022 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/53ced7bd-2ae6-4e55-8ea2-395d6aebf185-combined-ca-bundle\") pod \"barbican-api-6cc5fb5d9d-rbb5x\" (UID: \"53ced7bd-2ae6-4e55-8ea2-395d6aebf185\") " pod="openstack/barbican-api-6cc5fb5d9d-rbb5x" Oct 14 10:15:24 crc kubenswrapper[4698]: I1014 10:15:24.147041 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/53ced7bd-2ae6-4e55-8ea2-395d6aebf185-config-data\") pod \"barbican-api-6cc5fb5d9d-rbb5x\" (UID: \"53ced7bd-2ae6-4e55-8ea2-395d6aebf185\") " pod="openstack/barbican-api-6cc5fb5d9d-rbb5x" Oct 14 10:15:24 crc kubenswrapper[4698]: I1014 10:15:24.147094 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c2fa04f1-6caf-4c05-b1fe-93b63ef9f908-dns-svc\") pod \"dnsmasq-dns-84449c85c5-m7bl7\" (UID: \"c2fa04f1-6caf-4c05-b1fe-93b63ef9f908\") " 
pod="openstack/dnsmasq-dns-84449c85c5-m7bl7" Oct 14 10:15:24 crc kubenswrapper[4698]: I1014 10:15:24.147118 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/53ced7bd-2ae6-4e55-8ea2-395d6aebf185-logs\") pod \"barbican-api-6cc5fb5d9d-rbb5x\" (UID: \"53ced7bd-2ae6-4e55-8ea2-395d6aebf185\") " pod="openstack/barbican-api-6cc5fb5d9d-rbb5x" Oct 14 10:15:24 crc kubenswrapper[4698]: I1014 10:15:24.147141 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/53ced7bd-2ae6-4e55-8ea2-395d6aebf185-config-data-custom\") pod \"barbican-api-6cc5fb5d9d-rbb5x\" (UID: \"53ced7bd-2ae6-4e55-8ea2-395d6aebf185\") " pod="openstack/barbican-api-6cc5fb5d9d-rbb5x" Oct 14 10:15:24 crc kubenswrapper[4698]: I1014 10:15:24.147173 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c2fa04f1-6caf-4c05-b1fe-93b63ef9f908-ovsdbserver-sb\") pod \"dnsmasq-dns-84449c85c5-m7bl7\" (UID: \"c2fa04f1-6caf-4c05-b1fe-93b63ef9f908\") " pod="openstack/dnsmasq-dns-84449c85c5-m7bl7" Oct 14 10:15:24 crc kubenswrapper[4698]: I1014 10:15:24.147226 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c2fa04f1-6caf-4c05-b1fe-93b63ef9f908-dns-swift-storage-0\") pod \"dnsmasq-dns-84449c85c5-m7bl7\" (UID: \"c2fa04f1-6caf-4c05-b1fe-93b63ef9f908\") " pod="openstack/dnsmasq-dns-84449c85c5-m7bl7" Oct 14 10:15:24 crc kubenswrapper[4698]: I1014 10:15:24.147263 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c2fa04f1-6caf-4c05-b1fe-93b63ef9f908-config\") pod \"dnsmasq-dns-84449c85c5-m7bl7\" (UID: \"c2fa04f1-6caf-4c05-b1fe-93b63ef9f908\") " pod="openstack/dnsmasq-dns-84449c85c5-m7bl7" Oct 14 10:15:24 crc 
kubenswrapper[4698]: I1014 10:15:24.147282 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c2fa04f1-6caf-4c05-b1fe-93b63ef9f908-ovsdbserver-nb\") pod \"dnsmasq-dns-84449c85c5-m7bl7\" (UID: \"c2fa04f1-6caf-4c05-b1fe-93b63ef9f908\") " pod="openstack/dnsmasq-dns-84449c85c5-m7bl7" Oct 14 10:15:24 crc kubenswrapper[4698]: I1014 10:15:24.149273 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c2fa04f1-6caf-4c05-b1fe-93b63ef9f908-ovsdbserver-nb\") pod \"dnsmasq-dns-84449c85c5-m7bl7\" (UID: \"c2fa04f1-6caf-4c05-b1fe-93b63ef9f908\") " pod="openstack/dnsmasq-dns-84449c85c5-m7bl7" Oct 14 10:15:24 crc kubenswrapper[4698]: I1014 10:15:24.149323 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c2fa04f1-6caf-4c05-b1fe-93b63ef9f908-dns-svc\") pod \"dnsmasq-dns-84449c85c5-m7bl7\" (UID: \"c2fa04f1-6caf-4c05-b1fe-93b63ef9f908\") " pod="openstack/dnsmasq-dns-84449c85c5-m7bl7" Oct 14 10:15:24 crc kubenswrapper[4698]: I1014 10:15:24.149634 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/53ced7bd-2ae6-4e55-8ea2-395d6aebf185-logs\") pod \"barbican-api-6cc5fb5d9d-rbb5x\" (UID: \"53ced7bd-2ae6-4e55-8ea2-395d6aebf185\") " pod="openstack/barbican-api-6cc5fb5d9d-rbb5x" Oct 14 10:15:24 crc kubenswrapper[4698]: I1014 10:15:24.150088 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c2fa04f1-6caf-4c05-b1fe-93b63ef9f908-dns-swift-storage-0\") pod \"dnsmasq-dns-84449c85c5-m7bl7\" (UID: \"c2fa04f1-6caf-4c05-b1fe-93b63ef9f908\") " pod="openstack/dnsmasq-dns-84449c85c5-m7bl7" Oct 14 10:15:24 crc kubenswrapper[4698]: I1014 10:15:24.168524 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c2fa04f1-6caf-4c05-b1fe-93b63ef9f908-ovsdbserver-sb\") pod \"dnsmasq-dns-84449c85c5-m7bl7\" (UID: \"c2fa04f1-6caf-4c05-b1fe-93b63ef9f908\") " pod="openstack/dnsmasq-dns-84449c85c5-m7bl7" Oct 14 10:15:24 crc kubenswrapper[4698]: I1014 10:15:24.170564 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/53ced7bd-2ae6-4e55-8ea2-395d6aebf185-combined-ca-bundle\") pod \"barbican-api-6cc5fb5d9d-rbb5x\" (UID: \"53ced7bd-2ae6-4e55-8ea2-395d6aebf185\") " pod="openstack/barbican-api-6cc5fb5d9d-rbb5x" Oct 14 10:15:24 crc kubenswrapper[4698]: I1014 10:15:24.189471 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/53ced7bd-2ae6-4e55-8ea2-395d6aebf185-config-data\") pod \"barbican-api-6cc5fb5d9d-rbb5x\" (UID: \"53ced7bd-2ae6-4e55-8ea2-395d6aebf185\") " pod="openstack/barbican-api-6cc5fb5d9d-rbb5x" Oct 14 10:15:24 crc kubenswrapper[4698]: I1014 10:15:24.199799 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s6p5b\" (UniqueName: \"kubernetes.io/projected/53ced7bd-2ae6-4e55-8ea2-395d6aebf185-kube-api-access-s6p5b\") pod \"barbican-api-6cc5fb5d9d-rbb5x\" (UID: \"53ced7bd-2ae6-4e55-8ea2-395d6aebf185\") " pod="openstack/barbican-api-6cc5fb5d9d-rbb5x" Oct 14 10:15:24 crc kubenswrapper[4698]: I1014 10:15:24.200140 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c2fa04f1-6caf-4c05-b1fe-93b63ef9f908-config\") pod \"dnsmasq-dns-84449c85c5-m7bl7\" (UID: \"c2fa04f1-6caf-4c05-b1fe-93b63ef9f908\") " pod="openstack/dnsmasq-dns-84449c85c5-m7bl7" Oct 14 10:15:24 crc kubenswrapper[4698]: I1014 10:15:24.205573 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: 
\"kubernetes.io/secret/53ced7bd-2ae6-4e55-8ea2-395d6aebf185-config-data-custom\") pod \"barbican-api-6cc5fb5d9d-rbb5x\" (UID: \"53ced7bd-2ae6-4e55-8ea2-395d6aebf185\") " pod="openstack/barbican-api-6cc5fb5d9d-rbb5x" Oct 14 10:15:24 crc kubenswrapper[4698]: I1014 10:15:24.206872 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qn48w\" (UniqueName: \"kubernetes.io/projected/c2fa04f1-6caf-4c05-b1fe-93b63ef9f908-kube-api-access-qn48w\") pod \"dnsmasq-dns-84449c85c5-m7bl7\" (UID: \"c2fa04f1-6caf-4c05-b1fe-93b63ef9f908\") " pod="openstack/dnsmasq-dns-84449c85c5-m7bl7" Oct 14 10:15:24 crc kubenswrapper[4698]: I1014 10:15:24.227779 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-6cc5fb5d9d-rbb5x" Oct 14 10:15:24 crc kubenswrapper[4698]: I1014 10:15:24.445590 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-api-0"] Oct 14 10:15:24 crc kubenswrapper[4698]: I1014 10:15:24.475268 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-84449c85c5-m7bl7" Oct 14 10:15:24 crc kubenswrapper[4698]: I1014 10:15:24.499646 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-dc6b4865f-29kvv" event={"ID":"0f82463f-d211-4e22-8742-570c0293871c","Type":"ContainerStarted","Data":"9b593facd573045e2686a7054d663195de76de56e0cd84b8588ec9d35fd28391"} Oct 14 10:15:24 crc kubenswrapper[4698]: I1014 10:15:24.512095 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-share-share1-0" event={"ID":"e1230245-6b92-4e01-bc07-043a24a9edd3","Type":"ContainerStarted","Data":"972f1ca10b061eb6f2a1c3058fb5cc87bf89f33441aa3146ea7635439502a171"} Oct 14 10:15:24 crc kubenswrapper[4698]: I1014 10:15:24.514887 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-scheduler-0" event={"ID":"e28cf5cd-644d-4f5f-8db7-421fbe745ac2","Type":"ContainerStarted","Data":"bbfbe01a6fd5441148ce523b0b9fc96e1458a7a66fd30a0e5e924c56fa020b6e"} Oct 14 10:15:24 crc kubenswrapper[4698]: I1014 10:15:24.516824 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-api-0" event={"ID":"600d9dbf-48aa-4c54-9d47-9ffe8d383d6e","Type":"ContainerStarted","Data":"93f98572e35b5ef10429684e2eb53e0f047367956fce7b2b00fcd9e6b2ae9e44"} Oct 14 10:15:24 crc kubenswrapper[4698]: I1014 10:15:24.792128 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-77cb48f668-xz2r9"] Oct 14 10:15:24 crc kubenswrapper[4698]: I1014 10:15:24.930157 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-74bfd556cc-6z8fb"] Oct 14 10:15:24 crc kubenswrapper[4698]: I1014 10:15:24.966822 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-6cc5fb5d9d-rbb5x"] Oct 14 10:15:25 crc kubenswrapper[4698]: I1014 10:15:25.319600 4698 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-thrh8" Oct 14 10:15:25 crc kubenswrapper[4698]: I1014 10:15:25.322619 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-84449c85c5-m7bl7"] Oct 14 10:15:25 crc kubenswrapper[4698]: W1014 10:15:25.324181 4698 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc2fa04f1_6caf_4c05_b1fe_93b63ef9f908.slice/crio-433ebc8d18d400e911af525c7858f729a9a2c7102e51e8ead56a6cd1f0e8f4a8 WatchSource:0}: Error finding container 433ebc8d18d400e911af525c7858f729a9a2c7102e51e8ead56a6cd1f0e8f4a8: Status 404 returned error can't find the container with id 433ebc8d18d400e911af525c7858f729a9a2c7102e51e8ead56a6cd1f0e8f4a8 Oct 14 10:15:25 crc kubenswrapper[4698]: I1014 10:15:25.415345 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d90a3be7-6827-427d-9ed1-3aef79542b6d-combined-ca-bundle\") pod \"d90a3be7-6827-427d-9ed1-3aef79542b6d\" (UID: \"d90a3be7-6827-427d-9ed1-3aef79542b6d\") " Oct 14 10:15:25 crc kubenswrapper[4698]: I1014 10:15:25.415691 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/d90a3be7-6827-427d-9ed1-3aef79542b6d-db-sync-config-data\") pod \"d90a3be7-6827-427d-9ed1-3aef79542b6d\" (UID: \"d90a3be7-6827-427d-9ed1-3aef79542b6d\") " Oct 14 10:15:25 crc kubenswrapper[4698]: I1014 10:15:25.415807 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/d90a3be7-6827-427d-9ed1-3aef79542b6d-etc-machine-id\") pod \"d90a3be7-6827-427d-9ed1-3aef79542b6d\" (UID: \"d90a3be7-6827-427d-9ed1-3aef79542b6d\") " Oct 14 10:15:25 crc kubenswrapper[4698]: I1014 10:15:25.415944 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"kube-api-access-89d95\" (UniqueName: \"kubernetes.io/projected/d90a3be7-6827-427d-9ed1-3aef79542b6d-kube-api-access-89d95\") pod \"d90a3be7-6827-427d-9ed1-3aef79542b6d\" (UID: \"d90a3be7-6827-427d-9ed1-3aef79542b6d\") " Oct 14 10:15:25 crc kubenswrapper[4698]: I1014 10:15:25.415983 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d90a3be7-6827-427d-9ed1-3aef79542b6d-scripts\") pod \"d90a3be7-6827-427d-9ed1-3aef79542b6d\" (UID: \"d90a3be7-6827-427d-9ed1-3aef79542b6d\") " Oct 14 10:15:25 crc kubenswrapper[4698]: I1014 10:15:25.416036 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d90a3be7-6827-427d-9ed1-3aef79542b6d-config-data\") pod \"d90a3be7-6827-427d-9ed1-3aef79542b6d\" (UID: \"d90a3be7-6827-427d-9ed1-3aef79542b6d\") " Oct 14 10:15:25 crc kubenswrapper[4698]: I1014 10:15:25.416065 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d90a3be7-6827-427d-9ed1-3aef79542b6d-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "d90a3be7-6827-427d-9ed1-3aef79542b6d" (UID: "d90a3be7-6827-427d-9ed1-3aef79542b6d"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 14 10:15:25 crc kubenswrapper[4698]: I1014 10:15:25.417725 4698 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/d90a3be7-6827-427d-9ed1-3aef79542b6d-etc-machine-id\") on node \"crc\" DevicePath \"\"" Oct 14 10:15:25 crc kubenswrapper[4698]: I1014 10:15:25.420331 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d90a3be7-6827-427d-9ed1-3aef79542b6d-kube-api-access-89d95" (OuterVolumeSpecName: "kube-api-access-89d95") pod "d90a3be7-6827-427d-9ed1-3aef79542b6d" (UID: "d90a3be7-6827-427d-9ed1-3aef79542b6d"). 
InnerVolumeSpecName "kube-api-access-89d95". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 10:15:25 crc kubenswrapper[4698]: I1014 10:15:25.422463 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d90a3be7-6827-427d-9ed1-3aef79542b6d-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "d90a3be7-6827-427d-9ed1-3aef79542b6d" (UID: "d90a3be7-6827-427d-9ed1-3aef79542b6d"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 10:15:25 crc kubenswrapper[4698]: I1014 10:15:25.425010 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d90a3be7-6827-427d-9ed1-3aef79542b6d-scripts" (OuterVolumeSpecName: "scripts") pod "d90a3be7-6827-427d-9ed1-3aef79542b6d" (UID: "d90a3be7-6827-427d-9ed1-3aef79542b6d"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 10:15:25 crc kubenswrapper[4698]: I1014 10:15:25.486171 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d90a3be7-6827-427d-9ed1-3aef79542b6d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d90a3be7-6827-427d-9ed1-3aef79542b6d" (UID: "d90a3be7-6827-427d-9ed1-3aef79542b6d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 10:15:25 crc kubenswrapper[4698]: I1014 10:15:25.518022 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d90a3be7-6827-427d-9ed1-3aef79542b6d-config-data" (OuterVolumeSpecName: "config-data") pod "d90a3be7-6827-427d-9ed1-3aef79542b6d" (UID: "d90a3be7-6827-427d-9ed1-3aef79542b6d"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 10:15:25 crc kubenswrapper[4698]: I1014 10:15:25.520609 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-89d95\" (UniqueName: \"kubernetes.io/projected/d90a3be7-6827-427d-9ed1-3aef79542b6d-kube-api-access-89d95\") on node \"crc\" DevicePath \"\"" Oct 14 10:15:25 crc kubenswrapper[4698]: I1014 10:15:25.520659 4698 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d90a3be7-6827-427d-9ed1-3aef79542b6d-scripts\") on node \"crc\" DevicePath \"\"" Oct 14 10:15:25 crc kubenswrapper[4698]: I1014 10:15:25.520671 4698 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d90a3be7-6827-427d-9ed1-3aef79542b6d-config-data\") on node \"crc\" DevicePath \"\"" Oct 14 10:15:25 crc kubenswrapper[4698]: I1014 10:15:25.520681 4698 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d90a3be7-6827-427d-9ed1-3aef79542b6d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 14 10:15:25 crc kubenswrapper[4698]: I1014 10:15:25.520714 4698 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/d90a3be7-6827-427d-9ed1-3aef79542b6d-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Oct 14 10:15:25 crc kubenswrapper[4698]: I1014 10:15:25.536244 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-84449c85c5-m7bl7" event={"ID":"c2fa04f1-6caf-4c05-b1fe-93b63ef9f908","Type":"ContainerStarted","Data":"433ebc8d18d400e911af525c7858f729a9a2c7102e51e8ead56a6cd1f0e8f4a8"} Oct 14 10:15:25 crc kubenswrapper[4698]: I1014 10:15:25.537904 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-77cb48f668-xz2r9" 
event={"ID":"3a9f4039-e577-4dc0-b1f9-ecdf1e7a9606","Type":"ContainerStarted","Data":"35a86c89ee248966a83c21d91d46b8fcbf1a142da2681ce99ed35d477f57dcc7"} Oct 14 10:15:25 crc kubenswrapper[4698]: I1014 10:15:25.539709 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-74bfd556cc-6z8fb" event={"ID":"27f5b9bc-1a92-40a7-b615-7c8a726cd2e8","Type":"ContainerStarted","Data":"1b204004cb556878049c2b7817551a3e0e9aba3b832aa6da1ca8c52152e9b911"} Oct 14 10:15:25 crc kubenswrapper[4698]: I1014 10:15:25.543541 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-thrh8" event={"ID":"d90a3be7-6827-427d-9ed1-3aef79542b6d","Type":"ContainerDied","Data":"24165701a8b96f6ae9ba2d2394d8e77ae275b7d55dcb3a16d3845f01c767eaec"} Oct 14 10:15:25 crc kubenswrapper[4698]: I1014 10:15:25.543566 4698 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="24165701a8b96f6ae9ba2d2394d8e77ae275b7d55dcb3a16d3845f01c767eaec" Oct 14 10:15:25 crc kubenswrapper[4698]: I1014 10:15:25.543620 4698 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-thrh8" Oct 14 10:15:25 crc kubenswrapper[4698]: I1014 10:15:25.548370 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-api-0" event={"ID":"600d9dbf-48aa-4c54-9d47-9ffe8d383d6e","Type":"ContainerStarted","Data":"82e93df7e65a6a821e7024f0f7e4837cd3241a9c9ccad476a07e5d3158b42f9c"} Oct 14 10:15:25 crc kubenswrapper[4698]: I1014 10:15:25.561839 4698 generic.go:334] "Generic (PLEG): container finished" podID="0f82463f-d211-4e22-8742-570c0293871c" containerID="08397db95eaf47a46e29220988fb8b68cc818ea82f80d8856ad0cf2f47d3874a" exitCode=0 Oct 14 10:15:25 crc kubenswrapper[4698]: I1014 10:15:25.561915 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-dc6b4865f-29kvv" event={"ID":"0f82463f-d211-4e22-8742-570c0293871c","Type":"ContainerDied","Data":"08397db95eaf47a46e29220988fb8b68cc818ea82f80d8856ad0cf2f47d3874a"} Oct 14 10:15:25 crc kubenswrapper[4698]: I1014 10:15:25.576138 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6cc5fb5d9d-rbb5x" event={"ID":"53ced7bd-2ae6-4e55-8ea2-395d6aebf185","Type":"ContainerStarted","Data":"38d577e303af2cd9edc188a6f03459306b89b4bbb79048439a6dce9fa059d51d"} Oct 14 10:15:25 crc kubenswrapper[4698]: I1014 10:15:25.576201 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6cc5fb5d9d-rbb5x" event={"ID":"53ced7bd-2ae6-4e55-8ea2-395d6aebf185","Type":"ContainerStarted","Data":"ff44068c979e004de88a9cfe9626f6babdcd033d033f521bc691bd7321f6fa5b"} Oct 14 10:15:25 crc kubenswrapper[4698]: I1014 10:15:25.738942 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Oct 14 10:15:25 crc kubenswrapper[4698]: E1014 10:15:25.739849 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d90a3be7-6827-427d-9ed1-3aef79542b6d" containerName="cinder-db-sync" Oct 14 10:15:25 crc kubenswrapper[4698]: I1014 10:15:25.740060 4698 
state_mem.go:107] "Deleted CPUSet assignment" podUID="d90a3be7-6827-427d-9ed1-3aef79542b6d" containerName="cinder-db-sync" Oct 14 10:15:25 crc kubenswrapper[4698]: I1014 10:15:25.740273 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="d90a3be7-6827-427d-9ed1-3aef79542b6d" containerName="cinder-db-sync" Oct 14 10:15:25 crc kubenswrapper[4698]: I1014 10:15:25.741408 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Oct 14 10:15:25 crc kubenswrapper[4698]: I1014 10:15:25.746837 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-q7jcm" Oct 14 10:15:25 crc kubenswrapper[4698]: I1014 10:15:25.752790 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Oct 14 10:15:25 crc kubenswrapper[4698]: I1014 10:15:25.753019 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Oct 14 10:15:25 crc kubenswrapper[4698]: I1014 10:15:25.753138 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Oct 14 10:15:25 crc kubenswrapper[4698]: I1014 10:15:25.758441 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Oct 14 10:15:25 crc kubenswrapper[4698]: I1014 10:15:25.835781 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/9dea0f58-0975-4dc1-9459-a72b8151027b-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"9dea0f58-0975-4dc1-9459-a72b8151027b\") " pod="openstack/cinder-scheduler-0" Oct 14 10:15:25 crc kubenswrapper[4698]: I1014 10:15:25.835827 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9dea0f58-0975-4dc1-9459-a72b8151027b-combined-ca-bundle\") pod 
\"cinder-scheduler-0\" (UID: \"9dea0f58-0975-4dc1-9459-a72b8151027b\") " pod="openstack/cinder-scheduler-0" Oct 14 10:15:25 crc kubenswrapper[4698]: I1014 10:15:25.835893 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9dea0f58-0975-4dc1-9459-a72b8151027b-scripts\") pod \"cinder-scheduler-0\" (UID: \"9dea0f58-0975-4dc1-9459-a72b8151027b\") " pod="openstack/cinder-scheduler-0" Oct 14 10:15:25 crc kubenswrapper[4698]: I1014 10:15:25.835927 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pn8pb\" (UniqueName: \"kubernetes.io/projected/9dea0f58-0975-4dc1-9459-a72b8151027b-kube-api-access-pn8pb\") pod \"cinder-scheduler-0\" (UID: \"9dea0f58-0975-4dc1-9459-a72b8151027b\") " pod="openstack/cinder-scheduler-0" Oct 14 10:15:25 crc kubenswrapper[4698]: I1014 10:15:25.835987 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9dea0f58-0975-4dc1-9459-a72b8151027b-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"9dea0f58-0975-4dc1-9459-a72b8151027b\") " pod="openstack/cinder-scheduler-0" Oct 14 10:15:25 crc kubenswrapper[4698]: I1014 10:15:25.836050 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9dea0f58-0975-4dc1-9459-a72b8151027b-config-data\") pod \"cinder-scheduler-0\" (UID: \"9dea0f58-0975-4dc1-9459-a72b8151027b\") " pod="openstack/cinder-scheduler-0" Oct 14 10:15:25 crc kubenswrapper[4698]: I1014 10:15:25.860547 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-volume-volume1-0"] Oct 14 10:15:25 crc kubenswrapper[4698]: I1014 10:15:25.864643 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-volume-volume1-0" Oct 14 10:15:25 crc kubenswrapper[4698]: I1014 10:15:25.868688 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-volume-volume1-0"] Oct 14 10:15:25 crc kubenswrapper[4698]: I1014 10:15:25.875848 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-volume-volume1-config-data" Oct 14 10:15:25 crc kubenswrapper[4698]: I1014 10:15:25.939217 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9dea0f58-0975-4dc1-9459-a72b8151027b-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"9dea0f58-0975-4dc1-9459-a72b8151027b\") " pod="openstack/cinder-scheduler-0" Oct 14 10:15:25 crc kubenswrapper[4698]: I1014 10:15:25.939267 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/d26415c7-42ea-464b-910f-c1b25784fde3-etc-machine-id\") pod \"cinder-volume-volume1-0\" (UID: \"d26415c7-42ea-464b-910f-c1b25784fde3\") " pod="openstack/cinder-volume-volume1-0" Oct 14 10:15:25 crc kubenswrapper[4698]: I1014 10:15:25.939303 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d26415c7-42ea-464b-910f-c1b25784fde3-scripts\") pod \"cinder-volume-volume1-0\" (UID: \"d26415c7-42ea-464b-910f-c1b25784fde3\") " pod="openstack/cinder-volume-volume1-0" Oct 14 10:15:25 crc kubenswrapper[4698]: I1014 10:15:25.939333 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d26415c7-42ea-464b-910f-c1b25784fde3-config-data-custom\") pod \"cinder-volume-volume1-0\" (UID: \"d26415c7-42ea-464b-910f-c1b25784fde3\") " pod="openstack/cinder-volume-volume1-0" Oct 14 10:15:25 crc kubenswrapper[4698]: 
I1014 10:15:25.939376 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9dea0f58-0975-4dc1-9459-a72b8151027b-config-data\") pod \"cinder-scheduler-0\" (UID: \"9dea0f58-0975-4dc1-9459-a72b8151027b\") " pod="openstack/cinder-scheduler-0" Oct 14 10:15:25 crc kubenswrapper[4698]: I1014 10:15:25.939391 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/d26415c7-42ea-464b-910f-c1b25784fde3-run\") pod \"cinder-volume-volume1-0\" (UID: \"d26415c7-42ea-464b-910f-c1b25784fde3\") " pod="openstack/cinder-volume-volume1-0" Oct 14 10:15:25 crc kubenswrapper[4698]: I1014 10:15:25.939410 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d26415c7-42ea-464b-910f-c1b25784fde3-lib-modules\") pod \"cinder-volume-volume1-0\" (UID: \"d26415c7-42ea-464b-910f-c1b25784fde3\") " pod="openstack/cinder-volume-volume1-0" Oct 14 10:15:25 crc kubenswrapper[4698]: I1014 10:15:25.939438 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/d26415c7-42ea-464b-910f-c1b25784fde3-sys\") pod \"cinder-volume-volume1-0\" (UID: \"d26415c7-42ea-464b-910f-c1b25784fde3\") " pod="openstack/cinder-volume-volume1-0" Oct 14 10:15:25 crc kubenswrapper[4698]: I1014 10:15:25.939457 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/d26415c7-42ea-464b-910f-c1b25784fde3-var-locks-brick\") pod \"cinder-volume-volume1-0\" (UID: \"d26415c7-42ea-464b-910f-c1b25784fde3\") " pod="openstack/cinder-volume-volume1-0" Oct 14 10:15:25 crc kubenswrapper[4698]: I1014 10:15:25.939477 4698 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/d26415c7-42ea-464b-910f-c1b25784fde3-var-lib-cinder\") pod \"cinder-volume-volume1-0\" (UID: \"d26415c7-42ea-464b-910f-c1b25784fde3\") " pod="openstack/cinder-volume-volume1-0" Oct 14 10:15:25 crc kubenswrapper[4698]: I1014 10:15:25.939507 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/d26415c7-42ea-464b-910f-c1b25784fde3-var-locks-cinder\") pod \"cinder-volume-volume1-0\" (UID: \"d26415c7-42ea-464b-910f-c1b25784fde3\") " pod="openstack/cinder-volume-volume1-0" Oct 14 10:15:25 crc kubenswrapper[4698]: I1014 10:15:25.939538 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/9dea0f58-0975-4dc1-9459-a72b8151027b-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"9dea0f58-0975-4dc1-9459-a72b8151027b\") " pod="openstack/cinder-scheduler-0" Oct 14 10:15:25 crc kubenswrapper[4698]: I1014 10:15:25.939556 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9dea0f58-0975-4dc1-9459-a72b8151027b-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"9dea0f58-0975-4dc1-9459-a72b8151027b\") " pod="openstack/cinder-scheduler-0" Oct 14 10:15:25 crc kubenswrapper[4698]: I1014 10:15:25.939578 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bvdhb\" (UniqueName: \"kubernetes.io/projected/d26415c7-42ea-464b-910f-c1b25784fde3-kube-api-access-bvdhb\") pod \"cinder-volume-volume1-0\" (UID: \"d26415c7-42ea-464b-910f-c1b25784fde3\") " pod="openstack/cinder-volume-volume1-0" Oct 14 10:15:25 crc kubenswrapper[4698]: I1014 10:15:25.939602 4698 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/d26415c7-42ea-464b-910f-c1b25784fde3-etc-iscsi\") pod \"cinder-volume-volume1-0\" (UID: \"d26415c7-42ea-464b-910f-c1b25784fde3\") " pod="openstack/cinder-volume-volume1-0" Oct 14 10:15:25 crc kubenswrapper[4698]: I1014 10:15:25.939620 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/d26415c7-42ea-464b-910f-c1b25784fde3-dev\") pod \"cinder-volume-volume1-0\" (UID: \"d26415c7-42ea-464b-910f-c1b25784fde3\") " pod="openstack/cinder-volume-volume1-0" Oct 14 10:15:25 crc kubenswrapper[4698]: I1014 10:15:25.939648 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/d26415c7-42ea-464b-910f-c1b25784fde3-etc-nvme\") pod \"cinder-volume-volume1-0\" (UID: \"d26415c7-42ea-464b-910f-c1b25784fde3\") " pod="openstack/cinder-volume-volume1-0" Oct 14 10:15:25 crc kubenswrapper[4698]: I1014 10:15:25.939674 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9dea0f58-0975-4dc1-9459-a72b8151027b-scripts\") pod \"cinder-scheduler-0\" (UID: \"9dea0f58-0975-4dc1-9459-a72b8151027b\") " pod="openstack/cinder-scheduler-0" Oct 14 10:15:25 crc kubenswrapper[4698]: I1014 10:15:25.939719 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pn8pb\" (UniqueName: \"kubernetes.io/projected/9dea0f58-0975-4dc1-9459-a72b8151027b-kube-api-access-pn8pb\") pod \"cinder-scheduler-0\" (UID: \"9dea0f58-0975-4dc1-9459-a72b8151027b\") " pod="openstack/cinder-scheduler-0" Oct 14 10:15:25 crc kubenswrapper[4698]: I1014 10:15:25.939750 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/d26415c7-42ea-464b-910f-c1b25784fde3-combined-ca-bundle\") pod \"cinder-volume-volume1-0\" (UID: \"d26415c7-42ea-464b-910f-c1b25784fde3\") " pod="openstack/cinder-volume-volume1-0" Oct 14 10:15:25 crc kubenswrapper[4698]: I1014 10:15:25.939805 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/d26415c7-42ea-464b-910f-c1b25784fde3-ceph\") pod \"cinder-volume-volume1-0\" (UID: \"d26415c7-42ea-464b-910f-c1b25784fde3\") " pod="openstack/cinder-volume-volume1-0" Oct 14 10:15:25 crc kubenswrapper[4698]: I1014 10:15:25.939826 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d26415c7-42ea-464b-910f-c1b25784fde3-config-data\") pod \"cinder-volume-volume1-0\" (UID: \"d26415c7-42ea-464b-910f-c1b25784fde3\") " pod="openstack/cinder-volume-volume1-0" Oct 14 10:15:25 crc kubenswrapper[4698]: I1014 10:15:25.945202 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/9dea0f58-0975-4dc1-9459-a72b8151027b-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"9dea0f58-0975-4dc1-9459-a72b8151027b\") " pod="openstack/cinder-scheduler-0" Oct 14 10:15:25 crc kubenswrapper[4698]: I1014 10:15:25.945818 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9dea0f58-0975-4dc1-9459-a72b8151027b-config-data\") pod \"cinder-scheduler-0\" (UID: \"9dea0f58-0975-4dc1-9459-a72b8151027b\") " pod="openstack/cinder-scheduler-0" Oct 14 10:15:25 crc kubenswrapper[4698]: I1014 10:15:25.959658 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9dea0f58-0975-4dc1-9459-a72b8151027b-scripts\") pod \"cinder-scheduler-0\" (UID: \"9dea0f58-0975-4dc1-9459-a72b8151027b\") " 
pod="openstack/cinder-scheduler-0" Oct 14 10:15:25 crc kubenswrapper[4698]: I1014 10:15:25.974711 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9dea0f58-0975-4dc1-9459-a72b8151027b-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"9dea0f58-0975-4dc1-9459-a72b8151027b\") " pod="openstack/cinder-scheduler-0" Oct 14 10:15:25 crc kubenswrapper[4698]: I1014 10:15:25.977932 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9dea0f58-0975-4dc1-9459-a72b8151027b-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"9dea0f58-0975-4dc1-9459-a72b8151027b\") " pod="openstack/cinder-scheduler-0" Oct 14 10:15:26 crc kubenswrapper[4698]: I1014 10:15:26.007641 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pn8pb\" (UniqueName: \"kubernetes.io/projected/9dea0f58-0975-4dc1-9459-a72b8151027b-kube-api-access-pn8pb\") pod \"cinder-scheduler-0\" (UID: \"9dea0f58-0975-4dc1-9459-a72b8151027b\") " pod="openstack/cinder-scheduler-0" Oct 14 10:15:26 crc kubenswrapper[4698]: I1014 10:15:26.009166 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-84449c85c5-m7bl7"] Oct 14 10:15:26 crc kubenswrapper[4698]: I1014 10:15:26.041081 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/d26415c7-42ea-464b-910f-c1b25784fde3-etc-nvme\") pod \"cinder-volume-volume1-0\" (UID: \"d26415c7-42ea-464b-910f-c1b25784fde3\") " pod="openstack/cinder-volume-volume1-0" Oct 14 10:15:26 crc kubenswrapper[4698]: I1014 10:15:26.041307 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d26415c7-42ea-464b-910f-c1b25784fde3-combined-ca-bundle\") pod \"cinder-volume-volume1-0\" (UID: 
\"d26415c7-42ea-464b-910f-c1b25784fde3\") " pod="openstack/cinder-volume-volume1-0" Oct 14 10:15:26 crc kubenswrapper[4698]: I1014 10:15:26.041332 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/d26415c7-42ea-464b-910f-c1b25784fde3-ceph\") pod \"cinder-volume-volume1-0\" (UID: \"d26415c7-42ea-464b-910f-c1b25784fde3\") " pod="openstack/cinder-volume-volume1-0" Oct 14 10:15:26 crc kubenswrapper[4698]: I1014 10:15:26.041349 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d26415c7-42ea-464b-910f-c1b25784fde3-config-data\") pod \"cinder-volume-volume1-0\" (UID: \"d26415c7-42ea-464b-910f-c1b25784fde3\") " pod="openstack/cinder-volume-volume1-0" Oct 14 10:15:26 crc kubenswrapper[4698]: I1014 10:15:26.041377 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/d26415c7-42ea-464b-910f-c1b25784fde3-etc-machine-id\") pod \"cinder-volume-volume1-0\" (UID: \"d26415c7-42ea-464b-910f-c1b25784fde3\") " pod="openstack/cinder-volume-volume1-0" Oct 14 10:15:26 crc kubenswrapper[4698]: I1014 10:15:26.041301 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/d26415c7-42ea-464b-910f-c1b25784fde3-etc-nvme\") pod \"cinder-volume-volume1-0\" (UID: \"d26415c7-42ea-464b-910f-c1b25784fde3\") " pod="openstack/cinder-volume-volume1-0" Oct 14 10:15:26 crc kubenswrapper[4698]: I1014 10:15:26.041401 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d26415c7-42ea-464b-910f-c1b25784fde3-scripts\") pod \"cinder-volume-volume1-0\" (UID: \"d26415c7-42ea-464b-910f-c1b25784fde3\") " pod="openstack/cinder-volume-volume1-0" Oct 14 10:15:26 crc kubenswrapper[4698]: I1014 10:15:26.041482 4698 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d26415c7-42ea-464b-910f-c1b25784fde3-config-data-custom\") pod \"cinder-volume-volume1-0\" (UID: \"d26415c7-42ea-464b-910f-c1b25784fde3\") " pod="openstack/cinder-volume-volume1-0" Oct 14 10:15:26 crc kubenswrapper[4698]: I1014 10:15:26.041580 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/d26415c7-42ea-464b-910f-c1b25784fde3-run\") pod \"cinder-volume-volume1-0\" (UID: \"d26415c7-42ea-464b-910f-c1b25784fde3\") " pod="openstack/cinder-volume-volume1-0" Oct 14 10:15:26 crc kubenswrapper[4698]: I1014 10:15:26.041629 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d26415c7-42ea-464b-910f-c1b25784fde3-lib-modules\") pod \"cinder-volume-volume1-0\" (UID: \"d26415c7-42ea-464b-910f-c1b25784fde3\") " pod="openstack/cinder-volume-volume1-0" Oct 14 10:15:26 crc kubenswrapper[4698]: I1014 10:15:26.041692 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/d26415c7-42ea-464b-910f-c1b25784fde3-sys\") pod \"cinder-volume-volume1-0\" (UID: \"d26415c7-42ea-464b-910f-c1b25784fde3\") " pod="openstack/cinder-volume-volume1-0" Oct 14 10:15:26 crc kubenswrapper[4698]: I1014 10:15:26.041743 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/d26415c7-42ea-464b-910f-c1b25784fde3-var-locks-brick\") pod \"cinder-volume-volume1-0\" (UID: \"d26415c7-42ea-464b-910f-c1b25784fde3\") " pod="openstack/cinder-volume-volume1-0" Oct 14 10:15:26 crc kubenswrapper[4698]: I1014 10:15:26.041806 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-cinder\" (UniqueName: 
\"kubernetes.io/host-path/d26415c7-42ea-464b-910f-c1b25784fde3-var-lib-cinder\") pod \"cinder-volume-volume1-0\" (UID: \"d26415c7-42ea-464b-910f-c1b25784fde3\") " pod="openstack/cinder-volume-volume1-0" Oct 14 10:15:26 crc kubenswrapper[4698]: I1014 10:15:26.041870 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/d26415c7-42ea-464b-910f-c1b25784fde3-var-locks-cinder\") pod \"cinder-volume-volume1-0\" (UID: \"d26415c7-42ea-464b-910f-c1b25784fde3\") " pod="openstack/cinder-volume-volume1-0" Oct 14 10:15:26 crc kubenswrapper[4698]: I1014 10:15:26.041927 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bvdhb\" (UniqueName: \"kubernetes.io/projected/d26415c7-42ea-464b-910f-c1b25784fde3-kube-api-access-bvdhb\") pod \"cinder-volume-volume1-0\" (UID: \"d26415c7-42ea-464b-910f-c1b25784fde3\") " pod="openstack/cinder-volume-volume1-0" Oct 14 10:15:26 crc kubenswrapper[4698]: I1014 10:15:26.041971 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/d26415c7-42ea-464b-910f-c1b25784fde3-etc-iscsi\") pod \"cinder-volume-volume1-0\" (UID: \"d26415c7-42ea-464b-910f-c1b25784fde3\") " pod="openstack/cinder-volume-volume1-0" Oct 14 10:15:26 crc kubenswrapper[4698]: I1014 10:15:26.042009 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/d26415c7-42ea-464b-910f-c1b25784fde3-dev\") pod \"cinder-volume-volume1-0\" (UID: \"d26415c7-42ea-464b-910f-c1b25784fde3\") " pod="openstack/cinder-volume-volume1-0" Oct 14 10:15:26 crc kubenswrapper[4698]: I1014 10:15:26.042194 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/d26415c7-42ea-464b-910f-c1b25784fde3-dev\") pod \"cinder-volume-volume1-0\" (UID: 
\"d26415c7-42ea-464b-910f-c1b25784fde3\") " pod="openstack/cinder-volume-volume1-0" Oct 14 10:15:26 crc kubenswrapper[4698]: I1014 10:15:26.044550 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/d26415c7-42ea-464b-910f-c1b25784fde3-var-locks-brick\") pod \"cinder-volume-volume1-0\" (UID: \"d26415c7-42ea-464b-910f-c1b25784fde3\") " pod="openstack/cinder-volume-volume1-0" Oct 14 10:15:26 crc kubenswrapper[4698]: I1014 10:15:26.045980 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/d26415c7-42ea-464b-910f-c1b25784fde3-etc-machine-id\") pod \"cinder-volume-volume1-0\" (UID: \"d26415c7-42ea-464b-910f-c1b25784fde3\") " pod="openstack/cinder-volume-volume1-0" Oct 14 10:15:26 crc kubenswrapper[4698]: I1014 10:15:26.046045 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d26415c7-42ea-464b-910f-c1b25784fde3-lib-modules\") pod \"cinder-volume-volume1-0\" (UID: \"d26415c7-42ea-464b-910f-c1b25784fde3\") " pod="openstack/cinder-volume-volume1-0" Oct 14 10:15:26 crc kubenswrapper[4698]: I1014 10:15:26.046075 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run\" (UniqueName: \"kubernetes.io/host-path/d26415c7-42ea-464b-910f-c1b25784fde3-run\") pod \"cinder-volume-volume1-0\" (UID: \"d26415c7-42ea-464b-910f-c1b25784fde3\") " pod="openstack/cinder-volume-volume1-0" Oct 14 10:15:26 crc kubenswrapper[4698]: I1014 10:15:26.046171 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/d26415c7-42ea-464b-910f-c1b25784fde3-var-locks-cinder\") pod \"cinder-volume-volume1-0\" (UID: \"d26415c7-42ea-464b-910f-c1b25784fde3\") " pod="openstack/cinder-volume-volume1-0" Oct 14 10:15:26 crc kubenswrapper[4698]: I1014 10:15:26.046262 4698 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/d26415c7-42ea-464b-910f-c1b25784fde3-var-lib-cinder\") pod \"cinder-volume-volume1-0\" (UID: \"d26415c7-42ea-464b-910f-c1b25784fde3\") " pod="openstack/cinder-volume-volume1-0" Oct 14 10:15:26 crc kubenswrapper[4698]: I1014 10:15:26.046557 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/d26415c7-42ea-464b-910f-c1b25784fde3-sys\") pod \"cinder-volume-volume1-0\" (UID: \"d26415c7-42ea-464b-910f-c1b25784fde3\") " pod="openstack/cinder-volume-volume1-0" Oct 14 10:15:26 crc kubenswrapper[4698]: I1014 10:15:26.046586 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/d26415c7-42ea-464b-910f-c1b25784fde3-etc-iscsi\") pod \"cinder-volume-volume1-0\" (UID: \"d26415c7-42ea-464b-910f-c1b25784fde3\") " pod="openstack/cinder-volume-volume1-0" Oct 14 10:15:26 crc kubenswrapper[4698]: I1014 10:15:26.052706 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d26415c7-42ea-464b-910f-c1b25784fde3-config-data-custom\") pod \"cinder-volume-volume1-0\" (UID: \"d26415c7-42ea-464b-910f-c1b25784fde3\") " pod="openstack/cinder-volume-volume1-0" Oct 14 10:15:26 crc kubenswrapper[4698]: I1014 10:15:26.058788 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5865f9d689-hbs4g"] Oct 14 10:15:26 crc kubenswrapper[4698]: I1014 10:15:26.060690 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5865f9d689-hbs4g" Oct 14 10:15:26 crc kubenswrapper[4698]: I1014 10:15:26.066289 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d26415c7-42ea-464b-910f-c1b25784fde3-scripts\") pod \"cinder-volume-volume1-0\" (UID: \"d26415c7-42ea-464b-910f-c1b25784fde3\") " pod="openstack/cinder-volume-volume1-0" Oct 14 10:15:26 crc kubenswrapper[4698]: I1014 10:15:26.067306 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d26415c7-42ea-464b-910f-c1b25784fde3-combined-ca-bundle\") pod \"cinder-volume-volume1-0\" (UID: \"d26415c7-42ea-464b-910f-c1b25784fde3\") " pod="openstack/cinder-volume-volume1-0" Oct 14 10:15:26 crc kubenswrapper[4698]: I1014 10:15:26.069676 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/d26415c7-42ea-464b-910f-c1b25784fde3-ceph\") pod \"cinder-volume-volume1-0\" (UID: \"d26415c7-42ea-464b-910f-c1b25784fde3\") " pod="openstack/cinder-volume-volume1-0" Oct 14 10:15:26 crc kubenswrapper[4698]: I1014 10:15:26.073365 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d26415c7-42ea-464b-910f-c1b25784fde3-config-data\") pod \"cinder-volume-volume1-0\" (UID: \"d26415c7-42ea-464b-910f-c1b25784fde3\") " pod="openstack/cinder-volume-volume1-0" Oct 14 10:15:26 crc kubenswrapper[4698]: I1014 10:15:26.078708 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bvdhb\" (UniqueName: \"kubernetes.io/projected/d26415c7-42ea-464b-910f-c1b25784fde3-kube-api-access-bvdhb\") pod \"cinder-volume-volume1-0\" (UID: \"d26415c7-42ea-464b-910f-c1b25784fde3\") " pod="openstack/cinder-volume-volume1-0" Oct 14 10:15:26 crc kubenswrapper[4698]: I1014 10:15:26.094256 4698 kubelet.go:2421] "SyncLoop ADD" 
source="api" pods=["openstack/cinder-backup-0"] Oct 14 10:15:26 crc kubenswrapper[4698]: I1014 10:15:26.098071 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-backup-0" Oct 14 10:15:26 crc kubenswrapper[4698]: I1014 10:15:26.102111 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-backup-config-data" Oct 14 10:15:26 crc kubenswrapper[4698]: I1014 10:15:26.107070 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-backup-0"] Oct 14 10:15:26 crc kubenswrapper[4698]: I1014 10:15:26.144121 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a3ae3570-e56e-4c49-ad97-56e83b3f9d01-ovsdbserver-nb\") pod \"dnsmasq-dns-5865f9d689-hbs4g\" (UID: \"a3ae3570-e56e-4c49-ad97-56e83b3f9d01\") " pod="openstack/dnsmasq-dns-5865f9d689-hbs4g" Oct 14 10:15:26 crc kubenswrapper[4698]: I1014 10:15:26.144306 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a3ae3570-e56e-4c49-ad97-56e83b3f9d01-config\") pod \"dnsmasq-dns-5865f9d689-hbs4g\" (UID: \"a3ae3570-e56e-4c49-ad97-56e83b3f9d01\") " pod="openstack/dnsmasq-dns-5865f9d689-hbs4g" Oct 14 10:15:26 crc kubenswrapper[4698]: I1014 10:15:26.144418 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/84a7547f-d165-4381-a7f3-8b050ee39fbf-config-data-custom\") pod \"cinder-backup-0\" (UID: \"84a7547f-d165-4381-a7f3-8b050ee39fbf\") " pod="openstack/cinder-backup-0" Oct 14 10:15:26 crc kubenswrapper[4698]: I1014 10:15:26.144508 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: 
\"kubernetes.io/configmap/a3ae3570-e56e-4c49-ad97-56e83b3f9d01-ovsdbserver-sb\") pod \"dnsmasq-dns-5865f9d689-hbs4g\" (UID: \"a3ae3570-e56e-4c49-ad97-56e83b3f9d01\") " pod="openstack/dnsmasq-dns-5865f9d689-hbs4g" Oct 14 10:15:26 crc kubenswrapper[4698]: I1014 10:15:26.144610 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/84a7547f-d165-4381-a7f3-8b050ee39fbf-lib-modules\") pod \"cinder-backup-0\" (UID: \"84a7547f-d165-4381-a7f3-8b050ee39fbf\") " pod="openstack/cinder-backup-0" Oct 14 10:15:26 crc kubenswrapper[4698]: I1014 10:15:26.144704 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/84a7547f-d165-4381-a7f3-8b050ee39fbf-var-lib-cinder\") pod \"cinder-backup-0\" (UID: \"84a7547f-d165-4381-a7f3-8b050ee39fbf\") " pod="openstack/cinder-backup-0" Oct 14 10:15:26 crc kubenswrapper[4698]: I1014 10:15:26.144903 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/84a7547f-d165-4381-a7f3-8b050ee39fbf-etc-machine-id\") pod \"cinder-backup-0\" (UID: \"84a7547f-d165-4381-a7f3-8b050ee39fbf\") " pod="openstack/cinder-backup-0" Oct 14 10:15:26 crc kubenswrapper[4698]: I1014 10:15:26.145081 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/84a7547f-d165-4381-a7f3-8b050ee39fbf-etc-iscsi\") pod \"cinder-backup-0\" (UID: \"84a7547f-d165-4381-a7f3-8b050ee39fbf\") " pod="openstack/cinder-backup-0" Oct 14 10:15:26 crc kubenswrapper[4698]: I1014 10:15:26.145186 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-brick\" (UniqueName: 
\"kubernetes.io/host-path/84a7547f-d165-4381-a7f3-8b050ee39fbf-var-locks-brick\") pod \"cinder-backup-0\" (UID: \"84a7547f-d165-4381-a7f3-8b050ee39fbf\") " pod="openstack/cinder-backup-0" Oct 14 10:15:26 crc kubenswrapper[4698]: I1014 10:15:26.145467 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a3ae3570-e56e-4c49-ad97-56e83b3f9d01-dns-svc\") pod \"dnsmasq-dns-5865f9d689-hbs4g\" (UID: \"a3ae3570-e56e-4c49-ad97-56e83b3f9d01\") " pod="openstack/dnsmasq-dns-5865f9d689-hbs4g" Oct 14 10:15:26 crc kubenswrapper[4698]: I1014 10:15:26.145603 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/84a7547f-d165-4381-a7f3-8b050ee39fbf-config-data\") pod \"cinder-backup-0\" (UID: \"84a7547f-d165-4381-a7f3-8b050ee39fbf\") " pod="openstack/cinder-backup-0" Oct 14 10:15:26 crc kubenswrapper[4698]: I1014 10:15:26.145684 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cbq87\" (UniqueName: \"kubernetes.io/projected/84a7547f-d165-4381-a7f3-8b050ee39fbf-kube-api-access-cbq87\") pod \"cinder-backup-0\" (UID: \"84a7547f-d165-4381-a7f3-8b050ee39fbf\") " pod="openstack/cinder-backup-0" Oct 14 10:15:26 crc kubenswrapper[4698]: I1014 10:15:26.145962 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/84a7547f-d165-4381-a7f3-8b050ee39fbf-etc-nvme\") pod \"cinder-backup-0\" (UID: \"84a7547f-d165-4381-a7f3-8b050ee39fbf\") " pod="openstack/cinder-backup-0" Oct 14 10:15:26 crc kubenswrapper[4698]: I1014 10:15:26.147783 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/84a7547f-d165-4381-a7f3-8b050ee39fbf-run\") pod \"cinder-backup-0\" 
(UID: \"84a7547f-d165-4381-a7f3-8b050ee39fbf\") " pod="openstack/cinder-backup-0" Oct 14 10:15:26 crc kubenswrapper[4698]: I1014 10:15:26.147917 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/84a7547f-d165-4381-a7f3-8b050ee39fbf-sys\") pod \"cinder-backup-0\" (UID: \"84a7547f-d165-4381-a7f3-8b050ee39fbf\") " pod="openstack/cinder-backup-0" Oct 14 10:15:26 crc kubenswrapper[4698]: I1014 10:15:26.148016 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a3ae3570-e56e-4c49-ad97-56e83b3f9d01-dns-swift-storage-0\") pod \"dnsmasq-dns-5865f9d689-hbs4g\" (UID: \"a3ae3570-e56e-4c49-ad97-56e83b3f9d01\") " pod="openstack/dnsmasq-dns-5865f9d689-hbs4g" Oct 14 10:15:26 crc kubenswrapper[4698]: I1014 10:15:26.148083 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q5rqs\" (UniqueName: \"kubernetes.io/projected/a3ae3570-e56e-4c49-ad97-56e83b3f9d01-kube-api-access-q5rqs\") pod \"dnsmasq-dns-5865f9d689-hbs4g\" (UID: \"a3ae3570-e56e-4c49-ad97-56e83b3f9d01\") " pod="openstack/dnsmasq-dns-5865f9d689-hbs4g" Oct 14 10:15:26 crc kubenswrapper[4698]: I1014 10:15:26.148150 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/84a7547f-d165-4381-a7f3-8b050ee39fbf-var-locks-cinder\") pod \"cinder-backup-0\" (UID: \"84a7547f-d165-4381-a7f3-8b050ee39fbf\") " pod="openstack/cinder-backup-0" Oct 14 10:15:26 crc kubenswrapper[4698]: I1014 10:15:26.148301 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/84a7547f-d165-4381-a7f3-8b050ee39fbf-ceph\") pod \"cinder-backup-0\" (UID: \"84a7547f-d165-4381-a7f3-8b050ee39fbf\") " 
pod="openstack/cinder-backup-0" Oct 14 10:15:26 crc kubenswrapper[4698]: I1014 10:15:26.148410 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/84a7547f-d165-4381-a7f3-8b050ee39fbf-combined-ca-bundle\") pod \"cinder-backup-0\" (UID: \"84a7547f-d165-4381-a7f3-8b050ee39fbf\") " pod="openstack/cinder-backup-0" Oct 14 10:15:26 crc kubenswrapper[4698]: I1014 10:15:26.148545 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/84a7547f-d165-4381-a7f3-8b050ee39fbf-dev\") pod \"cinder-backup-0\" (UID: \"84a7547f-d165-4381-a7f3-8b050ee39fbf\") " pod="openstack/cinder-backup-0" Oct 14 10:15:26 crc kubenswrapper[4698]: I1014 10:15:26.148708 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/84a7547f-d165-4381-a7f3-8b050ee39fbf-scripts\") pod \"cinder-backup-0\" (UID: \"84a7547f-d165-4381-a7f3-8b050ee39fbf\") " pod="openstack/cinder-backup-0" Oct 14 10:15:26 crc kubenswrapper[4698]: I1014 10:15:26.157237 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5865f9d689-hbs4g"] Oct 14 10:15:26 crc kubenswrapper[4698]: I1014 10:15:26.158308 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Oct 14 10:15:26 crc kubenswrapper[4698]: I1014 10:15:26.184757 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-volume-volume1-0" Oct 14 10:15:26 crc kubenswrapper[4698]: I1014 10:15:26.202834 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Oct 14 10:15:26 crc kubenswrapper[4698]: I1014 10:15:26.205239 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Oct 14 10:15:26 crc kubenswrapper[4698]: I1014 10:15:26.208689 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Oct 14 10:15:26 crc kubenswrapper[4698]: I1014 10:15:26.218538 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Oct 14 10:15:26 crc kubenswrapper[4698]: I1014 10:15:26.254810 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/523471ca-f061-410d-81ce-fbfd00b79bca-scripts\") pod \"cinder-api-0\" (UID: \"523471ca-f061-410d-81ce-fbfd00b79bca\") " pod="openstack/cinder-api-0" Oct 14 10:15:26 crc kubenswrapper[4698]: I1014 10:15:26.258032 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/84a7547f-d165-4381-a7f3-8b050ee39fbf-config-data\") pod \"cinder-backup-0\" (UID: \"84a7547f-d165-4381-a7f3-8b050ee39fbf\") " pod="openstack/cinder-backup-0" Oct 14 10:15:26 crc kubenswrapper[4698]: I1014 10:15:26.258140 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cbq87\" (UniqueName: \"kubernetes.io/projected/84a7547f-d165-4381-a7f3-8b050ee39fbf-kube-api-access-cbq87\") pod \"cinder-backup-0\" (UID: \"84a7547f-d165-4381-a7f3-8b050ee39fbf\") " pod="openstack/cinder-backup-0" Oct 14 10:15:26 crc kubenswrapper[4698]: I1014 10:15:26.258217 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w8tfh\" (UniqueName: \"kubernetes.io/projected/523471ca-f061-410d-81ce-fbfd00b79bca-kube-api-access-w8tfh\") pod \"cinder-api-0\" (UID: \"523471ca-f061-410d-81ce-fbfd00b79bca\") " pod="openstack/cinder-api-0" Oct 14 10:15:26 crc kubenswrapper[4698]: I1014 10:15:26.258299 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/523471ca-f061-410d-81ce-fbfd00b79bca-etc-machine-id\") pod \"cinder-api-0\" (UID: \"523471ca-f061-410d-81ce-fbfd00b79bca\") " pod="openstack/cinder-api-0" Oct 14 10:15:26 crc kubenswrapper[4698]: I1014 10:15:26.258445 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/84a7547f-d165-4381-a7f3-8b050ee39fbf-etc-nvme\") pod \"cinder-backup-0\" (UID: \"84a7547f-d165-4381-a7f3-8b050ee39fbf\") " pod="openstack/cinder-backup-0" Oct 14 10:15:26 crc kubenswrapper[4698]: I1014 10:15:26.258558 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/84a7547f-d165-4381-a7f3-8b050ee39fbf-run\") pod \"cinder-backup-0\" (UID: \"84a7547f-d165-4381-a7f3-8b050ee39fbf\") " pod="openstack/cinder-backup-0" Oct 14 10:15:26 crc kubenswrapper[4698]: I1014 10:15:26.258636 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/84a7547f-d165-4381-a7f3-8b050ee39fbf-sys\") pod \"cinder-backup-0\" (UID: \"84a7547f-d165-4381-a7f3-8b050ee39fbf\") " pod="openstack/cinder-backup-0" Oct 14 10:15:26 crc kubenswrapper[4698]: I1014 10:15:26.258744 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/523471ca-f061-410d-81ce-fbfd00b79bca-logs\") pod \"cinder-api-0\" (UID: \"523471ca-f061-410d-81ce-fbfd00b79bca\") " pod="openstack/cinder-api-0" Oct 14 10:15:26 crc kubenswrapper[4698]: I1014 10:15:26.259500 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a3ae3570-e56e-4c49-ad97-56e83b3f9d01-dns-swift-storage-0\") pod \"dnsmasq-dns-5865f9d689-hbs4g\" (UID: \"a3ae3570-e56e-4c49-ad97-56e83b3f9d01\") " 
pod="openstack/dnsmasq-dns-5865f9d689-hbs4g" Oct 14 10:15:26 crc kubenswrapper[4698]: I1014 10:15:26.259588 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q5rqs\" (UniqueName: \"kubernetes.io/projected/a3ae3570-e56e-4c49-ad97-56e83b3f9d01-kube-api-access-q5rqs\") pod \"dnsmasq-dns-5865f9d689-hbs4g\" (UID: \"a3ae3570-e56e-4c49-ad97-56e83b3f9d01\") " pod="openstack/dnsmasq-dns-5865f9d689-hbs4g" Oct 14 10:15:26 crc kubenswrapper[4698]: I1014 10:15:26.259662 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/523471ca-f061-410d-81ce-fbfd00b79bca-config-data-custom\") pod \"cinder-api-0\" (UID: \"523471ca-f061-410d-81ce-fbfd00b79bca\") " pod="openstack/cinder-api-0" Oct 14 10:15:26 crc kubenswrapper[4698]: I1014 10:15:26.259729 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/84a7547f-d165-4381-a7f3-8b050ee39fbf-var-locks-cinder\") pod \"cinder-backup-0\" (UID: \"84a7547f-d165-4381-a7f3-8b050ee39fbf\") " pod="openstack/cinder-backup-0" Oct 14 10:15:26 crc kubenswrapper[4698]: I1014 10:15:26.259814 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/84a7547f-d165-4381-a7f3-8b050ee39fbf-ceph\") pod \"cinder-backup-0\" (UID: \"84a7547f-d165-4381-a7f3-8b050ee39fbf\") " pod="openstack/cinder-backup-0" Oct 14 10:15:26 crc kubenswrapper[4698]: I1014 10:15:26.259939 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/84a7547f-d165-4381-a7f3-8b050ee39fbf-combined-ca-bundle\") pod \"cinder-backup-0\" (UID: \"84a7547f-d165-4381-a7f3-8b050ee39fbf\") " pod="openstack/cinder-backup-0" Oct 14 10:15:26 crc kubenswrapper[4698]: I1014 10:15:26.260045 4698 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/84a7547f-d165-4381-a7f3-8b050ee39fbf-dev\") pod \"cinder-backup-0\" (UID: \"84a7547f-d165-4381-a7f3-8b050ee39fbf\") " pod="openstack/cinder-backup-0" Oct 14 10:15:26 crc kubenswrapper[4698]: I1014 10:15:26.260169 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/523471ca-f061-410d-81ce-fbfd00b79bca-config-data\") pod \"cinder-api-0\" (UID: \"523471ca-f061-410d-81ce-fbfd00b79bca\") " pod="openstack/cinder-api-0" Oct 14 10:15:26 crc kubenswrapper[4698]: I1014 10:15:26.260255 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/84a7547f-d165-4381-a7f3-8b050ee39fbf-scripts\") pod \"cinder-backup-0\" (UID: \"84a7547f-d165-4381-a7f3-8b050ee39fbf\") " pod="openstack/cinder-backup-0" Oct 14 10:15:26 crc kubenswrapper[4698]: I1014 10:15:26.260365 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a3ae3570-e56e-4c49-ad97-56e83b3f9d01-ovsdbserver-nb\") pod \"dnsmasq-dns-5865f9d689-hbs4g\" (UID: \"a3ae3570-e56e-4c49-ad97-56e83b3f9d01\") " pod="openstack/dnsmasq-dns-5865f9d689-hbs4g" Oct 14 10:15:26 crc kubenswrapper[4698]: I1014 10:15:26.260561 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a3ae3570-e56e-4c49-ad97-56e83b3f9d01-config\") pod \"dnsmasq-dns-5865f9d689-hbs4g\" (UID: \"a3ae3570-e56e-4c49-ad97-56e83b3f9d01\") " pod="openstack/dnsmasq-dns-5865f9d689-hbs4g" Oct 14 10:15:26 crc kubenswrapper[4698]: I1014 10:15:26.260731 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: 
\"kubernetes.io/secret/84a7547f-d165-4381-a7f3-8b050ee39fbf-config-data-custom\") pod \"cinder-backup-0\" (UID: \"84a7547f-d165-4381-a7f3-8b050ee39fbf\") " pod="openstack/cinder-backup-0" Oct 14 10:15:26 crc kubenswrapper[4698]: I1014 10:15:26.260842 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a3ae3570-e56e-4c49-ad97-56e83b3f9d01-ovsdbserver-sb\") pod \"dnsmasq-dns-5865f9d689-hbs4g\" (UID: \"a3ae3570-e56e-4c49-ad97-56e83b3f9d01\") " pod="openstack/dnsmasq-dns-5865f9d689-hbs4g" Oct 14 10:15:26 crc kubenswrapper[4698]: I1014 10:15:26.260913 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/84a7547f-d165-4381-a7f3-8b050ee39fbf-lib-modules\") pod \"cinder-backup-0\" (UID: \"84a7547f-d165-4381-a7f3-8b050ee39fbf\") " pod="openstack/cinder-backup-0" Oct 14 10:15:26 crc kubenswrapper[4698]: I1014 10:15:26.260984 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/84a7547f-d165-4381-a7f3-8b050ee39fbf-var-lib-cinder\") pod \"cinder-backup-0\" (UID: \"84a7547f-d165-4381-a7f3-8b050ee39fbf\") " pod="openstack/cinder-backup-0" Oct 14 10:15:26 crc kubenswrapper[4698]: I1014 10:15:26.261061 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/84a7547f-d165-4381-a7f3-8b050ee39fbf-etc-machine-id\") pod \"cinder-backup-0\" (UID: \"84a7547f-d165-4381-a7f3-8b050ee39fbf\") " pod="openstack/cinder-backup-0" Oct 14 10:15:26 crc kubenswrapper[4698]: I1014 10:15:26.261199 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/84a7547f-d165-4381-a7f3-8b050ee39fbf-etc-iscsi\") pod \"cinder-backup-0\" (UID: \"84a7547f-d165-4381-a7f3-8b050ee39fbf\") " 
pod="openstack/cinder-backup-0" Oct 14 10:15:26 crc kubenswrapper[4698]: I1014 10:15:26.261293 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/84a7547f-d165-4381-a7f3-8b050ee39fbf-var-locks-brick\") pod \"cinder-backup-0\" (UID: \"84a7547f-d165-4381-a7f3-8b050ee39fbf\") " pod="openstack/cinder-backup-0" Oct 14 10:15:26 crc kubenswrapper[4698]: I1014 10:15:26.261487 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a3ae3570-e56e-4c49-ad97-56e83b3f9d01-dns-svc\") pod \"dnsmasq-dns-5865f9d689-hbs4g\" (UID: \"a3ae3570-e56e-4c49-ad97-56e83b3f9d01\") " pod="openstack/dnsmasq-dns-5865f9d689-hbs4g" Oct 14 10:15:26 crc kubenswrapper[4698]: I1014 10:15:26.262100 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/523471ca-f061-410d-81ce-fbfd00b79bca-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"523471ca-f061-410d-81ce-fbfd00b79bca\") " pod="openstack/cinder-api-0" Oct 14 10:15:26 crc kubenswrapper[4698]: I1014 10:15:26.263327 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run\" (UniqueName: \"kubernetes.io/host-path/84a7547f-d165-4381-a7f3-8b050ee39fbf-run\") pod \"cinder-backup-0\" (UID: \"84a7547f-d165-4381-a7f3-8b050ee39fbf\") " pod="openstack/cinder-backup-0" Oct 14 10:15:26 crc kubenswrapper[4698]: I1014 10:15:26.263462 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/84a7547f-d165-4381-a7f3-8b050ee39fbf-sys\") pod \"cinder-backup-0\" (UID: \"84a7547f-d165-4381-a7f3-8b050ee39fbf\") " pod="openstack/cinder-backup-0" Oct 14 10:15:26 crc kubenswrapper[4698]: I1014 10:15:26.258981 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-nvme\" (UniqueName: 
\"kubernetes.io/host-path/84a7547f-d165-4381-a7f3-8b050ee39fbf-etc-nvme\") pod \"cinder-backup-0\" (UID: \"84a7547f-d165-4381-a7f3-8b050ee39fbf\") " pod="openstack/cinder-backup-0" Oct 14 10:15:26 crc kubenswrapper[4698]: I1014 10:15:26.264811 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/84a7547f-d165-4381-a7f3-8b050ee39fbf-config-data\") pod \"cinder-backup-0\" (UID: \"84a7547f-d165-4381-a7f3-8b050ee39fbf\") " pod="openstack/cinder-backup-0" Oct 14 10:15:26 crc kubenswrapper[4698]: I1014 10:15:26.266003 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/84a7547f-d165-4381-a7f3-8b050ee39fbf-etc-machine-id\") pod \"cinder-backup-0\" (UID: \"84a7547f-d165-4381-a7f3-8b050ee39fbf\") " pod="openstack/cinder-backup-0" Oct 14 10:15:26 crc kubenswrapper[4698]: I1014 10:15:26.266220 4698 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-dc6b4865f-29kvv" Oct 14 10:15:26 crc kubenswrapper[4698]: I1014 10:15:26.266941 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a3ae3570-e56e-4c49-ad97-56e83b3f9d01-dns-swift-storage-0\") pod \"dnsmasq-dns-5865f9d689-hbs4g\" (UID: \"a3ae3570-e56e-4c49-ad97-56e83b3f9d01\") " pod="openstack/dnsmasq-dns-5865f9d689-hbs4g" Oct 14 10:15:26 crc kubenswrapper[4698]: I1014 10:15:26.267115 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/84a7547f-d165-4381-a7f3-8b050ee39fbf-var-locks-brick\") pod \"cinder-backup-0\" (UID: \"84a7547f-d165-4381-a7f3-8b050ee39fbf\") " pod="openstack/cinder-backup-0" Oct 14 10:15:26 crc kubenswrapper[4698]: I1014 10:15:26.267059 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/84a7547f-d165-4381-a7f3-8b050ee39fbf-dev\") pod \"cinder-backup-0\" (UID: \"84a7547f-d165-4381-a7f3-8b050ee39fbf\") " pod="openstack/cinder-backup-0" Oct 14 10:15:26 crc kubenswrapper[4698]: I1014 10:15:26.267852 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/84a7547f-d165-4381-a7f3-8b050ee39fbf-lib-modules\") pod \"cinder-backup-0\" (UID: \"84a7547f-d165-4381-a7f3-8b050ee39fbf\") " pod="openstack/cinder-backup-0" Oct 14 10:15:26 crc kubenswrapper[4698]: I1014 10:15:26.267967 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/84a7547f-d165-4381-a7f3-8b050ee39fbf-var-locks-cinder\") pod \"cinder-backup-0\" (UID: \"84a7547f-d165-4381-a7f3-8b050ee39fbf\") " pod="openstack/cinder-backup-0" Oct 14 10:15:26 crc kubenswrapper[4698]: I1014 10:15:26.268272 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/84a7547f-d165-4381-a7f3-8b050ee39fbf-var-lib-cinder\") pod \"cinder-backup-0\" (UID: \"84a7547f-d165-4381-a7f3-8b050ee39fbf\") " pod="openstack/cinder-backup-0" Oct 14 10:15:26 crc kubenswrapper[4698]: I1014 10:15:26.267060 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/84a7547f-d165-4381-a7f3-8b050ee39fbf-etc-iscsi\") pod \"cinder-backup-0\" (UID: \"84a7547f-d165-4381-a7f3-8b050ee39fbf\") " pod="openstack/cinder-backup-0" Oct 14 10:15:26 crc kubenswrapper[4698]: I1014 10:15:26.270108 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a3ae3570-e56e-4c49-ad97-56e83b3f9d01-ovsdbserver-nb\") pod \"dnsmasq-dns-5865f9d689-hbs4g\" (UID: \"a3ae3570-e56e-4c49-ad97-56e83b3f9d01\") " pod="openstack/dnsmasq-dns-5865f9d689-hbs4g" Oct 14 10:15:26 crc kubenswrapper[4698]: I1014 10:15:26.270532 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a3ae3570-e56e-4c49-ad97-56e83b3f9d01-ovsdbserver-sb\") pod \"dnsmasq-dns-5865f9d689-hbs4g\" (UID: \"a3ae3570-e56e-4c49-ad97-56e83b3f9d01\") " pod="openstack/dnsmasq-dns-5865f9d689-hbs4g" Oct 14 10:15:26 crc kubenswrapper[4698]: I1014 10:15:26.270742 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a3ae3570-e56e-4c49-ad97-56e83b3f9d01-config\") pod \"dnsmasq-dns-5865f9d689-hbs4g\" (UID: \"a3ae3570-e56e-4c49-ad97-56e83b3f9d01\") " pod="openstack/dnsmasq-dns-5865f9d689-hbs4g" Oct 14 10:15:26 crc kubenswrapper[4698]: I1014 10:15:26.271424 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/84a7547f-d165-4381-a7f3-8b050ee39fbf-combined-ca-bundle\") pod \"cinder-backup-0\" (UID: 
\"84a7547f-d165-4381-a7f3-8b050ee39fbf\") " pod="openstack/cinder-backup-0" Oct 14 10:15:26 crc kubenswrapper[4698]: I1014 10:15:26.273213 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a3ae3570-e56e-4c49-ad97-56e83b3f9d01-dns-svc\") pod \"dnsmasq-dns-5865f9d689-hbs4g\" (UID: \"a3ae3570-e56e-4c49-ad97-56e83b3f9d01\") " pod="openstack/dnsmasq-dns-5865f9d689-hbs4g" Oct 14 10:15:26 crc kubenswrapper[4698]: I1014 10:15:26.281345 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/84a7547f-d165-4381-a7f3-8b050ee39fbf-ceph\") pod \"cinder-backup-0\" (UID: \"84a7547f-d165-4381-a7f3-8b050ee39fbf\") " pod="openstack/cinder-backup-0" Oct 14 10:15:26 crc kubenswrapper[4698]: I1014 10:15:26.298719 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/84a7547f-d165-4381-a7f3-8b050ee39fbf-config-data-custom\") pod \"cinder-backup-0\" (UID: \"84a7547f-d165-4381-a7f3-8b050ee39fbf\") " pod="openstack/cinder-backup-0" Oct 14 10:15:26 crc kubenswrapper[4698]: I1014 10:15:26.299558 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/84a7547f-d165-4381-a7f3-8b050ee39fbf-scripts\") pod \"cinder-backup-0\" (UID: \"84a7547f-d165-4381-a7f3-8b050ee39fbf\") " pod="openstack/cinder-backup-0" Oct 14 10:15:26 crc kubenswrapper[4698]: I1014 10:15:26.302711 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q5rqs\" (UniqueName: \"kubernetes.io/projected/a3ae3570-e56e-4c49-ad97-56e83b3f9d01-kube-api-access-q5rqs\") pod \"dnsmasq-dns-5865f9d689-hbs4g\" (UID: \"a3ae3570-e56e-4c49-ad97-56e83b3f9d01\") " pod="openstack/dnsmasq-dns-5865f9d689-hbs4g" Oct 14 10:15:26 crc kubenswrapper[4698]: I1014 10:15:26.305390 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"kube-api-access-cbq87\" (UniqueName: \"kubernetes.io/projected/84a7547f-d165-4381-a7f3-8b050ee39fbf-kube-api-access-cbq87\") pod \"cinder-backup-0\" (UID: \"84a7547f-d165-4381-a7f3-8b050ee39fbf\") " pod="openstack/cinder-backup-0" Oct 14 10:15:26 crc kubenswrapper[4698]: I1014 10:15:26.364661 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/0f82463f-d211-4e22-8742-570c0293871c-dns-swift-storage-0\") pod \"0f82463f-d211-4e22-8742-570c0293871c\" (UID: \"0f82463f-d211-4e22-8742-570c0293871c\") " Oct 14 10:15:26 crc kubenswrapper[4698]: I1014 10:15:26.364799 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0f82463f-d211-4e22-8742-570c0293871c-ovsdbserver-sb\") pod \"0f82463f-d211-4e22-8742-570c0293871c\" (UID: \"0f82463f-d211-4e22-8742-570c0293871c\") " Oct 14 10:15:26 crc kubenswrapper[4698]: I1014 10:15:26.364971 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0f82463f-d211-4e22-8742-570c0293871c-ovsdbserver-nb\") pod \"0f82463f-d211-4e22-8742-570c0293871c\" (UID: \"0f82463f-d211-4e22-8742-570c0293871c\") " Oct 14 10:15:26 crc kubenswrapper[4698]: I1014 10:15:26.365003 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0f82463f-d211-4e22-8742-570c0293871c-dns-svc\") pod \"0f82463f-d211-4e22-8742-570c0293871c\" (UID: \"0f82463f-d211-4e22-8742-570c0293871c\") " Oct 14 10:15:26 crc kubenswrapper[4698]: I1014 10:15:26.365060 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-trcjd\" (UniqueName: \"kubernetes.io/projected/0f82463f-d211-4e22-8742-570c0293871c-kube-api-access-trcjd\") pod \"0f82463f-d211-4e22-8742-570c0293871c\" (UID: 
\"0f82463f-d211-4e22-8742-570c0293871c\") " Oct 14 10:15:26 crc kubenswrapper[4698]: I1014 10:15:26.365087 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0f82463f-d211-4e22-8742-570c0293871c-config\") pod \"0f82463f-d211-4e22-8742-570c0293871c\" (UID: \"0f82463f-d211-4e22-8742-570c0293871c\") " Oct 14 10:15:26 crc kubenswrapper[4698]: I1014 10:15:26.365373 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/523471ca-f061-410d-81ce-fbfd00b79bca-config-data\") pod \"cinder-api-0\" (UID: \"523471ca-f061-410d-81ce-fbfd00b79bca\") " pod="openstack/cinder-api-0" Oct 14 10:15:26 crc kubenswrapper[4698]: I1014 10:15:26.365547 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/523471ca-f061-410d-81ce-fbfd00b79bca-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"523471ca-f061-410d-81ce-fbfd00b79bca\") " pod="openstack/cinder-api-0" Oct 14 10:15:26 crc kubenswrapper[4698]: I1014 10:15:26.365570 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/523471ca-f061-410d-81ce-fbfd00b79bca-scripts\") pod \"cinder-api-0\" (UID: \"523471ca-f061-410d-81ce-fbfd00b79bca\") " pod="openstack/cinder-api-0" Oct 14 10:15:26 crc kubenswrapper[4698]: I1014 10:15:26.365599 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w8tfh\" (UniqueName: \"kubernetes.io/projected/523471ca-f061-410d-81ce-fbfd00b79bca-kube-api-access-w8tfh\") pod \"cinder-api-0\" (UID: \"523471ca-f061-410d-81ce-fbfd00b79bca\") " pod="openstack/cinder-api-0" Oct 14 10:15:26 crc kubenswrapper[4698]: I1014 10:15:26.365615 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: 
\"kubernetes.io/host-path/523471ca-f061-410d-81ce-fbfd00b79bca-etc-machine-id\") pod \"cinder-api-0\" (UID: \"523471ca-f061-410d-81ce-fbfd00b79bca\") " pod="openstack/cinder-api-0" Oct 14 10:15:26 crc kubenswrapper[4698]: I1014 10:15:26.365657 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/523471ca-f061-410d-81ce-fbfd00b79bca-logs\") pod \"cinder-api-0\" (UID: \"523471ca-f061-410d-81ce-fbfd00b79bca\") " pod="openstack/cinder-api-0" Oct 14 10:15:26 crc kubenswrapper[4698]: I1014 10:15:26.365689 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/523471ca-f061-410d-81ce-fbfd00b79bca-config-data-custom\") pod \"cinder-api-0\" (UID: \"523471ca-f061-410d-81ce-fbfd00b79bca\") " pod="openstack/cinder-api-0" Oct 14 10:15:26 crc kubenswrapper[4698]: I1014 10:15:26.371363 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/523471ca-f061-410d-81ce-fbfd00b79bca-logs\") pod \"cinder-api-0\" (UID: \"523471ca-f061-410d-81ce-fbfd00b79bca\") " pod="openstack/cinder-api-0" Oct 14 10:15:26 crc kubenswrapper[4698]: I1014 10:15:26.373212 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/523471ca-f061-410d-81ce-fbfd00b79bca-etc-machine-id\") pod \"cinder-api-0\" (UID: \"523471ca-f061-410d-81ce-fbfd00b79bca\") " pod="openstack/cinder-api-0" Oct 14 10:15:26 crc kubenswrapper[4698]: I1014 10:15:26.387474 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/523471ca-f061-410d-81ce-fbfd00b79bca-scripts\") pod \"cinder-api-0\" (UID: \"523471ca-f061-410d-81ce-fbfd00b79bca\") " pod="openstack/cinder-api-0" Oct 14 10:15:26 crc kubenswrapper[4698]: I1014 10:15:26.391982 4698 operation_generator.go:803] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0f82463f-d211-4e22-8742-570c0293871c-kube-api-access-trcjd" (OuterVolumeSpecName: "kube-api-access-trcjd") pod "0f82463f-d211-4e22-8742-570c0293871c" (UID: "0f82463f-d211-4e22-8742-570c0293871c"). InnerVolumeSpecName "kube-api-access-trcjd". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 10:15:26 crc kubenswrapper[4698]: I1014 10:15:26.392841 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/523471ca-f061-410d-81ce-fbfd00b79bca-config-data\") pod \"cinder-api-0\" (UID: \"523471ca-f061-410d-81ce-fbfd00b79bca\") " pod="openstack/cinder-api-0" Oct 14 10:15:26 crc kubenswrapper[4698]: I1014 10:15:26.420451 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/523471ca-f061-410d-81ce-fbfd00b79bca-config-data-custom\") pod \"cinder-api-0\" (UID: \"523471ca-f061-410d-81ce-fbfd00b79bca\") " pod="openstack/cinder-api-0" Oct 14 10:15:26 crc kubenswrapper[4698]: I1014 10:15:26.423905 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/523471ca-f061-410d-81ce-fbfd00b79bca-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"523471ca-f061-410d-81ce-fbfd00b79bca\") " pod="openstack/cinder-api-0" Oct 14 10:15:26 crc kubenswrapper[4698]: I1014 10:15:26.434910 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w8tfh\" (UniqueName: \"kubernetes.io/projected/523471ca-f061-410d-81ce-fbfd00b79bca-kube-api-access-w8tfh\") pod \"cinder-api-0\" (UID: \"523471ca-f061-410d-81ce-fbfd00b79bca\") " pod="openstack/cinder-api-0" Oct 14 10:15:26 crc kubenswrapper[4698]: I1014 10:15:26.467444 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-trcjd\" (UniqueName: 
\"kubernetes.io/projected/0f82463f-d211-4e22-8742-570c0293871c-kube-api-access-trcjd\") on node \"crc\" DevicePath \"\"" Oct 14 10:15:26 crc kubenswrapper[4698]: I1014 10:15:26.473575 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0f82463f-d211-4e22-8742-570c0293871c-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "0f82463f-d211-4e22-8742-570c0293871c" (UID: "0f82463f-d211-4e22-8742-570c0293871c"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 10:15:26 crc kubenswrapper[4698]: I1014 10:15:26.475959 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0f82463f-d211-4e22-8742-570c0293871c-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "0f82463f-d211-4e22-8742-570c0293871c" (UID: "0f82463f-d211-4e22-8742-570c0293871c"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 10:15:26 crc kubenswrapper[4698]: I1014 10:15:26.511457 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0f82463f-d211-4e22-8742-570c0293871c-config" (OuterVolumeSpecName: "config") pod "0f82463f-d211-4e22-8742-570c0293871c" (UID: "0f82463f-d211-4e22-8742-570c0293871c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 10:15:26 crc kubenswrapper[4698]: I1014 10:15:26.530642 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0f82463f-d211-4e22-8742-570c0293871c-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "0f82463f-d211-4e22-8742-570c0293871c" (UID: "0f82463f-d211-4e22-8742-570c0293871c"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 10:15:26 crc kubenswrapper[4698]: I1014 10:15:26.531343 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5865f9d689-hbs4g" Oct 14 10:15:26 crc kubenswrapper[4698]: I1014 10:15:26.538254 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0f82463f-d211-4e22-8742-570c0293871c-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "0f82463f-d211-4e22-8742-570c0293871c" (UID: "0f82463f-d211-4e22-8742-570c0293871c"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 10:15:26 crc kubenswrapper[4698]: I1014 10:15:26.556472 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-backup-0" Oct 14 10:15:26 crc kubenswrapper[4698]: I1014 10:15:26.571017 4698 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0f82463f-d211-4e22-8742-570c0293871c-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Oct 14 10:15:26 crc kubenswrapper[4698]: I1014 10:15:26.571056 4698 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0f82463f-d211-4e22-8742-570c0293871c-dns-svc\") on node \"crc\" DevicePath \"\"" Oct 14 10:15:26 crc kubenswrapper[4698]: I1014 10:15:26.571068 4698 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0f82463f-d211-4e22-8742-570c0293871c-config\") on node \"crc\" DevicePath \"\"" Oct 14 10:15:26 crc kubenswrapper[4698]: I1014 10:15:26.571082 4698 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/0f82463f-d211-4e22-8742-570c0293871c-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Oct 14 10:15:26 crc kubenswrapper[4698]: I1014 10:15:26.571094 4698 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0f82463f-d211-4e22-8742-570c0293871c-ovsdbserver-sb\") on node 
\"crc\" DevicePath \"\"" Oct 14 10:15:26 crc kubenswrapper[4698]: I1014 10:15:26.586409 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Oct 14 10:15:26 crc kubenswrapper[4698]: I1014 10:15:26.751015 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-api-0" event={"ID":"600d9dbf-48aa-4c54-9d47-9ffe8d383d6e","Type":"ContainerStarted","Data":"ee066c20559aa379186cf2d3fd6cdba8d8419672125be4906e870235e29e4982"} Oct 14 10:15:26 crc kubenswrapper[4698]: I1014 10:15:26.752468 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/manila-api-0" Oct 14 10:15:26 crc kubenswrapper[4698]: I1014 10:15:26.756472 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/keystone-75d9cb9c4-g8g58" Oct 14 10:15:26 crc kubenswrapper[4698]: I1014 10:15:26.794940 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-dc6b4865f-29kvv" event={"ID":"0f82463f-d211-4e22-8742-570c0293871c","Type":"ContainerDied","Data":"9b593facd573045e2686a7054d663195de76de56e0cd84b8588ec9d35fd28391"} Oct 14 10:15:26 crc kubenswrapper[4698]: I1014 10:15:26.796891 4698 scope.go:117] "RemoveContainer" containerID="08397db95eaf47a46e29220988fb8b68cc818ea82f80d8856ad0cf2f47d3874a" Oct 14 10:15:26 crc kubenswrapper[4698]: I1014 10:15:26.796021 4698 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-dc6b4865f-29kvv" Oct 14 10:15:26 crc kubenswrapper[4698]: I1014 10:15:26.804893 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/manila-api-0" podStartSLOduration=3.80486667 podStartE2EDuration="3.80486667s" podCreationTimestamp="2025-10-14 10:15:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 10:15:26.776028701 +0000 UTC m=+1108.473328117" watchObservedRunningTime="2025-10-14 10:15:26.80486667 +0000 UTC m=+1108.502166086" Oct 14 10:15:26 crc kubenswrapper[4698]: I1014 10:15:26.858385 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6cc5fb5d9d-rbb5x" event={"ID":"53ced7bd-2ae6-4e55-8ea2-395d6aebf185","Type":"ContainerStarted","Data":"98a62b4299ff785f4988afd479c08919e2e745c55cdb25d37f55f7fd1d73e0fc"} Oct 14 10:15:26 crc kubenswrapper[4698]: I1014 10:15:26.885534 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Oct 14 10:15:26 crc kubenswrapper[4698]: I1014 10:15:26.891733 4698 generic.go:334] "Generic (PLEG): container finished" podID="c2fa04f1-6caf-4c05-b1fe-93b63ef9f908" containerID="5c9805414f9497fd945631174a09b08928cb189e2af428a18a49d6e8cc4cc84c" exitCode=0 Oct 14 10:15:26 crc kubenswrapper[4698]: I1014 10:15:26.891850 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-84449c85c5-m7bl7" event={"ID":"c2fa04f1-6caf-4c05-b1fe-93b63ef9f908","Type":"ContainerDied","Data":"5c9805414f9497fd945631174a09b08928cb189e2af428a18a49d6e8cc4cc84c"} Oct 14 10:15:26 crc kubenswrapper[4698]: I1014 10:15:26.894926 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-6cc5fb5d9d-rbb5x" podStartSLOduration=3.894903178 podStartE2EDuration="3.894903178s" podCreationTimestamp="2025-10-14 10:15:23 +0000 UTC" firstStartedPulling="0001-01-01 
00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 10:15:26.880480913 +0000 UTC m=+1108.577780329" watchObservedRunningTime="2025-10-14 10:15:26.894903178 +0000 UTC m=+1108.592202594" Oct 14 10:15:26 crc kubenswrapper[4698]: I1014 10:15:26.930726 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-scheduler-0" event={"ID":"e28cf5cd-644d-4f5f-8db7-421fbe745ac2","Type":"ContainerStarted","Data":"ff73b0bec1c9849156835b3e93c6b4d50ee38d4d5f4c9f164b7a79c91d7252eb"} Oct 14 10:15:26 crc kubenswrapper[4698]: I1014 10:15:26.952578 4698 generic.go:334] "Generic (PLEG): container finished" podID="9d9ba4f9-bf43-4076-ab3d-9ca34bcbed4e" containerID="1afee696e504730f0049af836091b4f76382bfbb64f7e1b9e1e6d76a7978bdc7" exitCode=0 Oct 14 10:15:26 crc kubenswrapper[4698]: I1014 10:15:26.952958 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9d9ba4f9-bf43-4076-ab3d-9ca34bcbed4e","Type":"ContainerDied","Data":"1afee696e504730f0049af836091b4f76382bfbb64f7e1b9e1e6d76a7978bdc7"} Oct 14 10:15:27 crc kubenswrapper[4698]: I1014 10:15:27.095110 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-volume-volume1-0"] Oct 14 10:15:27 crc kubenswrapper[4698]: I1014 10:15:27.340455 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5865f9d689-hbs4g"] Oct 14 10:15:27 crc kubenswrapper[4698]: E1014 10:15:27.372652 4698 log.go:32] "CreateContainer in sandbox from runtime service failed" err=< Oct 14 10:15:27 crc kubenswrapper[4698]: rpc error: code = Unknown desc = container create failed: mount `/var/lib/kubelet/pods/c2fa04f1-6caf-4c05-b1fe-93b63ef9f908/volume-subpaths/dns-svc/dnsmasq-dns/1` to `etc/dnsmasq.d/hosts/dns-svc`: No such file or directory Oct 14 10:15:27 crc kubenswrapper[4698]: > podSandboxID="433ebc8d18d400e911af525c7858f729a9a2c7102e51e8ead56a6cd1f0e8f4a8" Oct 14 10:15:27 crc kubenswrapper[4698]: E1014 
10:15:27.373516 4698 kuberuntime_manager.go:1274] "Unhandled Error" err=< Oct 14 10:15:27 crc kubenswrapper[4698]: container &Container{Name:dnsmasq-dns,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n644h584h59ch5bbhb9h54dhc5h5d8h7hbchc9h5b5h64bh646h65fh56chdh7fh648h68bh66dh546h5b8h7bh5f7h74h67h4h689h68dh84h5b7q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-swift-storage-0,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-swift-storage-0,SubPath:dns-swift-storage-0,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovsdbserver-nb,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/ovsdbserver-nb,SubPath:ovsdbserver-nb,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovsdbserver-sb,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/ovsdbserver-sb,SubPath:ovsdbserver-sb,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qn48w,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropa
gation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:nil,TCPSocket:&TCPSocketAction{Port:{0 5353 },Host:,},GRPC:nil,},InitialDelaySeconds:3,TimeoutSeconds:5,PeriodSeconds:3,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:nil,TCPSocket:&TCPSocketAction{Port:{0 5353 },Host:,},GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-84449c85c5-m7bl7_openstack(c2fa04f1-6caf-4c05-b1fe-93b63ef9f908): CreateContainerError: container create failed: mount `/var/lib/kubelet/pods/c2fa04f1-6caf-4c05-b1fe-93b63ef9f908/volume-subpaths/dns-svc/dnsmasq-dns/1` to `etc/dnsmasq.d/hosts/dns-svc`: No such file or directory Oct 14 10:15:27 crc kubenswrapper[4698]: > logger="UnhandledError" Oct 14 10:15:27 crc kubenswrapper[4698]: E1014 10:15:27.374667 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dnsmasq-dns\" with CreateContainerError: \"container create failed: mount `/var/lib/kubelet/pods/c2fa04f1-6caf-4c05-b1fe-93b63ef9f908/volume-subpaths/dns-svc/dnsmasq-dns/1` to `etc/dnsmasq.d/hosts/dns-svc`: No such file or directory\\n\"" 
pod="openstack/dnsmasq-dns-84449c85c5-m7bl7" podUID="c2fa04f1-6caf-4c05-b1fe-93b63ef9f908" Oct 14 10:15:27 crc kubenswrapper[4698]: I1014 10:15:27.489226 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Oct 14 10:15:27 crc kubenswrapper[4698]: W1014 10:15:27.578054 4698 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod523471ca_f061_410d_81ce_fbfd00b79bca.slice/crio-a1784cbe3bf638af68ffbfb33b7409428c01c505c65204b9077529e79399842c WatchSource:0}: Error finding container a1784cbe3bf638af68ffbfb33b7409428c01c505c65204b9077529e79399842c: Status 404 returned error can't find the container with id a1784cbe3bf638af68ffbfb33b7409428c01c505c65204b9077529e79399842c Oct 14 10:15:27 crc kubenswrapper[4698]: I1014 10:15:27.746246 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-backup-0"] Oct 14 10:15:27 crc kubenswrapper[4698]: I1014 10:15:27.978189 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-volume-volume1-0" event={"ID":"d26415c7-42ea-464b-910f-c1b25784fde3","Type":"ContainerStarted","Data":"ad403fbe26ee7b1bd03e68d72a3379c94d31e0ccf050049e7a86ce38a41c55da"} Oct 14 10:15:27 crc kubenswrapper[4698]: I1014 10:15:27.983363 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-scheduler-0" event={"ID":"e28cf5cd-644d-4f5f-8db7-421fbe745ac2","Type":"ContainerStarted","Data":"136185543d9ec97a7a63282756aeac7779ba5da2123cd5f96e04d141c9e827a6"} Oct 14 10:15:27 crc kubenswrapper[4698]: I1014 10:15:27.985921 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"9dea0f58-0975-4dc1-9459-a72b8151027b","Type":"ContainerStarted","Data":"476b9f790d47d03be8b3aba5a67d6293750822c644e745713ac61a5656f4de47"} Oct 14 10:15:27 crc kubenswrapper[4698]: I1014 10:15:27.987647 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/dnsmasq-dns-5865f9d689-hbs4g" event={"ID":"a3ae3570-e56e-4c49-ad97-56e83b3f9d01","Type":"ContainerStarted","Data":"8615d86778f8674450679f42b414dd4556bfeb521e23c6b895ee2e7000774ccf"} Oct 14 10:15:27 crc kubenswrapper[4698]: I1014 10:15:27.997010 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"523471ca-f061-410d-81ce-fbfd00b79bca","Type":"ContainerStarted","Data":"a1784cbe3bf638af68ffbfb33b7409428c01c505c65204b9077529e79399842c"} Oct 14 10:15:27 crc kubenswrapper[4698]: I1014 10:15:27.997062 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-6cc5fb5d9d-rbb5x" Oct 14 10:15:27 crc kubenswrapper[4698]: I1014 10:15:27.997077 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-6cc5fb5d9d-rbb5x" Oct 14 10:15:28 crc kubenswrapper[4698]: I1014 10:15:28.011139 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/manila-scheduler-0" podStartSLOduration=4.59958057 podStartE2EDuration="6.011116342s" podCreationTimestamp="2025-10-14 10:15:22 +0000 UTC" firstStartedPulling="2025-10-14 10:15:23.805471907 +0000 UTC m=+1105.502771323" lastFinishedPulling="2025-10-14 10:15:25.217007679 +0000 UTC m=+1106.914307095" observedRunningTime="2025-10-14 10:15:28.007056215 +0000 UTC m=+1109.704355641" watchObservedRunningTime="2025-10-14 10:15:28.011116342 +0000 UTC m=+1109.708415758" Oct 14 10:15:29 crc kubenswrapper[4698]: I1014 10:15:29.015439 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-84449c85c5-m7bl7" event={"ID":"c2fa04f1-6caf-4c05-b1fe-93b63ef9f908","Type":"ContainerDied","Data":"433ebc8d18d400e911af525c7858f729a9a2c7102e51e8ead56a6cd1f0e8f4a8"} Oct 14 10:15:29 crc kubenswrapper[4698]: I1014 10:15:29.015931 4698 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="433ebc8d18d400e911af525c7858f729a9a2c7102e51e8ead56a6cd1f0e8f4a8" 
Oct 14 10:15:29 crc kubenswrapper[4698]: I1014 10:15:29.022233 4698 generic.go:334] "Generic (PLEG): container finished" podID="a3ae3570-e56e-4c49-ad97-56e83b3f9d01" containerID="065f36881f9a5193382c59d72d246a875b15685c3336208ee962e71910090f9e" exitCode=0 Oct 14 10:15:29 crc kubenswrapper[4698]: I1014 10:15:29.038618 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-backup-0" event={"ID":"84a7547f-d165-4381-a7f3-8b050ee39fbf","Type":"ContainerStarted","Data":"392bf8bde2dc15f64c730ffdfa79b77546512f2191b07f8a74ed03b168b51716"} Oct 14 10:15:29 crc kubenswrapper[4698]: I1014 10:15:29.038653 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5865f9d689-hbs4g" event={"ID":"a3ae3570-e56e-4c49-ad97-56e83b3f9d01","Type":"ContainerDied","Data":"065f36881f9a5193382c59d72d246a875b15685c3336208ee962e71910090f9e"} Oct 14 10:15:29 crc kubenswrapper[4698]: I1014 10:15:29.073666 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-84449c85c5-m7bl7" Oct 14 10:15:29 crc kubenswrapper[4698]: I1014 10:15:29.163377 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c2fa04f1-6caf-4c05-b1fe-93b63ef9f908-ovsdbserver-sb\") pod \"c2fa04f1-6caf-4c05-b1fe-93b63ef9f908\" (UID: \"c2fa04f1-6caf-4c05-b1fe-93b63ef9f908\") " Oct 14 10:15:29 crc kubenswrapper[4698]: I1014 10:15:29.163424 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c2fa04f1-6caf-4c05-b1fe-93b63ef9f908-dns-swift-storage-0\") pod \"c2fa04f1-6caf-4c05-b1fe-93b63ef9f908\" (UID: \"c2fa04f1-6caf-4c05-b1fe-93b63ef9f908\") " Oct 14 10:15:29 crc kubenswrapper[4698]: I1014 10:15:29.163514 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/c2fa04f1-6caf-4c05-b1fe-93b63ef9f908-config\") pod \"c2fa04f1-6caf-4c05-b1fe-93b63ef9f908\" (UID: \"c2fa04f1-6caf-4c05-b1fe-93b63ef9f908\") " Oct 14 10:15:29 crc kubenswrapper[4698]: I1014 10:15:29.163535 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qn48w\" (UniqueName: \"kubernetes.io/projected/c2fa04f1-6caf-4c05-b1fe-93b63ef9f908-kube-api-access-qn48w\") pod \"c2fa04f1-6caf-4c05-b1fe-93b63ef9f908\" (UID: \"c2fa04f1-6caf-4c05-b1fe-93b63ef9f908\") " Oct 14 10:15:29 crc kubenswrapper[4698]: I1014 10:15:29.163617 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c2fa04f1-6caf-4c05-b1fe-93b63ef9f908-ovsdbserver-nb\") pod \"c2fa04f1-6caf-4c05-b1fe-93b63ef9f908\" (UID: \"c2fa04f1-6caf-4c05-b1fe-93b63ef9f908\") " Oct 14 10:15:29 crc kubenswrapper[4698]: I1014 10:15:29.163731 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c2fa04f1-6caf-4c05-b1fe-93b63ef9f908-dns-svc\") pod \"c2fa04f1-6caf-4c05-b1fe-93b63ef9f908\" (UID: \"c2fa04f1-6caf-4c05-b1fe-93b63ef9f908\") " Oct 14 10:15:29 crc kubenswrapper[4698]: I1014 10:15:29.210688 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c2fa04f1-6caf-4c05-b1fe-93b63ef9f908-kube-api-access-qn48w" (OuterVolumeSpecName: "kube-api-access-qn48w") pod "c2fa04f1-6caf-4c05-b1fe-93b63ef9f908" (UID: "c2fa04f1-6caf-4c05-b1fe-93b63ef9f908"). InnerVolumeSpecName "kube-api-access-qn48w". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 10:15:29 crc kubenswrapper[4698]: I1014 10:15:29.266313 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qn48w\" (UniqueName: \"kubernetes.io/projected/c2fa04f1-6caf-4c05-b1fe-93b63ef9f908-kube-api-access-qn48w\") on node \"crc\" DevicePath \"\"" Oct 14 10:15:29 crc kubenswrapper[4698]: I1014 10:15:29.505316 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c2fa04f1-6caf-4c05-b1fe-93b63ef9f908-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "c2fa04f1-6caf-4c05-b1fe-93b63ef9f908" (UID: "c2fa04f1-6caf-4c05-b1fe-93b63ef9f908"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 10:15:29 crc kubenswrapper[4698]: I1014 10:15:29.508059 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c2fa04f1-6caf-4c05-b1fe-93b63ef9f908-config" (OuterVolumeSpecName: "config") pod "c2fa04f1-6caf-4c05-b1fe-93b63ef9f908" (UID: "c2fa04f1-6caf-4c05-b1fe-93b63ef9f908"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 10:15:29 crc kubenswrapper[4698]: I1014 10:15:29.514583 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c2fa04f1-6caf-4c05-b1fe-93b63ef9f908-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "c2fa04f1-6caf-4c05-b1fe-93b63ef9f908" (UID: "c2fa04f1-6caf-4c05-b1fe-93b63ef9f908"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 10:15:29 crc kubenswrapper[4698]: I1014 10:15:29.519444 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c2fa04f1-6caf-4c05-b1fe-93b63ef9f908-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "c2fa04f1-6caf-4c05-b1fe-93b63ef9f908" (UID: "c2fa04f1-6caf-4c05-b1fe-93b63ef9f908"). 
InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 10:15:29 crc kubenswrapper[4698]: I1014 10:15:29.528606 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c2fa04f1-6caf-4c05-b1fe-93b63ef9f908-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "c2fa04f1-6caf-4c05-b1fe-93b63ef9f908" (UID: "c2fa04f1-6caf-4c05-b1fe-93b63ef9f908"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 10:15:29 crc kubenswrapper[4698]: I1014 10:15:29.579362 4698 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c2fa04f1-6caf-4c05-b1fe-93b63ef9f908-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Oct 14 10:15:29 crc kubenswrapper[4698]: I1014 10:15:29.579392 4698 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c2fa04f1-6caf-4c05-b1fe-93b63ef9f908-dns-svc\") on node \"crc\" DevicePath \"\"" Oct 14 10:15:29 crc kubenswrapper[4698]: I1014 10:15:29.579401 4698 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c2fa04f1-6caf-4c05-b1fe-93b63ef9f908-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Oct 14 10:15:29 crc kubenswrapper[4698]: I1014 10:15:29.579413 4698 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c2fa04f1-6caf-4c05-b1fe-93b63ef9f908-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Oct 14 10:15:29 crc kubenswrapper[4698]: I1014 10:15:29.579422 4698 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c2fa04f1-6caf-4c05-b1fe-93b63ef9f908-config\") on node \"crc\" DevicePath \"\"" Oct 14 10:15:29 crc kubenswrapper[4698]: I1014 10:15:29.837233 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/cinder-api-0"] Oct 14 10:15:30 crc kubenswrapper[4698]: I1014 10:15:30.060966 4698 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-7b567dfd5d-nvwrp" podUID="ee140165-8d8d-426c-b33f-5803bb0a7ad1" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.149:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.149:8443: connect: connection refused" Oct 14 10:15:30 crc kubenswrapper[4698]: I1014 10:15:30.092631 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/manila-api-0"] Oct 14 10:15:30 crc kubenswrapper[4698]: I1014 10:15:30.125570 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-74bfd556cc-6z8fb" event={"ID":"27f5b9bc-1a92-40a7-b615-7c8a726cd2e8","Type":"ContainerStarted","Data":"07aa5fcf078f3123de577c3e4ef8a91a0ec88dc274a021662a27c729987b1524"} Oct 14 10:15:30 crc kubenswrapper[4698]: I1014 10:15:30.130655 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"523471ca-f061-410d-81ce-fbfd00b79bca","Type":"ContainerStarted","Data":"8f50482a824b1752d07f7fd427a302b4bce24b0029bc074ff51d5881f64fd85b"} Oct 14 10:15:30 crc kubenswrapper[4698]: I1014 10:15:30.143146 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-77cb48f668-xz2r9" event={"ID":"3a9f4039-e577-4dc0-b1f9-ecdf1e7a9606","Type":"ContainerStarted","Data":"1fa82cb5cfe35568a66fb475423a539425c43aac515bf044ce38d98a1a8a0597"} Oct 14 10:15:30 crc kubenswrapper[4698]: I1014 10:15:30.144912 4698 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-84449c85c5-m7bl7" Oct 14 10:15:30 crc kubenswrapper[4698]: I1014 10:15:30.145079 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"9dea0f58-0975-4dc1-9459-a72b8151027b","Type":"ContainerStarted","Data":"d092922ecb801a8a918acb82b0cdf4e3360c2dbaa168f36e409c0b47d66f5378"} Oct 14 10:15:30 crc kubenswrapper[4698]: I1014 10:15:30.145393 4698 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/manila-api-0" podUID="600d9dbf-48aa-4c54-9d47-9ffe8d383d6e" containerName="manila-api-log" containerID="cri-o://82e93df7e65a6a821e7024f0f7e4837cd3241a9c9ccad476a07e5d3158b42f9c" gracePeriod=30 Oct 14 10:15:30 crc kubenswrapper[4698]: I1014 10:15:30.145591 4698 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/manila-api-0" podUID="600d9dbf-48aa-4c54-9d47-9ffe8d383d6e" containerName="manila-api" containerID="cri-o://ee066c20559aa379186cf2d3fd6cdba8d8419672125be4906e870235e29e4982" gracePeriod=30 Oct 14 10:15:30 crc kubenswrapper[4698]: I1014 10:15:30.218303 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Oct 14 10:15:30 crc kubenswrapper[4698]: E1014 10:15:30.218757 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c2fa04f1-6caf-4c05-b1fe-93b63ef9f908" containerName="init" Oct 14 10:15:30 crc kubenswrapper[4698]: I1014 10:15:30.218785 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="c2fa04f1-6caf-4c05-b1fe-93b63ef9f908" containerName="init" Oct 14 10:15:30 crc kubenswrapper[4698]: E1014 10:15:30.218807 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0f82463f-d211-4e22-8742-570c0293871c" containerName="init" Oct 14 10:15:30 crc kubenswrapper[4698]: I1014 10:15:30.218814 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="0f82463f-d211-4e22-8742-570c0293871c" containerName="init" Oct 14 10:15:30 crc 
kubenswrapper[4698]: I1014 10:15:30.218985 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="c2fa04f1-6caf-4c05-b1fe-93b63ef9f908" containerName="init" Oct 14 10:15:30 crc kubenswrapper[4698]: I1014 10:15:30.219005 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="0f82463f-d211-4e22-8742-570c0293871c" containerName="init" Oct 14 10:15:30 crc kubenswrapper[4698]: I1014 10:15:30.219604 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Oct 14 10:15:30 crc kubenswrapper[4698]: I1014 10:15:30.227100 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstackclient-openstackclient-dockercfg-s4q6k" Oct 14 10:15:30 crc kubenswrapper[4698]: I1014 10:15:30.227505 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-config-secret" Oct 14 10:15:30 crc kubenswrapper[4698]: I1014 10:15:30.238484 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config" Oct 14 10:15:30 crc kubenswrapper[4698]: I1014 10:15:30.254422 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Oct 14 10:15:30 crc kubenswrapper[4698]: I1014 10:15:30.342698 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7572f\" (UniqueName: \"kubernetes.io/projected/9b9ad197-b532-42c9-8ac2-c822cca96a52-kube-api-access-7572f\") pod \"openstackclient\" (UID: \"9b9ad197-b532-42c9-8ac2-c822cca96a52\") " pod="openstack/openstackclient" Oct 14 10:15:30 crc kubenswrapper[4698]: I1014 10:15:30.342797 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/9b9ad197-b532-42c9-8ac2-c822cca96a52-openstack-config-secret\") pod \"openstackclient\" (UID: \"9b9ad197-b532-42c9-8ac2-c822cca96a52\") " 
pod="openstack/openstackclient" Oct 14 10:15:30 crc kubenswrapper[4698]: I1014 10:15:30.342846 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/9b9ad197-b532-42c9-8ac2-c822cca96a52-openstack-config\") pod \"openstackclient\" (UID: \"9b9ad197-b532-42c9-8ac2-c822cca96a52\") " pod="openstack/openstackclient" Oct 14 10:15:30 crc kubenswrapper[4698]: I1014 10:15:30.343037 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9b9ad197-b532-42c9-8ac2-c822cca96a52-combined-ca-bundle\") pod \"openstackclient\" (UID: \"9b9ad197-b532-42c9-8ac2-c822cca96a52\") " pod="openstack/openstackclient" Oct 14 10:15:30 crc kubenswrapper[4698]: I1014 10:15:30.404579 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-84449c85c5-m7bl7"] Oct 14 10:15:30 crc kubenswrapper[4698]: I1014 10:15:30.420532 4698 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-84449c85c5-m7bl7"] Oct 14 10:15:30 crc kubenswrapper[4698]: I1014 10:15:30.446027 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9b9ad197-b532-42c9-8ac2-c822cca96a52-combined-ca-bundle\") pod \"openstackclient\" (UID: \"9b9ad197-b532-42c9-8ac2-c822cca96a52\") " pod="openstack/openstackclient" Oct 14 10:15:30 crc kubenswrapper[4698]: I1014 10:15:30.446187 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7572f\" (UniqueName: \"kubernetes.io/projected/9b9ad197-b532-42c9-8ac2-c822cca96a52-kube-api-access-7572f\") pod \"openstackclient\" (UID: \"9b9ad197-b532-42c9-8ac2-c822cca96a52\") " pod="openstack/openstackclient" Oct 14 10:15:30 crc kubenswrapper[4698]: I1014 10:15:30.446231 4698 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/9b9ad197-b532-42c9-8ac2-c822cca96a52-openstack-config-secret\") pod \"openstackclient\" (UID: \"9b9ad197-b532-42c9-8ac2-c822cca96a52\") " pod="openstack/openstackclient" Oct 14 10:15:30 crc kubenswrapper[4698]: I1014 10:15:30.446256 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/9b9ad197-b532-42c9-8ac2-c822cca96a52-openstack-config\") pod \"openstackclient\" (UID: \"9b9ad197-b532-42c9-8ac2-c822cca96a52\") " pod="openstack/openstackclient" Oct 14 10:15:30 crc kubenswrapper[4698]: I1014 10:15:30.447577 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/9b9ad197-b532-42c9-8ac2-c822cca96a52-openstack-config\") pod \"openstackclient\" (UID: \"9b9ad197-b532-42c9-8ac2-c822cca96a52\") " pod="openstack/openstackclient" Oct 14 10:15:30 crc kubenswrapper[4698]: I1014 10:15:30.458134 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/9b9ad197-b532-42c9-8ac2-c822cca96a52-openstack-config-secret\") pod \"openstackclient\" (UID: \"9b9ad197-b532-42c9-8ac2-c822cca96a52\") " pod="openstack/openstackclient" Oct 14 10:15:30 crc kubenswrapper[4698]: I1014 10:15:30.462298 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9b9ad197-b532-42c9-8ac2-c822cca96a52-combined-ca-bundle\") pod \"openstackclient\" (UID: \"9b9ad197-b532-42c9-8ac2-c822cca96a52\") " pod="openstack/openstackclient" Oct 14 10:15:30 crc kubenswrapper[4698]: I1014 10:15:30.513617 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7572f\" (UniqueName: 
\"kubernetes.io/projected/9b9ad197-b532-42c9-8ac2-c822cca96a52-kube-api-access-7572f\") pod \"openstackclient\" (UID: \"9b9ad197-b532-42c9-8ac2-c822cca96a52\") " pod="openstack/openstackclient" Oct 14 10:15:30 crc kubenswrapper[4698]: I1014 10:15:30.603267 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Oct 14 10:15:31 crc kubenswrapper[4698]: I1014 10:15:31.038243 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c2fa04f1-6caf-4c05-b1fe-93b63ef9f908" path="/var/lib/kubelet/pods/c2fa04f1-6caf-4c05-b1fe-93b63ef9f908/volumes" Oct 14 10:15:31 crc kubenswrapper[4698]: I1014 10:15:31.203334 4698 generic.go:334] "Generic (PLEG): container finished" podID="600d9dbf-48aa-4c54-9d47-9ffe8d383d6e" containerID="ee066c20559aa379186cf2d3fd6cdba8d8419672125be4906e870235e29e4982" exitCode=0 Oct 14 10:15:31 crc kubenswrapper[4698]: I1014 10:15:31.203655 4698 generic.go:334] "Generic (PLEG): container finished" podID="600d9dbf-48aa-4c54-9d47-9ffe8d383d6e" containerID="82e93df7e65a6a821e7024f0f7e4837cd3241a9c9ccad476a07e5d3158b42f9c" exitCode=143 Oct 14 10:15:31 crc kubenswrapper[4698]: I1014 10:15:31.203529 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-api-0" event={"ID":"600d9dbf-48aa-4c54-9d47-9ffe8d383d6e","Type":"ContainerDied","Data":"ee066c20559aa379186cf2d3fd6cdba8d8419672125be4906e870235e29e4982"} Oct 14 10:15:31 crc kubenswrapper[4698]: I1014 10:15:31.203753 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-api-0" event={"ID":"600d9dbf-48aa-4c54-9d47-9ffe8d383d6e","Type":"ContainerDied","Data":"82e93df7e65a6a821e7024f0f7e4837cd3241a9c9ccad476a07e5d3158b42f9c"} Oct 14 10:15:31 crc kubenswrapper[4698]: I1014 10:15:31.203786 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-api-0" 
event={"ID":"600d9dbf-48aa-4c54-9d47-9ffe8d383d6e","Type":"ContainerDied","Data":"93f98572e35b5ef10429684e2eb53e0f047367956fce7b2b00fcd9e6b2ae9e44"} Oct 14 10:15:31 crc kubenswrapper[4698]: I1014 10:15:31.203799 4698 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="93f98572e35b5ef10429684e2eb53e0f047367956fce7b2b00fcd9e6b2ae9e44" Oct 14 10:15:31 crc kubenswrapper[4698]: I1014 10:15:31.240183 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5865f9d689-hbs4g" event={"ID":"a3ae3570-e56e-4c49-ad97-56e83b3f9d01","Type":"ContainerStarted","Data":"63b423609078a3f518917b93787cc20d0955c23665e5aaa1cd8fb65bb08f8d5a"} Oct 14 10:15:31 crc kubenswrapper[4698]: I1014 10:15:31.241718 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5865f9d689-hbs4g" Oct 14 10:15:31 crc kubenswrapper[4698]: I1014 10:15:31.252236 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-77cb48f668-xz2r9" event={"ID":"3a9f4039-e577-4dc0-b1f9-ecdf1e7a9606","Type":"ContainerStarted","Data":"c660d126889b06e09991c8c45e5b89d442304ca6c5b7d36aa4b36315a8698a1e"} Oct 14 10:15:31 crc kubenswrapper[4698]: I1014 10:15:31.263953 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5865f9d689-hbs4g" podStartSLOduration=6.263933578 podStartE2EDuration="6.263933578s" podCreationTimestamp="2025-10-14 10:15:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 10:15:31.257715169 +0000 UTC m=+1112.955014585" watchObservedRunningTime="2025-10-14 10:15:31.263933578 +0000 UTC m=+1112.961232994" Oct 14 10:15:31 crc kubenswrapper[4698]: I1014 10:15:31.300956 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-keystone-listener-77cb48f668-xz2r9" podStartSLOduration=4.327139972 
podStartE2EDuration="8.300938802s" podCreationTimestamp="2025-10-14 10:15:23 +0000 UTC" firstStartedPulling="2025-10-14 10:15:24.868116751 +0000 UTC m=+1106.565416167" lastFinishedPulling="2025-10-14 10:15:28.841915581 +0000 UTC m=+1110.539214997" observedRunningTime="2025-10-14 10:15:31.275237533 +0000 UTC m=+1112.972536949" watchObservedRunningTime="2025-10-14 10:15:31.300938802 +0000 UTC m=+1112.998238218" Oct 14 10:15:31 crc kubenswrapper[4698]: I1014 10:15:31.311147 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-worker-74bfd556cc-6z8fb" podStartSLOduration=4.442421576 podStartE2EDuration="8.311130415s" podCreationTimestamp="2025-10-14 10:15:23 +0000 UTC" firstStartedPulling="2025-10-14 10:15:24.971975286 +0000 UTC m=+1106.669274692" lastFinishedPulling="2025-10-14 10:15:28.840684115 +0000 UTC m=+1110.537983531" observedRunningTime="2025-10-14 10:15:31.297203075 +0000 UTC m=+1112.994502501" watchObservedRunningTime="2025-10-14 10:15:31.311130415 +0000 UTC m=+1113.008429831" Oct 14 10:15:31 crc kubenswrapper[4698]: I1014 10:15:31.376061 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Oct 14 10:15:31 crc kubenswrapper[4698]: I1014 10:15:31.397643 4698 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-api-0" Oct 14 10:15:31 crc kubenswrapper[4698]: I1014 10:15:31.489588 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/600d9dbf-48aa-4c54-9d47-9ffe8d383d6e-scripts\") pod \"600d9dbf-48aa-4c54-9d47-9ffe8d383d6e\" (UID: \"600d9dbf-48aa-4c54-9d47-9ffe8d383d6e\") " Oct 14 10:15:31 crc kubenswrapper[4698]: I1014 10:15:31.489956 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/600d9dbf-48aa-4c54-9d47-9ffe8d383d6e-etc-machine-id\") pod \"600d9dbf-48aa-4c54-9d47-9ffe8d383d6e\" (UID: \"600d9dbf-48aa-4c54-9d47-9ffe8d383d6e\") " Oct 14 10:15:31 crc kubenswrapper[4698]: I1014 10:15:31.490022 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/600d9dbf-48aa-4c54-9d47-9ffe8d383d6e-config-data\") pod \"600d9dbf-48aa-4c54-9d47-9ffe8d383d6e\" (UID: \"600d9dbf-48aa-4c54-9d47-9ffe8d383d6e\") " Oct 14 10:15:31 crc kubenswrapper[4698]: I1014 10:15:31.490045 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/600d9dbf-48aa-4c54-9d47-9ffe8d383d6e-combined-ca-bundle\") pod \"600d9dbf-48aa-4c54-9d47-9ffe8d383d6e\" (UID: \"600d9dbf-48aa-4c54-9d47-9ffe8d383d6e\") " Oct 14 10:15:31 crc kubenswrapper[4698]: I1014 10:15:31.490263 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/600d9dbf-48aa-4c54-9d47-9ffe8d383d6e-config-data-custom\") pod \"600d9dbf-48aa-4c54-9d47-9ffe8d383d6e\" (UID: \"600d9dbf-48aa-4c54-9d47-9ffe8d383d6e\") " Oct 14 10:15:31 crc kubenswrapper[4698]: I1014 10:15:31.490353 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-99q9p\" (UniqueName: 
\"kubernetes.io/projected/600d9dbf-48aa-4c54-9d47-9ffe8d383d6e-kube-api-access-99q9p\") pod \"600d9dbf-48aa-4c54-9d47-9ffe8d383d6e\" (UID: \"600d9dbf-48aa-4c54-9d47-9ffe8d383d6e\") " Oct 14 10:15:31 crc kubenswrapper[4698]: I1014 10:15:31.490390 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/600d9dbf-48aa-4c54-9d47-9ffe8d383d6e-logs\") pod \"600d9dbf-48aa-4c54-9d47-9ffe8d383d6e\" (UID: \"600d9dbf-48aa-4c54-9d47-9ffe8d383d6e\") " Oct 14 10:15:31 crc kubenswrapper[4698]: I1014 10:15:31.493529 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/600d9dbf-48aa-4c54-9d47-9ffe8d383d6e-logs" (OuterVolumeSpecName: "logs") pod "600d9dbf-48aa-4c54-9d47-9ffe8d383d6e" (UID: "600d9dbf-48aa-4c54-9d47-9ffe8d383d6e"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 10:15:31 crc kubenswrapper[4698]: I1014 10:15:31.495508 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/600d9dbf-48aa-4c54-9d47-9ffe8d383d6e-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "600d9dbf-48aa-4c54-9d47-9ffe8d383d6e" (UID: "600d9dbf-48aa-4c54-9d47-9ffe8d383d6e"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 14 10:15:31 crc kubenswrapper[4698]: I1014 10:15:31.503003 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/600d9dbf-48aa-4c54-9d47-9ffe8d383d6e-kube-api-access-99q9p" (OuterVolumeSpecName: "kube-api-access-99q9p") pod "600d9dbf-48aa-4c54-9d47-9ffe8d383d6e" (UID: "600d9dbf-48aa-4c54-9d47-9ffe8d383d6e"). InnerVolumeSpecName "kube-api-access-99q9p". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 10:15:31 crc kubenswrapper[4698]: I1014 10:15:31.508442 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/600d9dbf-48aa-4c54-9d47-9ffe8d383d6e-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "600d9dbf-48aa-4c54-9d47-9ffe8d383d6e" (UID: "600d9dbf-48aa-4c54-9d47-9ffe8d383d6e"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 10:15:31 crc kubenswrapper[4698]: I1014 10:15:31.523604 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/600d9dbf-48aa-4c54-9d47-9ffe8d383d6e-scripts" (OuterVolumeSpecName: "scripts") pod "600d9dbf-48aa-4c54-9d47-9ffe8d383d6e" (UID: "600d9dbf-48aa-4c54-9d47-9ffe8d383d6e"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 10:15:31 crc kubenswrapper[4698]: I1014 10:15:31.593600 4698 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/600d9dbf-48aa-4c54-9d47-9ffe8d383d6e-scripts\") on node \"crc\" DevicePath \"\"" Oct 14 10:15:31 crc kubenswrapper[4698]: I1014 10:15:31.593656 4698 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/600d9dbf-48aa-4c54-9d47-9ffe8d383d6e-etc-machine-id\") on node \"crc\" DevicePath \"\"" Oct 14 10:15:31 crc kubenswrapper[4698]: I1014 10:15:31.593668 4698 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/600d9dbf-48aa-4c54-9d47-9ffe8d383d6e-config-data-custom\") on node \"crc\" DevicePath \"\"" Oct 14 10:15:31 crc kubenswrapper[4698]: I1014 10:15:31.593677 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-99q9p\" (UniqueName: \"kubernetes.io/projected/600d9dbf-48aa-4c54-9d47-9ffe8d383d6e-kube-api-access-99q9p\") on node \"crc\" DevicePath \"\"" Oct 14 
10:15:31 crc kubenswrapper[4698]: I1014 10:15:31.593688 4698 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/600d9dbf-48aa-4c54-9d47-9ffe8d383d6e-logs\") on node \"crc\" DevicePath \"\"" Oct 14 10:15:31 crc kubenswrapper[4698]: I1014 10:15:31.660226 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/600d9dbf-48aa-4c54-9d47-9ffe8d383d6e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "600d9dbf-48aa-4c54-9d47-9ffe8d383d6e" (UID: "600d9dbf-48aa-4c54-9d47-9ffe8d383d6e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 10:15:31 crc kubenswrapper[4698]: I1014 10:15:31.698686 4698 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/600d9dbf-48aa-4c54-9d47-9ffe8d383d6e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 14 10:15:31 crc kubenswrapper[4698]: I1014 10:15:31.822285 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/600d9dbf-48aa-4c54-9d47-9ffe8d383d6e-config-data" (OuterVolumeSpecName: "config-data") pod "600d9dbf-48aa-4c54-9d47-9ffe8d383d6e" (UID: "600d9dbf-48aa-4c54-9d47-9ffe8d383d6e"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 10:15:31 crc kubenswrapper[4698]: I1014 10:15:31.902515 4698 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/600d9dbf-48aa-4c54-9d47-9ffe8d383d6e-config-data\") on node \"crc\" DevicePath \"\"" Oct 14 10:15:32 crc kubenswrapper[4698]: I1014 10:15:32.295919 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-volume-volume1-0" event={"ID":"d26415c7-42ea-464b-910f-c1b25784fde3","Type":"ContainerStarted","Data":"200ca86c8378d80fc5d18f46220f35a1589a1a5597b5a8d22d1f5d17be13a897"} Oct 14 10:15:32 crc kubenswrapper[4698]: I1014 10:15:32.296347 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-volume-volume1-0" event={"ID":"d26415c7-42ea-464b-910f-c1b25784fde3","Type":"ContainerStarted","Data":"c8ed817c301154ab39e341849c0d3d8e7db3d34360d2b0b52e5a24a735e66584"} Oct 14 10:15:32 crc kubenswrapper[4698]: I1014 10:15:32.317710 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"9dea0f58-0975-4dc1-9459-a72b8151027b","Type":"ContainerStarted","Data":"d339aa902ab22c31c1921b5bff6d477591b762bae03bf9901cabae11bd29ee57"} Oct 14 10:15:32 crc kubenswrapper[4698]: I1014 10:15:32.323959 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-backup-0" event={"ID":"84a7547f-d165-4381-a7f3-8b050ee39fbf","Type":"ContainerStarted","Data":"5d5aa297aa05d4045d4baa23201aa34efedc024e6eb6303918a106e11d3fc92a"} Oct 14 10:15:32 crc kubenswrapper[4698]: I1014 10:15:32.324023 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-backup-0" event={"ID":"84a7547f-d165-4381-a7f3-8b050ee39fbf","Type":"ContainerStarted","Data":"d97a24ce30967412fecd6ae2a41002bfa33633e4fbb9e84bfdc0a51943c5a293"} Oct 14 10:15:32 crc kubenswrapper[4698]: I1014 10:15:32.339792 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/barbican-worker-74bfd556cc-6z8fb" event={"ID":"27f5b9bc-1a92-40a7-b615-7c8a726cd2e8","Type":"ContainerStarted","Data":"e2fdb24a4d9c197df271012d04a4d873edec8c8fbf62473d294fdef94fe0b415"} Oct 14 10:15:32 crc kubenswrapper[4698]: I1014 10:15:32.352658 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-volume-volume1-0" podStartSLOduration=4.574722914 podStartE2EDuration="7.35262209s" podCreationTimestamp="2025-10-14 10:15:25 +0000 UTC" firstStartedPulling="2025-10-14 10:15:27.175261676 +0000 UTC m=+1108.872561092" lastFinishedPulling="2025-10-14 10:15:29.953160852 +0000 UTC m=+1111.650460268" observedRunningTime="2025-10-14 10:15:32.330430512 +0000 UTC m=+1114.027729948" watchObservedRunningTime="2025-10-14 10:15:32.35262209 +0000 UTC m=+1114.049921506" Oct 14 10:15:32 crc kubenswrapper[4698]: I1014 10:15:32.353264 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"523471ca-f061-410d-81ce-fbfd00b79bca","Type":"ContainerStarted","Data":"9133ec75b5805ccf389499bf4f0284068d8b3b503fa1bb02ea45803836fcbcc7"} Oct 14 10:15:32 crc kubenswrapper[4698]: I1014 10:15:32.353581 4698 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="523471ca-f061-410d-81ce-fbfd00b79bca" containerName="cinder-api-log" containerID="cri-o://8f50482a824b1752d07f7fd427a302b4bce24b0029bc074ff51d5881f64fd85b" gracePeriod=30 Oct 14 10:15:32 crc kubenswrapper[4698]: I1014 10:15:32.353697 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Oct 14 10:15:32 crc kubenswrapper[4698]: I1014 10:15:32.353726 4698 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="523471ca-f061-410d-81ce-fbfd00b79bca" containerName="cinder-api" containerID="cri-o://9133ec75b5805ccf389499bf4f0284068d8b3b503fa1bb02ea45803836fcbcc7" gracePeriod=30 Oct 14 10:15:32 crc 
kubenswrapper[4698]: I1014 10:15:32.364819 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"9b9ad197-b532-42c9-8ac2-c822cca96a52","Type":"ContainerStarted","Data":"fa250ec2b4ce14c150cea251e7cf778fb08c01cba71b2cf0e5b6a2e56fdb740c"} Oct 14 10:15:32 crc kubenswrapper[4698]: I1014 10:15:32.364956 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/manila-api-0" Oct 14 10:15:32 crc kubenswrapper[4698]: I1014 10:15:32.386286 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-backup-0" podStartSLOduration=5.625666672 podStartE2EDuration="7.386264597s" podCreationTimestamp="2025-10-14 10:15:25 +0000 UTC" firstStartedPulling="2025-10-14 10:15:28.194985487 +0000 UTC m=+1109.892284903" lastFinishedPulling="2025-10-14 10:15:29.955583422 +0000 UTC m=+1111.652882828" observedRunningTime="2025-10-14 10:15:32.353479005 +0000 UTC m=+1114.050778421" watchObservedRunningTime="2025-10-14 10:15:32.386264597 +0000 UTC m=+1114.083564013" Oct 14 10:15:32 crc kubenswrapper[4698]: I1014 10:15:32.401035 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=6.212062526 podStartE2EDuration="7.401016991s" podCreationTimestamp="2025-10-14 10:15:25 +0000 UTC" firstStartedPulling="2025-10-14 10:15:26.982136525 +0000 UTC m=+1108.679435941" lastFinishedPulling="2025-10-14 10:15:28.17109099 +0000 UTC m=+1109.868390406" observedRunningTime="2025-10-14 10:15:32.378695839 +0000 UTC m=+1114.075995265" watchObservedRunningTime="2025-10-14 10:15:32.401016991 +0000 UTC m=+1114.098316407" Oct 14 10:15:32 crc kubenswrapper[4698]: I1014 10:15:32.428608 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=6.428585333 podStartE2EDuration="6.428585333s" podCreationTimestamp="2025-10-14 10:15:26 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 10:15:32.411028849 +0000 UTC m=+1114.108328265" watchObservedRunningTime="2025-10-14 10:15:32.428585333 +0000 UTC m=+1114.125884749" Oct 14 10:15:32 crc kubenswrapper[4698]: I1014 10:15:32.468489 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/manila-api-0"] Oct 14 10:15:32 crc kubenswrapper[4698]: I1014 10:15:32.492599 4698 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/manila-api-0"] Oct 14 10:15:32 crc kubenswrapper[4698]: I1014 10:15:32.528083 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/manila-api-0"] Oct 14 10:15:32 crc kubenswrapper[4698]: E1014 10:15:32.528633 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="600d9dbf-48aa-4c54-9d47-9ffe8d383d6e" containerName="manila-api-log" Oct 14 10:15:32 crc kubenswrapper[4698]: I1014 10:15:32.528650 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="600d9dbf-48aa-4c54-9d47-9ffe8d383d6e" containerName="manila-api-log" Oct 14 10:15:32 crc kubenswrapper[4698]: E1014 10:15:32.528668 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="600d9dbf-48aa-4c54-9d47-9ffe8d383d6e" containerName="manila-api" Oct 14 10:15:32 crc kubenswrapper[4698]: I1014 10:15:32.528677 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="600d9dbf-48aa-4c54-9d47-9ffe8d383d6e" containerName="manila-api" Oct 14 10:15:32 crc kubenswrapper[4698]: I1014 10:15:32.528928 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="600d9dbf-48aa-4c54-9d47-9ffe8d383d6e" containerName="manila-api-log" Oct 14 10:15:32 crc kubenswrapper[4698]: I1014 10:15:32.528948 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="600d9dbf-48aa-4c54-9d47-9ffe8d383d6e" containerName="manila-api" Oct 14 10:15:32 crc kubenswrapper[4698]: I1014 10:15:32.530221 4698 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="openstack/manila-api-0" Oct 14 10:15:32 crc kubenswrapper[4698]: I1014 10:15:32.534953 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-manila-internal-svc" Oct 14 10:15:32 crc kubenswrapper[4698]: I1014 10:15:32.535084 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-manila-public-svc" Oct 14 10:15:32 crc kubenswrapper[4698]: I1014 10:15:32.535213 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-api-config-data" Oct 14 10:15:32 crc kubenswrapper[4698]: I1014 10:15:32.592201 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-api-0"] Oct 14 10:15:32 crc kubenswrapper[4698]: I1014 10:15:32.620369 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kqlgd\" (UniqueName: \"kubernetes.io/projected/99f5e356-0b01-4991-b2b2-3e0456eba2e7-kube-api-access-kqlgd\") pod \"manila-api-0\" (UID: \"99f5e356-0b01-4991-b2b2-3e0456eba2e7\") " pod="openstack/manila-api-0" Oct 14 10:15:32 crc kubenswrapper[4698]: I1014 10:15:32.620445 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/99f5e356-0b01-4991-b2b2-3e0456eba2e7-scripts\") pod \"manila-api-0\" (UID: \"99f5e356-0b01-4991-b2b2-3e0456eba2e7\") " pod="openstack/manila-api-0" Oct 14 10:15:32 crc kubenswrapper[4698]: I1014 10:15:32.620499 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/99f5e356-0b01-4991-b2b2-3e0456eba2e7-combined-ca-bundle\") pod \"manila-api-0\" (UID: \"99f5e356-0b01-4991-b2b2-3e0456eba2e7\") " pod="openstack/manila-api-0" Oct 14 10:15:32 crc kubenswrapper[4698]: I1014 10:15:32.620527 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/99f5e356-0b01-4991-b2b2-3e0456eba2e7-etc-machine-id\") pod \"manila-api-0\" (UID: \"99f5e356-0b01-4991-b2b2-3e0456eba2e7\") " pod="openstack/manila-api-0" Oct 14 10:15:32 crc kubenswrapper[4698]: I1014 10:15:32.620603 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/99f5e356-0b01-4991-b2b2-3e0456eba2e7-public-tls-certs\") pod \"manila-api-0\" (UID: \"99f5e356-0b01-4991-b2b2-3e0456eba2e7\") " pod="openstack/manila-api-0" Oct 14 10:15:32 crc kubenswrapper[4698]: I1014 10:15:32.620633 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/99f5e356-0b01-4991-b2b2-3e0456eba2e7-config-data\") pod \"manila-api-0\" (UID: \"99f5e356-0b01-4991-b2b2-3e0456eba2e7\") " pod="openstack/manila-api-0" Oct 14 10:15:32 crc kubenswrapper[4698]: I1014 10:15:32.620714 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/99f5e356-0b01-4991-b2b2-3e0456eba2e7-internal-tls-certs\") pod \"manila-api-0\" (UID: \"99f5e356-0b01-4991-b2b2-3e0456eba2e7\") " pod="openstack/manila-api-0" Oct 14 10:15:32 crc kubenswrapper[4698]: I1014 10:15:32.620798 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/99f5e356-0b01-4991-b2b2-3e0456eba2e7-logs\") pod \"manila-api-0\" (UID: \"99f5e356-0b01-4991-b2b2-3e0456eba2e7\") " pod="openstack/manila-api-0" Oct 14 10:15:32 crc kubenswrapper[4698]: I1014 10:15:32.620826 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/99f5e356-0b01-4991-b2b2-3e0456eba2e7-config-data-custom\") pod 
\"manila-api-0\" (UID: \"99f5e356-0b01-4991-b2b2-3e0456eba2e7\") " pod="openstack/manila-api-0" Oct 14 10:15:32 crc kubenswrapper[4698]: I1014 10:15:32.727435 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/99f5e356-0b01-4991-b2b2-3e0456eba2e7-scripts\") pod \"manila-api-0\" (UID: \"99f5e356-0b01-4991-b2b2-3e0456eba2e7\") " pod="openstack/manila-api-0" Oct 14 10:15:32 crc kubenswrapper[4698]: I1014 10:15:32.729356 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/99f5e356-0b01-4991-b2b2-3e0456eba2e7-combined-ca-bundle\") pod \"manila-api-0\" (UID: \"99f5e356-0b01-4991-b2b2-3e0456eba2e7\") " pod="openstack/manila-api-0" Oct 14 10:15:32 crc kubenswrapper[4698]: I1014 10:15:32.729397 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/99f5e356-0b01-4991-b2b2-3e0456eba2e7-etc-machine-id\") pod \"manila-api-0\" (UID: \"99f5e356-0b01-4991-b2b2-3e0456eba2e7\") " pod="openstack/manila-api-0" Oct 14 10:15:32 crc kubenswrapper[4698]: I1014 10:15:32.729600 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/99f5e356-0b01-4991-b2b2-3e0456eba2e7-public-tls-certs\") pod \"manila-api-0\" (UID: \"99f5e356-0b01-4991-b2b2-3e0456eba2e7\") " pod="openstack/manila-api-0" Oct 14 10:15:32 crc kubenswrapper[4698]: I1014 10:15:32.729638 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/99f5e356-0b01-4991-b2b2-3e0456eba2e7-config-data\") pod \"manila-api-0\" (UID: \"99f5e356-0b01-4991-b2b2-3e0456eba2e7\") " pod="openstack/manila-api-0" Oct 14 10:15:32 crc kubenswrapper[4698]: I1014 10:15:32.729874 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/99f5e356-0b01-4991-b2b2-3e0456eba2e7-internal-tls-certs\") pod \"manila-api-0\" (UID: \"99f5e356-0b01-4991-b2b2-3e0456eba2e7\") " pod="openstack/manila-api-0" Oct 14 10:15:32 crc kubenswrapper[4698]: I1014 10:15:32.730036 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/99f5e356-0b01-4991-b2b2-3e0456eba2e7-etc-machine-id\") pod \"manila-api-0\" (UID: \"99f5e356-0b01-4991-b2b2-3e0456eba2e7\") " pod="openstack/manila-api-0" Oct 14 10:15:32 crc kubenswrapper[4698]: I1014 10:15:32.734933 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/99f5e356-0b01-4991-b2b2-3e0456eba2e7-logs\") pod \"manila-api-0\" (UID: \"99f5e356-0b01-4991-b2b2-3e0456eba2e7\") " pod="openstack/manila-api-0" Oct 14 10:15:32 crc kubenswrapper[4698]: I1014 10:15:32.734977 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/99f5e356-0b01-4991-b2b2-3e0456eba2e7-config-data-custom\") pod \"manila-api-0\" (UID: \"99f5e356-0b01-4991-b2b2-3e0456eba2e7\") " pod="openstack/manila-api-0" Oct 14 10:15:32 crc kubenswrapper[4698]: I1014 10:15:32.735065 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kqlgd\" (UniqueName: \"kubernetes.io/projected/99f5e356-0b01-4991-b2b2-3e0456eba2e7-kube-api-access-kqlgd\") pod \"manila-api-0\" (UID: \"99f5e356-0b01-4991-b2b2-3e0456eba2e7\") " pod="openstack/manila-api-0" Oct 14 10:15:32 crc kubenswrapper[4698]: I1014 10:15:32.736073 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/99f5e356-0b01-4991-b2b2-3e0456eba2e7-logs\") pod \"manila-api-0\" (UID: \"99f5e356-0b01-4991-b2b2-3e0456eba2e7\") " pod="openstack/manila-api-0" Oct 14 10:15:32 crc 
kubenswrapper[4698]: I1014 10:15:32.739355 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/99f5e356-0b01-4991-b2b2-3e0456eba2e7-scripts\") pod \"manila-api-0\" (UID: \"99f5e356-0b01-4991-b2b2-3e0456eba2e7\") " pod="openstack/manila-api-0" Oct 14 10:15:32 crc kubenswrapper[4698]: I1014 10:15:32.739419 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/99f5e356-0b01-4991-b2b2-3e0456eba2e7-combined-ca-bundle\") pod \"manila-api-0\" (UID: \"99f5e356-0b01-4991-b2b2-3e0456eba2e7\") " pod="openstack/manila-api-0" Oct 14 10:15:32 crc kubenswrapper[4698]: I1014 10:15:32.740182 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/99f5e356-0b01-4991-b2b2-3e0456eba2e7-config-data\") pod \"manila-api-0\" (UID: \"99f5e356-0b01-4991-b2b2-3e0456eba2e7\") " pod="openstack/manila-api-0" Oct 14 10:15:32 crc kubenswrapper[4698]: I1014 10:15:32.747722 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/99f5e356-0b01-4991-b2b2-3e0456eba2e7-public-tls-certs\") pod \"manila-api-0\" (UID: \"99f5e356-0b01-4991-b2b2-3e0456eba2e7\") " pod="openstack/manila-api-0" Oct 14 10:15:32 crc kubenswrapper[4698]: I1014 10:15:32.750361 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/99f5e356-0b01-4991-b2b2-3e0456eba2e7-internal-tls-certs\") pod \"manila-api-0\" (UID: \"99f5e356-0b01-4991-b2b2-3e0456eba2e7\") " pod="openstack/manila-api-0" Oct 14 10:15:32 crc kubenswrapper[4698]: I1014 10:15:32.751625 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/99f5e356-0b01-4991-b2b2-3e0456eba2e7-config-data-custom\") pod \"manila-api-0\" (UID: 
\"99f5e356-0b01-4991-b2b2-3e0456eba2e7\") " pod="openstack/manila-api-0" Oct 14 10:15:32 crc kubenswrapper[4698]: I1014 10:15:32.761287 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kqlgd\" (UniqueName: \"kubernetes.io/projected/99f5e356-0b01-4991-b2b2-3e0456eba2e7-kube-api-access-kqlgd\") pod \"manila-api-0\" (UID: \"99f5e356-0b01-4991-b2b2-3e0456eba2e7\") " pod="openstack/manila-api-0" Oct 14 10:15:32 crc kubenswrapper[4698]: I1014 10:15:32.894407 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-api-0" Oct 14 10:15:33 crc kubenswrapper[4698]: I1014 10:15:33.047430 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="600d9dbf-48aa-4c54-9d47-9ffe8d383d6e" path="/var/lib/kubelet/pods/600d9dbf-48aa-4c54-9d47-9ffe8d383d6e/volumes" Oct 14 10:15:33 crc kubenswrapper[4698]: I1014 10:15:33.212911 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/manila-scheduler-0" Oct 14 10:15:33 crc kubenswrapper[4698]: I1014 10:15:33.404636 4698 generic.go:334] "Generic (PLEG): container finished" podID="523471ca-f061-410d-81ce-fbfd00b79bca" containerID="9133ec75b5805ccf389499bf4f0284068d8b3b503fa1bb02ea45803836fcbcc7" exitCode=0 Oct 14 10:15:33 crc kubenswrapper[4698]: I1014 10:15:33.404685 4698 generic.go:334] "Generic (PLEG): container finished" podID="523471ca-f061-410d-81ce-fbfd00b79bca" containerID="8f50482a824b1752d07f7fd427a302b4bce24b0029bc074ff51d5881f64fd85b" exitCode=143 Oct 14 10:15:33 crc kubenswrapper[4698]: I1014 10:15:33.405995 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"523471ca-f061-410d-81ce-fbfd00b79bca","Type":"ContainerDied","Data":"9133ec75b5805ccf389499bf4f0284068d8b3b503fa1bb02ea45803836fcbcc7"} Oct 14 10:15:33 crc kubenswrapper[4698]: I1014 10:15:33.406050 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" 
event={"ID":"523471ca-f061-410d-81ce-fbfd00b79bca","Type":"ContainerDied","Data":"8f50482a824b1752d07f7fd427a302b4bce24b0029bc074ff51d5881f64fd85b"} Oct 14 10:15:33 crc kubenswrapper[4698]: I1014 10:15:33.406061 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"523471ca-f061-410d-81ce-fbfd00b79bca","Type":"ContainerDied","Data":"a1784cbe3bf638af68ffbfb33b7409428c01c505c65204b9077529e79399842c"} Oct 14 10:15:33 crc kubenswrapper[4698]: I1014 10:15:33.406071 4698 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a1784cbe3bf638af68ffbfb33b7409428c01c505c65204b9077529e79399842c" Oct 14 10:15:33 crc kubenswrapper[4698]: I1014 10:15:33.443531 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Oct 14 10:15:33 crc kubenswrapper[4698]: I1014 10:15:33.552776 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/523471ca-f061-410d-81ce-fbfd00b79bca-config-data-custom\") pod \"523471ca-f061-410d-81ce-fbfd00b79bca\" (UID: \"523471ca-f061-410d-81ce-fbfd00b79bca\") " Oct 14 10:15:33 crc kubenswrapper[4698]: I1014 10:15:33.553145 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/523471ca-f061-410d-81ce-fbfd00b79bca-logs\") pod \"523471ca-f061-410d-81ce-fbfd00b79bca\" (UID: \"523471ca-f061-410d-81ce-fbfd00b79bca\") " Oct 14 10:15:33 crc kubenswrapper[4698]: I1014 10:15:33.553208 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/523471ca-f061-410d-81ce-fbfd00b79bca-combined-ca-bundle\") pod \"523471ca-f061-410d-81ce-fbfd00b79bca\" (UID: \"523471ca-f061-410d-81ce-fbfd00b79bca\") " Oct 14 10:15:33 crc kubenswrapper[4698]: I1014 10:15:33.553259 4698 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-w8tfh\" (UniqueName: \"kubernetes.io/projected/523471ca-f061-410d-81ce-fbfd00b79bca-kube-api-access-w8tfh\") pod \"523471ca-f061-410d-81ce-fbfd00b79bca\" (UID: \"523471ca-f061-410d-81ce-fbfd00b79bca\") " Oct 14 10:15:33 crc kubenswrapper[4698]: I1014 10:15:33.553352 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/523471ca-f061-410d-81ce-fbfd00b79bca-config-data\") pod \"523471ca-f061-410d-81ce-fbfd00b79bca\" (UID: \"523471ca-f061-410d-81ce-fbfd00b79bca\") " Oct 14 10:15:33 crc kubenswrapper[4698]: I1014 10:15:33.553399 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/523471ca-f061-410d-81ce-fbfd00b79bca-etc-machine-id\") pod \"523471ca-f061-410d-81ce-fbfd00b79bca\" (UID: \"523471ca-f061-410d-81ce-fbfd00b79bca\") " Oct 14 10:15:33 crc kubenswrapper[4698]: I1014 10:15:33.553466 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/523471ca-f061-410d-81ce-fbfd00b79bca-scripts\") pod \"523471ca-f061-410d-81ce-fbfd00b79bca\" (UID: \"523471ca-f061-410d-81ce-fbfd00b79bca\") " Oct 14 10:15:33 crc kubenswrapper[4698]: I1014 10:15:33.553522 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/523471ca-f061-410d-81ce-fbfd00b79bca-logs" (OuterVolumeSpecName: "logs") pod "523471ca-f061-410d-81ce-fbfd00b79bca" (UID: "523471ca-f061-410d-81ce-fbfd00b79bca"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 10:15:33 crc kubenswrapper[4698]: I1014 10:15:33.553915 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/523471ca-f061-410d-81ce-fbfd00b79bca-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "523471ca-f061-410d-81ce-fbfd00b79bca" (UID: "523471ca-f061-410d-81ce-fbfd00b79bca"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 14 10:15:33 crc kubenswrapper[4698]: I1014 10:15:33.554299 4698 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/523471ca-f061-410d-81ce-fbfd00b79bca-etc-machine-id\") on node \"crc\" DevicePath \"\"" Oct 14 10:15:33 crc kubenswrapper[4698]: I1014 10:15:33.554315 4698 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/523471ca-f061-410d-81ce-fbfd00b79bca-logs\") on node \"crc\" DevicePath \"\"" Oct 14 10:15:33 crc kubenswrapper[4698]: I1014 10:15:33.568457 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/523471ca-f061-410d-81ce-fbfd00b79bca-scripts" (OuterVolumeSpecName: "scripts") pod "523471ca-f061-410d-81ce-fbfd00b79bca" (UID: "523471ca-f061-410d-81ce-fbfd00b79bca"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 10:15:33 crc kubenswrapper[4698]: I1014 10:15:33.569923 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/523471ca-f061-410d-81ce-fbfd00b79bca-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "523471ca-f061-410d-81ce-fbfd00b79bca" (UID: "523471ca-f061-410d-81ce-fbfd00b79bca"). InnerVolumeSpecName "config-data-custom". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 10:15:33 crc kubenswrapper[4698]: I1014 10:15:33.570881 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/523471ca-f061-410d-81ce-fbfd00b79bca-kube-api-access-w8tfh" (OuterVolumeSpecName: "kube-api-access-w8tfh") pod "523471ca-f061-410d-81ce-fbfd00b79bca" (UID: "523471ca-f061-410d-81ce-fbfd00b79bca"). InnerVolumeSpecName "kube-api-access-w8tfh". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 10:15:33 crc kubenswrapper[4698]: I1014 10:15:33.618950 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/523471ca-f061-410d-81ce-fbfd00b79bca-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "523471ca-f061-410d-81ce-fbfd00b79bca" (UID: "523471ca-f061-410d-81ce-fbfd00b79bca"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 10:15:33 crc kubenswrapper[4698]: I1014 10:15:33.632083 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/523471ca-f061-410d-81ce-fbfd00b79bca-config-data" (OuterVolumeSpecName: "config-data") pod "523471ca-f061-410d-81ce-fbfd00b79bca" (UID: "523471ca-f061-410d-81ce-fbfd00b79bca"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 10:15:33 crc kubenswrapper[4698]: I1014 10:15:33.656995 4698 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/523471ca-f061-410d-81ce-fbfd00b79bca-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 14 10:15:33 crc kubenswrapper[4698]: I1014 10:15:33.657021 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w8tfh\" (UniqueName: \"kubernetes.io/projected/523471ca-f061-410d-81ce-fbfd00b79bca-kube-api-access-w8tfh\") on node \"crc\" DevicePath \"\"" Oct 14 10:15:33 crc kubenswrapper[4698]: I1014 10:15:33.657032 4698 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/523471ca-f061-410d-81ce-fbfd00b79bca-config-data\") on node \"crc\" DevicePath \"\"" Oct 14 10:15:33 crc kubenswrapper[4698]: I1014 10:15:33.657040 4698 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/523471ca-f061-410d-81ce-fbfd00b79bca-scripts\") on node \"crc\" DevicePath \"\"" Oct 14 10:15:33 crc kubenswrapper[4698]: I1014 10:15:33.657050 4698 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/523471ca-f061-410d-81ce-fbfd00b79bca-config-data-custom\") on node \"crc\" DevicePath \"\"" Oct 14 10:15:33 crc kubenswrapper[4698]: I1014 10:15:33.722188 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-api-0"] Oct 14 10:15:33 crc kubenswrapper[4698]: W1014 10:15:33.737638 4698 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod99f5e356_0b01_4991_b2b2_3e0456eba2e7.slice/crio-bb9d34124033409604d3196c2a4c17dc15523c136d415bb07c6dcbd923025538 WatchSource:0}: Error finding container bb9d34124033409604d3196c2a4c17dc15523c136d415bb07c6dcbd923025538: Status 404 returned error can't find the 
container with id bb9d34124033409604d3196c2a4c17dc15523c136d415bb07c6dcbd923025538 Oct 14 10:15:34 crc kubenswrapper[4698]: I1014 10:15:34.233732 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-66df6b94fb-sw6kf"] Oct 14 10:15:34 crc kubenswrapper[4698]: E1014 10:15:34.241438 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="523471ca-f061-410d-81ce-fbfd00b79bca" containerName="cinder-api-log" Oct 14 10:15:34 crc kubenswrapper[4698]: I1014 10:15:34.241469 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="523471ca-f061-410d-81ce-fbfd00b79bca" containerName="cinder-api-log" Oct 14 10:15:34 crc kubenswrapper[4698]: E1014 10:15:34.241503 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="523471ca-f061-410d-81ce-fbfd00b79bca" containerName="cinder-api" Oct 14 10:15:34 crc kubenswrapper[4698]: I1014 10:15:34.241510 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="523471ca-f061-410d-81ce-fbfd00b79bca" containerName="cinder-api" Oct 14 10:15:34 crc kubenswrapper[4698]: I1014 10:15:34.241723 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="523471ca-f061-410d-81ce-fbfd00b79bca" containerName="cinder-api-log" Oct 14 10:15:34 crc kubenswrapper[4698]: I1014 10:15:34.241745 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="523471ca-f061-410d-81ce-fbfd00b79bca" containerName="cinder-api" Oct 14 10:15:34 crc kubenswrapper[4698]: I1014 10:15:34.243053 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-66df6b94fb-sw6kf" Oct 14 10:15:34 crc kubenswrapper[4698]: I1014 10:15:34.249063 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-public-svc" Oct 14 10:15:34 crc kubenswrapper[4698]: I1014 10:15:34.249361 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-internal-svc" Oct 14 10:15:34 crc kubenswrapper[4698]: I1014 10:15:34.272225 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-66df6b94fb-sw6kf"] Oct 14 10:15:34 crc kubenswrapper[4698]: I1014 10:15:34.282834 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/35f476fe-d3af-4e73-bb7e-ff6a4919ccf7-config-data-custom\") pod \"barbican-api-66df6b94fb-sw6kf\" (UID: \"35f476fe-d3af-4e73-bb7e-ff6a4919ccf7\") " pod="openstack/barbican-api-66df6b94fb-sw6kf" Oct 14 10:15:34 crc kubenswrapper[4698]: I1014 10:15:34.282931 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/35f476fe-d3af-4e73-bb7e-ff6a4919ccf7-internal-tls-certs\") pod \"barbican-api-66df6b94fb-sw6kf\" (UID: \"35f476fe-d3af-4e73-bb7e-ff6a4919ccf7\") " pod="openstack/barbican-api-66df6b94fb-sw6kf" Oct 14 10:15:34 crc kubenswrapper[4698]: I1014 10:15:34.283025 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8xvsf\" (UniqueName: \"kubernetes.io/projected/35f476fe-d3af-4e73-bb7e-ff6a4919ccf7-kube-api-access-8xvsf\") pod \"barbican-api-66df6b94fb-sw6kf\" (UID: \"35f476fe-d3af-4e73-bb7e-ff6a4919ccf7\") " pod="openstack/barbican-api-66df6b94fb-sw6kf" Oct 14 10:15:34 crc kubenswrapper[4698]: I1014 10:15:34.283050 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"logs\" (UniqueName: \"kubernetes.io/empty-dir/35f476fe-d3af-4e73-bb7e-ff6a4919ccf7-logs\") pod \"barbican-api-66df6b94fb-sw6kf\" (UID: \"35f476fe-d3af-4e73-bb7e-ff6a4919ccf7\") " pod="openstack/barbican-api-66df6b94fb-sw6kf" Oct 14 10:15:34 crc kubenswrapper[4698]: I1014 10:15:34.283076 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/35f476fe-d3af-4e73-bb7e-ff6a4919ccf7-public-tls-certs\") pod \"barbican-api-66df6b94fb-sw6kf\" (UID: \"35f476fe-d3af-4e73-bb7e-ff6a4919ccf7\") " pod="openstack/barbican-api-66df6b94fb-sw6kf" Oct 14 10:15:34 crc kubenswrapper[4698]: I1014 10:15:34.283107 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/35f476fe-d3af-4e73-bb7e-ff6a4919ccf7-combined-ca-bundle\") pod \"barbican-api-66df6b94fb-sw6kf\" (UID: \"35f476fe-d3af-4e73-bb7e-ff6a4919ccf7\") " pod="openstack/barbican-api-66df6b94fb-sw6kf" Oct 14 10:15:34 crc kubenswrapper[4698]: I1014 10:15:34.283128 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/35f476fe-d3af-4e73-bb7e-ff6a4919ccf7-config-data\") pod \"barbican-api-66df6b94fb-sw6kf\" (UID: \"35f476fe-d3af-4e73-bb7e-ff6a4919ccf7\") " pod="openstack/barbican-api-66df6b94fb-sw6kf" Oct 14 10:15:34 crc kubenswrapper[4698]: I1014 10:15:34.385317 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8xvsf\" (UniqueName: \"kubernetes.io/projected/35f476fe-d3af-4e73-bb7e-ff6a4919ccf7-kube-api-access-8xvsf\") pod \"barbican-api-66df6b94fb-sw6kf\" (UID: \"35f476fe-d3af-4e73-bb7e-ff6a4919ccf7\") " pod="openstack/barbican-api-66df6b94fb-sw6kf" Oct 14 10:15:34 crc kubenswrapper[4698]: I1014 10:15:34.385361 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/35f476fe-d3af-4e73-bb7e-ff6a4919ccf7-logs\") pod \"barbican-api-66df6b94fb-sw6kf\" (UID: \"35f476fe-d3af-4e73-bb7e-ff6a4919ccf7\") " pod="openstack/barbican-api-66df6b94fb-sw6kf" Oct 14 10:15:34 crc kubenswrapper[4698]: I1014 10:15:34.385390 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/35f476fe-d3af-4e73-bb7e-ff6a4919ccf7-public-tls-certs\") pod \"barbican-api-66df6b94fb-sw6kf\" (UID: \"35f476fe-d3af-4e73-bb7e-ff6a4919ccf7\") " pod="openstack/barbican-api-66df6b94fb-sw6kf" Oct 14 10:15:34 crc kubenswrapper[4698]: I1014 10:15:34.385415 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/35f476fe-d3af-4e73-bb7e-ff6a4919ccf7-combined-ca-bundle\") pod \"barbican-api-66df6b94fb-sw6kf\" (UID: \"35f476fe-d3af-4e73-bb7e-ff6a4919ccf7\") " pod="openstack/barbican-api-66df6b94fb-sw6kf" Oct 14 10:15:34 crc kubenswrapper[4698]: I1014 10:15:34.385433 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/35f476fe-d3af-4e73-bb7e-ff6a4919ccf7-config-data\") pod \"barbican-api-66df6b94fb-sw6kf\" (UID: \"35f476fe-d3af-4e73-bb7e-ff6a4919ccf7\") " pod="openstack/barbican-api-66df6b94fb-sw6kf" Oct 14 10:15:34 crc kubenswrapper[4698]: I1014 10:15:34.385453 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/35f476fe-d3af-4e73-bb7e-ff6a4919ccf7-config-data-custom\") pod \"barbican-api-66df6b94fb-sw6kf\" (UID: \"35f476fe-d3af-4e73-bb7e-ff6a4919ccf7\") " pod="openstack/barbican-api-66df6b94fb-sw6kf" Oct 14 10:15:34 crc kubenswrapper[4698]: I1014 10:15:34.385515 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/35f476fe-d3af-4e73-bb7e-ff6a4919ccf7-internal-tls-certs\") pod \"barbican-api-66df6b94fb-sw6kf\" (UID: \"35f476fe-d3af-4e73-bb7e-ff6a4919ccf7\") " pod="openstack/barbican-api-66df6b94fb-sw6kf" Oct 14 10:15:34 crc kubenswrapper[4698]: I1014 10:15:34.389127 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/35f476fe-d3af-4e73-bb7e-ff6a4919ccf7-logs\") pod \"barbican-api-66df6b94fb-sw6kf\" (UID: \"35f476fe-d3af-4e73-bb7e-ff6a4919ccf7\") " pod="openstack/barbican-api-66df6b94fb-sw6kf" Oct 14 10:15:34 crc kubenswrapper[4698]: I1014 10:15:34.403149 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/35f476fe-d3af-4e73-bb7e-ff6a4919ccf7-internal-tls-certs\") pod \"barbican-api-66df6b94fb-sw6kf\" (UID: \"35f476fe-d3af-4e73-bb7e-ff6a4919ccf7\") " pod="openstack/barbican-api-66df6b94fb-sw6kf" Oct 14 10:15:34 crc kubenswrapper[4698]: I1014 10:15:34.407296 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/35f476fe-d3af-4e73-bb7e-ff6a4919ccf7-public-tls-certs\") pod \"barbican-api-66df6b94fb-sw6kf\" (UID: \"35f476fe-d3af-4e73-bb7e-ff6a4919ccf7\") " pod="openstack/barbican-api-66df6b94fb-sw6kf" Oct 14 10:15:34 crc kubenswrapper[4698]: I1014 10:15:34.413448 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/35f476fe-d3af-4e73-bb7e-ff6a4919ccf7-config-data-custom\") pod \"barbican-api-66df6b94fb-sw6kf\" (UID: \"35f476fe-d3af-4e73-bb7e-ff6a4919ccf7\") " pod="openstack/barbican-api-66df6b94fb-sw6kf" Oct 14 10:15:34 crc kubenswrapper[4698]: I1014 10:15:34.413991 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/35f476fe-d3af-4e73-bb7e-ff6a4919ccf7-combined-ca-bundle\") pod 
\"barbican-api-66df6b94fb-sw6kf\" (UID: \"35f476fe-d3af-4e73-bb7e-ff6a4919ccf7\") " pod="openstack/barbican-api-66df6b94fb-sw6kf" Oct 14 10:15:34 crc kubenswrapper[4698]: I1014 10:15:34.414660 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8xvsf\" (UniqueName: \"kubernetes.io/projected/35f476fe-d3af-4e73-bb7e-ff6a4919ccf7-kube-api-access-8xvsf\") pod \"barbican-api-66df6b94fb-sw6kf\" (UID: \"35f476fe-d3af-4e73-bb7e-ff6a4919ccf7\") " pod="openstack/barbican-api-66df6b94fb-sw6kf" Oct 14 10:15:34 crc kubenswrapper[4698]: I1014 10:15:34.414927 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/35f476fe-d3af-4e73-bb7e-ff6a4919ccf7-config-data\") pod \"barbican-api-66df6b94fb-sw6kf\" (UID: \"35f476fe-d3af-4e73-bb7e-ff6a4919ccf7\") " pod="openstack/barbican-api-66df6b94fb-sw6kf" Oct 14 10:15:34 crc kubenswrapper[4698]: I1014 10:15:34.477227 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Oct 14 10:15:34 crc kubenswrapper[4698]: I1014 10:15:34.477254 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-api-0" event={"ID":"99f5e356-0b01-4991-b2b2-3e0456eba2e7","Type":"ContainerStarted","Data":"bb9d34124033409604d3196c2a4c17dc15523c136d415bb07c6dcbd923025538"} Oct 14 10:15:34 crc kubenswrapper[4698]: I1014 10:15:34.606322 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-66df6b94fb-sw6kf" Oct 14 10:15:34 crc kubenswrapper[4698]: I1014 10:15:34.707460 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Oct 14 10:15:34 crc kubenswrapper[4698]: I1014 10:15:34.717477 4698 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-api-0"] Oct 14 10:15:34 crc kubenswrapper[4698]: I1014 10:15:34.737213 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Oct 14 10:15:34 crc kubenswrapper[4698]: I1014 10:15:34.739601 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Oct 14 10:15:34 crc kubenswrapper[4698]: I1014 10:15:34.744266 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Oct 14 10:15:34 crc kubenswrapper[4698]: I1014 10:15:34.744502 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-internal-svc" Oct 14 10:15:34 crc kubenswrapper[4698]: I1014 10:15:34.744618 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-public-svc" Oct 14 10:15:34 crc kubenswrapper[4698]: I1014 10:15:34.753731 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Oct 14 10:15:34 crc kubenswrapper[4698]: I1014 10:15:34.796223 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/78a024b7-16f4-4177-8b52-0cecbc173247-config-data-custom\") pod \"cinder-api-0\" (UID: \"78a024b7-16f4-4177-8b52-0cecbc173247\") " pod="openstack/cinder-api-0" Oct 14 10:15:34 crc kubenswrapper[4698]: I1014 10:15:34.796276 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/78a024b7-16f4-4177-8b52-0cecbc173247-config-data\") pod \"cinder-api-0\" 
(UID: \"78a024b7-16f4-4177-8b52-0cecbc173247\") " pod="openstack/cinder-api-0" Oct 14 10:15:34 crc kubenswrapper[4698]: I1014 10:15:34.796361 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/78a024b7-16f4-4177-8b52-0cecbc173247-scripts\") pod \"cinder-api-0\" (UID: \"78a024b7-16f4-4177-8b52-0cecbc173247\") " pod="openstack/cinder-api-0" Oct 14 10:15:34 crc kubenswrapper[4698]: I1014 10:15:34.796416 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/78a024b7-16f4-4177-8b52-0cecbc173247-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"78a024b7-16f4-4177-8b52-0cecbc173247\") " pod="openstack/cinder-api-0" Oct 14 10:15:34 crc kubenswrapper[4698]: I1014 10:15:34.796485 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/78a024b7-16f4-4177-8b52-0cecbc173247-public-tls-certs\") pod \"cinder-api-0\" (UID: \"78a024b7-16f4-4177-8b52-0cecbc173247\") " pod="openstack/cinder-api-0" Oct 14 10:15:34 crc kubenswrapper[4698]: I1014 10:15:34.796574 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/78a024b7-16f4-4177-8b52-0cecbc173247-logs\") pod \"cinder-api-0\" (UID: \"78a024b7-16f4-4177-8b52-0cecbc173247\") " pod="openstack/cinder-api-0" Oct 14 10:15:34 crc kubenswrapper[4698]: I1014 10:15:34.796634 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/78a024b7-16f4-4177-8b52-0cecbc173247-etc-machine-id\") pod \"cinder-api-0\" (UID: \"78a024b7-16f4-4177-8b52-0cecbc173247\") " pod="openstack/cinder-api-0" Oct 14 10:15:34 crc kubenswrapper[4698]: I1014 10:15:34.796712 4698 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/78a024b7-16f4-4177-8b52-0cecbc173247-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"78a024b7-16f4-4177-8b52-0cecbc173247\") " pod="openstack/cinder-api-0" Oct 14 10:15:34 crc kubenswrapper[4698]: I1014 10:15:34.796878 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m79r8\" (UniqueName: \"kubernetes.io/projected/78a024b7-16f4-4177-8b52-0cecbc173247-kube-api-access-m79r8\") pod \"cinder-api-0\" (UID: \"78a024b7-16f4-4177-8b52-0cecbc173247\") " pod="openstack/cinder-api-0" Oct 14 10:15:34 crc kubenswrapper[4698]: I1014 10:15:34.900881 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/78a024b7-16f4-4177-8b52-0cecbc173247-config-data-custom\") pod \"cinder-api-0\" (UID: \"78a024b7-16f4-4177-8b52-0cecbc173247\") " pod="openstack/cinder-api-0" Oct 14 10:15:34 crc kubenswrapper[4698]: I1014 10:15:34.900931 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/78a024b7-16f4-4177-8b52-0cecbc173247-config-data\") pod \"cinder-api-0\" (UID: \"78a024b7-16f4-4177-8b52-0cecbc173247\") " pod="openstack/cinder-api-0" Oct 14 10:15:34 crc kubenswrapper[4698]: I1014 10:15:34.900982 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/78a024b7-16f4-4177-8b52-0cecbc173247-scripts\") pod \"cinder-api-0\" (UID: \"78a024b7-16f4-4177-8b52-0cecbc173247\") " pod="openstack/cinder-api-0" Oct 14 10:15:34 crc kubenswrapper[4698]: I1014 10:15:34.901003 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/78a024b7-16f4-4177-8b52-0cecbc173247-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"78a024b7-16f4-4177-8b52-0cecbc173247\") " pod="openstack/cinder-api-0" Oct 14 10:15:34 crc kubenswrapper[4698]: I1014 10:15:34.901840 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/78a024b7-16f4-4177-8b52-0cecbc173247-public-tls-certs\") pod \"cinder-api-0\" (UID: \"78a024b7-16f4-4177-8b52-0cecbc173247\") " pod="openstack/cinder-api-0" Oct 14 10:15:34 crc kubenswrapper[4698]: I1014 10:15:34.901905 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/78a024b7-16f4-4177-8b52-0cecbc173247-logs\") pod \"cinder-api-0\" (UID: \"78a024b7-16f4-4177-8b52-0cecbc173247\") " pod="openstack/cinder-api-0" Oct 14 10:15:34 crc kubenswrapper[4698]: I1014 10:15:34.901929 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/78a024b7-16f4-4177-8b52-0cecbc173247-etc-machine-id\") pod \"cinder-api-0\" (UID: \"78a024b7-16f4-4177-8b52-0cecbc173247\") " pod="openstack/cinder-api-0" Oct 14 10:15:34 crc kubenswrapper[4698]: I1014 10:15:34.901967 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/78a024b7-16f4-4177-8b52-0cecbc173247-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"78a024b7-16f4-4177-8b52-0cecbc173247\") " pod="openstack/cinder-api-0" Oct 14 10:15:34 crc kubenswrapper[4698]: I1014 10:15:34.901995 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m79r8\" (UniqueName: \"kubernetes.io/projected/78a024b7-16f4-4177-8b52-0cecbc173247-kube-api-access-m79r8\") pod \"cinder-api-0\" (UID: \"78a024b7-16f4-4177-8b52-0cecbc173247\") " pod="openstack/cinder-api-0" Oct 14 10:15:34 crc 
kubenswrapper[4698]: I1014 10:15:34.902316 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/78a024b7-16f4-4177-8b52-0cecbc173247-etc-machine-id\") pod \"cinder-api-0\" (UID: \"78a024b7-16f4-4177-8b52-0cecbc173247\") " pod="openstack/cinder-api-0" Oct 14 10:15:34 crc kubenswrapper[4698]: I1014 10:15:34.902446 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/78a024b7-16f4-4177-8b52-0cecbc173247-logs\") pod \"cinder-api-0\" (UID: \"78a024b7-16f4-4177-8b52-0cecbc173247\") " pod="openstack/cinder-api-0" Oct 14 10:15:34 crc kubenswrapper[4698]: I1014 10:15:34.913666 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/78a024b7-16f4-4177-8b52-0cecbc173247-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"78a024b7-16f4-4177-8b52-0cecbc173247\") " pod="openstack/cinder-api-0" Oct 14 10:15:34 crc kubenswrapper[4698]: I1014 10:15:34.915342 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/78a024b7-16f4-4177-8b52-0cecbc173247-public-tls-certs\") pod \"cinder-api-0\" (UID: \"78a024b7-16f4-4177-8b52-0cecbc173247\") " pod="openstack/cinder-api-0" Oct 14 10:15:34 crc kubenswrapper[4698]: I1014 10:15:34.919812 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/78a024b7-16f4-4177-8b52-0cecbc173247-scripts\") pod \"cinder-api-0\" (UID: \"78a024b7-16f4-4177-8b52-0cecbc173247\") " pod="openstack/cinder-api-0" Oct 14 10:15:34 crc kubenswrapper[4698]: I1014 10:15:34.921681 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/78a024b7-16f4-4177-8b52-0cecbc173247-config-data\") pod \"cinder-api-0\" (UID: 
\"78a024b7-16f4-4177-8b52-0cecbc173247\") " pod="openstack/cinder-api-0" Oct 14 10:15:34 crc kubenswrapper[4698]: I1014 10:15:34.924747 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/78a024b7-16f4-4177-8b52-0cecbc173247-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"78a024b7-16f4-4177-8b52-0cecbc173247\") " pod="openstack/cinder-api-0" Oct 14 10:15:34 crc kubenswrapper[4698]: I1014 10:15:34.931424 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/78a024b7-16f4-4177-8b52-0cecbc173247-config-data-custom\") pod \"cinder-api-0\" (UID: \"78a024b7-16f4-4177-8b52-0cecbc173247\") " pod="openstack/cinder-api-0" Oct 14 10:15:34 crc kubenswrapper[4698]: I1014 10:15:34.934424 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m79r8\" (UniqueName: \"kubernetes.io/projected/78a024b7-16f4-4177-8b52-0cecbc173247-kube-api-access-m79r8\") pod \"cinder-api-0\" (UID: \"78a024b7-16f4-4177-8b52-0cecbc173247\") " pod="openstack/cinder-api-0" Oct 14 10:15:35 crc kubenswrapper[4698]: I1014 10:15:35.044959 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="523471ca-f061-410d-81ce-fbfd00b79bca" path="/var/lib/kubelet/pods/523471ca-f061-410d-81ce-fbfd00b79bca/volumes" Oct 14 10:15:35 crc kubenswrapper[4698]: I1014 10:15:35.082546 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Oct 14 10:15:35 crc kubenswrapper[4698]: I1014 10:15:35.264815 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-66df6b94fb-sw6kf"] Oct 14 10:15:35 crc kubenswrapper[4698]: I1014 10:15:35.524932 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-api-0" event={"ID":"99f5e356-0b01-4991-b2b2-3e0456eba2e7","Type":"ContainerStarted","Data":"c93720413dec6382f0b1ac582b6dce03f7f36ed75601744d441c9e8a1d9a5992"} Oct 14 10:15:35 crc kubenswrapper[4698]: I1014 10:15:35.525274 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-api-0" event={"ID":"99f5e356-0b01-4991-b2b2-3e0456eba2e7","Type":"ContainerStarted","Data":"32f8d5143153349ac769c8895870b0118e4a950c335bc86f9b6e17a760c69d24"} Oct 14 10:15:35 crc kubenswrapper[4698]: I1014 10:15:35.525323 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/manila-api-0" Oct 14 10:15:35 crc kubenswrapper[4698]: I1014 10:15:35.528570 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-66df6b94fb-sw6kf" event={"ID":"35f476fe-d3af-4e73-bb7e-ff6a4919ccf7","Type":"ContainerStarted","Data":"b8be4e69725f6e42d6651e4f8ed9bf05a63bfa2465e8aa29acc8d6ecaddb6a55"} Oct 14 10:15:35 crc kubenswrapper[4698]: I1014 10:15:35.552385 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/manila-api-0" podStartSLOduration=3.552368111 podStartE2EDuration="3.552368111s" podCreationTimestamp="2025-10-14 10:15:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 10:15:35.544886746 +0000 UTC m=+1117.242186162" watchObservedRunningTime="2025-10-14 10:15:35.552368111 +0000 UTC m=+1117.249667527" Oct 14 10:15:35 crc kubenswrapper[4698]: I1014 10:15:35.780835 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Oct 
14 10:15:35 crc kubenswrapper[4698]: W1014 10:15:35.783332 4698 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod78a024b7_16f4_4177_8b52_0cecbc173247.slice/crio-715e8bd50f1db9ca394fa61fd3fb84bf77ec34f9fec2c65fe7dfa381a58aea32 WatchSource:0}: Error finding container 715e8bd50f1db9ca394fa61fd3fb84bf77ec34f9fec2c65fe7dfa381a58aea32: Status 404 returned error can't find the container with id 715e8bd50f1db9ca394fa61fd3fb84bf77ec34f9fec2c65fe7dfa381a58aea32 Oct 14 10:15:36 crc kubenswrapper[4698]: I1014 10:15:36.161411 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Oct 14 10:15:36 crc kubenswrapper[4698]: I1014 10:15:36.185796 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-volume-volume1-0" Oct 14 10:15:36 crc kubenswrapper[4698]: I1014 10:15:36.315693 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-6cc5fb5d9d-rbb5x" Oct 14 10:15:36 crc kubenswrapper[4698]: I1014 10:15:36.434386 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-df4467494-hnvp2" Oct 14 10:15:36 crc kubenswrapper[4698]: I1014 10:15:36.518310 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Oct 14 10:15:36 crc kubenswrapper[4698]: I1014 10:15:36.528339 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-volume-volume1-0" Oct 14 10:15:36 crc kubenswrapper[4698]: I1014 10:15:36.532819 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5865f9d689-hbs4g" Oct 14 10:15:36 crc kubenswrapper[4698]: I1014 10:15:36.563252 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-backup-0" Oct 14 10:15:36 crc kubenswrapper[4698]: I1014 
10:15:36.564582 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"78a024b7-16f4-4177-8b52-0cecbc173247","Type":"ContainerStarted","Data":"715e8bd50f1db9ca394fa61fd3fb84bf77ec34f9fec2c65fe7dfa381a58aea32"} Oct 14 10:15:36 crc kubenswrapper[4698]: I1014 10:15:36.574576 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-66df6b94fb-sw6kf" event={"ID":"35f476fe-d3af-4e73-bb7e-ff6a4919ccf7","Type":"ContainerStarted","Data":"178ebdfe9c9925a7774d36296873dbc2db93a178f2bb21ddaf23f4a670f2d404"} Oct 14 10:15:36 crc kubenswrapper[4698]: I1014 10:15:36.574622 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-66df6b94fb-sw6kf" Oct 14 10:15:36 crc kubenswrapper[4698]: I1014 10:15:36.574636 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-66df6b94fb-sw6kf" event={"ID":"35f476fe-d3af-4e73-bb7e-ff6a4919ccf7","Type":"ContainerStarted","Data":"ab893bac46f265af0493408d80bed26438427fa6753e9c0581ae6d30031c089d"} Oct 14 10:15:36 crc kubenswrapper[4698]: I1014 10:15:36.576812 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-66df6b94fb-sw6kf" Oct 14 10:15:36 crc kubenswrapper[4698]: I1014 10:15:36.721831 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-84b966f6c9-wwxgc"] Oct 14 10:15:36 crc kubenswrapper[4698]: I1014 10:15:36.722103 4698 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-84b966f6c9-wwxgc" podUID="a9c6fefd-814f-4f20-8a30-b76d3b6a43ba" containerName="dnsmasq-dns" containerID="cri-o://50816bb6c791d0061cb027708547cabdce9ae96816de3a6d22de87d758cdf8fd" gracePeriod=10 Oct 14 10:15:36 crc kubenswrapper[4698]: I1014 10:15:36.766684 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Oct 14 10:15:36 crc kubenswrapper[4698]: I1014 10:15:36.806796 4698 
kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-volume-volume1-0"] Oct 14 10:15:36 crc kubenswrapper[4698]: I1014 10:15:36.806866 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-66df6b94fb-sw6kf" podStartSLOduration=2.806840339 podStartE2EDuration="2.806840339s" podCreationTimestamp="2025-10-14 10:15:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 10:15:36.705171247 +0000 UTC m=+1118.402470663" watchObservedRunningTime="2025-10-14 10:15:36.806840339 +0000 UTC m=+1118.504139785" Oct 14 10:15:37 crc kubenswrapper[4698]: I1014 10:15:37.003632 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-backup-0" Oct 14 10:15:37 crc kubenswrapper[4698]: I1014 10:15:37.168993 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-6cc5fb5d9d-rbb5x" Oct 14 10:15:37 crc kubenswrapper[4698]: I1014 10:15:37.591431 4698 generic.go:334] "Generic (PLEG): container finished" podID="a9c6fefd-814f-4f20-8a30-b76d3b6a43ba" containerID="50816bb6c791d0061cb027708547cabdce9ae96816de3a6d22de87d758cdf8fd" exitCode=0 Oct 14 10:15:37 crc kubenswrapper[4698]: I1014 10:15:37.591999 4698 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-volume-volume1-0" podUID="d26415c7-42ea-464b-910f-c1b25784fde3" containerName="cinder-volume" containerID="cri-o://200ca86c8378d80fc5d18f46220f35a1589a1a5597b5a8d22d1f5d17be13a897" gracePeriod=30 Oct 14 10:15:37 crc kubenswrapper[4698]: I1014 10:15:37.591510 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-84b966f6c9-wwxgc" event={"ID":"a9c6fefd-814f-4f20-8a30-b76d3b6a43ba","Type":"ContainerDied","Data":"50816bb6c791d0061cb027708547cabdce9ae96816de3a6d22de87d758cdf8fd"} Oct 14 10:15:37 crc kubenswrapper[4698]: I1014 
10:15:37.593316 4698 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="9dea0f58-0975-4dc1-9459-a72b8151027b" containerName="cinder-scheduler" containerID="cri-o://d092922ecb801a8a918acb82b0cdf4e3360c2dbaa168f36e409c0b47d66f5378" gracePeriod=30 Oct 14 10:15:37 crc kubenswrapper[4698]: I1014 10:15:37.593468 4698 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-volume-volume1-0" podUID="d26415c7-42ea-464b-910f-c1b25784fde3" containerName="probe" containerID="cri-o://c8ed817c301154ab39e341849c0d3d8e7db3d34360d2b0b52e5a24a735e66584" gracePeriod=30 Oct 14 10:15:37 crc kubenswrapper[4698]: I1014 10:15:37.593685 4698 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="9dea0f58-0975-4dc1-9459-a72b8151027b" containerName="probe" containerID="cri-o://d339aa902ab22c31c1921b5bff6d477591b762bae03bf9901cabae11bd29ee57" gracePeriod=30 Oct 14 10:15:37 crc kubenswrapper[4698]: I1014 10:15:37.651014 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-backup-0"] Oct 14 10:15:38 crc kubenswrapper[4698]: I1014 10:15:38.546739 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-5cf664b6c9-t6wfc" Oct 14 10:15:38 crc kubenswrapper[4698]: I1014 10:15:38.631186 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-df4467494-hnvp2"] Oct 14 10:15:38 crc kubenswrapper[4698]: I1014 10:15:38.631487 4698 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-df4467494-hnvp2" podUID="a64a5c90-1d3c-47da-9d3c-1ca749c00bad" containerName="neutron-api" containerID="cri-o://1cef027ac4a153809efa7b4630e617ad142f7242e6bd7a57ec9bf78b8aa9f9c9" gracePeriod=30 Oct 14 10:15:38 crc kubenswrapper[4698]: I1014 10:15:38.631508 4698 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openstack/neutron-df4467494-hnvp2" podUID="a64a5c90-1d3c-47da-9d3c-1ca749c00bad" containerName="neutron-httpd" containerID="cri-o://135ee1279718ac6f4b4c9e89f3787f41db1c20d9dd50fe00fff08e50d0bc18e0" gracePeriod=30 Oct 14 10:15:38 crc kubenswrapper[4698]: I1014 10:15:38.644230 4698 generic.go:334] "Generic (PLEG): container finished" podID="9dea0f58-0975-4dc1-9459-a72b8151027b" containerID="d092922ecb801a8a918acb82b0cdf4e3360c2dbaa168f36e409c0b47d66f5378" exitCode=0 Oct 14 10:15:38 crc kubenswrapper[4698]: I1014 10:15:38.644315 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"9dea0f58-0975-4dc1-9459-a72b8151027b","Type":"ContainerDied","Data":"d092922ecb801a8a918acb82b0cdf4e3360c2dbaa168f36e409c0b47d66f5378"} Oct 14 10:15:38 crc kubenswrapper[4698]: I1014 10:15:38.670493 4698 generic.go:334] "Generic (PLEG): container finished" podID="d26415c7-42ea-464b-910f-c1b25784fde3" containerID="200ca86c8378d80fc5d18f46220f35a1589a1a5597b5a8d22d1f5d17be13a897" exitCode=0 Oct 14 10:15:38 crc kubenswrapper[4698]: I1014 10:15:38.672079 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-volume-volume1-0" event={"ID":"d26415c7-42ea-464b-910f-c1b25784fde3","Type":"ContainerDied","Data":"200ca86c8378d80fc5d18f46220f35a1589a1a5597b5a8d22d1f5d17be13a897"} Oct 14 10:15:38 crc kubenswrapper[4698]: I1014 10:15:38.672325 4698 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-backup-0" podUID="84a7547f-d165-4381-a7f3-8b050ee39fbf" containerName="cinder-backup" containerID="cri-o://d97a24ce30967412fecd6ae2a41002bfa33633e4fbb9e84bfdc0a51943c5a293" gracePeriod=30 Oct 14 10:15:38 crc kubenswrapper[4698]: I1014 10:15:38.672430 4698 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-backup-0" podUID="84a7547f-d165-4381-a7f3-8b050ee39fbf" containerName="probe" 
containerID="cri-o://5d5aa297aa05d4045d4baa23201aa34efedc024e6eb6303918a106e11d3fc92a" gracePeriod=30 Oct 14 10:15:39 crc kubenswrapper[4698]: I1014 10:15:39.440554 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-proxy-58759987c5-vr6vx"] Oct 14 10:15:39 crc kubenswrapper[4698]: I1014 10:15:39.443085 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-58759987c5-vr6vx" Oct 14 10:15:39 crc kubenswrapper[4698]: I1014 10:15:39.445329 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-internal-svc" Oct 14 10:15:39 crc kubenswrapper[4698]: I1014 10:15:39.445611 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-public-svc" Oct 14 10:15:39 crc kubenswrapper[4698]: I1014 10:15:39.446518 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Oct 14 10:15:39 crc kubenswrapper[4698]: I1014 10:15:39.456516 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-58759987c5-vr6vx"] Oct 14 10:15:39 crc kubenswrapper[4698]: I1014 10:15:39.543202 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3a1278dc-c5df-49ed-8c8e-6284281cf240-internal-tls-certs\") pod \"swift-proxy-58759987c5-vr6vx\" (UID: \"3a1278dc-c5df-49ed-8c8e-6284281cf240\") " pod="openstack/swift-proxy-58759987c5-vr6vx" Oct 14 10:15:39 crc kubenswrapper[4698]: I1014 10:15:39.543360 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qdx6p\" (UniqueName: \"kubernetes.io/projected/3a1278dc-c5df-49ed-8c8e-6284281cf240-kube-api-access-qdx6p\") pod \"swift-proxy-58759987c5-vr6vx\" (UID: \"3a1278dc-c5df-49ed-8c8e-6284281cf240\") " pod="openstack/swift-proxy-58759987c5-vr6vx" Oct 14 10:15:39 crc kubenswrapper[4698]: I1014 
10:15:39.543392 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3a1278dc-c5df-49ed-8c8e-6284281cf240-config-data\") pod \"swift-proxy-58759987c5-vr6vx\" (UID: \"3a1278dc-c5df-49ed-8c8e-6284281cf240\") " pod="openstack/swift-proxy-58759987c5-vr6vx" Oct 14 10:15:39 crc kubenswrapper[4698]: I1014 10:15:39.543423 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/3a1278dc-c5df-49ed-8c8e-6284281cf240-etc-swift\") pod \"swift-proxy-58759987c5-vr6vx\" (UID: \"3a1278dc-c5df-49ed-8c8e-6284281cf240\") " pod="openstack/swift-proxy-58759987c5-vr6vx" Oct 14 10:15:39 crc kubenswrapper[4698]: I1014 10:15:39.543557 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3a1278dc-c5df-49ed-8c8e-6284281cf240-log-httpd\") pod \"swift-proxy-58759987c5-vr6vx\" (UID: \"3a1278dc-c5df-49ed-8c8e-6284281cf240\") " pod="openstack/swift-proxy-58759987c5-vr6vx" Oct 14 10:15:39 crc kubenswrapper[4698]: I1014 10:15:39.543579 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3a1278dc-c5df-49ed-8c8e-6284281cf240-public-tls-certs\") pod \"swift-proxy-58759987c5-vr6vx\" (UID: \"3a1278dc-c5df-49ed-8c8e-6284281cf240\") " pod="openstack/swift-proxy-58759987c5-vr6vx" Oct 14 10:15:39 crc kubenswrapper[4698]: I1014 10:15:39.543641 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3a1278dc-c5df-49ed-8c8e-6284281cf240-combined-ca-bundle\") pod \"swift-proxy-58759987c5-vr6vx\" (UID: \"3a1278dc-c5df-49ed-8c8e-6284281cf240\") " pod="openstack/swift-proxy-58759987c5-vr6vx" Oct 14 10:15:39 crc 
kubenswrapper[4698]: I1014 10:15:39.543671 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3a1278dc-c5df-49ed-8c8e-6284281cf240-run-httpd\") pod \"swift-proxy-58759987c5-vr6vx\" (UID: \"3a1278dc-c5df-49ed-8c8e-6284281cf240\") " pod="openstack/swift-proxy-58759987c5-vr6vx" Oct 14 10:15:39 crc kubenswrapper[4698]: I1014 10:15:39.653377 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3a1278dc-c5df-49ed-8c8e-6284281cf240-log-httpd\") pod \"swift-proxy-58759987c5-vr6vx\" (UID: \"3a1278dc-c5df-49ed-8c8e-6284281cf240\") " pod="openstack/swift-proxy-58759987c5-vr6vx" Oct 14 10:15:39 crc kubenswrapper[4698]: I1014 10:15:39.653410 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3a1278dc-c5df-49ed-8c8e-6284281cf240-public-tls-certs\") pod \"swift-proxy-58759987c5-vr6vx\" (UID: \"3a1278dc-c5df-49ed-8c8e-6284281cf240\") " pod="openstack/swift-proxy-58759987c5-vr6vx" Oct 14 10:15:39 crc kubenswrapper[4698]: I1014 10:15:39.653462 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3a1278dc-c5df-49ed-8c8e-6284281cf240-combined-ca-bundle\") pod \"swift-proxy-58759987c5-vr6vx\" (UID: \"3a1278dc-c5df-49ed-8c8e-6284281cf240\") " pod="openstack/swift-proxy-58759987c5-vr6vx" Oct 14 10:15:39 crc kubenswrapper[4698]: I1014 10:15:39.653483 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3a1278dc-c5df-49ed-8c8e-6284281cf240-run-httpd\") pod \"swift-proxy-58759987c5-vr6vx\" (UID: \"3a1278dc-c5df-49ed-8c8e-6284281cf240\") " pod="openstack/swift-proxy-58759987c5-vr6vx" Oct 14 10:15:39 crc kubenswrapper[4698]: I1014 10:15:39.653561 4698 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3a1278dc-c5df-49ed-8c8e-6284281cf240-internal-tls-certs\") pod \"swift-proxy-58759987c5-vr6vx\" (UID: \"3a1278dc-c5df-49ed-8c8e-6284281cf240\") " pod="openstack/swift-proxy-58759987c5-vr6vx" Oct 14 10:15:39 crc kubenswrapper[4698]: I1014 10:15:39.653601 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qdx6p\" (UniqueName: \"kubernetes.io/projected/3a1278dc-c5df-49ed-8c8e-6284281cf240-kube-api-access-qdx6p\") pod \"swift-proxy-58759987c5-vr6vx\" (UID: \"3a1278dc-c5df-49ed-8c8e-6284281cf240\") " pod="openstack/swift-proxy-58759987c5-vr6vx" Oct 14 10:15:39 crc kubenswrapper[4698]: I1014 10:15:39.653618 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3a1278dc-c5df-49ed-8c8e-6284281cf240-config-data\") pod \"swift-proxy-58759987c5-vr6vx\" (UID: \"3a1278dc-c5df-49ed-8c8e-6284281cf240\") " pod="openstack/swift-proxy-58759987c5-vr6vx" Oct 14 10:15:39 crc kubenswrapper[4698]: I1014 10:15:39.653645 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/3a1278dc-c5df-49ed-8c8e-6284281cf240-etc-swift\") pod \"swift-proxy-58759987c5-vr6vx\" (UID: \"3a1278dc-c5df-49ed-8c8e-6284281cf240\") " pod="openstack/swift-proxy-58759987c5-vr6vx" Oct 14 10:15:39 crc kubenswrapper[4698]: I1014 10:15:39.657320 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3a1278dc-c5df-49ed-8c8e-6284281cf240-log-httpd\") pod \"swift-proxy-58759987c5-vr6vx\" (UID: \"3a1278dc-c5df-49ed-8c8e-6284281cf240\") " pod="openstack/swift-proxy-58759987c5-vr6vx" Oct 14 10:15:39 crc kubenswrapper[4698]: I1014 10:15:39.658247 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3a1278dc-c5df-49ed-8c8e-6284281cf240-run-httpd\") pod \"swift-proxy-58759987c5-vr6vx\" (UID: \"3a1278dc-c5df-49ed-8c8e-6284281cf240\") " pod="openstack/swift-proxy-58759987c5-vr6vx" Oct 14 10:15:39 crc kubenswrapper[4698]: I1014 10:15:39.661460 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/3a1278dc-c5df-49ed-8c8e-6284281cf240-etc-swift\") pod \"swift-proxy-58759987c5-vr6vx\" (UID: \"3a1278dc-c5df-49ed-8c8e-6284281cf240\") " pod="openstack/swift-proxy-58759987c5-vr6vx" Oct 14 10:15:39 crc kubenswrapper[4698]: I1014 10:15:39.664964 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3a1278dc-c5df-49ed-8c8e-6284281cf240-config-data\") pod \"swift-proxy-58759987c5-vr6vx\" (UID: \"3a1278dc-c5df-49ed-8c8e-6284281cf240\") " pod="openstack/swift-proxy-58759987c5-vr6vx" Oct 14 10:15:39 crc kubenswrapper[4698]: I1014 10:15:39.667863 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3a1278dc-c5df-49ed-8c8e-6284281cf240-public-tls-certs\") pod \"swift-proxy-58759987c5-vr6vx\" (UID: \"3a1278dc-c5df-49ed-8c8e-6284281cf240\") " pod="openstack/swift-proxy-58759987c5-vr6vx" Oct 14 10:15:39 crc kubenswrapper[4698]: I1014 10:15:39.670272 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3a1278dc-c5df-49ed-8c8e-6284281cf240-internal-tls-certs\") pod \"swift-proxy-58759987c5-vr6vx\" (UID: \"3a1278dc-c5df-49ed-8c8e-6284281cf240\") " pod="openstack/swift-proxy-58759987c5-vr6vx" Oct 14 10:15:39 crc kubenswrapper[4698]: I1014 10:15:39.676212 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3a1278dc-c5df-49ed-8c8e-6284281cf240-combined-ca-bundle\") 
pod \"swift-proxy-58759987c5-vr6vx\" (UID: \"3a1278dc-c5df-49ed-8c8e-6284281cf240\") " pod="openstack/swift-proxy-58759987c5-vr6vx" Oct 14 10:15:39 crc kubenswrapper[4698]: I1014 10:15:39.678463 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qdx6p\" (UniqueName: \"kubernetes.io/projected/3a1278dc-c5df-49ed-8c8e-6284281cf240-kube-api-access-qdx6p\") pod \"swift-proxy-58759987c5-vr6vx\" (UID: \"3a1278dc-c5df-49ed-8c8e-6284281cf240\") " pod="openstack/swift-proxy-58759987c5-vr6vx" Oct 14 10:15:39 crc kubenswrapper[4698]: I1014 10:15:39.752931 4698 generic.go:334] "Generic (PLEG): container finished" podID="9dea0f58-0975-4dc1-9459-a72b8151027b" containerID="d339aa902ab22c31c1921b5bff6d477591b762bae03bf9901cabae11bd29ee57" exitCode=0 Oct 14 10:15:39 crc kubenswrapper[4698]: I1014 10:15:39.753007 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"9dea0f58-0975-4dc1-9459-a72b8151027b","Type":"ContainerDied","Data":"d339aa902ab22c31c1921b5bff6d477591b762bae03bf9901cabae11bd29ee57"} Oct 14 10:15:39 crc kubenswrapper[4698]: I1014 10:15:39.759094 4698 generic.go:334] "Generic (PLEG): container finished" podID="a64a5c90-1d3c-47da-9d3c-1ca749c00bad" containerID="135ee1279718ac6f4b4c9e89f3787f41db1c20d9dd50fe00fff08e50d0bc18e0" exitCode=0 Oct 14 10:15:39 crc kubenswrapper[4698]: I1014 10:15:39.759348 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-df4467494-hnvp2" event={"ID":"a64a5c90-1d3c-47da-9d3c-1ca749c00bad","Type":"ContainerDied","Data":"135ee1279718ac6f4b4c9e89f3787f41db1c20d9dd50fe00fff08e50d0bc18e0"} Oct 14 10:15:39 crc kubenswrapper[4698]: I1014 10:15:39.761626 4698 generic.go:334] "Generic (PLEG): container finished" podID="d26415c7-42ea-464b-910f-c1b25784fde3" containerID="c8ed817c301154ab39e341849c0d3d8e7db3d34360d2b0b52e5a24a735e66584" exitCode=0 Oct 14 10:15:39 crc kubenswrapper[4698]: I1014 10:15:39.761673 4698 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack/cinder-volume-volume1-0" event={"ID":"d26415c7-42ea-464b-910f-c1b25784fde3","Type":"ContainerDied","Data":"c8ed817c301154ab39e341849c0d3d8e7db3d34360d2b0b52e5a24a735e66584"} Oct 14 10:15:39 crc kubenswrapper[4698]: I1014 10:15:39.762377 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-58759987c5-vr6vx" Oct 14 10:15:40 crc kubenswrapper[4698]: I1014 10:15:40.059015 4698 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-7b567dfd5d-nvwrp" podUID="ee140165-8d8d-426c-b33f-5803bb0a7ad1" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.149:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.149:8443: connect: connection refused" Oct 14 10:15:40 crc kubenswrapper[4698]: I1014 10:15:40.059490 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-7b567dfd5d-nvwrp" Oct 14 10:15:40 crc kubenswrapper[4698]: I1014 10:15:40.779410 4698 generic.go:334] "Generic (PLEG): container finished" podID="84a7547f-d165-4381-a7f3-8b050ee39fbf" containerID="5d5aa297aa05d4045d4baa23201aa34efedc024e6eb6303918a106e11d3fc92a" exitCode=0 Oct 14 10:15:40 crc kubenswrapper[4698]: I1014 10:15:40.779437 4698 generic.go:334] "Generic (PLEG): container finished" podID="84a7547f-d165-4381-a7f3-8b050ee39fbf" containerID="d97a24ce30967412fecd6ae2a41002bfa33633e4fbb9e84bfdc0a51943c5a293" exitCode=0 Oct 14 10:15:40 crc kubenswrapper[4698]: I1014 10:15:40.779457 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-backup-0" event={"ID":"84a7547f-d165-4381-a7f3-8b050ee39fbf","Type":"ContainerDied","Data":"5d5aa297aa05d4045d4baa23201aa34efedc024e6eb6303918a106e11d3fc92a"} Oct 14 10:15:40 crc kubenswrapper[4698]: I1014 10:15:40.779484 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-backup-0" 
event={"ID":"84a7547f-d165-4381-a7f3-8b050ee39fbf","Type":"ContainerDied","Data":"d97a24ce30967412fecd6ae2a41002bfa33633e4fbb9e84bfdc0a51943c5a293"} Oct 14 10:15:41 crc kubenswrapper[4698]: I1014 10:15:41.287875 4698 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-84b966f6c9-wwxgc" podUID="a9c6fefd-814f-4f20-8a30-b76d3b6a43ba" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.161:5353: connect: connection refused" Oct 14 10:15:43 crc kubenswrapper[4698]: I1014 10:15:43.859850 4698 generic.go:334] "Generic (PLEG): container finished" podID="a64a5c90-1d3c-47da-9d3c-1ca749c00bad" containerID="1cef027ac4a153809efa7b4630e617ad142f7242e6bd7a57ec9bf78b8aa9f9c9" exitCode=0 Oct 14 10:15:43 crc kubenswrapper[4698]: I1014 10:15:43.860193 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-df4467494-hnvp2" event={"ID":"a64a5c90-1d3c-47da-9d3c-1ca749c00bad","Type":"ContainerDied","Data":"1cef027ac4a153809efa7b4630e617ad142f7242e6bd7a57ec9bf78b8aa9f9c9"} Oct 14 10:15:44 crc kubenswrapper[4698]: I1014 10:15:44.880986 4698 generic.go:334] "Generic (PLEG): container finished" podID="ee140165-8d8d-426c-b33f-5803bb0a7ad1" containerID="393471ee803b2f6bdb94dbb502c32fa759670f44814d5f995e9836fa400b1b05" exitCode=137 Oct 14 10:15:44 crc kubenswrapper[4698]: I1014 10:15:44.881578 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7b567dfd5d-nvwrp" event={"ID":"ee140165-8d8d-426c-b33f-5803bb0a7ad1","Type":"ContainerDied","Data":"393471ee803b2f6bdb94dbb502c32fa759670f44814d5f995e9836fa400b1b05"} Oct 14 10:15:45 crc kubenswrapper[4698]: I1014 10:15:45.372186 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Oct 14 10:15:45 crc kubenswrapper[4698]: I1014 10:15:45.372848 4698 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="339c0475-be6d-48a1-af88-8c3f55eaf50a" 
containerName="glance-log" containerID="cri-o://fe1ae4f98d73812377657f7d3d462daa5749aeb44eca817da5e339d837c16a39" gracePeriod=30 Oct 14 10:15:45 crc kubenswrapper[4698]: I1014 10:15:45.373000 4698 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="339c0475-be6d-48a1-af88-8c3f55eaf50a" containerName="glance-httpd" containerID="cri-o://386102c65388fad9deb6f3de090829e956a0f05bec6707acaabf79a9eb363e43" gracePeriod=30 Oct 14 10:15:45 crc kubenswrapper[4698]: I1014 10:15:45.385441 4698 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/glance-default-external-api-0" podUID="339c0475-be6d-48a1-af88-8c3f55eaf50a" containerName="glance-log" probeResult="failure" output="Get \"https://10.217.0.156:9292/healthcheck\": EOF" Oct 14 10:15:45 crc kubenswrapper[4698]: I1014 10:15:45.389715 4698 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/glance-default-external-api-0" podUID="339c0475-be6d-48a1-af88-8c3f55eaf50a" containerName="glance-log" probeResult="failure" output="Get \"https://10.217.0.156:9292/healthcheck\": EOF" Oct 14 10:15:45 crc kubenswrapper[4698]: I1014 10:15:45.897401 4698 generic.go:334] "Generic (PLEG): container finished" podID="339c0475-be6d-48a1-af88-8c3f55eaf50a" containerID="fe1ae4f98d73812377657f7d3d462daa5749aeb44eca817da5e339d837c16a39" exitCode=143 Oct 14 10:15:45 crc kubenswrapper[4698]: I1014 10:15:45.897462 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"339c0475-be6d-48a1-af88-8c3f55eaf50a","Type":"ContainerDied","Data":"fe1ae4f98d73812377657f7d3d462daa5749aeb44eca817da5e339d837c16a39"} Oct 14 10:15:45 crc kubenswrapper[4698]: I1014 10:15:45.974005 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/manila-scheduler-0" Oct 14 10:15:46 crc kubenswrapper[4698]: I1014 10:15:46.077999 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/manila-scheduler-0"] Oct 14 10:15:46 crc kubenswrapper[4698]: I1014 10:15:46.486791 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-66df6b94fb-sw6kf" Oct 14 10:15:46 crc kubenswrapper[4698]: I1014 10:15:46.578877 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-66df6b94fb-sw6kf" Oct 14 10:15:46 crc kubenswrapper[4698]: I1014 10:15:46.649775 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-6cc5fb5d9d-rbb5x"] Oct 14 10:15:46 crc kubenswrapper[4698]: I1014 10:15:46.650254 4698 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-6cc5fb5d9d-rbb5x" podUID="53ced7bd-2ae6-4e55-8ea2-395d6aebf185" containerName="barbican-api-log" containerID="cri-o://38d577e303af2cd9edc188a6f03459306b89b4bbb79048439a6dce9fa059d51d" gracePeriod=30 Oct 14 10:15:46 crc kubenswrapper[4698]: I1014 10:15:46.650752 4698 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-6cc5fb5d9d-rbb5x" podUID="53ced7bd-2ae6-4e55-8ea2-395d6aebf185" containerName="barbican-api" containerID="cri-o://98a62b4299ff785f4988afd479c08919e2e745c55cdb25d37f55f7fd1d73e0fc" gracePeriod=30 Oct 14 10:15:46 crc kubenswrapper[4698]: I1014 10:15:46.913293 4698 generic.go:334] "Generic (PLEG): container finished" podID="53ced7bd-2ae6-4e55-8ea2-395d6aebf185" containerID="38d577e303af2cd9edc188a6f03459306b89b4bbb79048439a6dce9fa059d51d" exitCode=143 Oct 14 10:15:46 crc kubenswrapper[4698]: I1014 10:15:46.913758 4698 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/manila-scheduler-0" podUID="e28cf5cd-644d-4f5f-8db7-421fbe745ac2" containerName="manila-scheduler" containerID="cri-o://ff73b0bec1c9849156835b3e93c6b4d50ee38d4d5f4c9f164b7a79c91d7252eb" gracePeriod=30 Oct 14 10:15:46 crc kubenswrapper[4698]: I1014 10:15:46.914059 4698 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6cc5fb5d9d-rbb5x" event={"ID":"53ced7bd-2ae6-4e55-8ea2-395d6aebf185","Type":"ContainerDied","Data":"38d577e303af2cd9edc188a6f03459306b89b4bbb79048439a6dce9fa059d51d"} Oct 14 10:15:46 crc kubenswrapper[4698]: I1014 10:15:46.914995 4698 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/manila-scheduler-0" podUID="e28cf5cd-644d-4f5f-8db7-421fbe745ac2" containerName="probe" containerID="cri-o://136185543d9ec97a7a63282756aeac7779ba5da2123cd5f96e04d141c9e827a6" gracePeriod=30 Oct 14 10:15:47 crc kubenswrapper[4698]: I1014 10:15:47.927537 4698 generic.go:334] "Generic (PLEG): container finished" podID="e28cf5cd-644d-4f5f-8db7-421fbe745ac2" containerID="136185543d9ec97a7a63282756aeac7779ba5da2123cd5f96e04d141c9e827a6" exitCode=0 Oct 14 10:15:47 crc kubenswrapper[4698]: I1014 10:15:47.927636 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-scheduler-0" event={"ID":"e28cf5cd-644d-4f5f-8db7-421fbe745ac2","Type":"ContainerDied","Data":"136185543d9ec97a7a63282756aeac7779ba5da2123cd5f96e04d141c9e827a6"} Oct 14 10:15:47 crc kubenswrapper[4698]: I1014 10:15:47.931489 4698 generic.go:334] "Generic (PLEG): container finished" podID="9d9ba4f9-bf43-4076-ab3d-9ca34bcbed4e" containerID="7744bc3ece636640cf932f302f2388ac604aab827b60125c6bf6f774e96d49a8" exitCode=137 Oct 14 10:15:47 crc kubenswrapper[4698]: I1014 10:15:47.931524 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9d9ba4f9-bf43-4076-ab3d-9ca34bcbed4e","Type":"ContainerDied","Data":"7744bc3ece636640cf932f302f2388ac604aab827b60125c6bf6f774e96d49a8"} Oct 14 10:15:49 crc kubenswrapper[4698]: I1014 10:15:49.643661 4698 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="9d9ba4f9-bf43-4076-ab3d-9ca34bcbed4e" containerName="proxy-httpd" probeResult="failure" output="Get \"http://10.217.0.139:3000/\": dial tcp 10.217.0.139:3000: 
connect: connection refused" Oct 14 10:15:49 crc kubenswrapper[4698]: I1014 10:15:49.828418 4698 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-6cc5fb5d9d-rbb5x" podUID="53ced7bd-2ae6-4e55-8ea2-395d6aebf185" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.171:9311/healthcheck\": read tcp 10.217.0.2:44724->10.217.0.171:9311: read: connection reset by peer" Oct 14 10:15:49 crc kubenswrapper[4698]: I1014 10:15:49.828467 4698 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-6cc5fb5d9d-rbb5x" podUID="53ced7bd-2ae6-4e55-8ea2-395d6aebf185" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.171:9311/healthcheck\": read tcp 10.217.0.2:44734->10.217.0.171:9311: read: connection reset by peer" Oct 14 10:15:49 crc kubenswrapper[4698]: I1014 10:15:49.965098 4698 generic.go:334] "Generic (PLEG): container finished" podID="53ced7bd-2ae6-4e55-8ea2-395d6aebf185" containerID="98a62b4299ff785f4988afd479c08919e2e745c55cdb25d37f55f7fd1d73e0fc" exitCode=0 Oct 14 10:15:49 crc kubenswrapper[4698]: I1014 10:15:49.965344 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6cc5fb5d9d-rbb5x" event={"ID":"53ced7bd-2ae6-4e55-8ea2-395d6aebf185","Type":"ContainerDied","Data":"98a62b4299ff785f4988afd479c08919e2e745c55cdb25d37f55f7fd1d73e0fc"} Oct 14 10:15:49 crc kubenswrapper[4698]: I1014 10:15:49.973816 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-84b966f6c9-wwxgc" event={"ID":"a9c6fefd-814f-4f20-8a30-b76d3b6a43ba","Type":"ContainerDied","Data":"2468705987feb483bae67e5bb36fb52afe03b8dcc426aff31a791892f71fa81e"} Oct 14 10:15:49 crc kubenswrapper[4698]: I1014 10:15:49.973893 4698 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2468705987feb483bae67e5bb36fb52afe03b8dcc426aff31a791892f71fa81e" Oct 14 10:15:49 crc kubenswrapper[4698]: I1014 10:15:49.980568 4698 
generic.go:334] "Generic (PLEG): container finished" podID="339c0475-be6d-48a1-af88-8c3f55eaf50a" containerID="386102c65388fad9deb6f3de090829e956a0f05bec6707acaabf79a9eb363e43" exitCode=0 Oct 14 10:15:49 crc kubenswrapper[4698]: I1014 10:15:49.980678 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"339c0475-be6d-48a1-af88-8c3f55eaf50a","Type":"ContainerDied","Data":"386102c65388fad9deb6f3de090829e956a0f05bec6707acaabf79a9eb363e43"} Oct 14 10:15:50 crc kubenswrapper[4698]: I1014 10:15:50.067073 4698 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-7b567dfd5d-nvwrp" podUID="ee140165-8d8d-426c-b33f-5803bb0a7ad1" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.149:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.149:8443: connect: connection refused" Oct 14 10:15:50 crc kubenswrapper[4698]: I1014 10:15:50.260615 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-volume-volume1-0" Oct 14 10:15:50 crc kubenswrapper[4698]: I1014 10:15:50.261887 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-84b966f6c9-wwxgc" Oct 14 10:15:50 crc kubenswrapper[4698]: I1014 10:15:50.301362 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Oct 14 10:15:50 crc kubenswrapper[4698]: I1014 10:15:50.322194 4698 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-backup-0" Oct 14 10:15:50 crc kubenswrapper[4698]: I1014 10:15:50.432488 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/d26415c7-42ea-464b-910f-c1b25784fde3-var-locks-cinder\") pod \"d26415c7-42ea-464b-910f-c1b25784fde3\" (UID: \"d26415c7-42ea-464b-910f-c1b25784fde3\") " Oct 14 10:15:50 crc kubenswrapper[4698]: I1014 10:15:50.432575 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d26415c7-42ea-464b-910f-c1b25784fde3-var-locks-cinder" (OuterVolumeSpecName: "var-locks-cinder") pod "d26415c7-42ea-464b-910f-c1b25784fde3" (UID: "d26415c7-42ea-464b-910f-c1b25784fde3"). InnerVolumeSpecName "var-locks-cinder". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 14 10:15:50 crc kubenswrapper[4698]: I1014 10:15:50.436006 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9dea0f58-0975-4dc1-9459-a72b8151027b-config-data-custom\") pod \"9dea0f58-0975-4dc1-9459-a72b8151027b\" (UID: \"9dea0f58-0975-4dc1-9459-a72b8151027b\") " Oct 14 10:15:50 crc kubenswrapper[4698]: I1014 10:15:50.440396 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/84a7547f-d165-4381-a7f3-8b050ee39fbf-scripts\") pod \"84a7547f-d165-4381-a7f3-8b050ee39fbf\" (UID: \"84a7547f-d165-4381-a7f3-8b050ee39fbf\") " Oct 14 10:15:50 crc kubenswrapper[4698]: I1014 10:15:50.440433 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/d26415c7-42ea-464b-910f-c1b25784fde3-etc-nvme\") pod \"d26415c7-42ea-464b-910f-c1b25784fde3\" (UID: \"d26415c7-42ea-464b-910f-c1b25784fde3\") " Oct 14 10:15:50 crc kubenswrapper[4698]: I1014 10:15:50.440459 4698 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-pn8pb\" (UniqueName: \"kubernetes.io/projected/9dea0f58-0975-4dc1-9459-a72b8151027b-kube-api-access-pn8pb\") pod \"9dea0f58-0975-4dc1-9459-a72b8151027b\" (UID: \"9dea0f58-0975-4dc1-9459-a72b8151027b\") " Oct 14 10:15:50 crc kubenswrapper[4698]: I1014 10:15:50.440485 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9dea0f58-0975-4dc1-9459-a72b8151027b-combined-ca-bundle\") pod \"9dea0f58-0975-4dc1-9459-a72b8151027b\" (UID: \"9dea0f58-0975-4dc1-9459-a72b8151027b\") " Oct 14 10:15:50 crc kubenswrapper[4698]: I1014 10:15:50.440535 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bvdhb\" (UniqueName: \"kubernetes.io/projected/d26415c7-42ea-464b-910f-c1b25784fde3-kube-api-access-bvdhb\") pod \"d26415c7-42ea-464b-910f-c1b25784fde3\" (UID: \"d26415c7-42ea-464b-910f-c1b25784fde3\") " Oct 14 10:15:50 crc kubenswrapper[4698]: I1014 10:15:50.440579 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/84a7547f-d165-4381-a7f3-8b050ee39fbf-etc-iscsi\") pod \"84a7547f-d165-4381-a7f3-8b050ee39fbf\" (UID: \"84a7547f-d165-4381-a7f3-8b050ee39fbf\") " Oct 14 10:15:50 crc kubenswrapper[4698]: I1014 10:15:50.440623 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/d26415c7-42ea-464b-910f-c1b25784fde3-dev\") pod \"d26415c7-42ea-464b-910f-c1b25784fde3\" (UID: \"d26415c7-42ea-464b-910f-c1b25784fde3\") " Oct 14 10:15:50 crc kubenswrapper[4698]: I1014 10:15:50.440647 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/84a7547f-d165-4381-a7f3-8b050ee39fbf-sys\") pod \"84a7547f-d165-4381-a7f3-8b050ee39fbf\" (UID: 
\"84a7547f-d165-4381-a7f3-8b050ee39fbf\") " Oct 14 10:15:50 crc kubenswrapper[4698]: I1014 10:15:50.440673 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a9c6fefd-814f-4f20-8a30-b76d3b6a43ba-dns-svc\") pod \"a9c6fefd-814f-4f20-8a30-b76d3b6a43ba\" (UID: \"a9c6fefd-814f-4f20-8a30-b76d3b6a43ba\") " Oct 14 10:15:50 crc kubenswrapper[4698]: I1014 10:15:50.440700 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/84a7547f-d165-4381-a7f3-8b050ee39fbf-config-data-custom\") pod \"84a7547f-d165-4381-a7f3-8b050ee39fbf\" (UID: \"84a7547f-d165-4381-a7f3-8b050ee39fbf\") " Oct 14 10:15:50 crc kubenswrapper[4698]: I1014 10:15:50.440722 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a9c6fefd-814f-4f20-8a30-b76d3b6a43ba-ovsdbserver-nb\") pod \"a9c6fefd-814f-4f20-8a30-b76d3b6a43ba\" (UID: \"a9c6fefd-814f-4f20-8a30-b76d3b6a43ba\") " Oct 14 10:15:50 crc kubenswrapper[4698]: I1014 10:15:50.440776 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a9c6fefd-814f-4f20-8a30-b76d3b6a43ba-dns-swift-storage-0\") pod \"a9c6fefd-814f-4f20-8a30-b76d3b6a43ba\" (UID: \"a9c6fefd-814f-4f20-8a30-b76d3b6a43ba\") " Oct 14 10:15:50 crc kubenswrapper[4698]: I1014 10:15:50.440793 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mplh7\" (UniqueName: \"kubernetes.io/projected/a9c6fefd-814f-4f20-8a30-b76d3b6a43ba-kube-api-access-mplh7\") pod \"a9c6fefd-814f-4f20-8a30-b76d3b6a43ba\" (UID: \"a9c6fefd-814f-4f20-8a30-b76d3b6a43ba\") " Oct 14 10:15:50 crc kubenswrapper[4698]: I1014 10:15:50.440811 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/d26415c7-42ea-464b-910f-c1b25784fde3-scripts\") pod \"d26415c7-42ea-464b-910f-c1b25784fde3\" (UID: \"d26415c7-42ea-464b-910f-c1b25784fde3\") " Oct 14 10:15:50 crc kubenswrapper[4698]: I1014 10:15:50.440843 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d26415c7-42ea-464b-910f-c1b25784fde3-config-data-custom\") pod \"d26415c7-42ea-464b-910f-c1b25784fde3\" (UID: \"d26415c7-42ea-464b-910f-c1b25784fde3\") " Oct 14 10:15:50 crc kubenswrapper[4698]: I1014 10:15:50.440868 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d26415c7-42ea-464b-910f-c1b25784fde3-combined-ca-bundle\") pod \"d26415c7-42ea-464b-910f-c1b25784fde3\" (UID: \"d26415c7-42ea-464b-910f-c1b25784fde3\") " Oct 14 10:15:50 crc kubenswrapper[4698]: I1014 10:15:50.440887 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a9c6fefd-814f-4f20-8a30-b76d3b6a43ba-config\") pod \"a9c6fefd-814f-4f20-8a30-b76d3b6a43ba\" (UID: \"a9c6fefd-814f-4f20-8a30-b76d3b6a43ba\") " Oct 14 10:15:50 crc kubenswrapper[4698]: I1014 10:15:50.440909 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d26415c7-42ea-464b-910f-c1b25784fde3-lib-modules\") pod \"d26415c7-42ea-464b-910f-c1b25784fde3\" (UID: \"d26415c7-42ea-464b-910f-c1b25784fde3\") " Oct 14 10:15:50 crc kubenswrapper[4698]: I1014 10:15:50.440942 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/84a7547f-d165-4381-a7f3-8b050ee39fbf-dev\") pod \"84a7547f-d165-4381-a7f3-8b050ee39fbf\" (UID: \"84a7547f-d165-4381-a7f3-8b050ee39fbf\") " Oct 14 10:15:50 crc kubenswrapper[4698]: I1014 10:15:50.440962 4698 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a9c6fefd-814f-4f20-8a30-b76d3b6a43ba-ovsdbserver-sb\") pod \"a9c6fefd-814f-4f20-8a30-b76d3b6a43ba\" (UID: \"a9c6fefd-814f-4f20-8a30-b76d3b6a43ba\") " Oct 14 10:15:50 crc kubenswrapper[4698]: I1014 10:15:50.440981 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/84a7547f-d165-4381-a7f3-8b050ee39fbf-run\") pod \"84a7547f-d165-4381-a7f3-8b050ee39fbf\" (UID: \"84a7547f-d165-4381-a7f3-8b050ee39fbf\") " Oct 14 10:15:50 crc kubenswrapper[4698]: I1014 10:15:50.441007 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/d26415c7-42ea-464b-910f-c1b25784fde3-run\") pod \"d26415c7-42ea-464b-910f-c1b25784fde3\" (UID: \"d26415c7-42ea-464b-910f-c1b25784fde3\") " Oct 14 10:15:50 crc kubenswrapper[4698]: I1014 10:15:50.441027 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d26415c7-42ea-464b-910f-c1b25784fde3-config-data\") pod \"d26415c7-42ea-464b-910f-c1b25784fde3\" (UID: \"d26415c7-42ea-464b-910f-c1b25784fde3\") " Oct 14 10:15:50 crc kubenswrapper[4698]: I1014 10:15:50.441066 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/84a7547f-d165-4381-a7f3-8b050ee39fbf-etc-nvme\") pod \"84a7547f-d165-4381-a7f3-8b050ee39fbf\" (UID: \"84a7547f-d165-4381-a7f3-8b050ee39fbf\") " Oct 14 10:15:50 crc kubenswrapper[4698]: I1014 10:15:50.441092 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9dea0f58-0975-4dc1-9459-a72b8151027b-scripts\") pod \"9dea0f58-0975-4dc1-9459-a72b8151027b\" (UID: \"9dea0f58-0975-4dc1-9459-a72b8151027b\") " Oct 14 10:15:50 crc kubenswrapper[4698]: I1014 
10:15:50.441107 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/84a7547f-d165-4381-a7f3-8b050ee39fbf-lib-modules\") pod \"84a7547f-d165-4381-a7f3-8b050ee39fbf\" (UID: \"84a7547f-d165-4381-a7f3-8b050ee39fbf\") " Oct 14 10:15:50 crc kubenswrapper[4698]: I1014 10:15:50.441136 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/84a7547f-d165-4381-a7f3-8b050ee39fbf-etc-machine-id\") pod \"84a7547f-d165-4381-a7f3-8b050ee39fbf\" (UID: \"84a7547f-d165-4381-a7f3-8b050ee39fbf\") " Oct 14 10:15:50 crc kubenswrapper[4698]: I1014 10:15:50.441164 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/d26415c7-42ea-464b-910f-c1b25784fde3-var-locks-brick\") pod \"d26415c7-42ea-464b-910f-c1b25784fde3\" (UID: \"d26415c7-42ea-464b-910f-c1b25784fde3\") " Oct 14 10:15:50 crc kubenswrapper[4698]: I1014 10:15:50.441193 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/d26415c7-42ea-464b-910f-c1b25784fde3-etc-machine-id\") pod \"d26415c7-42ea-464b-910f-c1b25784fde3\" (UID: \"d26415c7-42ea-464b-910f-c1b25784fde3\") " Oct 14 10:15:50 crc kubenswrapper[4698]: I1014 10:15:50.441223 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/d26415c7-42ea-464b-910f-c1b25784fde3-ceph\") pod \"d26415c7-42ea-464b-910f-c1b25784fde3\" (UID: \"d26415c7-42ea-464b-910f-c1b25784fde3\") " Oct 14 10:15:50 crc kubenswrapper[4698]: I1014 10:15:50.441263 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/d26415c7-42ea-464b-910f-c1b25784fde3-var-lib-cinder\") pod 
\"d26415c7-42ea-464b-910f-c1b25784fde3\" (UID: \"d26415c7-42ea-464b-910f-c1b25784fde3\") " Oct 14 10:15:50 crc kubenswrapper[4698]: I1014 10:15:50.441310 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/84a7547f-d165-4381-a7f3-8b050ee39fbf-config-data\") pod \"84a7547f-d165-4381-a7f3-8b050ee39fbf\" (UID: \"84a7547f-d165-4381-a7f3-8b050ee39fbf\") " Oct 14 10:15:50 crc kubenswrapper[4698]: I1014 10:15:50.441338 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/9dea0f58-0975-4dc1-9459-a72b8151027b-etc-machine-id\") pod \"9dea0f58-0975-4dc1-9459-a72b8151027b\" (UID: \"9dea0f58-0975-4dc1-9459-a72b8151027b\") " Oct 14 10:15:50 crc kubenswrapper[4698]: I1014 10:15:50.441354 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/84a7547f-d165-4381-a7f3-8b050ee39fbf-ceph\") pod \"84a7547f-d165-4381-a7f3-8b050ee39fbf\" (UID: \"84a7547f-d165-4381-a7f3-8b050ee39fbf\") " Oct 14 10:15:50 crc kubenswrapper[4698]: I1014 10:15:50.441370 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/84a7547f-d165-4381-a7f3-8b050ee39fbf-combined-ca-bundle\") pod \"84a7547f-d165-4381-a7f3-8b050ee39fbf\" (UID: \"84a7547f-d165-4381-a7f3-8b050ee39fbf\") " Oct 14 10:15:50 crc kubenswrapper[4698]: I1014 10:15:50.441393 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/d26415c7-42ea-464b-910f-c1b25784fde3-etc-iscsi\") pod \"d26415c7-42ea-464b-910f-c1b25784fde3\" (UID: \"d26415c7-42ea-464b-910f-c1b25784fde3\") " Oct 14 10:15:50 crc kubenswrapper[4698]: I1014 10:15:50.441439 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cbq87\" 
(UniqueName: \"kubernetes.io/projected/84a7547f-d165-4381-a7f3-8b050ee39fbf-kube-api-access-cbq87\") pod \"84a7547f-d165-4381-a7f3-8b050ee39fbf\" (UID: \"84a7547f-d165-4381-a7f3-8b050ee39fbf\") " Oct 14 10:15:50 crc kubenswrapper[4698]: I1014 10:15:50.441493 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/84a7547f-d165-4381-a7f3-8b050ee39fbf-var-locks-brick\") pod \"84a7547f-d165-4381-a7f3-8b050ee39fbf\" (UID: \"84a7547f-d165-4381-a7f3-8b050ee39fbf\") " Oct 14 10:15:50 crc kubenswrapper[4698]: I1014 10:15:50.441523 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9dea0f58-0975-4dc1-9459-a72b8151027b-config-data\") pod \"9dea0f58-0975-4dc1-9459-a72b8151027b\" (UID: \"9dea0f58-0975-4dc1-9459-a72b8151027b\") " Oct 14 10:15:50 crc kubenswrapper[4698]: I1014 10:15:50.441539 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/d26415c7-42ea-464b-910f-c1b25784fde3-sys\") pod \"d26415c7-42ea-464b-910f-c1b25784fde3\" (UID: \"d26415c7-42ea-464b-910f-c1b25784fde3\") " Oct 14 10:15:50 crc kubenswrapper[4698]: I1014 10:15:50.441555 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/84a7547f-d165-4381-a7f3-8b050ee39fbf-var-locks-cinder\") pod \"84a7547f-d165-4381-a7f3-8b050ee39fbf\" (UID: \"84a7547f-d165-4381-a7f3-8b050ee39fbf\") " Oct 14 10:15:50 crc kubenswrapper[4698]: I1014 10:15:50.442516 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/84a7547f-d165-4381-a7f3-8b050ee39fbf-var-lib-cinder\") pod \"84a7547f-d165-4381-a7f3-8b050ee39fbf\" (UID: \"84a7547f-d165-4381-a7f3-8b050ee39fbf\") " Oct 14 10:15:50 crc kubenswrapper[4698]: I1014 
10:15:50.442688 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/84a7547f-d165-4381-a7f3-8b050ee39fbf-var-locks-brick" (OuterVolumeSpecName: "var-locks-brick") pod "84a7547f-d165-4381-a7f3-8b050ee39fbf" (UID: "84a7547f-d165-4381-a7f3-8b050ee39fbf"). InnerVolumeSpecName "var-locks-brick". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 14 10:15:50 crc kubenswrapper[4698]: I1014 10:15:50.442788 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d26415c7-42ea-464b-910f-c1b25784fde3-etc-iscsi" (OuterVolumeSpecName: "etc-iscsi") pod "d26415c7-42ea-464b-910f-c1b25784fde3" (UID: "d26415c7-42ea-464b-910f-c1b25784fde3"). InnerVolumeSpecName "etc-iscsi". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 14 10:15:50 crc kubenswrapper[4698]: I1014 10:15:50.445458 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/84a7547f-d165-4381-a7f3-8b050ee39fbf-var-lib-cinder" (OuterVolumeSpecName: "var-lib-cinder") pod "84a7547f-d165-4381-a7f3-8b050ee39fbf" (UID: "84a7547f-d165-4381-a7f3-8b050ee39fbf"). InnerVolumeSpecName "var-lib-cinder". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 14 10:15:50 crc kubenswrapper[4698]: I1014 10:15:50.446854 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9dea0f58-0975-4dc1-9459-a72b8151027b-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "9dea0f58-0975-4dc1-9459-a72b8151027b" (UID: "9dea0f58-0975-4dc1-9459-a72b8151027b"). InnerVolumeSpecName "config-data-custom". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 10:15:50 crc kubenswrapper[4698]: I1014 10:15:50.446947 4698 reconciler_common.go:293] "Volume detached for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/d26415c7-42ea-464b-910f-c1b25784fde3-etc-iscsi\") on node \"crc\" DevicePath \"\"" Oct 14 10:15:50 crc kubenswrapper[4698]: I1014 10:15:50.446966 4698 reconciler_common.go:293] "Volume detached for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/84a7547f-d165-4381-a7f3-8b050ee39fbf-var-locks-brick\") on node \"crc\" DevicePath \"\"" Oct 14 10:15:50 crc kubenswrapper[4698]: I1014 10:15:50.446979 4698 reconciler_common.go:293] "Volume detached for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/d26415c7-42ea-464b-910f-c1b25784fde3-var-locks-cinder\") on node \"crc\" DevicePath \"\"" Oct 14 10:15:50 crc kubenswrapper[4698]: I1014 10:15:50.449096 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d26415c7-42ea-464b-910f-c1b25784fde3-var-lib-cinder" (OuterVolumeSpecName: "var-lib-cinder") pod "d26415c7-42ea-464b-910f-c1b25784fde3" (UID: "d26415c7-42ea-464b-910f-c1b25784fde3"). InnerVolumeSpecName "var-lib-cinder". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 14 10:15:50 crc kubenswrapper[4698]: I1014 10:15:50.449203 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d26415c7-42ea-464b-910f-c1b25784fde3-ceph" (OuterVolumeSpecName: "ceph") pod "d26415c7-42ea-464b-910f-c1b25784fde3" (UID: "d26415c7-42ea-464b-910f-c1b25784fde3"). InnerVolumeSpecName "ceph". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 10:15:50 crc kubenswrapper[4698]: I1014 10:15:50.449219 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9dea0f58-0975-4dc1-9459-a72b8151027b-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "9dea0f58-0975-4dc1-9459-a72b8151027b" (UID: "9dea0f58-0975-4dc1-9459-a72b8151027b"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 14 10:15:50 crc kubenswrapper[4698]: I1014 10:15:50.449282 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d26415c7-42ea-464b-910f-c1b25784fde3-run" (OuterVolumeSpecName: "run") pod "d26415c7-42ea-464b-910f-c1b25784fde3" (UID: "d26415c7-42ea-464b-910f-c1b25784fde3"). InnerVolumeSpecName "run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 14 10:15:50 crc kubenswrapper[4698]: I1014 10:15:50.449257 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/84a7547f-d165-4381-a7f3-8b050ee39fbf-run" (OuterVolumeSpecName: "run") pod "84a7547f-d165-4381-a7f3-8b050ee39fbf" (UID: "84a7547f-d165-4381-a7f3-8b050ee39fbf"). InnerVolumeSpecName "run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 14 10:15:50 crc kubenswrapper[4698]: I1014 10:15:50.450908 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/84a7547f-d165-4381-a7f3-8b050ee39fbf-etc-nvme" (OuterVolumeSpecName: "etc-nvme") pod "84a7547f-d165-4381-a7f3-8b050ee39fbf" (UID: "84a7547f-d165-4381-a7f3-8b050ee39fbf"). InnerVolumeSpecName "etc-nvme". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 14 10:15:50 crc kubenswrapper[4698]: I1014 10:15:50.451178 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d26415c7-42ea-464b-910f-c1b25784fde3-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "d26415c7-42ea-464b-910f-c1b25784fde3" (UID: "d26415c7-42ea-464b-910f-c1b25784fde3"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 14 10:15:50 crc kubenswrapper[4698]: I1014 10:15:50.452462 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/84a7547f-d165-4381-a7f3-8b050ee39fbf-dev" (OuterVolumeSpecName: "dev") pod "84a7547f-d165-4381-a7f3-8b050ee39fbf" (UID: "84a7547f-d165-4381-a7f3-8b050ee39fbf"). InnerVolumeSpecName "dev". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 14 10:15:50 crc kubenswrapper[4698]: I1014 10:15:50.452479 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/84a7547f-d165-4381-a7f3-8b050ee39fbf-etc-iscsi" (OuterVolumeSpecName: "etc-iscsi") pod "84a7547f-d165-4381-a7f3-8b050ee39fbf" (UID: "84a7547f-d165-4381-a7f3-8b050ee39fbf"). InnerVolumeSpecName "etc-iscsi". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 14 10:15:50 crc kubenswrapper[4698]: I1014 10:15:50.452559 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d26415c7-42ea-464b-910f-c1b25784fde3-etc-nvme" (OuterVolumeSpecName: "etc-nvme") pod "d26415c7-42ea-464b-910f-c1b25784fde3" (UID: "d26415c7-42ea-464b-910f-c1b25784fde3"). InnerVolumeSpecName "etc-nvme". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 14 10:15:50 crc kubenswrapper[4698]: I1014 10:15:50.452833 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d26415c7-42ea-464b-910f-c1b25784fde3-sys" (OuterVolumeSpecName: "sys") pod "d26415c7-42ea-464b-910f-c1b25784fde3" (UID: "d26415c7-42ea-464b-910f-c1b25784fde3"). InnerVolumeSpecName "sys". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 14 10:15:50 crc kubenswrapper[4698]: I1014 10:15:50.452911 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/84a7547f-d165-4381-a7f3-8b050ee39fbf-var-locks-cinder" (OuterVolumeSpecName: "var-locks-cinder") pod "84a7547f-d165-4381-a7f3-8b050ee39fbf" (UID: "84a7547f-d165-4381-a7f3-8b050ee39fbf"). InnerVolumeSpecName "var-locks-cinder". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 14 10:15:50 crc kubenswrapper[4698]: I1014 10:15:50.453551 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d26415c7-42ea-464b-910f-c1b25784fde3-dev" (OuterVolumeSpecName: "dev") pod "d26415c7-42ea-464b-910f-c1b25784fde3" (UID: "d26415c7-42ea-464b-910f-c1b25784fde3"). InnerVolumeSpecName "dev". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 14 10:15:50 crc kubenswrapper[4698]: I1014 10:15:50.454238 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/84a7547f-d165-4381-a7f3-8b050ee39fbf-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "84a7547f-d165-4381-a7f3-8b050ee39fbf" (UID: "84a7547f-d165-4381-a7f3-8b050ee39fbf"). InnerVolumeSpecName "etc-machine-id". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 14 10:15:50 crc kubenswrapper[4698]: I1014 10:15:50.454281 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/84a7547f-d165-4381-a7f3-8b050ee39fbf-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "84a7547f-d165-4381-a7f3-8b050ee39fbf" (UID: "84a7547f-d165-4381-a7f3-8b050ee39fbf"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 14 10:15:50 crc kubenswrapper[4698]: I1014 10:15:50.455895 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d26415c7-42ea-464b-910f-c1b25784fde3-var-locks-brick" (OuterVolumeSpecName: "var-locks-brick") pod "d26415c7-42ea-464b-910f-c1b25784fde3" (UID: "d26415c7-42ea-464b-910f-c1b25784fde3"). InnerVolumeSpecName "var-locks-brick". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 14 10:15:50 crc kubenswrapper[4698]: I1014 10:15:50.455941 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d26415c7-42ea-464b-910f-c1b25784fde3-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "d26415c7-42ea-464b-910f-c1b25784fde3" (UID: "d26415c7-42ea-464b-910f-c1b25784fde3"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 14 10:15:50 crc kubenswrapper[4698]: I1014 10:15:50.456044 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d26415c7-42ea-464b-910f-c1b25784fde3-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "d26415c7-42ea-464b-910f-c1b25784fde3" (UID: "d26415c7-42ea-464b-910f-c1b25784fde3"). InnerVolumeSpecName "config-data-custom". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 10:15:50 crc kubenswrapper[4698]: I1014 10:15:50.457160 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/84a7547f-d165-4381-a7f3-8b050ee39fbf-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "84a7547f-d165-4381-a7f3-8b050ee39fbf" (UID: "84a7547f-d165-4381-a7f3-8b050ee39fbf"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 10:15:50 crc kubenswrapper[4698]: I1014 10:15:50.456827 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/84a7547f-d165-4381-a7f3-8b050ee39fbf-sys" (OuterVolumeSpecName: "sys") pod "84a7547f-d165-4381-a7f3-8b050ee39fbf" (UID: "84a7547f-d165-4381-a7f3-8b050ee39fbf"). InnerVolumeSpecName "sys". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 14 10:15:50 crc kubenswrapper[4698]: I1014 10:15:50.482582 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d26415c7-42ea-464b-910f-c1b25784fde3-scripts" (OuterVolumeSpecName: "scripts") pod "d26415c7-42ea-464b-910f-c1b25784fde3" (UID: "d26415c7-42ea-464b-910f-c1b25784fde3"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 10:15:50 crc kubenswrapper[4698]: I1014 10:15:50.493906 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d26415c7-42ea-464b-910f-c1b25784fde3-kube-api-access-bvdhb" (OuterVolumeSpecName: "kube-api-access-bvdhb") pod "d26415c7-42ea-464b-910f-c1b25784fde3" (UID: "d26415c7-42ea-464b-910f-c1b25784fde3"). InnerVolumeSpecName "kube-api-access-bvdhb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 10:15:50 crc kubenswrapper[4698]: I1014 10:15:50.493990 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a9c6fefd-814f-4f20-8a30-b76d3b6a43ba-kube-api-access-mplh7" (OuterVolumeSpecName: "kube-api-access-mplh7") pod "a9c6fefd-814f-4f20-8a30-b76d3b6a43ba" (UID: "a9c6fefd-814f-4f20-8a30-b76d3b6a43ba"). InnerVolumeSpecName "kube-api-access-mplh7". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 10:15:50 crc kubenswrapper[4698]: I1014 10:15:50.494008 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/84a7547f-d165-4381-a7f3-8b050ee39fbf-kube-api-access-cbq87" (OuterVolumeSpecName: "kube-api-access-cbq87") pod "84a7547f-d165-4381-a7f3-8b050ee39fbf" (UID: "84a7547f-d165-4381-a7f3-8b050ee39fbf"). InnerVolumeSpecName "kube-api-access-cbq87". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 10:15:50 crc kubenswrapper[4698]: I1014 10:15:50.494024 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9dea0f58-0975-4dc1-9459-a72b8151027b-kube-api-access-pn8pb" (OuterVolumeSpecName: "kube-api-access-pn8pb") pod "9dea0f58-0975-4dc1-9459-a72b8151027b" (UID: "9dea0f58-0975-4dc1-9459-a72b8151027b"). InnerVolumeSpecName "kube-api-access-pn8pb". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 10:15:50 crc kubenswrapper[4698]: I1014 10:15:50.495963 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/84a7547f-d165-4381-a7f3-8b050ee39fbf-scripts" (OuterVolumeSpecName: "scripts") pod "84a7547f-d165-4381-a7f3-8b050ee39fbf" (UID: "84a7547f-d165-4381-a7f3-8b050ee39fbf"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 10:15:50 crc kubenswrapper[4698]: I1014 10:15:50.502000 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9dea0f58-0975-4dc1-9459-a72b8151027b-scripts" (OuterVolumeSpecName: "scripts") pod "9dea0f58-0975-4dc1-9459-a72b8151027b" (UID: "9dea0f58-0975-4dc1-9459-a72b8151027b"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 10:15:50 crc kubenswrapper[4698]: I1014 10:15:50.502275 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/84a7547f-d165-4381-a7f3-8b050ee39fbf-ceph" (OuterVolumeSpecName: "ceph") pod "84a7547f-d165-4381-a7f3-8b050ee39fbf" (UID: "84a7547f-d165-4381-a7f3-8b050ee39fbf"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 10:15:50 crc kubenswrapper[4698]: I1014 10:15:50.551383 4698 reconciler_common.go:293] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d26415c7-42ea-464b-910f-c1b25784fde3-lib-modules\") on node \"crc\" DevicePath \"\"" Oct 14 10:15:50 crc kubenswrapper[4698]: I1014 10:15:50.551432 4698 reconciler_common.go:293] "Volume detached for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/84a7547f-d165-4381-a7f3-8b050ee39fbf-dev\") on node \"crc\" DevicePath \"\"" Oct 14 10:15:50 crc kubenswrapper[4698]: I1014 10:15:50.551442 4698 reconciler_common.go:293] "Volume detached for volume \"run\" (UniqueName: \"kubernetes.io/host-path/84a7547f-d165-4381-a7f3-8b050ee39fbf-run\") on node \"crc\" DevicePath \"\"" Oct 14 10:15:50 crc kubenswrapper[4698]: I1014 10:15:50.551453 4698 reconciler_common.go:293] "Volume detached for volume \"run\" (UniqueName: \"kubernetes.io/host-path/d26415c7-42ea-464b-910f-c1b25784fde3-run\") on node \"crc\" DevicePath \"\"" Oct 14 10:15:50 crc kubenswrapper[4698]: I1014 10:15:50.551467 4698 reconciler_common.go:293] "Volume detached for 
volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/84a7547f-d165-4381-a7f3-8b050ee39fbf-etc-nvme\") on node \"crc\" DevicePath \"\"" Oct 14 10:15:50 crc kubenswrapper[4698]: I1014 10:15:50.551475 4698 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9dea0f58-0975-4dc1-9459-a72b8151027b-scripts\") on node \"crc\" DevicePath \"\"" Oct 14 10:15:50 crc kubenswrapper[4698]: I1014 10:15:50.551483 4698 reconciler_common.go:293] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/84a7547f-d165-4381-a7f3-8b050ee39fbf-lib-modules\") on node \"crc\" DevicePath \"\"" Oct 14 10:15:50 crc kubenswrapper[4698]: I1014 10:15:50.551497 4698 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/84a7547f-d165-4381-a7f3-8b050ee39fbf-etc-machine-id\") on node \"crc\" DevicePath \"\"" Oct 14 10:15:50 crc kubenswrapper[4698]: I1014 10:15:50.551512 4698 reconciler_common.go:293] "Volume detached for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/d26415c7-42ea-464b-910f-c1b25784fde3-var-locks-brick\") on node \"crc\" DevicePath \"\"" Oct 14 10:15:50 crc kubenswrapper[4698]: I1014 10:15:50.551521 4698 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/d26415c7-42ea-464b-910f-c1b25784fde3-etc-machine-id\") on node \"crc\" DevicePath \"\"" Oct 14 10:15:50 crc kubenswrapper[4698]: I1014 10:15:50.551529 4698 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/d26415c7-42ea-464b-910f-c1b25784fde3-ceph\") on node \"crc\" DevicePath \"\"" Oct 14 10:15:50 crc kubenswrapper[4698]: I1014 10:15:50.551538 4698 reconciler_common.go:293] "Volume detached for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/d26415c7-42ea-464b-910f-c1b25784fde3-var-lib-cinder\") on node \"crc\" DevicePath \"\"" Oct 14 10:15:50 crc 
kubenswrapper[4698]: I1014 10:15:50.551550 4698 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/9dea0f58-0975-4dc1-9459-a72b8151027b-etc-machine-id\") on node \"crc\" DevicePath \"\""
Oct 14 10:15:50 crc kubenswrapper[4698]: I1014 10:15:50.551558 4698 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/84a7547f-d165-4381-a7f3-8b050ee39fbf-ceph\") on node \"crc\" DevicePath \"\""
Oct 14 10:15:50 crc kubenswrapper[4698]: I1014 10:15:50.551566 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cbq87\" (UniqueName: \"kubernetes.io/projected/84a7547f-d165-4381-a7f3-8b050ee39fbf-kube-api-access-cbq87\") on node \"crc\" DevicePath \"\""
Oct 14 10:15:50 crc kubenswrapper[4698]: I1014 10:15:50.551575 4698 reconciler_common.go:293] "Volume detached for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/d26415c7-42ea-464b-910f-c1b25784fde3-sys\") on node \"crc\" DevicePath \"\""
Oct 14 10:15:50 crc kubenswrapper[4698]: I1014 10:15:50.551584 4698 reconciler_common.go:293] "Volume detached for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/84a7547f-d165-4381-a7f3-8b050ee39fbf-var-locks-cinder\") on node \"crc\" DevicePath \"\""
Oct 14 10:15:50 crc kubenswrapper[4698]: I1014 10:15:50.551593 4698 reconciler_common.go:293] "Volume detached for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/84a7547f-d165-4381-a7f3-8b050ee39fbf-var-lib-cinder\") on node \"crc\" DevicePath \"\""
Oct 14 10:15:50 crc kubenswrapper[4698]: I1014 10:15:50.551602 4698 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9dea0f58-0975-4dc1-9459-a72b8151027b-config-data-custom\") on node \"crc\" DevicePath \"\""
Oct 14 10:15:50 crc kubenswrapper[4698]: I1014 10:15:50.551610 4698 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/84a7547f-d165-4381-a7f3-8b050ee39fbf-scripts\") on node \"crc\" DevicePath \"\""
Oct 14 10:15:50 crc kubenswrapper[4698]: I1014 10:15:50.551620 4698 reconciler_common.go:293] "Volume detached for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/d26415c7-42ea-464b-910f-c1b25784fde3-etc-nvme\") on node \"crc\" DevicePath \"\""
Oct 14 10:15:50 crc kubenswrapper[4698]: I1014 10:15:50.551630 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pn8pb\" (UniqueName: \"kubernetes.io/projected/9dea0f58-0975-4dc1-9459-a72b8151027b-kube-api-access-pn8pb\") on node \"crc\" DevicePath \"\""
Oct 14 10:15:50 crc kubenswrapper[4698]: I1014 10:15:50.551639 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bvdhb\" (UniqueName: \"kubernetes.io/projected/d26415c7-42ea-464b-910f-c1b25784fde3-kube-api-access-bvdhb\") on node \"crc\" DevicePath \"\""
Oct 14 10:15:50 crc kubenswrapper[4698]: I1014 10:15:50.551648 4698 reconciler_common.go:293] "Volume detached for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/84a7547f-d165-4381-a7f3-8b050ee39fbf-etc-iscsi\") on node \"crc\" DevicePath \"\""
Oct 14 10:15:50 crc kubenswrapper[4698]: I1014 10:15:50.551658 4698 reconciler_common.go:293] "Volume detached for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/d26415c7-42ea-464b-910f-c1b25784fde3-dev\") on node \"crc\" DevicePath \"\""
Oct 14 10:15:50 crc kubenswrapper[4698]: I1014 10:15:50.551666 4698 reconciler_common.go:293] "Volume detached for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/84a7547f-d165-4381-a7f3-8b050ee39fbf-sys\") on node \"crc\" DevicePath \"\""
Oct 14 10:15:50 crc kubenswrapper[4698]: I1014 10:15:50.551675 4698 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/84a7547f-d165-4381-a7f3-8b050ee39fbf-config-data-custom\") on node \"crc\" DevicePath \"\""
Oct 14 10:15:50 crc kubenswrapper[4698]: I1014 10:15:50.551684 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mplh7\" (UniqueName: \"kubernetes.io/projected/a9c6fefd-814f-4f20-8a30-b76d3b6a43ba-kube-api-access-mplh7\") on node \"crc\" DevicePath \"\""
Oct 14 10:15:50 crc kubenswrapper[4698]: I1014 10:15:50.551693 4698 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d26415c7-42ea-464b-910f-c1b25784fde3-scripts\") on node \"crc\" DevicePath \"\""
Oct 14 10:15:50 crc kubenswrapper[4698]: I1014 10:15:50.551703 4698 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d26415c7-42ea-464b-910f-c1b25784fde3-config-data-custom\") on node \"crc\" DevicePath \"\""
Oct 14 10:15:50 crc kubenswrapper[4698]: I1014 10:15:50.565375 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Oct 14 10:15:50 crc kubenswrapper[4698]: I1014 10:15:50.652505 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9d9ba4f9-bf43-4076-ab3d-9ca34bcbed4e-scripts\") pod \"9d9ba4f9-bf43-4076-ab3d-9ca34bcbed4e\" (UID: \"9d9ba4f9-bf43-4076-ab3d-9ca34bcbed4e\") "
Oct 14 10:15:50 crc kubenswrapper[4698]: I1014 10:15:50.652554 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/9d9ba4f9-bf43-4076-ab3d-9ca34bcbed4e-sg-core-conf-yaml\") pod \"9d9ba4f9-bf43-4076-ab3d-9ca34bcbed4e\" (UID: \"9d9ba4f9-bf43-4076-ab3d-9ca34bcbed4e\") "
Oct 14 10:15:50 crc kubenswrapper[4698]: I1014 10:15:50.652625 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2klf5\" (UniqueName: \"kubernetes.io/projected/9d9ba4f9-bf43-4076-ab3d-9ca34bcbed4e-kube-api-access-2klf5\") pod \"9d9ba4f9-bf43-4076-ab3d-9ca34bcbed4e\" (UID: \"9d9ba4f9-bf43-4076-ab3d-9ca34bcbed4e\") "
Oct 14 10:15:50 crc kubenswrapper[4698]: I1014 10:15:50.654080 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9d9ba4f9-bf43-4076-ab3d-9ca34bcbed4e-run-httpd\") pod \"9d9ba4f9-bf43-4076-ab3d-9ca34bcbed4e\" (UID: \"9d9ba4f9-bf43-4076-ab3d-9ca34bcbed4e\") "
Oct 14 10:15:50 crc kubenswrapper[4698]: I1014 10:15:50.654168 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9d9ba4f9-bf43-4076-ab3d-9ca34bcbed4e-combined-ca-bundle\") pod \"9d9ba4f9-bf43-4076-ab3d-9ca34bcbed4e\" (UID: \"9d9ba4f9-bf43-4076-ab3d-9ca34bcbed4e\") "
Oct 14 10:15:50 crc kubenswrapper[4698]: I1014 10:15:50.654280 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9d9ba4f9-bf43-4076-ab3d-9ca34bcbed4e-config-data\") pod \"9d9ba4f9-bf43-4076-ab3d-9ca34bcbed4e\" (UID: \"9d9ba4f9-bf43-4076-ab3d-9ca34bcbed4e\") "
Oct 14 10:15:50 crc kubenswrapper[4698]: I1014 10:15:50.654445 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9d9ba4f9-bf43-4076-ab3d-9ca34bcbed4e-log-httpd\") pod \"9d9ba4f9-bf43-4076-ab3d-9ca34bcbed4e\" (UID: \"9d9ba4f9-bf43-4076-ab3d-9ca34bcbed4e\") "
Oct 14 10:15:50 crc kubenswrapper[4698]: I1014 10:15:50.662223 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9d9ba4f9-bf43-4076-ab3d-9ca34bcbed4e-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "9d9ba4f9-bf43-4076-ab3d-9ca34bcbed4e" (UID: "9d9ba4f9-bf43-4076-ab3d-9ca34bcbed4e"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Oct 14 10:15:50 crc kubenswrapper[4698]: I1014 10:15:50.662933 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9d9ba4f9-bf43-4076-ab3d-9ca34bcbed4e-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "9d9ba4f9-bf43-4076-ab3d-9ca34bcbed4e" (UID: "9d9ba4f9-bf43-4076-ab3d-9ca34bcbed4e"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Oct 14 10:15:50 crc kubenswrapper[4698]: I1014 10:15:50.678754 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d9ba4f9-bf43-4076-ab3d-9ca34bcbed4e-kube-api-access-2klf5" (OuterVolumeSpecName: "kube-api-access-2klf5") pod "9d9ba4f9-bf43-4076-ab3d-9ca34bcbed4e" (UID: "9d9ba4f9-bf43-4076-ab3d-9ca34bcbed4e"). InnerVolumeSpecName "kube-api-access-2klf5". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 14 10:15:50 crc kubenswrapper[4698]: I1014 10:15:50.680736 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d9ba4f9-bf43-4076-ab3d-9ca34bcbed4e-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "9d9ba4f9-bf43-4076-ab3d-9ca34bcbed4e" (UID: "9d9ba4f9-bf43-4076-ab3d-9ca34bcbed4e"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 14 10:15:50 crc kubenswrapper[4698]: I1014 10:15:50.717930 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d9ba4f9-bf43-4076-ab3d-9ca34bcbed4e-scripts" (OuterVolumeSpecName: "scripts") pod "9d9ba4f9-bf43-4076-ab3d-9ca34bcbed4e" (UID: "9d9ba4f9-bf43-4076-ab3d-9ca34bcbed4e"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 14 10:15:50 crc kubenswrapper[4698]: I1014 10:15:50.762197 4698 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9d9ba4f9-bf43-4076-ab3d-9ca34bcbed4e-run-httpd\") on node \"crc\" DevicePath \"\""
Oct 14 10:15:50 crc kubenswrapper[4698]: I1014 10:15:50.762234 4698 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9d9ba4f9-bf43-4076-ab3d-9ca34bcbed4e-log-httpd\") on node \"crc\" DevicePath \"\""
Oct 14 10:15:50 crc kubenswrapper[4698]: I1014 10:15:50.762243 4698 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9d9ba4f9-bf43-4076-ab3d-9ca34bcbed4e-scripts\") on node \"crc\" DevicePath \"\""
Oct 14 10:15:50 crc kubenswrapper[4698]: I1014 10:15:50.762254 4698 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/9d9ba4f9-bf43-4076-ab3d-9ca34bcbed4e-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\""
Oct 14 10:15:50 crc kubenswrapper[4698]: I1014 10:15:50.762264 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2klf5\" (UniqueName: \"kubernetes.io/projected/9d9ba4f9-bf43-4076-ab3d-9ca34bcbed4e-kube-api-access-2klf5\") on node \"crc\" DevicePath \"\""
Oct 14 10:15:50 crc kubenswrapper[4698]: I1014 10:15:50.914166 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-6cc5fb5d9d-rbb5x"
Oct 14 10:15:50 crc kubenswrapper[4698]: I1014 10:15:50.987543 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"]
Oct 14 10:15:50 crc kubenswrapper[4698]: I1014 10:15:50.987879 4698 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="5ac7519d-b7d5-428c-9b04-b507987f26b0" containerName="glance-log" containerID="cri-o://197a37b83b760b3c1bc4bd3b84cb3209d4404582091523121089ecab6d4a1d16" gracePeriod=30
Oct 14 10:15:50 crc kubenswrapper[4698]: I1014 10:15:50.988737 4698 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="5ac7519d-b7d5-428c-9b04-b507987f26b0" containerName="glance-httpd" containerID="cri-o://522d707122dbb2fd6b11a022f50d6516464ce5ef5721a0e5154d38e70c41fe5c" gracePeriod=30
Oct 14 10:15:51 crc kubenswrapper[4698]: I1014 10:15:51.008147 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a9c6fefd-814f-4f20-8a30-b76d3b6a43ba-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "a9c6fefd-814f-4f20-8a30-b76d3b6a43ba" (UID: "a9c6fefd-814f-4f20-8a30-b76d3b6a43ba"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 14 10:15:51 crc kubenswrapper[4698]: I1014 10:15:51.024818 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-6cc5fb5d9d-rbb5x"
Oct 14 10:15:51 crc kubenswrapper[4698]: I1014 10:15:51.027255 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-volume-volume1-0"
Oct 14 10:15:51 crc kubenswrapper[4698]: I1014 10:15:51.061496 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/84a7547f-d165-4381-a7f3-8b050ee39fbf-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "84a7547f-d165-4381-a7f3-8b050ee39fbf" (UID: "84a7547f-d165-4381-a7f3-8b050ee39fbf"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 14 10:15:51 crc kubenswrapper[4698]: I1014 10:15:51.073696 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/53ced7bd-2ae6-4e55-8ea2-395d6aebf185-combined-ca-bundle\") pod \"53ced7bd-2ae6-4e55-8ea2-395d6aebf185\" (UID: \"53ced7bd-2ae6-4e55-8ea2-395d6aebf185\") "
Oct 14 10:15:51 crc kubenswrapper[4698]: I1014 10:15:51.074181 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0"
Oct 14 10:15:51 crc kubenswrapper[4698]: I1014 10:15:51.074321 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/53ced7bd-2ae6-4e55-8ea2-395d6aebf185-config-data\") pod \"53ced7bd-2ae6-4e55-8ea2-395d6aebf185\" (UID: \"53ced7bd-2ae6-4e55-8ea2-395d6aebf185\") "
Oct 14 10:15:51 crc kubenswrapper[4698]: I1014 10:15:51.074411 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/53ced7bd-2ae6-4e55-8ea2-395d6aebf185-config-data-custom\") pod \"53ced7bd-2ae6-4e55-8ea2-395d6aebf185\" (UID: \"53ced7bd-2ae6-4e55-8ea2-395d6aebf185\") "
Oct 14 10:15:51 crc kubenswrapper[4698]: I1014 10:15:51.074504 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/53ced7bd-2ae6-4e55-8ea2-395d6aebf185-logs\") pod \"53ced7bd-2ae6-4e55-8ea2-395d6aebf185\" (UID: \"53ced7bd-2ae6-4e55-8ea2-395d6aebf185\") "
Oct 14 10:15:51 crc kubenswrapper[4698]: I1014 10:15:51.074524 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s6p5b\" (UniqueName: \"kubernetes.io/projected/53ced7bd-2ae6-4e55-8ea2-395d6aebf185-kube-api-access-s6p5b\") pod \"53ced7bd-2ae6-4e55-8ea2-395d6aebf185\" (UID: \"53ced7bd-2ae6-4e55-8ea2-395d6aebf185\") "
Oct 14 10:15:51 crc kubenswrapper[4698]: I1014 10:15:51.076156 4698 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/84a7547f-d165-4381-a7f3-8b050ee39fbf-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Oct 14 10:15:51 crc kubenswrapper[4698]: I1014 10:15:51.076196 4698 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a9c6fefd-814f-4f20-8a30-b76d3b6a43ba-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Oct 14 10:15:51 crc kubenswrapper[4698]: I1014 10:15:51.077297 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/53ced7bd-2ae6-4e55-8ea2-395d6aebf185-logs" (OuterVolumeSpecName: "logs") pod "53ced7bd-2ae6-4e55-8ea2-395d6aebf185" (UID: "53ced7bd-2ae6-4e55-8ea2-395d6aebf185"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Oct 14 10:15:51 crc kubenswrapper[4698]: I1014 10:15:51.090672 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Oct 14 10:15:51 crc kubenswrapper[4698]: I1014 10:15:51.109590 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a9c6fefd-814f-4f20-8a30-b76d3b6a43ba-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "a9c6fefd-814f-4f20-8a30-b76d3b6a43ba" (UID: "a9c6fefd-814f-4f20-8a30-b76d3b6a43ba"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 14 10:15:51 crc kubenswrapper[4698]: I1014 10:15:51.111507 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/53ced7bd-2ae6-4e55-8ea2-395d6aebf185-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "53ced7bd-2ae6-4e55-8ea2-395d6aebf185" (UID: "53ced7bd-2ae6-4e55-8ea2-395d6aebf185"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 14 10:15:51 crc kubenswrapper[4698]: I1014 10:15:51.114187 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-backup-0"
Oct 14 10:15:51 crc kubenswrapper[4698]: I1014 10:15:51.116213 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/53ced7bd-2ae6-4e55-8ea2-395d6aebf185-kube-api-access-s6p5b" (OuterVolumeSpecName: "kube-api-access-s6p5b") pod "53ced7bd-2ae6-4e55-8ea2-395d6aebf185" (UID: "53ced7bd-2ae6-4e55-8ea2-395d6aebf185"). InnerVolumeSpecName "kube-api-access-s6p5b". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 14 10:15:51 crc kubenswrapper[4698]: I1014 10:15:51.123425 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstackclient" podStartSLOduration=2.32160272 podStartE2EDuration="21.123404144s" podCreationTimestamp="2025-10-14 10:15:30 +0000 UTC" firstStartedPulling="2025-10-14 10:15:31.400565526 +0000 UTC m=+1113.097864942" lastFinishedPulling="2025-10-14 10:15:50.20236695 +0000 UTC m=+1131.899666366" observedRunningTime="2025-10-14 10:15:51.114545089 +0000 UTC m=+1132.811844525" watchObservedRunningTime="2025-10-14 10:15:51.123404144 +0000 UTC m=+1132.820703560"
Oct 14 10:15:51 crc kubenswrapper[4698]: I1014 10:15:51.125823 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-84b966f6c9-wwxgc"
Oct 14 10:15:51 crc kubenswrapper[4698]: I1014 10:15:51.178142 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a9c6fefd-814f-4f20-8a30-b76d3b6a43ba-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "a9c6fefd-814f-4f20-8a30-b76d3b6a43ba" (UID: "a9c6fefd-814f-4f20-8a30-b76d3b6a43ba"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 14 10:15:51 crc kubenswrapper[4698]: W1014 10:15:51.180592 4698 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/a9c6fefd-814f-4f20-8a30-b76d3b6a43ba/volumes/kubernetes.io~configmap/ovsdbserver-sb
Oct 14 10:15:51 crc kubenswrapper[4698]: I1014 10:15:51.180634 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a9c6fefd-814f-4f20-8a30-b76d3b6a43ba-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "a9c6fefd-814f-4f20-8a30-b76d3b6a43ba" (UID: "a9c6fefd-814f-4f20-8a30-b76d3b6a43ba"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 14 10:15:51 crc kubenswrapper[4698]: I1014 10:15:51.180833 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a9c6fefd-814f-4f20-8a30-b76d3b6a43ba-ovsdbserver-sb\") pod \"a9c6fefd-814f-4f20-8a30-b76d3b6a43ba\" (UID: \"a9c6fefd-814f-4f20-8a30-b76d3b6a43ba\") "
Oct 14 10:15:51 crc kubenswrapper[4698]: I1014 10:15:51.184294 4698 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/53ced7bd-2ae6-4e55-8ea2-395d6aebf185-config-data-custom\") on node \"crc\" DevicePath \"\""
Oct 14 10:15:51 crc kubenswrapper[4698]: I1014 10:15:51.184324 4698 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a9c6fefd-814f-4f20-8a30-b76d3b6a43ba-dns-swift-storage-0\") on node \"crc\" DevicePath \"\""
Oct 14 10:15:51 crc kubenswrapper[4698]: I1014 10:15:51.184338 4698 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a9c6fefd-814f-4f20-8a30-b76d3b6a43ba-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Oct 14 10:15:51 crc kubenswrapper[4698]: I1014 10:15:51.184350 4698 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/53ced7bd-2ae6-4e55-8ea2-395d6aebf185-logs\") on node \"crc\" DevicePath \"\""
Oct 14 10:15:51 crc kubenswrapper[4698]: I1014 10:15:51.184364 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s6p5b\" (UniqueName: \"kubernetes.io/projected/53ced7bd-2ae6-4e55-8ea2-395d6aebf185-kube-api-access-s6p5b\") on node \"crc\" DevicePath \"\""
Oct 14 10:15:51 crc kubenswrapper[4698]: I1014 10:15:51.198059 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a9c6fefd-814f-4f20-8a30-b76d3b6a43ba-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "a9c6fefd-814f-4f20-8a30-b76d3b6a43ba" (UID: "a9c6fefd-814f-4f20-8a30-b76d3b6a43ba"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 14 10:15:51 crc kubenswrapper[4698]: I1014 10:15:51.211740 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a9c6fefd-814f-4f20-8a30-b76d3b6a43ba-config" (OuterVolumeSpecName: "config") pod "a9c6fefd-814f-4f20-8a30-b76d3b6a43ba" (UID: "a9c6fefd-814f-4f20-8a30-b76d3b6a43ba"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 14 10:15:51 crc kubenswrapper[4698]: I1014 10:15:51.217682 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9dea0f58-0975-4dc1-9459-a72b8151027b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9dea0f58-0975-4dc1-9459-a72b8151027b" (UID: "9dea0f58-0975-4dc1-9459-a72b8151027b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 14 10:15:51 crc kubenswrapper[4698]: I1014 10:15:51.254985 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d9ba4f9-bf43-4076-ab3d-9ca34bcbed4e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9d9ba4f9-bf43-4076-ab3d-9ca34bcbed4e" (UID: "9d9ba4f9-bf43-4076-ab3d-9ca34bcbed4e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 14 10:15:51 crc kubenswrapper[4698]: I1014 10:15:51.264016 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d26415c7-42ea-464b-910f-c1b25784fde3-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d26415c7-42ea-464b-910f-c1b25784fde3" (UID: "d26415c7-42ea-464b-910f-c1b25784fde3"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 14 10:15:51 crc kubenswrapper[4698]: I1014 10:15:51.283214 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/53ced7bd-2ae6-4e55-8ea2-395d6aebf185-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "53ced7bd-2ae6-4e55-8ea2-395d6aebf185" (UID: "53ced7bd-2ae6-4e55-8ea2-395d6aebf185"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 14 10:15:51 crc kubenswrapper[4698]: I1014 10:15:51.287109 4698 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a9c6fefd-814f-4f20-8a30-b76d3b6a43ba-dns-svc\") on node \"crc\" DevicePath \"\""
Oct 14 10:15:51 crc kubenswrapper[4698]: I1014 10:15:51.287153 4698 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d26415c7-42ea-464b-910f-c1b25784fde3-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Oct 14 10:15:51 crc kubenswrapper[4698]: I1014 10:15:51.287168 4698 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a9c6fefd-814f-4f20-8a30-b76d3b6a43ba-config\") on node \"crc\" DevicePath \"\""
Oct 14 10:15:51 crc kubenswrapper[4698]: I1014 10:15:51.287177 4698 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/53ced7bd-2ae6-4e55-8ea2-395d6aebf185-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Oct 14 10:15:51 crc kubenswrapper[4698]: I1014 10:15:51.287185 4698 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9d9ba4f9-bf43-4076-ab3d-9ca34bcbed4e-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Oct 14 10:15:51 crc kubenswrapper[4698]: I1014 10:15:51.287194 4698 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9dea0f58-0975-4dc1-9459-a72b8151027b-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Oct 14 10:15:51 crc kubenswrapper[4698]: I1014 10:15:51.288484 4698 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-84b966f6c9-wwxgc" podUID="a9c6fefd-814f-4f20-8a30-b76d3b6a43ba" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.161:5353: i/o timeout"
Oct 14 10:15:51 crc kubenswrapper[4698]: I1014 10:15:51.303710 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/53ced7bd-2ae6-4e55-8ea2-395d6aebf185-config-data" (OuterVolumeSpecName: "config-data") pod "53ced7bd-2ae6-4e55-8ea2-395d6aebf185" (UID: "53ced7bd-2ae6-4e55-8ea2-395d6aebf185"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 14 10:15:51 crc kubenswrapper[4698]: I1014 10:15:51.310325 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/84a7547f-d165-4381-a7f3-8b050ee39fbf-config-data" (OuterVolumeSpecName: "config-data") pod "84a7547f-d165-4381-a7f3-8b050ee39fbf" (UID: "84a7547f-d165-4381-a7f3-8b050ee39fbf"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 14 10:15:51 crc kubenswrapper[4698]: I1014 10:15:51.338019 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9dea0f58-0975-4dc1-9459-a72b8151027b-config-data" (OuterVolumeSpecName: "config-data") pod "9dea0f58-0975-4dc1-9459-a72b8151027b" (UID: "9dea0f58-0975-4dc1-9459-a72b8151027b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 14 10:15:51 crc kubenswrapper[4698]: I1014 10:15:51.386396 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d26415c7-42ea-464b-910f-c1b25784fde3-config-data" (OuterVolumeSpecName: "config-data") pod "d26415c7-42ea-464b-910f-c1b25784fde3" (UID: "d26415c7-42ea-464b-910f-c1b25784fde3"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 14 10:15:51 crc kubenswrapper[4698]: I1014 10:15:51.389421 4698 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d26415c7-42ea-464b-910f-c1b25784fde3-config-data\") on node \"crc\" DevicePath \"\""
Oct 14 10:15:51 crc kubenswrapper[4698]: I1014 10:15:51.389455 4698 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/84a7547f-d165-4381-a7f3-8b050ee39fbf-config-data\") on node \"crc\" DevicePath \"\""
Oct 14 10:15:51 crc kubenswrapper[4698]: I1014 10:15:51.389468 4698 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9dea0f58-0975-4dc1-9459-a72b8151027b-config-data\") on node \"crc\" DevicePath \"\""
Oct 14 10:15:51 crc kubenswrapper[4698]: I1014 10:15:51.389477 4698 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/53ced7bd-2ae6-4e55-8ea2-395d6aebf185-config-data\") on node \"crc\" DevicePath \"\""
Oct 14 10:15:51 crc kubenswrapper[4698]: I1014 10:15:51.398388 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-df4467494-hnvp2"
Oct 14 10:15:51 crc kubenswrapper[4698]: I1014 10:15:51.444961 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d9ba4f9-bf43-4076-ab3d-9ca34bcbed4e-config-data" (OuterVolumeSpecName: "config-data") pod "9d9ba4f9-bf43-4076-ab3d-9ca34bcbed4e" (UID: "9d9ba4f9-bf43-4076-ab3d-9ca34bcbed4e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 14 10:15:51 crc kubenswrapper[4698]: I1014 10:15:51.473216 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6cc5fb5d9d-rbb5x" event={"ID":"53ced7bd-2ae6-4e55-8ea2-395d6aebf185","Type":"ContainerDied","Data":"ff44068c979e004de88a9cfe9626f6babdcd033d033f521bc691bd7321f6fa5b"}
Oct 14 10:15:51 crc kubenswrapper[4698]: I1014 10:15:51.473261 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-58759987c5-vr6vx"]
Oct 14 10:15:51 crc kubenswrapper[4698]: I1014 10:15:51.473282 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-volume-volume1-0" event={"ID":"d26415c7-42ea-464b-910f-c1b25784fde3","Type":"ContainerDied","Data":"ad403fbe26ee7b1bd03e68d72a3379c94d31e0ccf050049e7a86ce38a41c55da"}
Oct 14 10:15:51 crc kubenswrapper[4698]: I1014 10:15:51.473298 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"9dea0f58-0975-4dc1-9459-a72b8151027b","Type":"ContainerDied","Data":"476b9f790d47d03be8b3aba5a67d6293750822c644e745713ac61a5656f4de47"}
Oct 14 10:15:51 crc kubenswrapper[4698]: I1014 10:15:51.473310 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9d9ba4f9-bf43-4076-ab3d-9ca34bcbed4e","Type":"ContainerDied","Data":"7eed8b9c87eb272ad1caec3b1e6dfa86d1ca855d22850ed9a873c86851ce357b"}
Oct 14 10:15:51 crc kubenswrapper[4698]: I1014 10:15:51.473322 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"9b9ad197-b532-42c9-8ac2-c822cca96a52","Type":"ContainerStarted","Data":"388f425d7f65b26ac080f489c55f37db8b8af06bf7ef9680c9f0002c7ff2a935"}
Oct 14 10:15:51 crc kubenswrapper[4698]: I1014 10:15:51.473332 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-df4467494-hnvp2" event={"ID":"a64a5c90-1d3c-47da-9d3c-1ca749c00bad","Type":"ContainerDied","Data":"0a7b5a4abf79bb93718884b24053d9f3fae69e57b70eb7e96042bc93a19c4c77"}
Oct 14 10:15:51 crc kubenswrapper[4698]: I1014 10:15:51.473343 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7b567dfd5d-nvwrp" event={"ID":"ee140165-8d8d-426c-b33f-5803bb0a7ad1","Type":"ContainerDied","Data":"901781c92196a0d233259efb4996688607e9e48dc8deda94312e848ee2b7b961"}
Oct 14 10:15:51 crc kubenswrapper[4698]: I1014 10:15:51.473356 4698 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="901781c92196a0d233259efb4996688607e9e48dc8deda94312e848ee2b7b961"
Oct 14 10:15:51 crc kubenswrapper[4698]: I1014 10:15:51.473365 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-backup-0" event={"ID":"84a7547f-d165-4381-a7f3-8b050ee39fbf","Type":"ContainerDied","Data":"392bf8bde2dc15f64c730ffdfa79b77546512f2191b07f8a74ed03b168b51716"}
Oct 14 10:15:51 crc kubenswrapper[4698]: I1014 10:15:51.473376 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"339c0475-be6d-48a1-af88-8c3f55eaf50a","Type":"ContainerDied","Data":"f82096f2afa7fa447fd3eea1f2424dcd37f28912333e488151a58dd0b0343cc3"}
Oct 14 10:15:51 crc kubenswrapper[4698]: I1014 10:15:51.473387 4698 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f82096f2afa7fa447fd3eea1f2424dcd37f28912333e488151a58dd0b0343cc3"
Oct 14 10:15:51 crc kubenswrapper[4698]: I1014 10:15:51.473975 4698 scope.go:117] "RemoveContainer" containerID="98a62b4299ff785f4988afd479c08919e2e745c55cdb25d37f55f7fd1d73e0fc"
Oct 14 10:15:51 crc kubenswrapper[4698]: I1014 10:15:51.491621 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a64a5c90-1d3c-47da-9d3c-1ca749c00bad-combined-ca-bundle\") pod \"a64a5c90-1d3c-47da-9d3c-1ca749c00bad\" (UID: \"a64a5c90-1d3c-47da-9d3c-1ca749c00bad\") "
Oct 14 10:15:51 crc kubenswrapper[4698]: I1014 10:15:51.491700 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2gnz\" (UniqueName: \"kubernetes.io/projected/a64a5c90-1d3c-47da-9d3c-1ca749c00bad-kube-api-access-x2gnz\") pod \"a64a5c90-1d3c-47da-9d3c-1ca749c00bad\" (UID: \"a64a5c90-1d3c-47da-9d3c-1ca749c00bad\") "
Oct 14 10:15:51 crc kubenswrapper[4698]: I1014 10:15:51.491821 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/a64a5c90-1d3c-47da-9d3c-1ca749c00bad-config\") pod \"a64a5c90-1d3c-47da-9d3c-1ca749c00bad\" (UID: \"a64a5c90-1d3c-47da-9d3c-1ca749c00bad\") "
Oct 14 10:15:51 crc kubenswrapper[4698]: I1014 10:15:51.491913 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/a64a5c90-1d3c-47da-9d3c-1ca749c00bad-httpd-config\") pod \"a64a5c90-1d3c-47da-9d3c-1ca749c00bad\" (UID: \"a64a5c90-1d3c-47da-9d3c-1ca749c00bad\") "
Oct 14 10:15:51 crc kubenswrapper[4698]: I1014 10:15:51.491978 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/a64a5c90-1d3c-47da-9d3c-1ca749c00bad-ovndb-tls-certs\") pod \"a64a5c90-1d3c-47da-9d3c-1ca749c00bad\" (UID: \"a64a5c90-1d3c-47da-9d3c-1ca749c00bad\") "
Oct 14 10:15:51 crc kubenswrapper[4698]: I1014 10:15:51.492411 4698 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9d9ba4f9-bf43-4076-ab3d-9ca34bcbed4e-config-data\") on node \"crc\" DevicePath \"\""
Oct 14 10:15:51 crc kubenswrapper[4698]: I1014 10:15:51.512650 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a64a5c90-1d3c-47da-9d3c-1ca749c00bad-kube-api-access-x2gnz" (OuterVolumeSpecName: "kube-api-access-x2gnz") pod "a64a5c90-1d3c-47da-9d3c-1ca749c00bad" (UID: "a64a5c90-1d3c-47da-9d3c-1ca749c00bad"). InnerVolumeSpecName "kube-api-access-x2gnz". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 14 10:15:51 crc kubenswrapper[4698]: I1014 10:15:51.516821 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a64a5c90-1d3c-47da-9d3c-1ca749c00bad-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "a64a5c90-1d3c-47da-9d3c-1ca749c00bad" (UID: "a64a5c90-1d3c-47da-9d3c-1ca749c00bad"). InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 14 10:15:51 crc kubenswrapper[4698]: I1014 10:15:51.522298 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-7b567dfd5d-nvwrp"
Oct 14 10:15:51 crc kubenswrapper[4698]: I1014 10:15:51.532251 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0"
Oct 14 10:15:51 crc kubenswrapper[4698]: I1014 10:15:51.585975 4698 scope.go:117] "RemoveContainer" containerID="38d577e303af2cd9edc188a6f03459306b89b4bbb79048439a6dce9fa059d51d"
Oct 14 10:15:51 crc kubenswrapper[4698]: I1014 10:15:51.593538 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/339c0475-be6d-48a1-af88-8c3f55eaf50a-httpd-run\") pod \"339c0475-be6d-48a1-af88-8c3f55eaf50a\" (UID: \"339c0475-be6d-48a1-af88-8c3f55eaf50a\") "
Oct 14 10:15:51 crc kubenswrapper[4698]: I1014 10:15:51.593586 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/ee140165-8d8d-426c-b33f-5803bb0a7ad1-horizon-secret-key\") pod \"ee140165-8d8d-426c-b33f-5803bb0a7ad1\" (UID: \"ee140165-8d8d-426c-b33f-5803bb0a7ad1\") "
Oct 14 10:15:51 crc kubenswrapper[4698]: I1014 10:15:51.593660 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ee140165-8d8d-426c-b33f-5803bb0a7ad1-logs\") pod \"ee140165-8d8d-426c-b33f-5803bb0a7ad1\" (UID: \"ee140165-8d8d-426c-b33f-5803bb0a7ad1\") "
Oct 14 10:15:51 crc kubenswrapper[4698]: I1014 10:15:51.593681 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/339c0475-be6d-48a1-af88-8c3f55eaf50a-scripts\") pod \"339c0475-be6d-48a1-af88-8c3f55eaf50a\" (UID: \"339c0475-be6d-48a1-af88-8c3f55eaf50a\") "
Oct 14 10:15:51 crc kubenswrapper[4698]: I1014 10:15:51.593714 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/339c0475-be6d-48a1-af88-8c3f55eaf50a-public-tls-certs\") pod \"339c0475-be6d-48a1-af88-8c3f55eaf50a\" (UID: \"339c0475-be6d-48a1-af88-8c3f55eaf50a\") "
Oct 14 10:15:51 crc 
kubenswrapper[4698]: I1014 10:15:51.593741 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/ee140165-8d8d-426c-b33f-5803bb0a7ad1-horizon-tls-certs\") pod \"ee140165-8d8d-426c-b33f-5803bb0a7ad1\" (UID: \"ee140165-8d8d-426c-b33f-5803bb0a7ad1\") " Oct 14 10:15:51 crc kubenswrapper[4698]: I1014 10:15:51.593834 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/339c0475-be6d-48a1-af88-8c3f55eaf50a-logs\") pod \"339c0475-be6d-48a1-af88-8c3f55eaf50a\" (UID: \"339c0475-be6d-48a1-af88-8c3f55eaf50a\") " Oct 14 10:15:51 crc kubenswrapper[4698]: I1014 10:15:51.593860 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/339c0475-be6d-48a1-af88-8c3f55eaf50a-ceph\") pod \"339c0475-be6d-48a1-af88-8c3f55eaf50a\" (UID: \"339c0475-be6d-48a1-af88-8c3f55eaf50a\") " Oct 14 10:15:51 crc kubenswrapper[4698]: I1014 10:15:51.593936 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/339c0475-be6d-48a1-af88-8c3f55eaf50a-config-data\") pod \"339c0475-be6d-48a1-af88-8c3f55eaf50a\" (UID: \"339c0475-be6d-48a1-af88-8c3f55eaf50a\") " Oct 14 10:15:51 crc kubenswrapper[4698]: I1014 10:15:51.593957 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/339c0475-be6d-48a1-af88-8c3f55eaf50a-combined-ca-bundle\") pod \"339c0475-be6d-48a1-af88-8c3f55eaf50a\" (UID: \"339c0475-be6d-48a1-af88-8c3f55eaf50a\") " Oct 14 10:15:51 crc kubenswrapper[4698]: I1014 10:15:51.594056 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ee140165-8d8d-426c-b33f-5803bb0a7ad1-scripts\") pod \"ee140165-8d8d-426c-b33f-5803bb0a7ad1\" 
(UID: \"ee140165-8d8d-426c-b33f-5803bb0a7ad1\") " Oct 14 10:15:51 crc kubenswrapper[4698]: I1014 10:15:51.594073 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/ee140165-8d8d-426c-b33f-5803bb0a7ad1-config-data\") pod \"ee140165-8d8d-426c-b33f-5803bb0a7ad1\" (UID: \"ee140165-8d8d-426c-b33f-5803bb0a7ad1\") " Oct 14 10:15:51 crc kubenswrapper[4698]: I1014 10:15:51.594176 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t7f6s\" (UniqueName: \"kubernetes.io/projected/ee140165-8d8d-426c-b33f-5803bb0a7ad1-kube-api-access-t7f6s\") pod \"ee140165-8d8d-426c-b33f-5803bb0a7ad1\" (UID: \"ee140165-8d8d-426c-b33f-5803bb0a7ad1\") " Oct 14 10:15:51 crc kubenswrapper[4698]: I1014 10:15:51.594232 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z2m55\" (UniqueName: \"kubernetes.io/projected/339c0475-be6d-48a1-af88-8c3f55eaf50a-kube-api-access-z2m55\") pod \"339c0475-be6d-48a1-af88-8c3f55eaf50a\" (UID: \"339c0475-be6d-48a1-af88-8c3f55eaf50a\") " Oct 14 10:15:51 crc kubenswrapper[4698]: I1014 10:15:51.594274 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ee140165-8d8d-426c-b33f-5803bb0a7ad1-combined-ca-bundle\") pod \"ee140165-8d8d-426c-b33f-5803bb0a7ad1\" (UID: \"ee140165-8d8d-426c-b33f-5803bb0a7ad1\") " Oct 14 10:15:51 crc kubenswrapper[4698]: I1014 10:15:51.594318 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"339c0475-be6d-48a1-af88-8c3f55eaf50a\" (UID: \"339c0475-be6d-48a1-af88-8c3f55eaf50a\") " Oct 14 10:15:51 crc kubenswrapper[4698]: I1014 10:15:51.595256 4698 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: 
\"kubernetes.io/secret/a64a5c90-1d3c-47da-9d3c-1ca749c00bad-httpd-config\") on node \"crc\" DevicePath \"\"" Oct 14 10:15:51 crc kubenswrapper[4698]: I1014 10:15:51.595272 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x2gnz\" (UniqueName: \"kubernetes.io/projected/a64a5c90-1d3c-47da-9d3c-1ca749c00bad-kube-api-access-x2gnz\") on node \"crc\" DevicePath \"\"" Oct 14 10:15:51 crc kubenswrapper[4698]: I1014 10:15:51.605076 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/339c0475-be6d-48a1-af88-8c3f55eaf50a-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "339c0475-be6d-48a1-af88-8c3f55eaf50a" (UID: "339c0475-be6d-48a1-af88-8c3f55eaf50a"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 10:15:51 crc kubenswrapper[4698]: I1014 10:15:51.622543 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage09-crc" (OuterVolumeSpecName: "glance") pod "339c0475-be6d-48a1-af88-8c3f55eaf50a" (UID: "339c0475-be6d-48a1-af88-8c3f55eaf50a"). InnerVolumeSpecName "local-storage09-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Oct 14 10:15:51 crc kubenswrapper[4698]: I1014 10:15:51.629645 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/339c0475-be6d-48a1-af88-8c3f55eaf50a-logs" (OuterVolumeSpecName: "logs") pod "339c0475-be6d-48a1-af88-8c3f55eaf50a" (UID: "339c0475-be6d-48a1-af88-8c3f55eaf50a"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 10:15:51 crc kubenswrapper[4698]: I1014 10:15:51.630033 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ee140165-8d8d-426c-b33f-5803bb0a7ad1-logs" (OuterVolumeSpecName: "logs") pod "ee140165-8d8d-426c-b33f-5803bb0a7ad1" (UID: "ee140165-8d8d-426c-b33f-5803bb0a7ad1"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 10:15:51 crc kubenswrapper[4698]: I1014 10:15:51.650932 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/339c0475-be6d-48a1-af88-8c3f55eaf50a-scripts" (OuterVolumeSpecName: "scripts") pod "339c0475-be6d-48a1-af88-8c3f55eaf50a" (UID: "339c0475-be6d-48a1-af88-8c3f55eaf50a"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 10:15:51 crc kubenswrapper[4698]: I1014 10:15:51.651727 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-6cc5fb5d9d-rbb5x"] Oct 14 10:15:51 crc kubenswrapper[4698]: I1014 10:15:51.658110 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ee140165-8d8d-426c-b33f-5803bb0a7ad1-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "ee140165-8d8d-426c-b33f-5803bb0a7ad1" (UID: "ee140165-8d8d-426c-b33f-5803bb0a7ad1"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 10:15:51 crc kubenswrapper[4698]: I1014 10:15:51.687787 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/339c0475-be6d-48a1-af88-8c3f55eaf50a-ceph" (OuterVolumeSpecName: "ceph") pod "339c0475-be6d-48a1-af88-8c3f55eaf50a" (UID: "339c0475-be6d-48a1-af88-8c3f55eaf50a"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 10:15:51 crc kubenswrapper[4698]: I1014 10:15:51.687865 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ee140165-8d8d-426c-b33f-5803bb0a7ad1-kube-api-access-t7f6s" (OuterVolumeSpecName: "kube-api-access-t7f6s") pod "ee140165-8d8d-426c-b33f-5803bb0a7ad1" (UID: "ee140165-8d8d-426c-b33f-5803bb0a7ad1"). InnerVolumeSpecName "kube-api-access-t7f6s". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 10:15:51 crc kubenswrapper[4698]: I1014 10:15:51.687884 4698 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-api-6cc5fb5d9d-rbb5x"] Oct 14 10:15:51 crc kubenswrapper[4698]: I1014 10:15:51.704532 4698 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ee140165-8d8d-426c-b33f-5803bb0a7ad1-logs\") on node \"crc\" DevicePath \"\"" Oct 14 10:15:51 crc kubenswrapper[4698]: I1014 10:15:51.704570 4698 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/339c0475-be6d-48a1-af88-8c3f55eaf50a-scripts\") on node \"crc\" DevicePath \"\"" Oct 14 10:15:51 crc kubenswrapper[4698]: I1014 10:15:51.704578 4698 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/339c0475-be6d-48a1-af88-8c3f55eaf50a-logs\") on node \"crc\" DevicePath \"\"" Oct 14 10:15:51 crc kubenswrapper[4698]: I1014 10:15:51.704587 4698 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/339c0475-be6d-48a1-af88-8c3f55eaf50a-ceph\") on node \"crc\" DevicePath \"\"" Oct 14 10:15:51 crc kubenswrapper[4698]: I1014 10:15:51.704595 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t7f6s\" (UniqueName: \"kubernetes.io/projected/ee140165-8d8d-426c-b33f-5803bb0a7ad1-kube-api-access-t7f6s\") on node \"crc\" DevicePath \"\"" Oct 14 10:15:51 crc kubenswrapper[4698]: I1014 10:15:51.704618 4698 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") on node \"crc\" " Oct 14 10:15:51 crc kubenswrapper[4698]: I1014 10:15:51.704629 4698 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/339c0475-be6d-48a1-af88-8c3f55eaf50a-httpd-run\") on node \"crc\" 
DevicePath \"\"" Oct 14 10:15:51 crc kubenswrapper[4698]: I1014 10:15:51.704638 4698 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/ee140165-8d8d-426c-b33f-5803bb0a7ad1-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Oct 14 10:15:51 crc kubenswrapper[4698]: I1014 10:15:51.705289 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Oct 14 10:15:51 crc kubenswrapper[4698]: I1014 10:15:51.721916 4698 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-scheduler-0"] Oct 14 10:15:51 crc kubenswrapper[4698]: I1014 10:15:51.737167 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/339c0475-be6d-48a1-af88-8c3f55eaf50a-kube-api-access-z2m55" (OuterVolumeSpecName: "kube-api-access-z2m55") pod "339c0475-be6d-48a1-af88-8c3f55eaf50a" (UID: "339c0475-be6d-48a1-af88-8c3f55eaf50a"). InnerVolumeSpecName "kube-api-access-z2m55". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 10:15:51 crc kubenswrapper[4698]: I1014 10:15:51.761408 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Oct 14 10:15:51 crc kubenswrapper[4698]: E1014 10:15:51.761890 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ee140165-8d8d-426c-b33f-5803bb0a7ad1" containerName="horizon" Oct 14 10:15:51 crc kubenswrapper[4698]: I1014 10:15:51.761907 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="ee140165-8d8d-426c-b33f-5803bb0a7ad1" containerName="horizon" Oct 14 10:15:51 crc kubenswrapper[4698]: E1014 10:15:51.761925 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="53ced7bd-2ae6-4e55-8ea2-395d6aebf185" containerName="barbican-api-log" Oct 14 10:15:51 crc kubenswrapper[4698]: I1014 10:15:51.761952 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="53ced7bd-2ae6-4e55-8ea2-395d6aebf185" containerName="barbican-api-log" Oct 14 10:15:51 crc kubenswrapper[4698]: E1014 10:15:51.761996 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d26415c7-42ea-464b-910f-c1b25784fde3" containerName="cinder-volume" Oct 14 10:15:51 crc kubenswrapper[4698]: I1014 10:15:51.762003 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="d26415c7-42ea-464b-910f-c1b25784fde3" containerName="cinder-volume" Oct 14 10:15:51 crc kubenswrapper[4698]: E1014 10:15:51.762014 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d26415c7-42ea-464b-910f-c1b25784fde3" containerName="probe" Oct 14 10:15:51 crc kubenswrapper[4698]: I1014 10:15:51.762021 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="d26415c7-42ea-464b-910f-c1b25784fde3" containerName="probe" Oct 14 10:15:51 crc kubenswrapper[4698]: E1014 10:15:51.762032 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="339c0475-be6d-48a1-af88-8c3f55eaf50a" containerName="glance-httpd" Oct 14 10:15:51 crc kubenswrapper[4698]: 
I1014 10:15:51.762056 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="339c0475-be6d-48a1-af88-8c3f55eaf50a" containerName="glance-httpd" Oct 14 10:15:51 crc kubenswrapper[4698]: E1014 10:15:51.762068 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="53ced7bd-2ae6-4e55-8ea2-395d6aebf185" containerName="barbican-api" Oct 14 10:15:51 crc kubenswrapper[4698]: I1014 10:15:51.762074 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="53ced7bd-2ae6-4e55-8ea2-395d6aebf185" containerName="barbican-api" Oct 14 10:15:51 crc kubenswrapper[4698]: E1014 10:15:51.762082 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ee140165-8d8d-426c-b33f-5803bb0a7ad1" containerName="horizon-log" Oct 14 10:15:51 crc kubenswrapper[4698]: I1014 10:15:51.762089 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="ee140165-8d8d-426c-b33f-5803bb0a7ad1" containerName="horizon-log" Oct 14 10:15:51 crc kubenswrapper[4698]: E1014 10:15:51.762096 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="84a7547f-d165-4381-a7f3-8b050ee39fbf" containerName="probe" Oct 14 10:15:51 crc kubenswrapper[4698]: I1014 10:15:51.762102 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="84a7547f-d165-4381-a7f3-8b050ee39fbf" containerName="probe" Oct 14 10:15:51 crc kubenswrapper[4698]: E1014 10:15:51.762137 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9d9ba4f9-bf43-4076-ab3d-9ca34bcbed4e" containerName="ceilometer-notification-agent" Oct 14 10:15:51 crc kubenswrapper[4698]: I1014 10:15:51.762144 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="9d9ba4f9-bf43-4076-ab3d-9ca34bcbed4e" containerName="ceilometer-notification-agent" Oct 14 10:15:51 crc kubenswrapper[4698]: E1014 10:15:51.762156 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a64a5c90-1d3c-47da-9d3c-1ca749c00bad" containerName="neutron-api" Oct 14 10:15:51 crc kubenswrapper[4698]: I1014 10:15:51.762163 4698 
state_mem.go:107] "Deleted CPUSet assignment" podUID="a64a5c90-1d3c-47da-9d3c-1ca749c00bad" containerName="neutron-api" Oct 14 10:15:51 crc kubenswrapper[4698]: E1014 10:15:51.762184 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="84a7547f-d165-4381-a7f3-8b050ee39fbf" containerName="cinder-backup" Oct 14 10:15:51 crc kubenswrapper[4698]: I1014 10:15:51.762190 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="84a7547f-d165-4381-a7f3-8b050ee39fbf" containerName="cinder-backup" Oct 14 10:15:51 crc kubenswrapper[4698]: E1014 10:15:51.762203 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a64a5c90-1d3c-47da-9d3c-1ca749c00bad" containerName="neutron-httpd" Oct 14 10:15:51 crc kubenswrapper[4698]: I1014 10:15:51.762209 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="a64a5c90-1d3c-47da-9d3c-1ca749c00bad" containerName="neutron-httpd" Oct 14 10:15:51 crc kubenswrapper[4698]: E1014 10:15:51.762221 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="339c0475-be6d-48a1-af88-8c3f55eaf50a" containerName="glance-log" Oct 14 10:15:51 crc kubenswrapper[4698]: I1014 10:15:51.762227 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="339c0475-be6d-48a1-af88-8c3f55eaf50a" containerName="glance-log" Oct 14 10:15:51 crc kubenswrapper[4698]: E1014 10:15:51.762237 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9d9ba4f9-bf43-4076-ab3d-9ca34bcbed4e" containerName="ceilometer-central-agent" Oct 14 10:15:51 crc kubenswrapper[4698]: I1014 10:15:51.762244 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="9d9ba4f9-bf43-4076-ab3d-9ca34bcbed4e" containerName="ceilometer-central-agent" Oct 14 10:15:51 crc kubenswrapper[4698]: E1014 10:15:51.762257 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9dea0f58-0975-4dc1-9459-a72b8151027b" containerName="cinder-scheduler" Oct 14 10:15:51 crc kubenswrapper[4698]: I1014 10:15:51.762265 4698 state_mem.go:107] 
"Deleted CPUSet assignment" podUID="9dea0f58-0975-4dc1-9459-a72b8151027b" containerName="cinder-scheduler" Oct 14 10:15:51 crc kubenswrapper[4698]: E1014 10:15:51.762278 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9dea0f58-0975-4dc1-9459-a72b8151027b" containerName="probe" Oct 14 10:15:51 crc kubenswrapper[4698]: I1014 10:15:51.762285 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="9dea0f58-0975-4dc1-9459-a72b8151027b" containerName="probe" Oct 14 10:15:51 crc kubenswrapper[4698]: E1014 10:15:51.762292 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9d9ba4f9-bf43-4076-ab3d-9ca34bcbed4e" containerName="proxy-httpd" Oct 14 10:15:51 crc kubenswrapper[4698]: I1014 10:15:51.762300 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="9d9ba4f9-bf43-4076-ab3d-9ca34bcbed4e" containerName="proxy-httpd" Oct 14 10:15:51 crc kubenswrapper[4698]: E1014 10:15:51.762310 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a9c6fefd-814f-4f20-8a30-b76d3b6a43ba" containerName="dnsmasq-dns" Oct 14 10:15:51 crc kubenswrapper[4698]: I1014 10:15:51.762318 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="a9c6fefd-814f-4f20-8a30-b76d3b6a43ba" containerName="dnsmasq-dns" Oct 14 10:15:51 crc kubenswrapper[4698]: E1014 10:15:51.762331 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a9c6fefd-814f-4f20-8a30-b76d3b6a43ba" containerName="init" Oct 14 10:15:51 crc kubenswrapper[4698]: I1014 10:15:51.762338 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="a9c6fefd-814f-4f20-8a30-b76d3b6a43ba" containerName="init" Oct 14 10:15:51 crc kubenswrapper[4698]: I1014 10:15:51.762550 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="53ced7bd-2ae6-4e55-8ea2-395d6aebf185" containerName="barbican-api-log" Oct 14 10:15:51 crc kubenswrapper[4698]: I1014 10:15:51.762561 4698 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="9d9ba4f9-bf43-4076-ab3d-9ca34bcbed4e" containerName="ceilometer-notification-agent" Oct 14 10:15:51 crc kubenswrapper[4698]: I1014 10:15:51.762609 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="339c0475-be6d-48a1-af88-8c3f55eaf50a" containerName="glance-log" Oct 14 10:15:51 crc kubenswrapper[4698]: I1014 10:15:51.762622 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="a64a5c90-1d3c-47da-9d3c-1ca749c00bad" containerName="neutron-api" Oct 14 10:15:51 crc kubenswrapper[4698]: I1014 10:15:51.762634 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="9dea0f58-0975-4dc1-9459-a72b8151027b" containerName="cinder-scheduler" Oct 14 10:15:51 crc kubenswrapper[4698]: I1014 10:15:51.762645 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="d26415c7-42ea-464b-910f-c1b25784fde3" containerName="probe" Oct 14 10:15:51 crc kubenswrapper[4698]: I1014 10:15:51.762661 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="a64a5c90-1d3c-47da-9d3c-1ca749c00bad" containerName="neutron-httpd" Oct 14 10:15:51 crc kubenswrapper[4698]: I1014 10:15:51.762671 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="9dea0f58-0975-4dc1-9459-a72b8151027b" containerName="probe" Oct 14 10:15:51 crc kubenswrapper[4698]: I1014 10:15:51.762684 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="9d9ba4f9-bf43-4076-ab3d-9ca34bcbed4e" containerName="ceilometer-central-agent" Oct 14 10:15:51 crc kubenswrapper[4698]: I1014 10:15:51.762695 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="d26415c7-42ea-464b-910f-c1b25784fde3" containerName="cinder-volume" Oct 14 10:15:51 crc kubenswrapper[4698]: I1014 10:15:51.762711 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="53ced7bd-2ae6-4e55-8ea2-395d6aebf185" containerName="barbican-api" Oct 14 10:15:51 crc kubenswrapper[4698]: I1014 10:15:51.762724 4698 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="339c0475-be6d-48a1-af88-8c3f55eaf50a" containerName="glance-httpd" Oct 14 10:15:51 crc kubenswrapper[4698]: I1014 10:15:51.762737 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="ee140165-8d8d-426c-b33f-5803bb0a7ad1" containerName="horizon-log" Oct 14 10:15:51 crc kubenswrapper[4698]: I1014 10:15:51.762751 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="ee140165-8d8d-426c-b33f-5803bb0a7ad1" containerName="horizon" Oct 14 10:15:51 crc kubenswrapper[4698]: I1014 10:15:51.762782 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="84a7547f-d165-4381-a7f3-8b050ee39fbf" containerName="cinder-backup" Oct 14 10:15:51 crc kubenswrapper[4698]: I1014 10:15:51.762794 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="a9c6fefd-814f-4f20-8a30-b76d3b6a43ba" containerName="dnsmasq-dns" Oct 14 10:15:51 crc kubenswrapper[4698]: I1014 10:15:51.762803 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="9d9ba4f9-bf43-4076-ab3d-9ca34bcbed4e" containerName="proxy-httpd" Oct 14 10:15:51 crc kubenswrapper[4698]: I1014 10:15:51.762808 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="84a7547f-d165-4381-a7f3-8b050ee39fbf" containerName="probe" Oct 14 10:15:51 crc kubenswrapper[4698]: I1014 10:15:51.763939 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Oct 14 10:15:51 crc kubenswrapper[4698]: I1014 10:15:51.764924 4698 scope.go:117] "RemoveContainer" containerID="c8ed817c301154ab39e341849c0d3d8e7db3d34360d2b0b52e5a24a735e66584" Oct 14 10:15:51 crc kubenswrapper[4698]: I1014 10:15:51.779473 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Oct 14 10:15:51 crc kubenswrapper[4698]: I1014 10:15:51.795238 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-backup-0"] Oct 14 10:15:51 crc kubenswrapper[4698]: I1014 10:15:51.806937 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/ef857e49-6a95-4e1c-a170-a9b7cf5b095f-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"ef857e49-6a95-4e1c-a170-a9b7cf5b095f\") " pod="openstack/cinder-scheduler-0" Oct 14 10:15:51 crc kubenswrapper[4698]: I1014 10:15:51.808023 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h2mtf\" (UniqueName: \"kubernetes.io/projected/ef857e49-6a95-4e1c-a170-a9b7cf5b095f-kube-api-access-h2mtf\") pod \"cinder-scheduler-0\" (UID: \"ef857e49-6a95-4e1c-a170-a9b7cf5b095f\") " pod="openstack/cinder-scheduler-0" Oct 14 10:15:51 crc kubenswrapper[4698]: I1014 10:15:51.808141 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ef857e49-6a95-4e1c-a170-a9b7cf5b095f-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"ef857e49-6a95-4e1c-a170-a9b7cf5b095f\") " pod="openstack/cinder-scheduler-0" Oct 14 10:15:51 crc kubenswrapper[4698]: I1014 10:15:51.808219 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/ef857e49-6a95-4e1c-a170-a9b7cf5b095f-scripts\") pod \"cinder-scheduler-0\" (UID: \"ef857e49-6a95-4e1c-a170-a9b7cf5b095f\") " pod="openstack/cinder-scheduler-0" Oct 14 10:15:51 crc kubenswrapper[4698]: I1014 10:15:51.808234 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ef857e49-6a95-4e1c-a170-a9b7cf5b095f-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"ef857e49-6a95-4e1c-a170-a9b7cf5b095f\") " pod="openstack/cinder-scheduler-0" Oct 14 10:15:51 crc kubenswrapper[4698]: I1014 10:15:51.808326 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ef857e49-6a95-4e1c-a170-a9b7cf5b095f-config-data\") pod \"cinder-scheduler-0\" (UID: \"ef857e49-6a95-4e1c-a170-a9b7cf5b095f\") " pod="openstack/cinder-scheduler-0" Oct 14 10:15:51 crc kubenswrapper[4698]: I1014 10:15:51.808534 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z2m55\" (UniqueName: \"kubernetes.io/projected/339c0475-be6d-48a1-af88-8c3f55eaf50a-kube-api-access-z2m55\") on node \"crc\" DevicePath \"\"" Oct 14 10:15:51 crc kubenswrapper[4698]: I1014 10:15:51.836826 4698 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-backup-0"] Oct 14 10:15:51 crc kubenswrapper[4698]: I1014 10:15:51.841996 4698 scope.go:117] "RemoveContainer" containerID="200ca86c8378d80fc5d18f46220f35a1589a1a5597b5a8d22d1f5d17be13a897" Oct 14 10:15:51 crc kubenswrapper[4698]: I1014 10:15:51.878599 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Oct 14 10:15:51 crc kubenswrapper[4698]: I1014 10:15:51.890682 4698 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage09-crc" (UniqueName: "kubernetes.io/local-volume/local-storage09-crc") on node "crc" Oct 14 10:15:51 crc kubenswrapper[4698]: I1014 
10:15:51.914038 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/ef857e49-6a95-4e1c-a170-a9b7cf5b095f-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"ef857e49-6a95-4e1c-a170-a9b7cf5b095f\") " pod="openstack/cinder-scheduler-0"
Oct 14 10:15:51 crc kubenswrapper[4698]: I1014 10:15:51.914129 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h2mtf\" (UniqueName: \"kubernetes.io/projected/ef857e49-6a95-4e1c-a170-a9b7cf5b095f-kube-api-access-h2mtf\") pod \"cinder-scheduler-0\" (UID: \"ef857e49-6a95-4e1c-a170-a9b7cf5b095f\") " pod="openstack/cinder-scheduler-0"
Oct 14 10:15:51 crc kubenswrapper[4698]: I1014 10:15:51.914421 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ef857e49-6a95-4e1c-a170-a9b7cf5b095f-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"ef857e49-6a95-4e1c-a170-a9b7cf5b095f\") " pod="openstack/cinder-scheduler-0"
Oct 14 10:15:51 crc kubenswrapper[4698]: I1014 10:15:51.914463 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ef857e49-6a95-4e1c-a170-a9b7cf5b095f-scripts\") pod \"cinder-scheduler-0\" (UID: \"ef857e49-6a95-4e1c-a170-a9b7cf5b095f\") " pod="openstack/cinder-scheduler-0"
Oct 14 10:15:51 crc kubenswrapper[4698]: I1014 10:15:51.914488 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ef857e49-6a95-4e1c-a170-a9b7cf5b095f-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"ef857e49-6a95-4e1c-a170-a9b7cf5b095f\") " pod="openstack/cinder-scheduler-0"
Oct 14 10:15:51 crc kubenswrapper[4698]: I1014 10:15:51.914537 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ef857e49-6a95-4e1c-a170-a9b7cf5b095f-config-data\") pod \"cinder-scheduler-0\" (UID: \"ef857e49-6a95-4e1c-a170-a9b7cf5b095f\") " pod="openstack/cinder-scheduler-0"
Oct 14 10:15:51 crc kubenswrapper[4698]: I1014 10:15:51.914667 4698 reconciler_common.go:293] "Volume detached for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") on node \"crc\" DevicePath \"\""
Oct 14 10:15:51 crc kubenswrapper[4698]: I1014 10:15:51.920989 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/ef857e49-6a95-4e1c-a170-a9b7cf5b095f-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"ef857e49-6a95-4e1c-a170-a9b7cf5b095f\") " pod="openstack/cinder-scheduler-0"
Oct 14 10:15:51 crc kubenswrapper[4698]: I1014 10:15:51.923007 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ef857e49-6a95-4e1c-a170-a9b7cf5b095f-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"ef857e49-6a95-4e1c-a170-a9b7cf5b095f\") " pod="openstack/cinder-scheduler-0"
Oct 14 10:15:51 crc kubenswrapper[4698]: I1014 10:15:51.926110 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-volume-volume1-0"]
Oct 14 10:15:51 crc kubenswrapper[4698]: I1014 10:15:51.927395 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ef857e49-6a95-4e1c-a170-a9b7cf5b095f-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"ef857e49-6a95-4e1c-a170-a9b7cf5b095f\") " pod="openstack/cinder-scheduler-0"
Oct 14 10:15:51 crc kubenswrapper[4698]: I1014 10:15:51.929650 4698 scope.go:117] "RemoveContainer" containerID="d339aa902ab22c31c1921b5bff6d477591b762bae03bf9901cabae11bd29ee57"
Oct 14 10:15:51 crc kubenswrapper[4698]: I1014 10:15:51.935284 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ef857e49-6a95-4e1c-a170-a9b7cf5b095f-scripts\") pod \"cinder-scheduler-0\" (UID: \"ef857e49-6a95-4e1c-a170-a9b7cf5b095f\") " pod="openstack/cinder-scheduler-0"
Oct 14 10:15:51 crc kubenswrapper[4698]: I1014 10:15:51.935745 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ef857e49-6a95-4e1c-a170-a9b7cf5b095f-config-data\") pod \"cinder-scheduler-0\" (UID: \"ef857e49-6a95-4e1c-a170-a9b7cf5b095f\") " pod="openstack/cinder-scheduler-0"
Oct 14 10:15:51 crc kubenswrapper[4698]: I1014 10:15:51.937992 4698 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-volume-volume1-0"]
Oct 14 10:15:51 crc kubenswrapper[4698]: I1014 10:15:51.942452 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a64a5c90-1d3c-47da-9d3c-1ca749c00bad-config" (OuterVolumeSpecName: "config") pod "a64a5c90-1d3c-47da-9d3c-1ca749c00bad" (UID: "a64a5c90-1d3c-47da-9d3c-1ca749c00bad"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 14 10:15:51 crc kubenswrapper[4698]: I1014 10:15:51.945948 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-backup-0"]
Oct 14 10:15:51 crc kubenswrapper[4698]: I1014 10:15:51.946890 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h2mtf\" (UniqueName: \"kubernetes.io/projected/ef857e49-6a95-4e1c-a170-a9b7cf5b095f-kube-api-access-h2mtf\") pod \"cinder-scheduler-0\" (UID: \"ef857e49-6a95-4e1c-a170-a9b7cf5b095f\") " pod="openstack/cinder-scheduler-0"
Oct 14 10:15:51 crc kubenswrapper[4698]: I1014 10:15:51.947592 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-backup-0"
Oct 14 10:15:51 crc kubenswrapper[4698]: I1014 10:15:51.951807 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-backup-config-data"
Oct 14 10:15:51 crc kubenswrapper[4698]: I1014 10:15:51.963262 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-84b966f6c9-wwxgc"]
Oct 14 10:15:51 crc kubenswrapper[4698]: I1014 10:15:51.974686 4698 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-84b966f6c9-wwxgc"]
Oct 14 10:15:51 crc kubenswrapper[4698]: I1014 10:15:51.979971 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a64a5c90-1d3c-47da-9d3c-1ca749c00bad-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a64a5c90-1d3c-47da-9d3c-1ca749c00bad" (UID: "a64a5c90-1d3c-47da-9d3c-1ca749c00bad"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 14 10:15:51 crc kubenswrapper[4698]: I1014 10:15:51.984473 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-backup-0"]
Oct 14 10:15:51 crc kubenswrapper[4698]: I1014 10:15:51.993225 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-volume-volume1-0"]
Oct 14 10:15:52 crc kubenswrapper[4698]: I1014 10:15:52.019364 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ee140165-8d8d-426c-b33f-5803bb0a7ad1-config-data" (OuterVolumeSpecName: "config-data") pod "ee140165-8d8d-426c-b33f-5803bb0a7ad1" (UID: "ee140165-8d8d-426c-b33f-5803bb0a7ad1"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 14 10:15:52 crc kubenswrapper[4698]: I1014 10:15:52.031804 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a03e3bf1-857d-4f91-ad0e-254605774e3c-lib-modules\") pod \"cinder-backup-0\" (UID: \"a03e3bf1-857d-4f91-ad0e-254605774e3c\") " pod="openstack/cinder-backup-0"
Oct 14 10:15:52 crc kubenswrapper[4698]: I1014 10:15:52.031890 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/a03e3bf1-857d-4f91-ad0e-254605774e3c-var-lib-cinder\") pod \"cinder-backup-0\" (UID: \"a03e3bf1-857d-4f91-ad0e-254605774e3c\") " pod="openstack/cinder-backup-0"
Oct 14 10:15:52 crc kubenswrapper[4698]: I1014 10:15:52.031938 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/a03e3bf1-857d-4f91-ad0e-254605774e3c-etc-machine-id\") pod \"cinder-backup-0\" (UID: \"a03e3bf1-857d-4f91-ad0e-254605774e3c\") " pod="openstack/cinder-backup-0"
Oct 14 10:15:52 crc kubenswrapper[4698]: I1014 10:15:52.031987 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a03e3bf1-857d-4f91-ad0e-254605774e3c-config-data\") pod \"cinder-backup-0\" (UID: \"a03e3bf1-857d-4f91-ad0e-254605774e3c\") " pod="openstack/cinder-backup-0"
Oct 14 10:15:52 crc kubenswrapper[4698]: I1014 10:15:52.032021 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/a03e3bf1-857d-4f91-ad0e-254605774e3c-dev\") pod \"cinder-backup-0\" (UID: \"a03e3bf1-857d-4f91-ad0e-254605774e3c\") " pod="openstack/cinder-backup-0"
Oct 14 10:15:52 crc kubenswrapper[4698]: I1014 10:15:52.032066 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a03e3bf1-857d-4f91-ad0e-254605774e3c-scripts\") pod \"cinder-backup-0\" (UID: \"a03e3bf1-857d-4f91-ad0e-254605774e3c\") " pod="openstack/cinder-backup-0"
Oct 14 10:15:52 crc kubenswrapper[4698]: I1014 10:15:52.032168 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/a03e3bf1-857d-4f91-ad0e-254605774e3c-sys\") pod \"cinder-backup-0\" (UID: \"a03e3bf1-857d-4f91-ad0e-254605774e3c\") " pod="openstack/cinder-backup-0"
Oct 14 10:15:52 crc kubenswrapper[4698]: I1014 10:15:52.032221 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/a03e3bf1-857d-4f91-ad0e-254605774e3c-var-locks-cinder\") pod \"cinder-backup-0\" (UID: \"a03e3bf1-857d-4f91-ad0e-254605774e3c\") " pod="openstack/cinder-backup-0"
Oct 14 10:15:52 crc kubenswrapper[4698]: I1014 10:15:52.032303 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/a03e3bf1-857d-4f91-ad0e-254605774e3c-run\") pod \"cinder-backup-0\" (UID: \"a03e3bf1-857d-4f91-ad0e-254605774e3c\") " pod="openstack/cinder-backup-0"
Oct 14 10:15:52 crc kubenswrapper[4698]: I1014 10:15:52.032345 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-94cs2\" (UniqueName: \"kubernetes.io/projected/a03e3bf1-857d-4f91-ad0e-254605774e3c-kube-api-access-94cs2\") pod \"cinder-backup-0\" (UID: \"a03e3bf1-857d-4f91-ad0e-254605774e3c\") " pod="openstack/cinder-backup-0"
Oct 14 10:15:52 crc kubenswrapper[4698]: I1014 10:15:52.032419 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/a03e3bf1-857d-4f91-ad0e-254605774e3c-ceph\") pod \"cinder-backup-0\" (UID: \"a03e3bf1-857d-4f91-ad0e-254605774e3c\") " pod="openstack/cinder-backup-0"
Oct 14 10:15:52 crc kubenswrapper[4698]: I1014 10:15:52.032492 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/a03e3bf1-857d-4f91-ad0e-254605774e3c-etc-nvme\") pod \"cinder-backup-0\" (UID: \"a03e3bf1-857d-4f91-ad0e-254605774e3c\") " pod="openstack/cinder-backup-0"
Oct 14 10:15:52 crc kubenswrapper[4698]: I1014 10:15:52.032533 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a03e3bf1-857d-4f91-ad0e-254605774e3c-combined-ca-bundle\") pod \"cinder-backup-0\" (UID: \"a03e3bf1-857d-4f91-ad0e-254605774e3c\") " pod="openstack/cinder-backup-0"
Oct 14 10:15:52 crc kubenswrapper[4698]: I1014 10:15:52.032643 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/a03e3bf1-857d-4f91-ad0e-254605774e3c-etc-iscsi\") pod \"cinder-backup-0\" (UID: \"a03e3bf1-857d-4f91-ad0e-254605774e3c\") " pod="openstack/cinder-backup-0"
Oct 14 10:15:52 crc kubenswrapper[4698]: I1014 10:15:52.033103 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a03e3bf1-857d-4f91-ad0e-254605774e3c-config-data-custom\") pod \"cinder-backup-0\" (UID: \"a03e3bf1-857d-4f91-ad0e-254605774e3c\") " pod="openstack/cinder-backup-0"
Oct 14 10:15:52 crc kubenswrapper[4698]: I1014 10:15:52.033226 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/a03e3bf1-857d-4f91-ad0e-254605774e3c-var-locks-brick\") pod \"cinder-backup-0\" (UID: \"a03e3bf1-857d-4f91-ad0e-254605774e3c\") " pod="openstack/cinder-backup-0"
Oct 14 10:15:52 crc kubenswrapper[4698]: I1014 10:15:52.033418 4698 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/ee140165-8d8d-426c-b33f-5803bb0a7ad1-config-data\") on node \"crc\" DevicePath \"\""
Oct 14 10:15:52 crc kubenswrapper[4698]: I1014 10:15:52.033445 4698 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a64a5c90-1d3c-47da-9d3c-1ca749c00bad-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Oct 14 10:15:52 crc kubenswrapper[4698]: I1014 10:15:52.033464 4698 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/a64a5c90-1d3c-47da-9d3c-1ca749c00bad-config\") on node \"crc\" DevicePath \"\""
Oct 14 10:15:52 crc kubenswrapper[4698]: I1014 10:15:52.051250 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-volume-volume1-0"
Oct 14 10:15:52 crc kubenswrapper[4698]: I1014 10:15:52.054087 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-volume-volume1-config-data"
Oct 14 10:15:52 crc kubenswrapper[4698]: I1014 10:15:52.058064 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ee140165-8d8d-426c-b33f-5803bb0a7ad1-scripts" (OuterVolumeSpecName: "scripts") pod "ee140165-8d8d-426c-b33f-5803bb0a7ad1" (UID: "ee140165-8d8d-426c-b33f-5803bb0a7ad1"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 14 10:15:52 crc kubenswrapper[4698]: I1014 10:15:52.065120 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-volume-volume1-0"]
Oct 14 10:15:52 crc kubenswrapper[4698]: I1014 10:15:52.088498 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ee140165-8d8d-426c-b33f-5803bb0a7ad1-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ee140165-8d8d-426c-b33f-5803bb0a7ad1" (UID: "ee140165-8d8d-426c-b33f-5803bb0a7ad1"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 14 10:15:52 crc kubenswrapper[4698]: I1014 10:15:52.095333 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/339c0475-be6d-48a1-af88-8c3f55eaf50a-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "339c0475-be6d-48a1-af88-8c3f55eaf50a" (UID: "339c0475-be6d-48a1-af88-8c3f55eaf50a"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 14 10:15:52 crc kubenswrapper[4698]: I1014 10:15:52.099798 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0"
Oct 14 10:15:52 crc kubenswrapper[4698]: I1014 10:15:52.118415 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Oct 14 10:15:52 crc kubenswrapper[4698]: I1014 10:15:52.140020 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/a03e3bf1-857d-4f91-ad0e-254605774e3c-var-locks-brick\") pod \"cinder-backup-0\" (UID: \"a03e3bf1-857d-4f91-ad0e-254605774e3c\") " pod="openstack/cinder-backup-0"
Oct 14 10:15:52 crc kubenswrapper[4698]: I1014 10:15:52.141662 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/07b37a90-cc29-48f1-9da0-d2b0a9fc6d85-config-data-custom\") pod \"cinder-volume-volume1-0\" (UID: \"07b37a90-cc29-48f1-9da0-d2b0a9fc6d85\") " pod="openstack/cinder-volume-volume1-0"
Oct 14 10:15:52 crc kubenswrapper[4698]: I1014 10:15:52.141722 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/a03e3bf1-857d-4f91-ad0e-254605774e3c-var-locks-brick\") pod \"cinder-backup-0\" (UID: \"a03e3bf1-857d-4f91-ad0e-254605774e3c\") " pod="openstack/cinder-backup-0"
Oct 14 10:15:52 crc kubenswrapper[4698]: I1014 10:15:52.141933 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/339c0475-be6d-48a1-af88-8c3f55eaf50a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "339c0475-be6d-48a1-af88-8c3f55eaf50a" (UID: "339c0475-be6d-48a1-af88-8c3f55eaf50a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 14 10:15:52 crc kubenswrapper[4698]: I1014 10:15:52.143067 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/07b37a90-cc29-48f1-9da0-d2b0a9fc6d85-sys\") pod \"cinder-volume-volume1-0\" (UID: \"07b37a90-cc29-48f1-9da0-d2b0a9fc6d85\") " pod="openstack/cinder-volume-volume1-0"
Oct 14 10:15:52 crc kubenswrapper[4698]: I1014 10:15:52.143145 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/07b37a90-cc29-48f1-9da0-d2b0a9fc6d85-run\") pod \"cinder-volume-volume1-0\" (UID: \"07b37a90-cc29-48f1-9da0-d2b0a9fc6d85\") " pod="openstack/cinder-volume-volume1-0"
Oct 14 10:15:52 crc kubenswrapper[4698]: I1014 10:15:52.143215 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a03e3bf1-857d-4f91-ad0e-254605774e3c-lib-modules\") pod \"cinder-backup-0\" (UID: \"a03e3bf1-857d-4f91-ad0e-254605774e3c\") " pod="openstack/cinder-backup-0"
Oct 14 10:15:52 crc kubenswrapper[4698]: I1014 10:15:52.143250 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/a03e3bf1-857d-4f91-ad0e-254605774e3c-var-lib-cinder\") pod \"cinder-backup-0\" (UID: \"a03e3bf1-857d-4f91-ad0e-254605774e3c\") " pod="openstack/cinder-backup-0"
Oct 14 10:15:52 crc kubenswrapper[4698]: I1014 10:15:52.143313 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/a03e3bf1-857d-4f91-ad0e-254605774e3c-etc-machine-id\") pod \"cinder-backup-0\" (UID: \"a03e3bf1-857d-4f91-ad0e-254605774e3c\") " pod="openstack/cinder-backup-0"
Oct 14 10:15:52 crc kubenswrapper[4698]: I1014 10:15:52.143379 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a03e3bf1-857d-4f91-ad0e-254605774e3c-config-data\") pod \"cinder-backup-0\" (UID: \"a03e3bf1-857d-4f91-ad0e-254605774e3c\") " pod="openstack/cinder-backup-0"
Oct 14 10:15:52 crc kubenswrapper[4698]: I1014 10:15:52.143438 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/a03e3bf1-857d-4f91-ad0e-254605774e3c-dev\") pod \"cinder-backup-0\" (UID: \"a03e3bf1-857d-4f91-ad0e-254605774e3c\") " pod="openstack/cinder-backup-0"
Oct 14 10:15:52 crc kubenswrapper[4698]: I1014 10:15:52.143471 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a03e3bf1-857d-4f91-ad0e-254605774e3c-scripts\") pod \"cinder-backup-0\" (UID: \"a03e3bf1-857d-4f91-ad0e-254605774e3c\") " pod="openstack/cinder-backup-0"
Oct 14 10:15:52 crc kubenswrapper[4698]: I1014 10:15:52.143511 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/07b37a90-cc29-48f1-9da0-d2b0a9fc6d85-dev\") pod \"cinder-volume-volume1-0\" (UID: \"07b37a90-cc29-48f1-9da0-d2b0a9fc6d85\") " pod="openstack/cinder-volume-volume1-0"
Oct 14 10:15:52 crc kubenswrapper[4698]: I1014 10:15:52.143610 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/a03e3bf1-857d-4f91-ad0e-254605774e3c-sys\") pod \"cinder-backup-0\" (UID: \"a03e3bf1-857d-4f91-ad0e-254605774e3c\") " pod="openstack/cinder-backup-0"
Oct 14 10:15:52 crc kubenswrapper[4698]: I1014 10:15:52.143608 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/a03e3bf1-857d-4f91-ad0e-254605774e3c-etc-machine-id\") pod \"cinder-backup-0\" (UID: \"a03e3bf1-857d-4f91-ad0e-254605774e3c\") " pod="openstack/cinder-backup-0"
Oct 14 10:15:52 crc kubenswrapper[4698]: I1014 10:15:52.143638 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/07b37a90-cc29-48f1-9da0-d2b0a9fc6d85-etc-nvme\") pod \"cinder-volume-volume1-0\" (UID: \"07b37a90-cc29-48f1-9da0-d2b0a9fc6d85\") " pod="openstack/cinder-volume-volume1-0"
Oct 14 10:15:52 crc kubenswrapper[4698]: I1014 10:15:52.143674 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a03e3bf1-857d-4f91-ad0e-254605774e3c-lib-modules\") pod \"cinder-backup-0\" (UID: \"a03e3bf1-857d-4f91-ad0e-254605774e3c\") " pod="openstack/cinder-backup-0"
Oct 14 10:15:52 crc kubenswrapper[4698]: I1014 10:15:52.143687 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/07b37a90-cc29-48f1-9da0-d2b0a9fc6d85-etc-iscsi\") pod \"cinder-volume-volume1-0\" (UID: \"07b37a90-cc29-48f1-9da0-d2b0a9fc6d85\") " pod="openstack/cinder-volume-volume1-0"
Oct 14 10:15:52 crc kubenswrapper[4698]: I1014 10:15:52.143710 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/07b37a90-cc29-48f1-9da0-d2b0a9fc6d85-scripts\") pod \"cinder-volume-volume1-0\" (UID: \"07b37a90-cc29-48f1-9da0-d2b0a9fc6d85\") " pod="openstack/cinder-volume-volume1-0"
Oct 14 10:15:52 crc kubenswrapper[4698]: I1014 10:15:52.143724 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/a03e3bf1-857d-4f91-ad0e-254605774e3c-var-lib-cinder\") pod \"cinder-backup-0\" (UID: \"a03e3bf1-857d-4f91-ad0e-254605774e3c\") " pod="openstack/cinder-backup-0"
Oct 14 10:15:52 crc kubenswrapper[4698]: I1014 10:15:52.143920 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/a03e3bf1-857d-4f91-ad0e-254605774e3c-sys\") pod \"cinder-backup-0\" (UID: \"a03e3bf1-857d-4f91-ad0e-254605774e3c\") " pod="openstack/cinder-backup-0"
Oct 14 10:15:52 crc kubenswrapper[4698]: I1014 10:15:52.144542 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/a03e3bf1-857d-4f91-ad0e-254605774e3c-dev\") pod \"cinder-backup-0\" (UID: \"a03e3bf1-857d-4f91-ad0e-254605774e3c\") " pod="openstack/cinder-backup-0"
Oct 14 10:15:52 crc kubenswrapper[4698]: I1014 10:15:52.144672 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/a03e3bf1-857d-4f91-ad0e-254605774e3c-var-locks-cinder\") pod \"cinder-backup-0\" (UID: \"a03e3bf1-857d-4f91-ad0e-254605774e3c\") " pod="openstack/cinder-backup-0"
Oct 14 10:15:52 crc kubenswrapper[4698]: I1014 10:15:52.144857 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/a03e3bf1-857d-4f91-ad0e-254605774e3c-var-locks-cinder\") pod \"cinder-backup-0\" (UID: \"a03e3bf1-857d-4f91-ad0e-254605774e3c\") " pod="openstack/cinder-backup-0"
Oct 14 10:15:52 crc kubenswrapper[4698]: I1014 10:15:52.145130 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/339c0475-be6d-48a1-af88-8c3f55eaf50a-config-data" (OuterVolumeSpecName: "config-data") pod "339c0475-be6d-48a1-af88-8c3f55eaf50a" (UID: "339c0475-be6d-48a1-af88-8c3f55eaf50a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 14 10:15:52 crc kubenswrapper[4698]: I1014 10:15:52.145737 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/07b37a90-cc29-48f1-9da0-d2b0a9fc6d85-ceph\") pod \"cinder-volume-volume1-0\" (UID: \"07b37a90-cc29-48f1-9da0-d2b0a9fc6d85\") " pod="openstack/cinder-volume-volume1-0"
Oct 14 10:15:52 crc kubenswrapper[4698]: I1014 10:15:52.145806 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/a03e3bf1-857d-4f91-ad0e-254605774e3c-run\") pod \"cinder-backup-0\" (UID: \"a03e3bf1-857d-4f91-ad0e-254605774e3c\") " pod="openstack/cinder-backup-0"
Oct 14 10:15:52 crc kubenswrapper[4698]: I1014 10:15:52.145834 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-94cs2\" (UniqueName: \"kubernetes.io/projected/a03e3bf1-857d-4f91-ad0e-254605774e3c-kube-api-access-94cs2\") pod \"cinder-backup-0\" (UID: \"a03e3bf1-857d-4f91-ad0e-254605774e3c\") " pod="openstack/cinder-backup-0"
Oct 14 10:15:52 crc kubenswrapper[4698]: I1014 10:15:52.145984 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/a03e3bf1-857d-4f91-ad0e-254605774e3c-ceph\") pod \"cinder-backup-0\" (UID: \"a03e3bf1-857d-4f91-ad0e-254605774e3c\") " pod="openstack/cinder-backup-0"
Oct 14 10:15:52 crc kubenswrapper[4698]: I1014 10:15:52.146001 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/07b37a90-cc29-48f1-9da0-d2b0a9fc6d85-var-locks-brick\") pod \"cinder-volume-volume1-0\" (UID: \"07b37a90-cc29-48f1-9da0-d2b0a9fc6d85\") " pod="openstack/cinder-volume-volume1-0"
Oct 14 10:15:52 crc kubenswrapper[4698]: I1014 10:15:52.146018 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p8zcp\" (UniqueName: \"kubernetes.io/projected/07b37a90-cc29-48f1-9da0-d2b0a9fc6d85-kube-api-access-p8zcp\") pod \"cinder-volume-volume1-0\" (UID: \"07b37a90-cc29-48f1-9da0-d2b0a9fc6d85\") " pod="openstack/cinder-volume-volume1-0"
Oct 14 10:15:52 crc kubenswrapper[4698]: I1014 10:15:52.146074 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/a03e3bf1-857d-4f91-ad0e-254605774e3c-etc-nvme\") pod \"cinder-backup-0\" (UID: \"a03e3bf1-857d-4f91-ad0e-254605774e3c\") " pod="openstack/cinder-backup-0"
Oct 14 10:15:52 crc kubenswrapper[4698]: I1014 10:15:52.146093 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/07b37a90-cc29-48f1-9da0-d2b0a9fc6d85-etc-machine-id\") pod \"cinder-volume-volume1-0\" (UID: \"07b37a90-cc29-48f1-9da0-d2b0a9fc6d85\") " pod="openstack/cinder-volume-volume1-0"
Oct 14 10:15:52 crc kubenswrapper[4698]: I1014 10:15:52.146114 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a03e3bf1-857d-4f91-ad0e-254605774e3c-combined-ca-bundle\") pod \"cinder-backup-0\" (UID: \"a03e3bf1-857d-4f91-ad0e-254605774e3c\") " pod="openstack/cinder-backup-0"
Oct 14 10:15:52 crc kubenswrapper[4698]: I1014 10:15:52.146230 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/a03e3bf1-857d-4f91-ad0e-254605774e3c-etc-iscsi\") pod \"cinder-backup-0\" (UID: \"a03e3bf1-857d-4f91-ad0e-254605774e3c\") " pod="openstack/cinder-backup-0"
Oct 14 10:15:52 crc kubenswrapper[4698]: I1014 10:15:52.146259 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/07b37a90-cc29-48f1-9da0-d2b0a9fc6d85-config-data\") pod \"cinder-volume-volume1-0\" (UID: \"07b37a90-cc29-48f1-9da0-d2b0a9fc6d85\") " pod="openstack/cinder-volume-volume1-0"
Oct 14 10:15:52 crc kubenswrapper[4698]: I1014 10:15:52.146289 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/07b37a90-cc29-48f1-9da0-d2b0a9fc6d85-lib-modules\") pod \"cinder-volume-volume1-0\" (UID: \"07b37a90-cc29-48f1-9da0-d2b0a9fc6d85\") " pod="openstack/cinder-volume-volume1-0"
Oct 14 10:15:52 crc kubenswrapper[4698]: I1014 10:15:52.146438 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/07b37a90-cc29-48f1-9da0-d2b0a9fc6d85-var-locks-cinder\") pod \"cinder-volume-volume1-0\" (UID: \"07b37a90-cc29-48f1-9da0-d2b0a9fc6d85\") " pod="openstack/cinder-volume-volume1-0"
Oct 14 10:15:52 crc kubenswrapper[4698]: I1014 10:15:52.146516 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a03e3bf1-857d-4f91-ad0e-254605774e3c-config-data-custom\") pod \"cinder-backup-0\" (UID: \"a03e3bf1-857d-4f91-ad0e-254605774e3c\") " pod="openstack/cinder-backup-0"
Oct 14 10:15:52 crc kubenswrapper[4698]: I1014 10:15:52.146554 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/07b37a90-cc29-48f1-9da0-d2b0a9fc6d85-var-lib-cinder\") pod \"cinder-volume-volume1-0\" (UID: \"07b37a90-cc29-48f1-9da0-d2b0a9fc6d85\") " pod="openstack/cinder-volume-volume1-0"
Oct 14 10:15:52 crc kubenswrapper[4698]: I1014 10:15:52.146976 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run\" (UniqueName: \"kubernetes.io/host-path/a03e3bf1-857d-4f91-ad0e-254605774e3c-run\") pod \"cinder-backup-0\" (UID: \"a03e3bf1-857d-4f91-ad0e-254605774e3c\") " pod="openstack/cinder-backup-0"
Oct 14 10:15:52 crc kubenswrapper[4698]: I1014 10:15:52.147164 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/a03e3bf1-857d-4f91-ad0e-254605774e3c-etc-iscsi\") pod \"cinder-backup-0\" (UID: \"a03e3bf1-857d-4f91-ad0e-254605774e3c\") " pod="openstack/cinder-backup-0"
Oct 14 10:15:52 crc kubenswrapper[4698]: I1014 10:15:52.147775 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/07b37a90-cc29-48f1-9da0-d2b0a9fc6d85-combined-ca-bundle\") pod \"cinder-volume-volume1-0\" (UID: \"07b37a90-cc29-48f1-9da0-d2b0a9fc6d85\") " pod="openstack/cinder-volume-volume1-0"
Oct 14 10:15:52 crc kubenswrapper[4698]: I1014 10:15:52.148479 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a64a5c90-1d3c-47da-9d3c-1ca749c00bad-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "a64a5c90-1d3c-47da-9d3c-1ca749c00bad" (UID: "a64a5c90-1d3c-47da-9d3c-1ca749c00bad"). InnerVolumeSpecName "ovndb-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 14 10:15:52 crc kubenswrapper[4698]: I1014 10:15:52.149238 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/a03e3bf1-857d-4f91-ad0e-254605774e3c-etc-nvme\") pod \"cinder-backup-0\" (UID: \"a03e3bf1-857d-4f91-ad0e-254605774e3c\") " pod="openstack/cinder-backup-0"
Oct 14 10:15:52 crc kubenswrapper[4698]: I1014 10:15:52.149642 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a03e3bf1-857d-4f91-ad0e-254605774e3c-scripts\") pod \"cinder-backup-0\" (UID: \"a03e3bf1-857d-4f91-ad0e-254605774e3c\") " pod="openstack/cinder-backup-0"
Oct 14 10:15:52 crc kubenswrapper[4698]: I1014 10:15:52.150011 4698 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/339c0475-be6d-48a1-af88-8c3f55eaf50a-public-tls-certs\") on node \"crc\" DevicePath \"\""
Oct 14 10:15:52 crc kubenswrapper[4698]: I1014 10:15:52.150055 4698 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/339c0475-be6d-48a1-af88-8c3f55eaf50a-config-data\") on node \"crc\" DevicePath \"\""
Oct 14 10:15:52 crc kubenswrapper[4698]: I1014 10:15:52.150071 4698 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/339c0475-be6d-48a1-af88-8c3f55eaf50a-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Oct 14 10:15:52 crc kubenswrapper[4698]: I1014 10:15:52.150084 4698 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ee140165-8d8d-426c-b33f-5803bb0a7ad1-scripts\") on node \"crc\" DevicePath \"\""
Oct 14 10:15:52 crc kubenswrapper[4698]: I1014 10:15:52.150096 4698 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ee140165-8d8d-426c-b33f-5803bb0a7ad1-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Oct 14 10:15:52 crc kubenswrapper[4698]: I1014 10:15:52.152832 4698 generic.go:334] "Generic (PLEG): container finished" podID="5ac7519d-b7d5-428c-9b04-b507987f26b0" containerID="197a37b83b760b3c1bc4bd3b84cb3209d4404582091523121089ecab6d4a1d16" exitCode=143
Oct 14 10:15:52 crc kubenswrapper[4698]: I1014 10:15:52.152905 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"5ac7519d-b7d5-428c-9b04-b507987f26b0","Type":"ContainerDied","Data":"197a37b83b760b3c1bc4bd3b84cb3209d4404582091523121089ecab6d4a1d16"}
Oct 14 10:15:52 crc kubenswrapper[4698]: I1014 10:15:52.154338 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a03e3bf1-857d-4f91-ad0e-254605774e3c-combined-ca-bundle\") pod \"cinder-backup-0\" (UID: \"a03e3bf1-857d-4f91-ad0e-254605774e3c\") " pod="openstack/cinder-backup-0"
Oct 14 10:15:52 crc kubenswrapper[4698]: I1014 10:15:52.154383 4698 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"]
Oct 14 10:15:52 crc kubenswrapper[4698]: I1014 10:15:52.155105 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"78a024b7-16f4-4177-8b52-0cecbc173247","Type":"ContainerStarted","Data":"66bce914d869ddaa57d26379cf91284e306ec17ce0bef0832ae45b926969b1c6"}
Oct 14 10:15:52 crc kubenswrapper[4698]: I1014 10:15:52.157554 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/a03e3bf1-857d-4f91-ad0e-254605774e3c-ceph\") pod \"cinder-backup-0\" (UID: \"a03e3bf1-857d-4f91-ad0e-254605774e3c\") " pod="openstack/cinder-backup-0"
Oct 14 10:15:52 crc kubenswrapper[4698]: I1014 10:15:52.158347 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-share-share1-0" event={"ID":"e1230245-6b92-4e01-bc07-043a24a9edd3","Type":"ContainerStarted","Data":"2c91f8b61ae8fb6ad29050152b1146da8c33b19aa59b735ae1551b69b4849ec7"}
Oct 14 10:15:52 crc kubenswrapper[4698]: I1014 10:15:52.161244 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a03e3bf1-857d-4f91-ad0e-254605774e3c-config-data-custom\") pod \"cinder-backup-0\" (UID: \"a03e3bf1-857d-4f91-ad0e-254605774e3c\") " pod="openstack/cinder-backup-0"
Oct 14 10:15:52 crc kubenswrapper[4698]: I1014 10:15:52.163972 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a03e3bf1-857d-4f91-ad0e-254605774e3c-config-data\") pod \"cinder-backup-0\" (UID: \"a03e3bf1-857d-4f91-ad0e-254605774e3c\") " pod="openstack/cinder-backup-0"
Oct 14 10:15:52 crc kubenswrapper[4698]: I1014 10:15:52.170697 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ee140165-8d8d-426c-b33f-5803bb0a7ad1-horizon-tls-certs" (OuterVolumeSpecName: "horizon-tls-certs") pod "ee140165-8d8d-426c-b33f-5803bb0a7ad1" (UID: "ee140165-8d8d-426c-b33f-5803bb0a7ad1"). InnerVolumeSpecName "horizon-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 14 10:15:52 crc kubenswrapper[4698]: I1014 10:15:52.171292 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-94cs2\" (UniqueName: \"kubernetes.io/projected/a03e3bf1-857d-4f91-ad0e-254605774e3c-kube-api-access-94cs2\") pod \"cinder-backup-0\" (UID: \"a03e3bf1-857d-4f91-ad0e-254605774e3c\") " pod="openstack/cinder-backup-0"
Oct 14 10:15:52 crc kubenswrapper[4698]: I1014 10:15:52.177414 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"]
Oct 14 10:15:52 crc kubenswrapper[4698]: I1014 10:15:52.180700 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-58759987c5-vr6vx" event={"ID":"3a1278dc-c5df-49ed-8c8e-6284281cf240","Type":"ContainerStarted","Data":"c3952ca21d901b365c3ba251f3aa151ded1c149877e9c0aa0458f527076ca834"}
Oct 14 10:15:52 crc kubenswrapper[4698]: I1014 10:15:52.180741 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-58759987c5-vr6vx" event={"ID":"3a1278dc-c5df-49ed-8c8e-6284281cf240","Type":"ContainerStarted","Data":"ace5baca20de528324f354ea58c71196952b8084ca992af85198adf035dddf4e"}
Oct 14 10:15:52 crc kubenswrapper[4698]: I1014 10:15:52.180819 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Oct 14 10:15:52 crc kubenswrapper[4698]: I1014 10:15:52.181502 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0"
Oct 14 10:15:52 crc kubenswrapper[4698]: I1014 10:15:52.181530 4698 util.go:48] "No ready sandbox for pod can be found.
Need to start a new one" pod="openstack/neutron-df4467494-hnvp2" Oct 14 10:15:52 crc kubenswrapper[4698]: I1014 10:15:52.182453 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Oct 14 10:15:52 crc kubenswrapper[4698]: I1014 10:15:52.183389 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Oct 14 10:15:52 crc kubenswrapper[4698]: I1014 10:15:52.183748 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-7b567dfd5d-nvwrp" Oct 14 10:15:52 crc kubenswrapper[4698]: I1014 10:15:52.223807 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Oct 14 10:15:52 crc kubenswrapper[4698]: I1014 10:15:52.253976 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/07b37a90-cc29-48f1-9da0-d2b0a9fc6d85-var-lib-cinder\") pod \"cinder-volume-volume1-0\" (UID: \"07b37a90-cc29-48f1-9da0-d2b0a9fc6d85\") " pod="openstack/cinder-volume-volume1-0" Oct 14 10:15:52 crc kubenswrapper[4698]: I1014 10:15:52.254025 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/572abc7f-83af-4b84-9704-a63381f34c96-config-data\") pod \"ceilometer-0\" (UID: \"572abc7f-83af-4b84-9704-a63381f34c96\") " pod="openstack/ceilometer-0" Oct 14 10:15:52 crc kubenswrapper[4698]: I1014 10:15:52.254040 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/572abc7f-83af-4b84-9704-a63381f34c96-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"572abc7f-83af-4b84-9704-a63381f34c96\") " pod="openstack/ceilometer-0" Oct 14 10:15:52 crc kubenswrapper[4698]: I1014 10:15:52.254064 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/07b37a90-cc29-48f1-9da0-d2b0a9fc6d85-combined-ca-bundle\") pod \"cinder-volume-volume1-0\" (UID: \"07b37a90-cc29-48f1-9da0-d2b0a9fc6d85\") " pod="openstack/cinder-volume-volume1-0" Oct 14 10:15:52 crc kubenswrapper[4698]: I1014 10:15:52.254098 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/07b37a90-cc29-48f1-9da0-d2b0a9fc6d85-config-data-custom\") pod \"cinder-volume-volume1-0\" (UID: \"07b37a90-cc29-48f1-9da0-d2b0a9fc6d85\") " pod="openstack/cinder-volume-volume1-0" Oct 14 10:15:52 crc kubenswrapper[4698]: I1014 10:15:52.254121 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/07b37a90-cc29-48f1-9da0-d2b0a9fc6d85-sys\") pod \"cinder-volume-volume1-0\" (UID: \"07b37a90-cc29-48f1-9da0-d2b0a9fc6d85\") " pod="openstack/cinder-volume-volume1-0" Oct 14 10:15:52 crc kubenswrapper[4698]: I1014 10:15:52.254141 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/07b37a90-cc29-48f1-9da0-d2b0a9fc6d85-run\") pod \"cinder-volume-volume1-0\" (UID: \"07b37a90-cc29-48f1-9da0-d2b0a9fc6d85\") " pod="openstack/cinder-volume-volume1-0" Oct 14 10:15:52 crc kubenswrapper[4698]: I1014 10:15:52.254173 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/07b37a90-cc29-48f1-9da0-d2b0a9fc6d85-dev\") pod \"cinder-volume-volume1-0\" (UID: \"07b37a90-cc29-48f1-9da0-d2b0a9fc6d85\") " pod="openstack/cinder-volume-volume1-0" Oct 14 10:15:52 crc kubenswrapper[4698]: I1014 10:15:52.254213 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/07b37a90-cc29-48f1-9da0-d2b0a9fc6d85-etc-nvme\") pod \"cinder-volume-volume1-0\" (UID: 
\"07b37a90-cc29-48f1-9da0-d2b0a9fc6d85\") " pod="openstack/cinder-volume-volume1-0" Oct 14 10:15:52 crc kubenswrapper[4698]: I1014 10:15:52.254229 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/07b37a90-cc29-48f1-9da0-d2b0a9fc6d85-etc-iscsi\") pod \"cinder-volume-volume1-0\" (UID: \"07b37a90-cc29-48f1-9da0-d2b0a9fc6d85\") " pod="openstack/cinder-volume-volume1-0" Oct 14 10:15:52 crc kubenswrapper[4698]: I1014 10:15:52.254247 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/07b37a90-cc29-48f1-9da0-d2b0a9fc6d85-scripts\") pod \"cinder-volume-volume1-0\" (UID: \"07b37a90-cc29-48f1-9da0-d2b0a9fc6d85\") " pod="openstack/cinder-volume-volume1-0" Oct 14 10:15:52 crc kubenswrapper[4698]: I1014 10:15:52.254265 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-djmc7\" (UniqueName: \"kubernetes.io/projected/572abc7f-83af-4b84-9704-a63381f34c96-kube-api-access-djmc7\") pod \"ceilometer-0\" (UID: \"572abc7f-83af-4b84-9704-a63381f34c96\") " pod="openstack/ceilometer-0" Oct 14 10:15:52 crc kubenswrapper[4698]: I1014 10:15:52.254306 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/07b37a90-cc29-48f1-9da0-d2b0a9fc6d85-ceph\") pod \"cinder-volume-volume1-0\" (UID: \"07b37a90-cc29-48f1-9da0-d2b0a9fc6d85\") " pod="openstack/cinder-volume-volume1-0" Oct 14 10:15:52 crc kubenswrapper[4698]: I1014 10:15:52.254341 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/572abc7f-83af-4b84-9704-a63381f34c96-run-httpd\") pod \"ceilometer-0\" (UID: \"572abc7f-83af-4b84-9704-a63381f34c96\") " pod="openstack/ceilometer-0" Oct 14 10:15:52 crc kubenswrapper[4698]: I1014 10:15:52.254359 4698 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/07b37a90-cc29-48f1-9da0-d2b0a9fc6d85-var-locks-brick\") pod \"cinder-volume-volume1-0\" (UID: \"07b37a90-cc29-48f1-9da0-d2b0a9fc6d85\") " pod="openstack/cinder-volume-volume1-0" Oct 14 10:15:52 crc kubenswrapper[4698]: I1014 10:15:52.254375 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p8zcp\" (UniqueName: \"kubernetes.io/projected/07b37a90-cc29-48f1-9da0-d2b0a9fc6d85-kube-api-access-p8zcp\") pod \"cinder-volume-volume1-0\" (UID: \"07b37a90-cc29-48f1-9da0-d2b0a9fc6d85\") " pod="openstack/cinder-volume-volume1-0" Oct 14 10:15:52 crc kubenswrapper[4698]: I1014 10:15:52.254405 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/07b37a90-cc29-48f1-9da0-d2b0a9fc6d85-etc-machine-id\") pod \"cinder-volume-volume1-0\" (UID: \"07b37a90-cc29-48f1-9da0-d2b0a9fc6d85\") " pod="openstack/cinder-volume-volume1-0" Oct 14 10:15:52 crc kubenswrapper[4698]: I1014 10:15:52.254477 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/572abc7f-83af-4b84-9704-a63381f34c96-log-httpd\") pod \"ceilometer-0\" (UID: \"572abc7f-83af-4b84-9704-a63381f34c96\") " pod="openstack/ceilometer-0" Oct 14 10:15:52 crc kubenswrapper[4698]: I1014 10:15:52.254512 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/07b37a90-cc29-48f1-9da0-d2b0a9fc6d85-config-data\") pod \"cinder-volume-volume1-0\" (UID: \"07b37a90-cc29-48f1-9da0-d2b0a9fc6d85\") " pod="openstack/cinder-volume-volume1-0" Oct 14 10:15:52 crc kubenswrapper[4698]: I1014 10:15:52.254533 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: 
\"kubernetes.io/host-path/07b37a90-cc29-48f1-9da0-d2b0a9fc6d85-lib-modules\") pod \"cinder-volume-volume1-0\" (UID: \"07b37a90-cc29-48f1-9da0-d2b0a9fc6d85\") " pod="openstack/cinder-volume-volume1-0" Oct 14 10:15:52 crc kubenswrapper[4698]: I1014 10:15:52.254567 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/572abc7f-83af-4b84-9704-a63381f34c96-scripts\") pod \"ceilometer-0\" (UID: \"572abc7f-83af-4b84-9704-a63381f34c96\") " pod="openstack/ceilometer-0" Oct 14 10:15:52 crc kubenswrapper[4698]: I1014 10:15:52.254596 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/572abc7f-83af-4b84-9704-a63381f34c96-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"572abc7f-83af-4b84-9704-a63381f34c96\") " pod="openstack/ceilometer-0" Oct 14 10:15:52 crc kubenswrapper[4698]: I1014 10:15:52.254614 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/07b37a90-cc29-48f1-9da0-d2b0a9fc6d85-var-locks-cinder\") pod \"cinder-volume-volume1-0\" (UID: \"07b37a90-cc29-48f1-9da0-d2b0a9fc6d85\") " pod="openstack/cinder-volume-volume1-0" Oct 14 10:15:52 crc kubenswrapper[4698]: I1014 10:15:52.254690 4698 reconciler_common.go:293] "Volume detached for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/ee140165-8d8d-426c-b33f-5803bb0a7ad1-horizon-tls-certs\") on node \"crc\" DevicePath \"\"" Oct 14 10:15:52 crc kubenswrapper[4698]: I1014 10:15:52.254702 4698 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/a64a5c90-1d3c-47da-9d3c-1ca749c00bad-ovndb-tls-certs\") on node \"crc\" DevicePath \"\"" Oct 14 10:15:52 crc kubenswrapper[4698]: I1014 10:15:52.254796 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/07b37a90-cc29-48f1-9da0-d2b0a9fc6d85-var-locks-cinder\") pod \"cinder-volume-volume1-0\" (UID: \"07b37a90-cc29-48f1-9da0-d2b0a9fc6d85\") " pod="openstack/cinder-volume-volume1-0" Oct 14 10:15:52 crc kubenswrapper[4698]: I1014 10:15:52.254851 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/07b37a90-cc29-48f1-9da0-d2b0a9fc6d85-var-lib-cinder\") pod \"cinder-volume-volume1-0\" (UID: \"07b37a90-cc29-48f1-9da0-d2b0a9fc6d85\") " pod="openstack/cinder-volume-volume1-0" Oct 14 10:15:52 crc kubenswrapper[4698]: I1014 10:15:52.260273 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/07b37a90-cc29-48f1-9da0-d2b0a9fc6d85-lib-modules\") pod \"cinder-volume-volume1-0\" (UID: \"07b37a90-cc29-48f1-9da0-d2b0a9fc6d85\") " pod="openstack/cinder-volume-volume1-0" Oct 14 10:15:52 crc kubenswrapper[4698]: I1014 10:15:52.260430 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/07b37a90-cc29-48f1-9da0-d2b0a9fc6d85-var-locks-brick\") pod \"cinder-volume-volume1-0\" (UID: \"07b37a90-cc29-48f1-9da0-d2b0a9fc6d85\") " pod="openstack/cinder-volume-volume1-0" Oct 14 10:15:52 crc kubenswrapper[4698]: I1014 10:15:52.260974 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/07b37a90-cc29-48f1-9da0-d2b0a9fc6d85-etc-nvme\") pod \"cinder-volume-volume1-0\" (UID: \"07b37a90-cc29-48f1-9da0-d2b0a9fc6d85\") " pod="openstack/cinder-volume-volume1-0" Oct 14 10:15:52 crc kubenswrapper[4698]: I1014 10:15:52.260998 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/07b37a90-cc29-48f1-9da0-d2b0a9fc6d85-sys\") pod \"cinder-volume-volume1-0\" (UID: 
\"07b37a90-cc29-48f1-9da0-d2b0a9fc6d85\") " pod="openstack/cinder-volume-volume1-0" Oct 14 10:15:52 crc kubenswrapper[4698]: I1014 10:15:52.261016 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run\" (UniqueName: \"kubernetes.io/host-path/07b37a90-cc29-48f1-9da0-d2b0a9fc6d85-run\") pod \"cinder-volume-volume1-0\" (UID: \"07b37a90-cc29-48f1-9da0-d2b0a9fc6d85\") " pod="openstack/cinder-volume-volume1-0" Oct 14 10:15:52 crc kubenswrapper[4698]: I1014 10:15:52.261077 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/07b37a90-cc29-48f1-9da0-d2b0a9fc6d85-dev\") pod \"cinder-volume-volume1-0\" (UID: \"07b37a90-cc29-48f1-9da0-d2b0a9fc6d85\") " pod="openstack/cinder-volume-volume1-0" Oct 14 10:15:52 crc kubenswrapper[4698]: I1014 10:15:52.262145 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/07b37a90-cc29-48f1-9da0-d2b0a9fc6d85-etc-iscsi\") pod \"cinder-volume-volume1-0\" (UID: \"07b37a90-cc29-48f1-9da0-d2b0a9fc6d85\") " pod="openstack/cinder-volume-volume1-0" Oct 14 10:15:52 crc kubenswrapper[4698]: I1014 10:15:52.262341 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/07b37a90-cc29-48f1-9da0-d2b0a9fc6d85-etc-machine-id\") pod \"cinder-volume-volume1-0\" (UID: \"07b37a90-cc29-48f1-9da0-d2b0a9fc6d85\") " pod="openstack/cinder-volume-volume1-0" Oct 14 10:15:52 crc kubenswrapper[4698]: I1014 10:15:52.277250 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/07b37a90-cc29-48f1-9da0-d2b0a9fc6d85-scripts\") pod \"cinder-volume-volume1-0\" (UID: \"07b37a90-cc29-48f1-9da0-d2b0a9fc6d85\") " pod="openstack/cinder-volume-volume1-0" Oct 14 10:15:52 crc kubenswrapper[4698]: I1014 10:15:52.278266 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/07b37a90-cc29-48f1-9da0-d2b0a9fc6d85-combined-ca-bundle\") pod \"cinder-volume-volume1-0\" (UID: \"07b37a90-cc29-48f1-9da0-d2b0a9fc6d85\") " pod="openstack/cinder-volume-volume1-0" Oct 14 10:15:52 crc kubenswrapper[4698]: I1014 10:15:52.278701 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/07b37a90-cc29-48f1-9da0-d2b0a9fc6d85-config-data\") pod \"cinder-volume-volume1-0\" (UID: \"07b37a90-cc29-48f1-9da0-d2b0a9fc6d85\") " pod="openstack/cinder-volume-volume1-0" Oct 14 10:15:52 crc kubenswrapper[4698]: I1014 10:15:52.279894 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/07b37a90-cc29-48f1-9da0-d2b0a9fc6d85-config-data-custom\") pod \"cinder-volume-volume1-0\" (UID: \"07b37a90-cc29-48f1-9da0-d2b0a9fc6d85\") " pod="openstack/cinder-volume-volume1-0" Oct 14 10:15:52 crc kubenswrapper[4698]: I1014 10:15:52.300477 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/07b37a90-cc29-48f1-9da0-d2b0a9fc6d85-ceph\") pod \"cinder-volume-volume1-0\" (UID: \"07b37a90-cc29-48f1-9da0-d2b0a9fc6d85\") " pod="openstack/cinder-volume-volume1-0" Oct 14 10:15:52 crc kubenswrapper[4698]: I1014 10:15:52.309740 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p8zcp\" (UniqueName: \"kubernetes.io/projected/07b37a90-cc29-48f1-9da0-d2b0a9fc6d85-kube-api-access-p8zcp\") pod \"cinder-volume-volume1-0\" (UID: \"07b37a90-cc29-48f1-9da0-d2b0a9fc6d85\") " pod="openstack/cinder-volume-volume1-0" Oct 14 10:15:52 crc kubenswrapper[4698]: I1014 10:15:52.330308 4698 scope.go:117] "RemoveContainer" containerID="d092922ecb801a8a918acb82b0cdf4e3360c2dbaa168f36e409c0b47d66f5378" Oct 14 10:15:52 crc kubenswrapper[4698]: I1014 10:15:52.359953 4698 kubelet.go:2437] "SyncLoop 
DELETE" source="api" pods=["openstack/horizon-7b567dfd5d-nvwrp"] Oct 14 10:15:52 crc kubenswrapper[4698]: I1014 10:15:52.363213 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-djmc7\" (UniqueName: \"kubernetes.io/projected/572abc7f-83af-4b84-9704-a63381f34c96-kube-api-access-djmc7\") pod \"ceilometer-0\" (UID: \"572abc7f-83af-4b84-9704-a63381f34c96\") " pod="openstack/ceilometer-0" Oct 14 10:15:52 crc kubenswrapper[4698]: I1014 10:15:52.363371 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/572abc7f-83af-4b84-9704-a63381f34c96-run-httpd\") pod \"ceilometer-0\" (UID: \"572abc7f-83af-4b84-9704-a63381f34c96\") " pod="openstack/ceilometer-0" Oct 14 10:15:52 crc kubenswrapper[4698]: I1014 10:15:52.366723 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/572abc7f-83af-4b84-9704-a63381f34c96-log-httpd\") pod \"ceilometer-0\" (UID: \"572abc7f-83af-4b84-9704-a63381f34c96\") " pod="openstack/ceilometer-0" Oct 14 10:15:52 crc kubenswrapper[4698]: I1014 10:15:52.366876 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/572abc7f-83af-4b84-9704-a63381f34c96-scripts\") pod \"ceilometer-0\" (UID: \"572abc7f-83af-4b84-9704-a63381f34c96\") " pod="openstack/ceilometer-0" Oct 14 10:15:52 crc kubenswrapper[4698]: I1014 10:15:52.366960 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/572abc7f-83af-4b84-9704-a63381f34c96-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"572abc7f-83af-4b84-9704-a63381f34c96\") " pod="openstack/ceilometer-0" Oct 14 10:15:52 crc kubenswrapper[4698]: I1014 10:15:52.367060 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" 
(UniqueName: \"kubernetes.io/secret/572abc7f-83af-4b84-9704-a63381f34c96-config-data\") pod \"ceilometer-0\" (UID: \"572abc7f-83af-4b84-9704-a63381f34c96\") " pod="openstack/ceilometer-0" Oct 14 10:15:52 crc kubenswrapper[4698]: I1014 10:15:52.367102 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/572abc7f-83af-4b84-9704-a63381f34c96-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"572abc7f-83af-4b84-9704-a63381f34c96\") " pod="openstack/ceilometer-0" Oct 14 10:15:52 crc kubenswrapper[4698]: I1014 10:15:52.394273 4698 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-7b567dfd5d-nvwrp"] Oct 14 10:15:52 crc kubenswrapper[4698]: I1014 10:15:52.401481 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-djmc7\" (UniqueName: \"kubernetes.io/projected/572abc7f-83af-4b84-9704-a63381f34c96-kube-api-access-djmc7\") pod \"ceilometer-0\" (UID: \"572abc7f-83af-4b84-9704-a63381f34c96\") " pod="openstack/ceilometer-0" Oct 14 10:15:52 crc kubenswrapper[4698]: I1014 10:15:52.402143 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/572abc7f-83af-4b84-9704-a63381f34c96-log-httpd\") pod \"ceilometer-0\" (UID: \"572abc7f-83af-4b84-9704-a63381f34c96\") " pod="openstack/ceilometer-0" Oct 14 10:15:52 crc kubenswrapper[4698]: I1014 10:15:52.402910 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/572abc7f-83af-4b84-9704-a63381f34c96-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"572abc7f-83af-4b84-9704-a63381f34c96\") " pod="openstack/ceilometer-0" Oct 14 10:15:52 crc kubenswrapper[4698]: I1014 10:15:52.406112 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-backup-0" Oct 14 10:15:52 crc kubenswrapper[4698]: I1014 10:15:52.406477 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/572abc7f-83af-4b84-9704-a63381f34c96-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"572abc7f-83af-4b84-9704-a63381f34c96\") " pod="openstack/ceilometer-0" Oct 14 10:15:52 crc kubenswrapper[4698]: I1014 10:15:52.406540 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/572abc7f-83af-4b84-9704-a63381f34c96-config-data\") pod \"ceilometer-0\" (UID: \"572abc7f-83af-4b84-9704-a63381f34c96\") " pod="openstack/ceilometer-0" Oct 14 10:15:52 crc kubenswrapper[4698]: I1014 10:15:52.407708 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/572abc7f-83af-4b84-9704-a63381f34c96-run-httpd\") pod \"ceilometer-0\" (UID: \"572abc7f-83af-4b84-9704-a63381f34c96\") " pod="openstack/ceilometer-0" Oct 14 10:15:52 crc kubenswrapper[4698]: I1014 10:15:52.408497 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Oct 14 10:15:52 crc kubenswrapper[4698]: I1014 10:15:52.421535 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/572abc7f-83af-4b84-9704-a63381f34c96-scripts\") pod \"ceilometer-0\" (UID: \"572abc7f-83af-4b84-9704-a63381f34c96\") " pod="openstack/ceilometer-0" Oct 14 10:15:52 crc kubenswrapper[4698]: I1014 10:15:52.428150 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-volume-volume1-0" Oct 14 10:15:52 crc kubenswrapper[4698]: I1014 10:15:52.429982 4698 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Oct 14 10:15:52 crc kubenswrapper[4698]: I1014 10:15:52.438447 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Oct 14 10:15:52 crc kubenswrapper[4698]: I1014 10:15:52.441459 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Oct 14 10:15:52 crc kubenswrapper[4698]: I1014 10:15:52.446185 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Oct 14 10:15:52 crc kubenswrapper[4698]: I1014 10:15:52.447478 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Oct 14 10:15:52 crc kubenswrapper[4698]: I1014 10:15:52.453904 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Oct 14 10:15:52 crc kubenswrapper[4698]: I1014 10:15:52.463676 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Oct 14 10:15:52 crc kubenswrapper[4698]: I1014 10:15:52.474086 4698 scope.go:117] "RemoveContainer" containerID="7744bc3ece636640cf932f302f2388ac604aab827b60125c6bf6f774e96d49a8" Oct 14 10:15:52 crc kubenswrapper[4698]: I1014 10:15:52.474264 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-df4467494-hnvp2"] Oct 14 10:15:52 crc kubenswrapper[4698]: I1014 10:15:52.502700 4698 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-df4467494-hnvp2"] Oct 14 10:15:52 crc kubenswrapper[4698]: I1014 10:15:52.571967 4698 scope.go:117] "RemoveContainer" containerID="1afee696e504730f0049af836091b4f76382bfbb64f7e1b9e1e6d76a7978bdc7" Oct 14 10:15:52 crc kubenswrapper[4698]: I1014 10:15:52.577694 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6a46350f-38b2-4150-aef2-6c2a336a22f9-scripts\") pod \"glance-default-external-api-0\" (UID: \"6a46350f-38b2-4150-aef2-6c2a336a22f9\") " pod="openstack/glance-default-external-api-0" Oct 14 10:15:52 crc kubenswrapper[4698]: I1014 10:15:52.577782 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-external-api-0\" (UID: \"6a46350f-38b2-4150-aef2-6c2a336a22f9\") " pod="openstack/glance-default-external-api-0" Oct 14 10:15:52 crc kubenswrapper[4698]: I1014 10:15:52.577807 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6a46350f-38b2-4150-aef2-6c2a336a22f9-config-data\") pod \"glance-default-external-api-0\" (UID: \"6a46350f-38b2-4150-aef2-6c2a336a22f9\") " pod="openstack/glance-default-external-api-0" Oct 14 10:15:52 crc kubenswrapper[4698]: I1014 10:15:52.577827 4698 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6a46350f-38b2-4150-aef2-6c2a336a22f9-logs\") pod \"glance-default-external-api-0\" (UID: \"6a46350f-38b2-4150-aef2-6c2a336a22f9\") " pod="openstack/glance-default-external-api-0" Oct 14 10:15:52 crc kubenswrapper[4698]: I1014 10:15:52.577861 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v7hbh\" (UniqueName: \"kubernetes.io/projected/6a46350f-38b2-4150-aef2-6c2a336a22f9-kube-api-access-v7hbh\") pod \"glance-default-external-api-0\" (UID: \"6a46350f-38b2-4150-aef2-6c2a336a22f9\") " pod="openstack/glance-default-external-api-0" Oct 14 10:15:52 crc kubenswrapper[4698]: I1014 10:15:52.577919 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6a46350f-38b2-4150-aef2-6c2a336a22f9-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"6a46350f-38b2-4150-aef2-6c2a336a22f9\") " pod="openstack/glance-default-external-api-0" Oct 14 10:15:52 crc kubenswrapper[4698]: I1014 10:15:52.578070 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/6a46350f-38b2-4150-aef2-6c2a336a22f9-ceph\") pod \"glance-default-external-api-0\" (UID: \"6a46350f-38b2-4150-aef2-6c2a336a22f9\") " pod="openstack/glance-default-external-api-0" Oct 14 10:15:52 crc kubenswrapper[4698]: I1014 10:15:52.578103 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/6a46350f-38b2-4150-aef2-6c2a336a22f9-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"6a46350f-38b2-4150-aef2-6c2a336a22f9\") " pod="openstack/glance-default-external-api-0" Oct 14 10:15:52 crc kubenswrapper[4698]: I1014 
10:15:52.578160 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6a46350f-38b2-4150-aef2-6c2a336a22f9-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"6a46350f-38b2-4150-aef2-6c2a336a22f9\") " pod="openstack/glance-default-external-api-0" Oct 14 10:15:52 crc kubenswrapper[4698]: I1014 10:15:52.680217 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/6a46350f-38b2-4150-aef2-6c2a336a22f9-ceph\") pod \"glance-default-external-api-0\" (UID: \"6a46350f-38b2-4150-aef2-6c2a336a22f9\") " pod="openstack/glance-default-external-api-0" Oct 14 10:15:52 crc kubenswrapper[4698]: I1014 10:15:52.680284 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/6a46350f-38b2-4150-aef2-6c2a336a22f9-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"6a46350f-38b2-4150-aef2-6c2a336a22f9\") " pod="openstack/glance-default-external-api-0" Oct 14 10:15:52 crc kubenswrapper[4698]: I1014 10:15:52.680322 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6a46350f-38b2-4150-aef2-6c2a336a22f9-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"6a46350f-38b2-4150-aef2-6c2a336a22f9\") " pod="openstack/glance-default-external-api-0" Oct 14 10:15:52 crc kubenswrapper[4698]: I1014 10:15:52.680349 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6a46350f-38b2-4150-aef2-6c2a336a22f9-scripts\") pod \"glance-default-external-api-0\" (UID: \"6a46350f-38b2-4150-aef2-6c2a336a22f9\") " pod="openstack/glance-default-external-api-0" Oct 14 10:15:52 crc kubenswrapper[4698]: I1014 10:15:52.680384 4698 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-external-api-0\" (UID: \"6a46350f-38b2-4150-aef2-6c2a336a22f9\") " pod="openstack/glance-default-external-api-0" Oct 14 10:15:52 crc kubenswrapper[4698]: I1014 10:15:52.680401 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6a46350f-38b2-4150-aef2-6c2a336a22f9-config-data\") pod \"glance-default-external-api-0\" (UID: \"6a46350f-38b2-4150-aef2-6c2a336a22f9\") " pod="openstack/glance-default-external-api-0" Oct 14 10:15:52 crc kubenswrapper[4698]: I1014 10:15:52.680419 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6a46350f-38b2-4150-aef2-6c2a336a22f9-logs\") pod \"glance-default-external-api-0\" (UID: \"6a46350f-38b2-4150-aef2-6c2a336a22f9\") " pod="openstack/glance-default-external-api-0" Oct 14 10:15:52 crc kubenswrapper[4698]: I1014 10:15:52.680443 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v7hbh\" (UniqueName: \"kubernetes.io/projected/6a46350f-38b2-4150-aef2-6c2a336a22f9-kube-api-access-v7hbh\") pod \"glance-default-external-api-0\" (UID: \"6a46350f-38b2-4150-aef2-6c2a336a22f9\") " pod="openstack/glance-default-external-api-0" Oct 14 10:15:52 crc kubenswrapper[4698]: I1014 10:15:52.680480 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6a46350f-38b2-4150-aef2-6c2a336a22f9-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"6a46350f-38b2-4150-aef2-6c2a336a22f9\") " pod="openstack/glance-default-external-api-0" Oct 14 10:15:52 crc kubenswrapper[4698]: I1014 10:15:52.682245 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: 
\"kubernetes.io/empty-dir/6a46350f-38b2-4150-aef2-6c2a336a22f9-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"6a46350f-38b2-4150-aef2-6c2a336a22f9\") " pod="openstack/glance-default-external-api-0" Oct 14 10:15:52 crc kubenswrapper[4698]: I1014 10:15:52.682686 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6a46350f-38b2-4150-aef2-6c2a336a22f9-logs\") pod \"glance-default-external-api-0\" (UID: \"6a46350f-38b2-4150-aef2-6c2a336a22f9\") " pod="openstack/glance-default-external-api-0" Oct 14 10:15:52 crc kubenswrapper[4698]: I1014 10:15:52.683583 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Oct 14 10:15:52 crc kubenswrapper[4698]: I1014 10:15:52.683622 4698 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-external-api-0\" (UID: \"6a46350f-38b2-4150-aef2-6c2a336a22f9\") device mount path \"/mnt/openstack/pv09\"" pod="openstack/glance-default-external-api-0" Oct 14 10:15:52 crc kubenswrapper[4698]: I1014 10:15:52.685838 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6a46350f-38b2-4150-aef2-6c2a336a22f9-scripts\") pod \"glance-default-external-api-0\" (UID: \"6a46350f-38b2-4150-aef2-6c2a336a22f9\") " pod="openstack/glance-default-external-api-0" Oct 14 10:15:52 crc kubenswrapper[4698]: I1014 10:15:52.686944 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6a46350f-38b2-4150-aef2-6c2a336a22f9-config-data\") pod \"glance-default-external-api-0\" (UID: \"6a46350f-38b2-4150-aef2-6c2a336a22f9\") " pod="openstack/glance-default-external-api-0" Oct 14 10:15:52 crc kubenswrapper[4698]: I1014 10:15:52.687414 4698 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/6a46350f-38b2-4150-aef2-6c2a336a22f9-ceph\") pod \"glance-default-external-api-0\" (UID: \"6a46350f-38b2-4150-aef2-6c2a336a22f9\") " pod="openstack/glance-default-external-api-0" Oct 14 10:15:52 crc kubenswrapper[4698]: I1014 10:15:52.690951 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6a46350f-38b2-4150-aef2-6c2a336a22f9-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"6a46350f-38b2-4150-aef2-6c2a336a22f9\") " pod="openstack/glance-default-external-api-0" Oct 14 10:15:52 crc kubenswrapper[4698]: I1014 10:15:52.692962 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6a46350f-38b2-4150-aef2-6c2a336a22f9-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"6a46350f-38b2-4150-aef2-6c2a336a22f9\") " pod="openstack/glance-default-external-api-0" Oct 14 10:15:52 crc kubenswrapper[4698]: I1014 10:15:52.722846 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v7hbh\" (UniqueName: \"kubernetes.io/projected/6a46350f-38b2-4150-aef2-6c2a336a22f9-kube-api-access-v7hbh\") pod \"glance-default-external-api-0\" (UID: \"6a46350f-38b2-4150-aef2-6c2a336a22f9\") " pod="openstack/glance-default-external-api-0" Oct 14 10:15:52 crc kubenswrapper[4698]: I1014 10:15:52.723741 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-external-api-0\" (UID: \"6a46350f-38b2-4150-aef2-6c2a336a22f9\") " pod="openstack/glance-default-external-api-0" Oct 14 10:15:52 crc kubenswrapper[4698]: I1014 10:15:52.746398 4698 scope.go:117] "RemoveContainer" containerID="5d738c6fa016f4c382ae548edd0c96d7f3832727c5cba761ed5948bcbcebfdc2" Oct 14 10:15:52 crc kubenswrapper[4698]: I1014 
10:15:52.807827 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Oct 14 10:15:52 crc kubenswrapper[4698]: I1014 10:15:52.879053 4698 scope.go:117] "RemoveContainer" containerID="135ee1279718ac6f4b4c9e89f3787f41db1c20d9dd50fe00fff08e50d0bc18e0" Oct 14 10:15:53 crc kubenswrapper[4698]: I1014 10:15:53.040553 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="339c0475-be6d-48a1-af88-8c3f55eaf50a" path="/var/lib/kubelet/pods/339c0475-be6d-48a1-af88-8c3f55eaf50a/volumes" Oct 14 10:15:53 crc kubenswrapper[4698]: I1014 10:15:53.041941 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="53ced7bd-2ae6-4e55-8ea2-395d6aebf185" path="/var/lib/kubelet/pods/53ced7bd-2ae6-4e55-8ea2-395d6aebf185/volumes" Oct 14 10:15:53 crc kubenswrapper[4698]: I1014 10:15:53.044205 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="84a7547f-d165-4381-a7f3-8b050ee39fbf" path="/var/lib/kubelet/pods/84a7547f-d165-4381-a7f3-8b050ee39fbf/volumes" Oct 14 10:15:53 crc kubenswrapper[4698]: I1014 10:15:53.046718 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d9ba4f9-bf43-4076-ab3d-9ca34bcbed4e" path="/var/lib/kubelet/pods/9d9ba4f9-bf43-4076-ab3d-9ca34bcbed4e/volumes" Oct 14 10:15:53 crc kubenswrapper[4698]: I1014 10:15:53.047426 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9dea0f58-0975-4dc1-9459-a72b8151027b" path="/var/lib/kubelet/pods/9dea0f58-0975-4dc1-9459-a72b8151027b/volumes" Oct 14 10:15:53 crc kubenswrapper[4698]: I1014 10:15:53.053352 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a64a5c90-1d3c-47da-9d3c-1ca749c00bad" path="/var/lib/kubelet/pods/a64a5c90-1d3c-47da-9d3c-1ca749c00bad/volumes" Oct 14 10:15:53 crc kubenswrapper[4698]: I1014 10:15:53.056560 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a9c6fefd-814f-4f20-8a30-b76d3b6a43ba" 
path="/var/lib/kubelet/pods/a9c6fefd-814f-4f20-8a30-b76d3b6a43ba/volumes" Oct 14 10:15:53 crc kubenswrapper[4698]: I1014 10:15:53.057782 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d26415c7-42ea-464b-910f-c1b25784fde3" path="/var/lib/kubelet/pods/d26415c7-42ea-464b-910f-c1b25784fde3/volumes" Oct 14 10:15:53 crc kubenswrapper[4698]: I1014 10:15:53.058634 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ee140165-8d8d-426c-b33f-5803bb0a7ad1" path="/var/lib/kubelet/pods/ee140165-8d8d-426c-b33f-5803bb0a7ad1/volumes" Oct 14 10:15:53 crc kubenswrapper[4698]: I1014 10:15:53.122021 4698 scope.go:117] "RemoveContainer" containerID="1cef027ac4a153809efa7b4630e617ad142f7242e6bd7a57ec9bf78b8aa9f9c9" Oct 14 10:15:53 crc kubenswrapper[4698]: I1014 10:15:53.153047 4698 scope.go:117] "RemoveContainer" containerID="5d5aa297aa05d4045d4baa23201aa34efedc024e6eb6303918a106e11d3fc92a" Oct 14 10:15:53 crc kubenswrapper[4698]: I1014 10:15:53.205156 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-backup-0"] Oct 14 10:15:53 crc kubenswrapper[4698]: I1014 10:15:53.205523 4698 generic.go:334] "Generic (PLEG): container finished" podID="e28cf5cd-644d-4f5f-8db7-421fbe745ac2" containerID="ff73b0bec1c9849156835b3e93c6b4d50ee38d4d5f4c9f164b7a79c91d7252eb" exitCode=0 Oct 14 10:15:53 crc kubenswrapper[4698]: I1014 10:15:53.205606 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-scheduler-0" event={"ID":"e28cf5cd-644d-4f5f-8db7-421fbe745ac2","Type":"ContainerDied","Data":"ff73b0bec1c9849156835b3e93c6b4d50ee38d4d5f4c9f164b7a79c91d7252eb"} Oct 14 10:15:53 crc kubenswrapper[4698]: I1014 10:15:53.239531 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-58759987c5-vr6vx" event={"ID":"3a1278dc-c5df-49ed-8c8e-6284281cf240","Type":"ContainerStarted","Data":"d0786b640f4cdb93e1fad872e9d3469b3f9255a04f117afe08833f74188b34f1"} Oct 14 10:15:53 crc kubenswrapper[4698]: 
I1014 10:15:53.239624 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-58759987c5-vr6vx" Oct 14 10:15:53 crc kubenswrapper[4698]: I1014 10:15:53.239649 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-58759987c5-vr6vx" Oct 14 10:15:53 crc kubenswrapper[4698]: I1014 10:15:53.246879 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"ef857e49-6a95-4e1c-a170-a9b7cf5b095f","Type":"ContainerStarted","Data":"d182c35b1fff52fee0ac62a5bd4a438af777846604b97ed581eb523d0bfdbd06"} Oct 14 10:15:53 crc kubenswrapper[4698]: I1014 10:15:53.287753 4698 scope.go:117] "RemoveContainer" containerID="d97a24ce30967412fecd6ae2a41002bfa33633e4fbb9e84bfdc0a51943c5a293" Oct 14 10:15:53 crc kubenswrapper[4698]: I1014 10:15:53.291356 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-proxy-58759987c5-vr6vx" podStartSLOduration=14.291340008 podStartE2EDuration="14.291340008s" podCreationTimestamp="2025-10-14 10:15:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 10:15:53.255740605 +0000 UTC m=+1134.953040031" watchObservedRunningTime="2025-10-14 10:15:53.291340008 +0000 UTC m=+1134.988639424" Oct 14 10:15:53 crc kubenswrapper[4698]: I1014 10:15:53.307737 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-share-share1-0" event={"ID":"e1230245-6b92-4e01-bc07-043a24a9edd3","Type":"ContainerStarted","Data":"315a8c2799b9e235fc1ebfb1b5cc09ae89a07755913d57d50dd8a060545e915a"} Oct 14 10:15:53 crc kubenswrapper[4698]: I1014 10:15:53.355738 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/manila-share-share1-0" podStartSLOduration=5.386110937 podStartE2EDuration="31.355710298s" podCreationTimestamp="2025-10-14 10:15:22 +0000 UTC" 
firstStartedPulling="2025-10-14 10:15:24.042615664 +0000 UTC m=+1105.739915080" lastFinishedPulling="2025-10-14 10:15:50.012215005 +0000 UTC m=+1131.709514441" observedRunningTime="2025-10-14 10:15:53.337059002 +0000 UTC m=+1135.034358418" watchObservedRunningTime="2025-10-14 10:15:53.355710298 +0000 UTC m=+1135.053009714" Oct 14 10:15:53 crc kubenswrapper[4698]: I1014 10:15:53.473847 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Oct 14 10:15:53 crc kubenswrapper[4698]: I1014 10:15:53.486187 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-volume-volume1-0"] Oct 14 10:15:53 crc kubenswrapper[4698]: I1014 10:15:53.496977 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/manila-scheduler-0" Oct 14 10:15:53 crc kubenswrapper[4698]: I1014 10:15:53.612984 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e28cf5cd-644d-4f5f-8db7-421fbe745ac2-config-data-custom\") pod \"e28cf5cd-644d-4f5f-8db7-421fbe745ac2\" (UID: \"e28cf5cd-644d-4f5f-8db7-421fbe745ac2\") " Oct 14 10:15:53 crc kubenswrapper[4698]: I1014 10:15:53.613288 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e28cf5cd-644d-4f5f-8db7-421fbe745ac2-combined-ca-bundle\") pod \"e28cf5cd-644d-4f5f-8db7-421fbe745ac2\" (UID: \"e28cf5cd-644d-4f5f-8db7-421fbe745ac2\") " Oct 14 10:15:53 crc kubenswrapper[4698]: I1014 10:15:53.613312 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bqzjg\" (UniqueName: \"kubernetes.io/projected/e28cf5cd-644d-4f5f-8db7-421fbe745ac2-kube-api-access-bqzjg\") pod \"e28cf5cd-644d-4f5f-8db7-421fbe745ac2\" (UID: \"e28cf5cd-644d-4f5f-8db7-421fbe745ac2\") " Oct 14 10:15:53 crc kubenswrapper[4698]: I1014 10:15:53.613333 4698 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e28cf5cd-644d-4f5f-8db7-421fbe745ac2-scripts\") pod \"e28cf5cd-644d-4f5f-8db7-421fbe745ac2\" (UID: \"e28cf5cd-644d-4f5f-8db7-421fbe745ac2\") " Oct 14 10:15:53 crc kubenswrapper[4698]: I1014 10:15:53.613424 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e28cf5cd-644d-4f5f-8db7-421fbe745ac2-config-data\") pod \"e28cf5cd-644d-4f5f-8db7-421fbe745ac2\" (UID: \"e28cf5cd-644d-4f5f-8db7-421fbe745ac2\") " Oct 14 10:15:53 crc kubenswrapper[4698]: I1014 10:15:53.613452 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/e28cf5cd-644d-4f5f-8db7-421fbe745ac2-etc-machine-id\") pod \"e28cf5cd-644d-4f5f-8db7-421fbe745ac2\" (UID: \"e28cf5cd-644d-4f5f-8db7-421fbe745ac2\") " Oct 14 10:15:53 crc kubenswrapper[4698]: I1014 10:15:53.614000 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e28cf5cd-644d-4f5f-8db7-421fbe745ac2-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "e28cf5cd-644d-4f5f-8db7-421fbe745ac2" (UID: "e28cf5cd-644d-4f5f-8db7-421fbe745ac2"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 14 10:15:53 crc kubenswrapper[4698]: I1014 10:15:53.624628 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e28cf5cd-644d-4f5f-8db7-421fbe745ac2-kube-api-access-bqzjg" (OuterVolumeSpecName: "kube-api-access-bqzjg") pod "e28cf5cd-644d-4f5f-8db7-421fbe745ac2" (UID: "e28cf5cd-644d-4f5f-8db7-421fbe745ac2"). InnerVolumeSpecName "kube-api-access-bqzjg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 10:15:53 crc kubenswrapper[4698]: I1014 10:15:53.624807 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e28cf5cd-644d-4f5f-8db7-421fbe745ac2-scripts" (OuterVolumeSpecName: "scripts") pod "e28cf5cd-644d-4f5f-8db7-421fbe745ac2" (UID: "e28cf5cd-644d-4f5f-8db7-421fbe745ac2"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 10:15:53 crc kubenswrapper[4698]: I1014 10:15:53.626947 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e28cf5cd-644d-4f5f-8db7-421fbe745ac2-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "e28cf5cd-644d-4f5f-8db7-421fbe745ac2" (UID: "e28cf5cd-644d-4f5f-8db7-421fbe745ac2"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 10:15:53 crc kubenswrapper[4698]: I1014 10:15:53.674679 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Oct 14 10:15:53 crc kubenswrapper[4698]: I1014 10:15:53.716169 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bqzjg\" (UniqueName: \"kubernetes.io/projected/e28cf5cd-644d-4f5f-8db7-421fbe745ac2-kube-api-access-bqzjg\") on node \"crc\" DevicePath \"\"" Oct 14 10:15:53 crc kubenswrapper[4698]: I1014 10:15:53.716202 4698 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e28cf5cd-644d-4f5f-8db7-421fbe745ac2-scripts\") on node \"crc\" DevicePath \"\"" Oct 14 10:15:53 crc kubenswrapper[4698]: I1014 10:15:53.716212 4698 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/e28cf5cd-644d-4f5f-8db7-421fbe745ac2-etc-machine-id\") on node \"crc\" DevicePath \"\"" Oct 14 10:15:53 crc kubenswrapper[4698]: I1014 10:15:53.716220 4698 reconciler_common.go:293] "Volume detached for 
volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e28cf5cd-644d-4f5f-8db7-421fbe745ac2-config-data-custom\") on node \"crc\" DevicePath \"\"" Oct 14 10:15:53 crc kubenswrapper[4698]: I1014 10:15:53.785247 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e28cf5cd-644d-4f5f-8db7-421fbe745ac2-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e28cf5cd-644d-4f5f-8db7-421fbe745ac2" (UID: "e28cf5cd-644d-4f5f-8db7-421fbe745ac2"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 10:15:53 crc kubenswrapper[4698]: I1014 10:15:53.817977 4698 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e28cf5cd-644d-4f5f-8db7-421fbe745ac2-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 14 10:15:54 crc kubenswrapper[4698]: I1014 10:15:54.095932 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e28cf5cd-644d-4f5f-8db7-421fbe745ac2-config-data" (OuterVolumeSpecName: "config-data") pod "e28cf5cd-644d-4f5f-8db7-421fbe745ac2" (UID: "e28cf5cd-644d-4f5f-8db7-421fbe745ac2"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 10:15:54 crc kubenswrapper[4698]: I1014 10:15:54.123359 4698 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e28cf5cd-644d-4f5f-8db7-421fbe745ac2-config-data\") on node \"crc\" DevicePath \"\"" Oct 14 10:15:54 crc kubenswrapper[4698]: I1014 10:15:54.369405 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-volume-volume1-0" event={"ID":"07b37a90-cc29-48f1-9da0-d2b0a9fc6d85","Type":"ContainerStarted","Data":"212675754bbe367f289c4ddc1c7bd320e8d8d20ba75940ffe6c7137a1184aea1"} Oct 14 10:15:54 crc kubenswrapper[4698]: I1014 10:15:54.371077 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-volume-volume1-0" event={"ID":"07b37a90-cc29-48f1-9da0-d2b0a9fc6d85","Type":"ContainerStarted","Data":"324e743a340f2eff66958b385655b6818d39896f97a745b8de42cc2708d49904"} Oct 14 10:15:54 crc kubenswrapper[4698]: I1014 10:15:54.409598 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"78a024b7-16f4-4177-8b52-0cecbc173247","Type":"ContainerStarted","Data":"72b107855aff984eb7628f134964a487286f069ee5a9f781d926e80d1513691f"} Oct 14 10:15:54 crc kubenswrapper[4698]: I1014 10:15:54.411335 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Oct 14 10:15:54 crc kubenswrapper[4698]: I1014 10:15:54.441550 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"ef857e49-6a95-4e1c-a170-a9b7cf5b095f","Type":"ContainerStarted","Data":"5cfeb10b246ff859114eb068f3052f32e9ed49c8d388b7200723f38cbb1c0157"} Oct 14 10:15:54 crc kubenswrapper[4698]: I1014 10:15:54.453650 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=20.453629996 podStartE2EDuration="20.453629996s" podCreationTimestamp="2025-10-14 10:15:34 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 10:15:54.43534251 +0000 UTC m=+1136.132641926" watchObservedRunningTime="2025-10-14 10:15:54.453629996 +0000 UTC m=+1136.150929402" Oct 14 10:15:54 crc kubenswrapper[4698]: I1014 10:15:54.477637 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-backup-0" event={"ID":"a03e3bf1-857d-4f91-ad0e-254605774e3c","Type":"ContainerStarted","Data":"a7cb9c90eb34eb1dc90c9a5b5e3c59e510d1e71327011816313055c1f836987a"} Oct 14 10:15:54 crc kubenswrapper[4698]: I1014 10:15:54.477692 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-backup-0" event={"ID":"a03e3bf1-857d-4f91-ad0e-254605774e3c","Type":"ContainerStarted","Data":"01d201ca9ad3b277285b235772eb3c96d3e1920bff38b668162ba2ee9fa80658"} Oct 14 10:15:54 crc kubenswrapper[4698]: I1014 10:15:54.487183 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-scheduler-0" event={"ID":"e28cf5cd-644d-4f5f-8db7-421fbe745ac2","Type":"ContainerDied","Data":"bbfbe01a6fd5441148ce523b0b9fc96e1458a7a66fd30a0e5e924c56fa020b6e"} Oct 14 10:15:54 crc kubenswrapper[4698]: I1014 10:15:54.487252 4698 scope.go:117] "RemoveContainer" containerID="136185543d9ec97a7a63282756aeac7779ba5da2123cd5f96e04d141c9e827a6" Oct 14 10:15:54 crc kubenswrapper[4698]: I1014 10:15:54.487431 4698 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-scheduler-0" Oct 14 10:15:54 crc kubenswrapper[4698]: I1014 10:15:54.503755 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"6a46350f-38b2-4150-aef2-6c2a336a22f9","Type":"ContainerStarted","Data":"ea560e70919bc0951bf844658bdd376dcd989ed715b0b64db9a7303dac1bfe61"} Oct 14 10:15:54 crc kubenswrapper[4698]: I1014 10:15:54.508607 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"572abc7f-83af-4b84-9704-a63381f34c96","Type":"ContainerStarted","Data":"f8f7c6d9c9973697f08ad9fb591856fe9fe6b159fe23429da6f2968656a3b289"} Oct 14 10:15:54 crc kubenswrapper[4698]: I1014 10:15:54.537975 4698 scope.go:117] "RemoveContainer" containerID="ff73b0bec1c9849156835b3e93c6b4d50ee38d4d5f4c9f164b7a79c91d7252eb" Oct 14 10:15:54 crc kubenswrapper[4698]: I1014 10:15:54.550311 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/manila-scheduler-0"] Oct 14 10:15:54 crc kubenswrapper[4698]: I1014 10:15:54.578303 4698 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/manila-scheduler-0"] Oct 14 10:15:54 crc kubenswrapper[4698]: I1014 10:15:54.599835 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/manila-scheduler-0"] Oct 14 10:15:54 crc kubenswrapper[4698]: E1014 10:15:54.600393 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e28cf5cd-644d-4f5f-8db7-421fbe745ac2" containerName="manila-scheduler" Oct 14 10:15:54 crc kubenswrapper[4698]: I1014 10:15:54.600407 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="e28cf5cd-644d-4f5f-8db7-421fbe745ac2" containerName="manila-scheduler" Oct 14 10:15:54 crc kubenswrapper[4698]: E1014 10:15:54.600434 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e28cf5cd-644d-4f5f-8db7-421fbe745ac2" containerName="probe" Oct 14 10:15:54 crc kubenswrapper[4698]: I1014 10:15:54.600440 4698 state_mem.go:107] 
"Deleted CPUSet assignment" podUID="e28cf5cd-644d-4f5f-8db7-421fbe745ac2" containerName="probe" Oct 14 10:15:54 crc kubenswrapper[4698]: I1014 10:15:54.600678 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="e28cf5cd-644d-4f5f-8db7-421fbe745ac2" containerName="manila-scheduler" Oct 14 10:15:54 crc kubenswrapper[4698]: I1014 10:15:54.600699 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="e28cf5cd-644d-4f5f-8db7-421fbe745ac2" containerName="probe" Oct 14 10:15:54 crc kubenswrapper[4698]: I1014 10:15:54.601974 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-scheduler-0" Oct 14 10:15:54 crc kubenswrapper[4698]: I1014 10:15:54.607340 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-scheduler-config-data" Oct 14 10:15:54 crc kubenswrapper[4698]: I1014 10:15:54.608189 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-scheduler-0"] Oct 14 10:15:54 crc kubenswrapper[4698]: I1014 10:15:54.639375 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b654944c-c016-4506-8ee0-2b23eeafcaca-combined-ca-bundle\") pod \"manila-scheduler-0\" (UID: \"b654944c-c016-4506-8ee0-2b23eeafcaca\") " pod="openstack/manila-scheduler-0" Oct 14 10:15:54 crc kubenswrapper[4698]: I1014 10:15:54.639641 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b654944c-c016-4506-8ee0-2b23eeafcaca-scripts\") pod \"manila-scheduler-0\" (UID: \"b654944c-c016-4506-8ee0-2b23eeafcaca\") " pod="openstack/manila-scheduler-0" Oct 14 10:15:54 crc kubenswrapper[4698]: I1014 10:15:54.639671 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/b654944c-c016-4506-8ee0-2b23eeafcaca-config-data\") pod \"manila-scheduler-0\" (UID: \"b654944c-c016-4506-8ee0-2b23eeafcaca\") " pod="openstack/manila-scheduler-0" Oct 14 10:15:54 crc kubenswrapper[4698]: I1014 10:15:54.639691 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b654944c-c016-4506-8ee0-2b23eeafcaca-config-data-custom\") pod \"manila-scheduler-0\" (UID: \"b654944c-c016-4506-8ee0-2b23eeafcaca\") " pod="openstack/manila-scheduler-0" Oct 14 10:15:54 crc kubenswrapper[4698]: I1014 10:15:54.639722 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/b654944c-c016-4506-8ee0-2b23eeafcaca-etc-machine-id\") pod \"manila-scheduler-0\" (UID: \"b654944c-c016-4506-8ee0-2b23eeafcaca\") " pod="openstack/manila-scheduler-0" Oct 14 10:15:54 crc kubenswrapper[4698]: I1014 10:15:54.639754 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bglc4\" (UniqueName: \"kubernetes.io/projected/b654944c-c016-4506-8ee0-2b23eeafcaca-kube-api-access-bglc4\") pod \"manila-scheduler-0\" (UID: \"b654944c-c016-4506-8ee0-2b23eeafcaca\") " pod="openstack/manila-scheduler-0" Oct 14 10:15:54 crc kubenswrapper[4698]: I1014 10:15:54.741256 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bglc4\" (UniqueName: \"kubernetes.io/projected/b654944c-c016-4506-8ee0-2b23eeafcaca-kube-api-access-bglc4\") pod \"manila-scheduler-0\" (UID: \"b654944c-c016-4506-8ee0-2b23eeafcaca\") " pod="openstack/manila-scheduler-0" Oct 14 10:15:54 crc kubenswrapper[4698]: I1014 10:15:54.741419 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/b654944c-c016-4506-8ee0-2b23eeafcaca-combined-ca-bundle\") pod \"manila-scheduler-0\" (UID: \"b654944c-c016-4506-8ee0-2b23eeafcaca\") " pod="openstack/manila-scheduler-0" Oct 14 10:15:54 crc kubenswrapper[4698]: I1014 10:15:54.741458 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b654944c-c016-4506-8ee0-2b23eeafcaca-scripts\") pod \"manila-scheduler-0\" (UID: \"b654944c-c016-4506-8ee0-2b23eeafcaca\") " pod="openstack/manila-scheduler-0" Oct 14 10:15:54 crc kubenswrapper[4698]: I1014 10:15:54.741490 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b654944c-c016-4506-8ee0-2b23eeafcaca-config-data\") pod \"manila-scheduler-0\" (UID: \"b654944c-c016-4506-8ee0-2b23eeafcaca\") " pod="openstack/manila-scheduler-0" Oct 14 10:15:54 crc kubenswrapper[4698]: I1014 10:15:54.741516 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b654944c-c016-4506-8ee0-2b23eeafcaca-config-data-custom\") pod \"manila-scheduler-0\" (UID: \"b654944c-c016-4506-8ee0-2b23eeafcaca\") " pod="openstack/manila-scheduler-0" Oct 14 10:15:54 crc kubenswrapper[4698]: I1014 10:15:54.741544 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/b654944c-c016-4506-8ee0-2b23eeafcaca-etc-machine-id\") pod \"manila-scheduler-0\" (UID: \"b654944c-c016-4506-8ee0-2b23eeafcaca\") " pod="openstack/manila-scheduler-0" Oct 14 10:15:54 crc kubenswrapper[4698]: I1014 10:15:54.741630 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/b654944c-c016-4506-8ee0-2b23eeafcaca-etc-machine-id\") pod \"manila-scheduler-0\" (UID: \"b654944c-c016-4506-8ee0-2b23eeafcaca\") " 
pod="openstack/manila-scheduler-0" Oct 14 10:15:54 crc kubenswrapper[4698]: I1014 10:15:54.752333 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b654944c-c016-4506-8ee0-2b23eeafcaca-combined-ca-bundle\") pod \"manila-scheduler-0\" (UID: \"b654944c-c016-4506-8ee0-2b23eeafcaca\") " pod="openstack/manila-scheduler-0" Oct 14 10:15:54 crc kubenswrapper[4698]: I1014 10:15:54.752505 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b654944c-c016-4506-8ee0-2b23eeafcaca-config-data-custom\") pod \"manila-scheduler-0\" (UID: \"b654944c-c016-4506-8ee0-2b23eeafcaca\") " pod="openstack/manila-scheduler-0" Oct 14 10:15:54 crc kubenswrapper[4698]: I1014 10:15:54.752794 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b654944c-c016-4506-8ee0-2b23eeafcaca-scripts\") pod \"manila-scheduler-0\" (UID: \"b654944c-c016-4506-8ee0-2b23eeafcaca\") " pod="openstack/manila-scheduler-0" Oct 14 10:15:54 crc kubenswrapper[4698]: I1014 10:15:54.757641 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b654944c-c016-4506-8ee0-2b23eeafcaca-config-data\") pod \"manila-scheduler-0\" (UID: \"b654944c-c016-4506-8ee0-2b23eeafcaca\") " pod="openstack/manila-scheduler-0" Oct 14 10:15:54 crc kubenswrapper[4698]: I1014 10:15:54.761259 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bglc4\" (UniqueName: \"kubernetes.io/projected/b654944c-c016-4506-8ee0-2b23eeafcaca-kube-api-access-bglc4\") pod \"manila-scheduler-0\" (UID: \"b654944c-c016-4506-8ee0-2b23eeafcaca\") " pod="openstack/manila-scheduler-0" Oct 14 10:15:54 crc kubenswrapper[4698]: I1014 10:15:54.959253 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-scheduler-0" Oct 14 10:15:55 crc kubenswrapper[4698]: I1014 10:15:55.060134 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e28cf5cd-644d-4f5f-8db7-421fbe745ac2" path="/var/lib/kubelet/pods/e28cf5cd-644d-4f5f-8db7-421fbe745ac2/volumes" Oct 14 10:15:55 crc kubenswrapper[4698]: I1014 10:15:55.311044 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Oct 14 10:15:55 crc kubenswrapper[4698]: I1014 10:15:55.358106 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7frzb\" (UniqueName: \"kubernetes.io/projected/5ac7519d-b7d5-428c-9b04-b507987f26b0-kube-api-access-7frzb\") pod \"5ac7519d-b7d5-428c-9b04-b507987f26b0\" (UID: \"5ac7519d-b7d5-428c-9b04-b507987f26b0\") " Oct 14 10:15:55 crc kubenswrapper[4698]: I1014 10:15:55.358275 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5ac7519d-b7d5-428c-9b04-b507987f26b0-config-data\") pod \"5ac7519d-b7d5-428c-9b04-b507987f26b0\" (UID: \"5ac7519d-b7d5-428c-9b04-b507987f26b0\") " Oct 14 10:15:55 crc kubenswrapper[4698]: I1014 10:15:55.358343 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5ac7519d-b7d5-428c-9b04-b507987f26b0-logs\") pod \"5ac7519d-b7d5-428c-9b04-b507987f26b0\" (UID: \"5ac7519d-b7d5-428c-9b04-b507987f26b0\") " Oct 14 10:15:55 crc kubenswrapper[4698]: I1014 10:15:55.358453 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/5ac7519d-b7d5-428c-9b04-b507987f26b0-ceph\") pod \"5ac7519d-b7d5-428c-9b04-b507987f26b0\" (UID: \"5ac7519d-b7d5-428c-9b04-b507987f26b0\") " Oct 14 10:15:55 crc kubenswrapper[4698]: I1014 10:15:55.358604 4698 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5ac7519d-b7d5-428c-9b04-b507987f26b0-combined-ca-bundle\") pod \"5ac7519d-b7d5-428c-9b04-b507987f26b0\" (UID: \"5ac7519d-b7d5-428c-9b04-b507987f26b0\") " Oct 14 10:15:55 crc kubenswrapper[4698]: I1014 10:15:55.358656 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"5ac7519d-b7d5-428c-9b04-b507987f26b0\" (UID: \"5ac7519d-b7d5-428c-9b04-b507987f26b0\") " Oct 14 10:15:55 crc kubenswrapper[4698]: I1014 10:15:55.358690 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5ac7519d-b7d5-428c-9b04-b507987f26b0-internal-tls-certs\") pod \"5ac7519d-b7d5-428c-9b04-b507987f26b0\" (UID: \"5ac7519d-b7d5-428c-9b04-b507987f26b0\") " Oct 14 10:15:55 crc kubenswrapper[4698]: I1014 10:15:55.358742 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5ac7519d-b7d5-428c-9b04-b507987f26b0-scripts\") pod \"5ac7519d-b7d5-428c-9b04-b507987f26b0\" (UID: \"5ac7519d-b7d5-428c-9b04-b507987f26b0\") " Oct 14 10:15:55 crc kubenswrapper[4698]: I1014 10:15:55.358811 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/5ac7519d-b7d5-428c-9b04-b507987f26b0-httpd-run\") pod \"5ac7519d-b7d5-428c-9b04-b507987f26b0\" (UID: \"5ac7519d-b7d5-428c-9b04-b507987f26b0\") " Oct 14 10:15:55 crc kubenswrapper[4698]: I1014 10:15:55.360244 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5ac7519d-b7d5-428c-9b04-b507987f26b0-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "5ac7519d-b7d5-428c-9b04-b507987f26b0" (UID: "5ac7519d-b7d5-428c-9b04-b507987f26b0"). InnerVolumeSpecName "httpd-run". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 10:15:55 crc kubenswrapper[4698]: I1014 10:15:55.366154 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5ac7519d-b7d5-428c-9b04-b507987f26b0-logs" (OuterVolumeSpecName: "logs") pod "5ac7519d-b7d5-428c-9b04-b507987f26b0" (UID: "5ac7519d-b7d5-428c-9b04-b507987f26b0"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 10:15:55 crc kubenswrapper[4698]: I1014 10:15:55.370447 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5ac7519d-b7d5-428c-9b04-b507987f26b0-ceph" (OuterVolumeSpecName: "ceph") pod "5ac7519d-b7d5-428c-9b04-b507987f26b0" (UID: "5ac7519d-b7d5-428c-9b04-b507987f26b0"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 10:15:55 crc kubenswrapper[4698]: I1014 10:15:55.379933 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5ac7519d-b7d5-428c-9b04-b507987f26b0-kube-api-access-7frzb" (OuterVolumeSpecName: "kube-api-access-7frzb") pod "5ac7519d-b7d5-428c-9b04-b507987f26b0" (UID: "5ac7519d-b7d5-428c-9b04-b507987f26b0"). InnerVolumeSpecName "kube-api-access-7frzb". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 10:15:55 crc kubenswrapper[4698]: I1014 10:15:55.382024 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage01-crc" (OuterVolumeSpecName: "glance") pod "5ac7519d-b7d5-428c-9b04-b507987f26b0" (UID: "5ac7519d-b7d5-428c-9b04-b507987f26b0"). InnerVolumeSpecName "local-storage01-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Oct 14 10:15:55 crc kubenswrapper[4698]: I1014 10:15:55.392959 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5ac7519d-b7d5-428c-9b04-b507987f26b0-scripts" (OuterVolumeSpecName: "scripts") pod "5ac7519d-b7d5-428c-9b04-b507987f26b0" (UID: "5ac7519d-b7d5-428c-9b04-b507987f26b0"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 10:15:55 crc kubenswrapper[4698]: I1014 10:15:55.461393 4698 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5ac7519d-b7d5-428c-9b04-b507987f26b0-scripts\") on node \"crc\" DevicePath \"\"" Oct 14 10:15:55 crc kubenswrapper[4698]: I1014 10:15:55.461429 4698 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/5ac7519d-b7d5-428c-9b04-b507987f26b0-httpd-run\") on node \"crc\" DevicePath \"\"" Oct 14 10:15:55 crc kubenswrapper[4698]: I1014 10:15:55.461443 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7frzb\" (UniqueName: \"kubernetes.io/projected/5ac7519d-b7d5-428c-9b04-b507987f26b0-kube-api-access-7frzb\") on node \"crc\" DevicePath \"\"" Oct 14 10:15:55 crc kubenswrapper[4698]: I1014 10:15:55.461457 4698 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5ac7519d-b7d5-428c-9b04-b507987f26b0-logs\") on node \"crc\" DevicePath \"\"" Oct 14 10:15:55 crc kubenswrapper[4698]: I1014 10:15:55.461470 4698 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/5ac7519d-b7d5-428c-9b04-b507987f26b0-ceph\") on node \"crc\" DevicePath \"\"" Oct 14 10:15:55 crc kubenswrapper[4698]: I1014 10:15:55.461506 4698 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") on 
node \"crc\" " Oct 14 10:15:55 crc kubenswrapper[4698]: I1014 10:15:55.466635 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5ac7519d-b7d5-428c-9b04-b507987f26b0-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5ac7519d-b7d5-428c-9b04-b507987f26b0" (UID: "5ac7519d-b7d5-428c-9b04-b507987f26b0"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 10:15:55 crc kubenswrapper[4698]: I1014 10:15:55.492543 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5ac7519d-b7d5-428c-9b04-b507987f26b0-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "5ac7519d-b7d5-428c-9b04-b507987f26b0" (UID: "5ac7519d-b7d5-428c-9b04-b507987f26b0"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 10:15:55 crc kubenswrapper[4698]: I1014 10:15:55.492965 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5ac7519d-b7d5-428c-9b04-b507987f26b0-config-data" (OuterVolumeSpecName: "config-data") pod "5ac7519d-b7d5-428c-9b04-b507987f26b0" (UID: "5ac7519d-b7d5-428c-9b04-b507987f26b0"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 10:15:55 crc kubenswrapper[4698]: I1014 10:15:55.500812 4698 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage01-crc" (UniqueName: "kubernetes.io/local-volume/local-storage01-crc") on node "crc" Oct 14 10:15:55 crc kubenswrapper[4698]: I1014 10:15:55.504189 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-scheduler-0"] Oct 14 10:15:55 crc kubenswrapper[4698]: W1014 10:15:55.518911 4698 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb654944c_c016_4506_8ee0_2b23eeafcaca.slice/crio-d27ad78fcadf08866645ba2d674706f4878d0837718dda9613a2ee65e0829154 WatchSource:0}: Error finding container d27ad78fcadf08866645ba2d674706f4878d0837718dda9613a2ee65e0829154: Status 404 returned error can't find the container with id d27ad78fcadf08866645ba2d674706f4878d0837718dda9613a2ee65e0829154 Oct 14 10:15:55 crc kubenswrapper[4698]: I1014 10:15:55.544241 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"572abc7f-83af-4b84-9704-a63381f34c96","Type":"ContainerStarted","Data":"45b6d06f83f5fc461ad2c021f83caad75ec89ea6d60308ea20119be9e6e87f7f"} Oct 14 10:15:55 crc kubenswrapper[4698]: I1014 10:15:55.549539 4698 generic.go:334] "Generic (PLEG): container finished" podID="5ac7519d-b7d5-428c-9b04-b507987f26b0" containerID="522d707122dbb2fd6b11a022f50d6516464ce5ef5721a0e5154d38e70c41fe5c" exitCode=0 Oct 14 10:15:55 crc kubenswrapper[4698]: I1014 10:15:55.549613 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"5ac7519d-b7d5-428c-9b04-b507987f26b0","Type":"ContainerDied","Data":"522d707122dbb2fd6b11a022f50d6516464ce5ef5721a0e5154d38e70c41fe5c"} Oct 14 10:15:55 crc kubenswrapper[4698]: I1014 10:15:55.549640 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/glance-default-internal-api-0" event={"ID":"5ac7519d-b7d5-428c-9b04-b507987f26b0","Type":"ContainerDied","Data":"933fc9083ede68c66aae0877f6b77c000e45be70b927bf91c3c8a452afb50155"} Oct 14 10:15:55 crc kubenswrapper[4698]: I1014 10:15:55.549659 4698 scope.go:117] "RemoveContainer" containerID="522d707122dbb2fd6b11a022f50d6516464ce5ef5721a0e5154d38e70c41fe5c" Oct 14 10:15:55 crc kubenswrapper[4698]: I1014 10:15:55.549742 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Oct 14 10:15:55 crc kubenswrapper[4698]: I1014 10:15:55.555738 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-volume-volume1-0" event={"ID":"07b37a90-cc29-48f1-9da0-d2b0a9fc6d85","Type":"ContainerStarted","Data":"f669637c2a1d06cdfc97665295b66cad5822a5ba8e7e97ab03113dc3079e2106"} Oct 14 10:15:55 crc kubenswrapper[4698]: I1014 10:15:55.566070 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-backup-0" event={"ID":"a03e3bf1-857d-4f91-ad0e-254605774e3c","Type":"ContainerStarted","Data":"71ad12602765bebf4a0d1e97a8696a20c7783564e46a5865eff96ea13535d7fd"} Oct 14 10:15:55 crc kubenswrapper[4698]: I1014 10:15:55.574610 4698 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5ac7519d-b7d5-428c-9b04-b507987f26b0-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 14 10:15:55 crc kubenswrapper[4698]: I1014 10:15:55.574635 4698 reconciler_common.go:293] "Volume detached for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") on node \"crc\" DevicePath \"\"" Oct 14 10:15:55 crc kubenswrapper[4698]: I1014 10:15:55.574665 4698 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5ac7519d-b7d5-428c-9b04-b507987f26b0-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Oct 14 10:15:55 crc 
kubenswrapper[4698]: I1014 10:15:55.574674 4698 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5ac7519d-b7d5-428c-9b04-b507987f26b0-config-data\") on node \"crc\" DevicePath \"\"" Oct 14 10:15:55 crc kubenswrapper[4698]: I1014 10:15:55.606186 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"6a46350f-38b2-4150-aef2-6c2a336a22f9","Type":"ContainerStarted","Data":"52d14d55ac17d2fefe5487708a7dd67be1cdd9c3da67d26ce8114fd1a1adc994"} Oct 14 10:15:55 crc kubenswrapper[4698]: I1014 10:15:55.616381 4698 scope.go:117] "RemoveContainer" containerID="197a37b83b760b3c1bc4bd3b84cb3209d4404582091523121089ecab6d4a1d16" Oct 14 10:15:55 crc kubenswrapper[4698]: I1014 10:15:55.620726 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-volume-volume1-0" podStartSLOduration=4.620706721 podStartE2EDuration="4.620706721s" podCreationTimestamp="2025-10-14 10:15:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 10:15:55.590786091 +0000 UTC m=+1137.288085517" watchObservedRunningTime="2025-10-14 10:15:55.620706721 +0000 UTC m=+1137.318006137" Oct 14 10:15:55 crc kubenswrapper[4698]: I1014 10:15:55.629786 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-backup-0" podStartSLOduration=4.629754292 podStartE2EDuration="4.629754292s" podCreationTimestamp="2025-10-14 10:15:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 10:15:55.615376678 +0000 UTC m=+1137.312676104" watchObservedRunningTime="2025-10-14 10:15:55.629754292 +0000 UTC m=+1137.327053708" Oct 14 10:15:55 crc kubenswrapper[4698]: I1014 10:15:55.669279 4698 scope.go:117] "RemoveContainer" 
containerID="522d707122dbb2fd6b11a022f50d6516464ce5ef5721a0e5154d38e70c41fe5c" Oct 14 10:15:55 crc kubenswrapper[4698]: E1014 10:15:55.671803 4698 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"522d707122dbb2fd6b11a022f50d6516464ce5ef5721a0e5154d38e70c41fe5c\": container with ID starting with 522d707122dbb2fd6b11a022f50d6516464ce5ef5721a0e5154d38e70c41fe5c not found: ID does not exist" containerID="522d707122dbb2fd6b11a022f50d6516464ce5ef5721a0e5154d38e70c41fe5c" Oct 14 10:15:55 crc kubenswrapper[4698]: I1014 10:15:55.671841 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"522d707122dbb2fd6b11a022f50d6516464ce5ef5721a0e5154d38e70c41fe5c"} err="failed to get container status \"522d707122dbb2fd6b11a022f50d6516464ce5ef5721a0e5154d38e70c41fe5c\": rpc error: code = NotFound desc = could not find container \"522d707122dbb2fd6b11a022f50d6516464ce5ef5721a0e5154d38e70c41fe5c\": container with ID starting with 522d707122dbb2fd6b11a022f50d6516464ce5ef5721a0e5154d38e70c41fe5c not found: ID does not exist" Oct 14 10:15:55 crc kubenswrapper[4698]: I1014 10:15:55.671881 4698 scope.go:117] "RemoveContainer" containerID="197a37b83b760b3c1bc4bd3b84cb3209d4404582091523121089ecab6d4a1d16" Oct 14 10:15:55 crc kubenswrapper[4698]: E1014 10:15:55.674347 4698 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"197a37b83b760b3c1bc4bd3b84cb3209d4404582091523121089ecab6d4a1d16\": container with ID starting with 197a37b83b760b3c1bc4bd3b84cb3209d4404582091523121089ecab6d4a1d16 not found: ID does not exist" containerID="197a37b83b760b3c1bc4bd3b84cb3209d4404582091523121089ecab6d4a1d16" Oct 14 10:15:55 crc kubenswrapper[4698]: I1014 10:15:55.674397 4698 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"197a37b83b760b3c1bc4bd3b84cb3209d4404582091523121089ecab6d4a1d16"} err="failed to get container status \"197a37b83b760b3c1bc4bd3b84cb3209d4404582091523121089ecab6d4a1d16\": rpc error: code = NotFound desc = could not find container \"197a37b83b760b3c1bc4bd3b84cb3209d4404582091523121089ecab6d4a1d16\": container with ID starting with 197a37b83b760b3c1bc4bd3b84cb3209d4404582091523121089ecab6d4a1d16 not found: ID does not exist" Oct 14 10:15:55 crc kubenswrapper[4698]: I1014 10:15:55.690863 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Oct 14 10:15:55 crc kubenswrapper[4698]: I1014 10:15:55.705742 4698 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Oct 14 10:15:55 crc kubenswrapper[4698]: I1014 10:15:55.718239 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Oct 14 10:15:55 crc kubenswrapper[4698]: E1014 10:15:55.719333 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5ac7519d-b7d5-428c-9b04-b507987f26b0" containerName="glance-httpd" Oct 14 10:15:55 crc kubenswrapper[4698]: I1014 10:15:55.719449 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="5ac7519d-b7d5-428c-9b04-b507987f26b0" containerName="glance-httpd" Oct 14 10:15:55 crc kubenswrapper[4698]: E1014 10:15:55.719530 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5ac7519d-b7d5-428c-9b04-b507987f26b0" containerName="glance-log" Oct 14 10:15:55 crc kubenswrapper[4698]: I1014 10:15:55.719695 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="5ac7519d-b7d5-428c-9b04-b507987f26b0" containerName="glance-log" Oct 14 10:15:55 crc kubenswrapper[4698]: I1014 10:15:55.720578 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="5ac7519d-b7d5-428c-9b04-b507987f26b0" containerName="glance-log" Oct 14 10:15:55 crc kubenswrapper[4698]: I1014 10:15:55.720693 4698 
memory_manager.go:354] "RemoveStaleState removing state" podUID="5ac7519d-b7d5-428c-9b04-b507987f26b0" containerName="glance-httpd" Oct 14 10:15:55 crc kubenswrapper[4698]: I1014 10:15:55.732199 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Oct 14 10:15:55 crc kubenswrapper[4698]: I1014 10:15:55.736940 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Oct 14 10:15:55 crc kubenswrapper[4698]: I1014 10:15:55.737440 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Oct 14 10:15:55 crc kubenswrapper[4698]: I1014 10:15:55.741602 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Oct 14 10:15:55 crc kubenswrapper[4698]: I1014 10:15:55.801320 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/5d72ae1c-cd0b-42d9-b438-c80428436dd3-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"5d72ae1c-cd0b-42d9-b438-c80428436dd3\") " pod="openstack/glance-default-internal-api-0" Oct 14 10:15:55 crc kubenswrapper[4698]: I1014 10:15:55.801445 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5d72ae1c-cd0b-42d9-b438-c80428436dd3-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"5d72ae1c-cd0b-42d9-b438-c80428436dd3\") " pod="openstack/glance-default-internal-api-0" Oct 14 10:15:55 crc kubenswrapper[4698]: I1014 10:15:55.801470 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5d72ae1c-cd0b-42d9-b438-c80428436dd3-logs\") pod \"glance-default-internal-api-0\" (UID: \"5d72ae1c-cd0b-42d9-b438-c80428436dd3\") " 
pod="openstack/glance-default-internal-api-0" Oct 14 10:15:55 crc kubenswrapper[4698]: I1014 10:15:55.801569 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q28k7\" (UniqueName: \"kubernetes.io/projected/5d72ae1c-cd0b-42d9-b438-c80428436dd3-kube-api-access-q28k7\") pod \"glance-default-internal-api-0\" (UID: \"5d72ae1c-cd0b-42d9-b438-c80428436dd3\") " pod="openstack/glance-default-internal-api-0" Oct 14 10:15:55 crc kubenswrapper[4698]: I1014 10:15:55.801732 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/5d72ae1c-cd0b-42d9-b438-c80428436dd3-ceph\") pod \"glance-default-internal-api-0\" (UID: \"5d72ae1c-cd0b-42d9-b438-c80428436dd3\") " pod="openstack/glance-default-internal-api-0" Oct 14 10:15:55 crc kubenswrapper[4698]: I1014 10:15:55.801992 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5d72ae1c-cd0b-42d9-b438-c80428436dd3-scripts\") pod \"glance-default-internal-api-0\" (UID: \"5d72ae1c-cd0b-42d9-b438-c80428436dd3\") " pod="openstack/glance-default-internal-api-0" Oct 14 10:15:55 crc kubenswrapper[4698]: I1014 10:15:55.802016 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5d72ae1c-cd0b-42d9-b438-c80428436dd3-config-data\") pod \"glance-default-internal-api-0\" (UID: \"5d72ae1c-cd0b-42d9-b438-c80428436dd3\") " pod="openstack/glance-default-internal-api-0" Oct 14 10:15:55 crc kubenswrapper[4698]: I1014 10:15:55.802233 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5d72ae1c-cd0b-42d9-b438-c80428436dd3-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: 
\"5d72ae1c-cd0b-42d9-b438-c80428436dd3\") " pod="openstack/glance-default-internal-api-0" Oct 14 10:15:55 crc kubenswrapper[4698]: I1014 10:15:55.802437 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-internal-api-0\" (UID: \"5d72ae1c-cd0b-42d9-b438-c80428436dd3\") " pod="openstack/glance-default-internal-api-0" Oct 14 10:15:55 crc kubenswrapper[4698]: I1014 10:15:55.904964 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5d72ae1c-cd0b-42d9-b438-c80428436dd3-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"5d72ae1c-cd0b-42d9-b438-c80428436dd3\") " pod="openstack/glance-default-internal-api-0" Oct 14 10:15:55 crc kubenswrapper[4698]: I1014 10:15:55.905007 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5d72ae1c-cd0b-42d9-b438-c80428436dd3-logs\") pod \"glance-default-internal-api-0\" (UID: \"5d72ae1c-cd0b-42d9-b438-c80428436dd3\") " pod="openstack/glance-default-internal-api-0" Oct 14 10:15:55 crc kubenswrapper[4698]: I1014 10:15:55.905059 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q28k7\" (UniqueName: \"kubernetes.io/projected/5d72ae1c-cd0b-42d9-b438-c80428436dd3-kube-api-access-q28k7\") pod \"glance-default-internal-api-0\" (UID: \"5d72ae1c-cd0b-42d9-b438-c80428436dd3\") " pod="openstack/glance-default-internal-api-0" Oct 14 10:15:55 crc kubenswrapper[4698]: I1014 10:15:55.905081 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/5d72ae1c-cd0b-42d9-b438-c80428436dd3-ceph\") pod \"glance-default-internal-api-0\" (UID: \"5d72ae1c-cd0b-42d9-b438-c80428436dd3\") " 
pod="openstack/glance-default-internal-api-0" Oct 14 10:15:55 crc kubenswrapper[4698]: I1014 10:15:55.905141 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5d72ae1c-cd0b-42d9-b438-c80428436dd3-config-data\") pod \"glance-default-internal-api-0\" (UID: \"5d72ae1c-cd0b-42d9-b438-c80428436dd3\") " pod="openstack/glance-default-internal-api-0" Oct 14 10:15:55 crc kubenswrapper[4698]: I1014 10:15:55.905172 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5d72ae1c-cd0b-42d9-b438-c80428436dd3-scripts\") pod \"glance-default-internal-api-0\" (UID: \"5d72ae1c-cd0b-42d9-b438-c80428436dd3\") " pod="openstack/glance-default-internal-api-0" Oct 14 10:15:55 crc kubenswrapper[4698]: I1014 10:15:55.905214 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5d72ae1c-cd0b-42d9-b438-c80428436dd3-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"5d72ae1c-cd0b-42d9-b438-c80428436dd3\") " pod="openstack/glance-default-internal-api-0" Oct 14 10:15:55 crc kubenswrapper[4698]: I1014 10:15:55.905270 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-internal-api-0\" (UID: \"5d72ae1c-cd0b-42d9-b438-c80428436dd3\") " pod="openstack/glance-default-internal-api-0" Oct 14 10:15:55 crc kubenswrapper[4698]: I1014 10:15:55.905295 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/5d72ae1c-cd0b-42d9-b438-c80428436dd3-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"5d72ae1c-cd0b-42d9-b438-c80428436dd3\") " pod="openstack/glance-default-internal-api-0" Oct 14 10:15:55 crc kubenswrapper[4698]: I1014 
10:15:55.905723 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/5d72ae1c-cd0b-42d9-b438-c80428436dd3-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"5d72ae1c-cd0b-42d9-b438-c80428436dd3\") " pod="openstack/glance-default-internal-api-0" Oct 14 10:15:55 crc kubenswrapper[4698]: I1014 10:15:55.906648 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5d72ae1c-cd0b-42d9-b438-c80428436dd3-logs\") pod \"glance-default-internal-api-0\" (UID: \"5d72ae1c-cd0b-42d9-b438-c80428436dd3\") " pod="openstack/glance-default-internal-api-0" Oct 14 10:15:55 crc kubenswrapper[4698]: I1014 10:15:55.907340 4698 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-internal-api-0\" (UID: \"5d72ae1c-cd0b-42d9-b438-c80428436dd3\") device mount path \"/mnt/openstack/pv01\"" pod="openstack/glance-default-internal-api-0" Oct 14 10:15:55 crc kubenswrapper[4698]: I1014 10:15:55.916436 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5d72ae1c-cd0b-42d9-b438-c80428436dd3-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"5d72ae1c-cd0b-42d9-b438-c80428436dd3\") " pod="openstack/glance-default-internal-api-0" Oct 14 10:15:55 crc kubenswrapper[4698]: I1014 10:15:55.920174 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5d72ae1c-cd0b-42d9-b438-c80428436dd3-config-data\") pod \"glance-default-internal-api-0\" (UID: \"5d72ae1c-cd0b-42d9-b438-c80428436dd3\") " pod="openstack/glance-default-internal-api-0" Oct 14 10:15:55 crc kubenswrapper[4698]: I1014 10:15:55.938288 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"scripts\" (UniqueName: \"kubernetes.io/secret/5d72ae1c-cd0b-42d9-b438-c80428436dd3-scripts\") pod \"glance-default-internal-api-0\" (UID: \"5d72ae1c-cd0b-42d9-b438-c80428436dd3\") " pod="openstack/glance-default-internal-api-0" Oct 14 10:15:55 crc kubenswrapper[4698]: I1014 10:15:55.938881 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5d72ae1c-cd0b-42d9-b438-c80428436dd3-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"5d72ae1c-cd0b-42d9-b438-c80428436dd3\") " pod="openstack/glance-default-internal-api-0" Oct 14 10:15:55 crc kubenswrapper[4698]: I1014 10:15:55.938900 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/5d72ae1c-cd0b-42d9-b438-c80428436dd3-ceph\") pod \"glance-default-internal-api-0\" (UID: \"5d72ae1c-cd0b-42d9-b438-c80428436dd3\") " pod="openstack/glance-default-internal-api-0" Oct 14 10:15:55 crc kubenswrapper[4698]: I1014 10:15:55.940479 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q28k7\" (UniqueName: \"kubernetes.io/projected/5d72ae1c-cd0b-42d9-b438-c80428436dd3-kube-api-access-q28k7\") pod \"glance-default-internal-api-0\" (UID: \"5d72ae1c-cd0b-42d9-b438-c80428436dd3\") " pod="openstack/glance-default-internal-api-0" Oct 14 10:15:55 crc kubenswrapper[4698]: I1014 10:15:55.970839 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-internal-api-0\" (UID: \"5d72ae1c-cd0b-42d9-b438-c80428436dd3\") " pod="openstack/glance-default-internal-api-0" Oct 14 10:15:55 crc kubenswrapper[4698]: I1014 10:15:55.997129 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/manila-api-0" Oct 14 10:15:56 crc kubenswrapper[4698]: I1014 10:15:56.081410 4698 util.go:30] "No sandbox for 
pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Oct 14 10:15:56 crc kubenswrapper[4698]: I1014 10:15:56.662729 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"ef857e49-6a95-4e1c-a170-a9b7cf5b095f","Type":"ContainerStarted","Data":"85b6fdba4ea3c4dfc9b9edb0a8bc67abde41d3a8f2b5bd40f448b4ffad22bcf1"} Oct 14 10:15:56 crc kubenswrapper[4698]: I1014 10:15:56.694446 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=5.694420594 podStartE2EDuration="5.694420594s" podCreationTimestamp="2025-10-14 10:15:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 10:15:56.689600385 +0000 UTC m=+1138.386899801" watchObservedRunningTime="2025-10-14 10:15:56.694420594 +0000 UTC m=+1138.391720030" Oct 14 10:15:56 crc kubenswrapper[4698]: I1014 10:15:56.706349 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-scheduler-0" event={"ID":"b654944c-c016-4506-8ee0-2b23eeafcaca","Type":"ContainerStarted","Data":"eb8bf59ee725559bd76556f23f607b51c84ced77ac59b52cca9db4f306fb6bf3"} Oct 14 10:15:56 crc kubenswrapper[4698]: I1014 10:15:56.706410 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-scheduler-0" event={"ID":"b654944c-c016-4506-8ee0-2b23eeafcaca","Type":"ContainerStarted","Data":"d27ad78fcadf08866645ba2d674706f4878d0837718dda9613a2ee65e0829154"} Oct 14 10:15:56 crc kubenswrapper[4698]: I1014 10:15:56.719370 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"6a46350f-38b2-4150-aef2-6c2a336a22f9","Type":"ContainerStarted","Data":"8b0e096b87f5473020cd587e72b95a5601daa134ea9fb94f38b201923fd6b7c7"} Oct 14 10:15:56 crc kubenswrapper[4698]: I1014 10:15:56.783938 4698 pod_startup_latency_tracker.go:104] "Observed pod 
startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=4.783909126 podStartE2EDuration="4.783909126s" podCreationTimestamp="2025-10-14 10:15:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 10:15:56.771344725 +0000 UTC m=+1138.468644151" watchObservedRunningTime="2025-10-14 10:15:56.783909126 +0000 UTC m=+1138.481208552" Oct 14 10:15:56 crc kubenswrapper[4698]: I1014 10:15:56.834011 4698 pod_container_manager_linux.go:210] "Failed to delete cgroup paths" cgroupName=["kubepods","besteffort","pod0f82463f-d211-4e22-8742-570c0293871c"] err="unable to destroy cgroup paths for cgroup [kubepods besteffort pod0f82463f-d211-4e22-8742-570c0293871c] : Timed out while waiting for systemd to remove kubepods-besteffort-pod0f82463f_d211_4e22_8742_570c0293871c.slice" Oct 14 10:15:56 crc kubenswrapper[4698]: E1014 10:15:56.834415 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to delete cgroup paths for [kubepods besteffort pod0f82463f-d211-4e22-8742-570c0293871c] : unable to destroy cgroup paths for cgroup [kubepods besteffort pod0f82463f-d211-4e22-8742-570c0293871c] : Timed out while waiting for systemd to remove kubepods-besteffort-pod0f82463f_d211_4e22_8742_570c0293871c.slice" pod="openstack/dnsmasq-dns-dc6b4865f-29kvv" podUID="0f82463f-d211-4e22-8742-570c0293871c" Oct 14 10:15:56 crc kubenswrapper[4698]: I1014 10:15:56.973350 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Oct 14 10:15:57 crc kubenswrapper[4698]: I1014 10:15:57.071607 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5ac7519d-b7d5-428c-9b04-b507987f26b0" path="/var/lib/kubelet/pods/5ac7519d-b7d5-428c-9b04-b507987f26b0/volumes" Oct 14 10:15:57 crc kubenswrapper[4698]: I1014 10:15:57.101057 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openstack/cinder-scheduler-0" Oct 14 10:15:57 crc kubenswrapper[4698]: I1014 10:15:57.407972 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-backup-0" Oct 14 10:15:57 crc kubenswrapper[4698]: I1014 10:15:57.428981 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-volume-volume1-0" Oct 14 10:15:57 crc kubenswrapper[4698]: I1014 10:15:57.755022 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"572abc7f-83af-4b84-9704-a63381f34c96","Type":"ContainerStarted","Data":"99cbcb5f515575ea03913c612458652256ed2daedaccf0c471f30ed1976f0147"} Oct 14 10:15:57 crc kubenswrapper[4698]: I1014 10:15:57.755455 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"572abc7f-83af-4b84-9704-a63381f34c96","Type":"ContainerStarted","Data":"0889fa9d3cd1d8b9c3e019368297b4ee4abb8517b9e93f6c12649b80301ed67f"} Oct 14 10:15:57 crc kubenswrapper[4698]: I1014 10:15:57.757141 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-scheduler-0" event={"ID":"b654944c-c016-4506-8ee0-2b23eeafcaca","Type":"ContainerStarted","Data":"4ca4b661600efe70a2a63f077afc3333580aa08a0743482488698cad16164368"} Oct 14 10:15:57 crc kubenswrapper[4698]: I1014 10:15:57.759693 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"5d72ae1c-cd0b-42d9-b438-c80428436dd3","Type":"ContainerStarted","Data":"a67b9de1ce3019c968eaed6570a4ade554c69da82bce0e179a74d8c390600aed"} Oct 14 10:15:57 crc kubenswrapper[4698]: I1014 10:15:57.759749 4698 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-dc6b4865f-29kvv" Oct 14 10:15:57 crc kubenswrapper[4698]: I1014 10:15:57.787071 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/manila-scheduler-0" podStartSLOduration=3.787049069 podStartE2EDuration="3.787049069s" podCreationTimestamp="2025-10-14 10:15:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 10:15:57.777498714 +0000 UTC m=+1139.474798140" watchObservedRunningTime="2025-10-14 10:15:57.787049069 +0000 UTC m=+1139.484348485" Oct 14 10:15:57 crc kubenswrapper[4698]: I1014 10:15:57.849873 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-dc6b4865f-29kvv"] Oct 14 10:15:57 crc kubenswrapper[4698]: I1014 10:15:57.908107 4698 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-dc6b4865f-29kvv"] Oct 14 10:15:58 crc kubenswrapper[4698]: I1014 10:15:58.769822 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"5d72ae1c-cd0b-42d9-b438-c80428436dd3","Type":"ContainerStarted","Data":"1ee7ec566fa0d28ac89b073311db8de9b88af23ccec4d920fd4af283eaf30b6b"} Oct 14 10:15:58 crc kubenswrapper[4698]: I1014 10:15:58.770164 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"5d72ae1c-cd0b-42d9-b438-c80428436dd3","Type":"ContainerStarted","Data":"a6c4b2e6c94faca514e621bb1a0eec761e5bfdc45270e45b35ac2cb217008f04"} Oct 14 10:15:58 crc kubenswrapper[4698]: I1014 10:15:58.795393 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=3.795372382 podStartE2EDuration="3.795372382s" podCreationTimestamp="2025-10-14 10:15:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2025-10-14 10:15:58.794446665 +0000 UTC m=+1140.491746091" watchObservedRunningTime="2025-10-14 10:15:58.795372382 +0000 UTC m=+1140.492671798" Oct 14 10:15:59 crc kubenswrapper[4698]: I1014 10:15:59.037780 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0f82463f-d211-4e22-8742-570c0293871c" path="/var/lib/kubelet/pods/0f82463f-d211-4e22-8742-570c0293871c/volumes" Oct 14 10:15:59 crc kubenswrapper[4698]: I1014 10:15:59.775912 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-58759987c5-vr6vx" Oct 14 10:15:59 crc kubenswrapper[4698]: I1014 10:15:59.776990 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-58759987c5-vr6vx" Oct 14 10:15:59 crc kubenswrapper[4698]: I1014 10:15:59.792208 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"572abc7f-83af-4b84-9704-a63381f34c96","Type":"ContainerStarted","Data":"0cc5323bc512981ac58e7b40e976b197214c69101a7c4dc8812b8415adc805f5"} Oct 14 10:15:59 crc kubenswrapper[4698]: I1014 10:15:59.792267 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Oct 14 10:15:59 crc kubenswrapper[4698]: I1014 10:15:59.850012 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=3.316843594 podStartE2EDuration="8.849961125s" podCreationTimestamp="2025-10-14 10:15:51 +0000 UTC" firstStartedPulling="2025-10-14 10:15:53.514518452 +0000 UTC m=+1135.211817898" lastFinishedPulling="2025-10-14 10:15:59.047636013 +0000 UTC m=+1140.744935429" observedRunningTime="2025-10-14 10:15:59.8473667 +0000 UTC m=+1141.544666116" watchObservedRunningTime="2025-10-14 10:15:59.849961125 +0000 UTC m=+1141.547260541" Oct 14 10:16:02 crc kubenswrapper[4698]: I1014 10:16:02.112471 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack/cinder-api-0" Oct 14 10:16:02 crc kubenswrapper[4698]: I1014 10:16:02.162696 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Oct 14 10:16:02 crc kubenswrapper[4698]: I1014 10:16:02.163050 4698 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="572abc7f-83af-4b84-9704-a63381f34c96" containerName="ceilometer-central-agent" containerID="cri-o://45b6d06f83f5fc461ad2c021f83caad75ec89ea6d60308ea20119be9e6e87f7f" gracePeriod=30 Oct 14 10:16:02 crc kubenswrapper[4698]: I1014 10:16:02.163108 4698 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="572abc7f-83af-4b84-9704-a63381f34c96" containerName="sg-core" containerID="cri-o://99cbcb5f515575ea03913c612458652256ed2daedaccf0c471f30ed1976f0147" gracePeriod=30 Oct 14 10:16:02 crc kubenswrapper[4698]: I1014 10:16:02.163156 4698 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="572abc7f-83af-4b84-9704-a63381f34c96" containerName="proxy-httpd" containerID="cri-o://0cc5323bc512981ac58e7b40e976b197214c69101a7c4dc8812b8415adc805f5" gracePeriod=30 Oct 14 10:16:02 crc kubenswrapper[4698]: I1014 10:16:02.163208 4698 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="572abc7f-83af-4b84-9704-a63381f34c96" containerName="ceilometer-notification-agent" containerID="cri-o://0889fa9d3cd1d8b9c3e019368297b4ee4abb8517b9e93f6c12649b80301ed67f" gracePeriod=30 Oct 14 10:16:02 crc kubenswrapper[4698]: I1014 10:16:02.407495 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Oct 14 10:16:02 crc kubenswrapper[4698]: I1014 10:16:02.708489 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-backup-0" Oct 14 10:16:02 crc kubenswrapper[4698]: I1014 10:16:02.757140 4698 
kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-volume-volume1-0" Oct 14 10:16:02 crc kubenswrapper[4698]: I1014 10:16:02.810382 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Oct 14 10:16:02 crc kubenswrapper[4698]: I1014 10:16:02.811453 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Oct 14 10:16:02 crc kubenswrapper[4698]: I1014 10:16:02.849129 4698 generic.go:334] "Generic (PLEG): container finished" podID="572abc7f-83af-4b84-9704-a63381f34c96" containerID="0cc5323bc512981ac58e7b40e976b197214c69101a7c4dc8812b8415adc805f5" exitCode=0 Oct 14 10:16:02 crc kubenswrapper[4698]: I1014 10:16:02.849160 4698 generic.go:334] "Generic (PLEG): container finished" podID="572abc7f-83af-4b84-9704-a63381f34c96" containerID="99cbcb5f515575ea03913c612458652256ed2daedaccf0c471f30ed1976f0147" exitCode=2 Oct 14 10:16:02 crc kubenswrapper[4698]: I1014 10:16:02.849169 4698 generic.go:334] "Generic (PLEG): container finished" podID="572abc7f-83af-4b84-9704-a63381f34c96" containerID="0889fa9d3cd1d8b9c3e019368297b4ee4abb8517b9e93f6c12649b80301ed67f" exitCode=0 Oct 14 10:16:02 crc kubenswrapper[4698]: I1014 10:16:02.849177 4698 generic.go:334] "Generic (PLEG): container finished" podID="572abc7f-83af-4b84-9704-a63381f34c96" containerID="45b6d06f83f5fc461ad2c021f83caad75ec89ea6d60308ea20119be9e6e87f7f" exitCode=0 Oct 14 10:16:02 crc kubenswrapper[4698]: I1014 10:16:02.849223 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"572abc7f-83af-4b84-9704-a63381f34c96","Type":"ContainerDied","Data":"0cc5323bc512981ac58e7b40e976b197214c69101a7c4dc8812b8415adc805f5"} Oct 14 10:16:02 crc kubenswrapper[4698]: I1014 10:16:02.849255 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"572abc7f-83af-4b84-9704-a63381f34c96","Type":"ContainerDied","Data":"99cbcb5f515575ea03913c612458652256ed2daedaccf0c471f30ed1976f0147"} Oct 14 10:16:02 crc kubenswrapper[4698]: I1014 10:16:02.849268 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"572abc7f-83af-4b84-9704-a63381f34c96","Type":"ContainerDied","Data":"0889fa9d3cd1d8b9c3e019368297b4ee4abb8517b9e93f6c12649b80301ed67f"} Oct 14 10:16:02 crc kubenswrapper[4698]: I1014 10:16:02.849278 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"572abc7f-83af-4b84-9704-a63381f34c96","Type":"ContainerDied","Data":"45b6d06f83f5fc461ad2c021f83caad75ec89ea6d60308ea20119be9e6e87f7f"} Oct 14 10:16:02 crc kubenswrapper[4698]: I1014 10:16:02.866942 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Oct 14 10:16:02 crc kubenswrapper[4698]: I1014 10:16:02.874379 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Oct 14 10:16:03 crc kubenswrapper[4698]: I1014 10:16:03.035102 4698 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Oct 14 10:16:03 crc kubenswrapper[4698]: I1014 10:16:03.222420 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/manila-share-share1-0" Oct 14 10:16:03 crc kubenswrapper[4698]: I1014 10:16:03.223865 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/572abc7f-83af-4b84-9704-a63381f34c96-log-httpd\") pod \"572abc7f-83af-4b84-9704-a63381f34c96\" (UID: \"572abc7f-83af-4b84-9704-a63381f34c96\") " Oct 14 10:16:03 crc kubenswrapper[4698]: I1014 10:16:03.223975 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/572abc7f-83af-4b84-9704-a63381f34c96-run-httpd\") pod \"572abc7f-83af-4b84-9704-a63381f34c96\" (UID: \"572abc7f-83af-4b84-9704-a63381f34c96\") " Oct 14 10:16:03 crc kubenswrapper[4698]: I1014 10:16:03.224015 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/572abc7f-83af-4b84-9704-a63381f34c96-combined-ca-bundle\") pod \"572abc7f-83af-4b84-9704-a63381f34c96\" (UID: \"572abc7f-83af-4b84-9704-a63381f34c96\") " Oct 14 10:16:03 crc kubenswrapper[4698]: I1014 10:16:03.224048 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-djmc7\" (UniqueName: \"kubernetes.io/projected/572abc7f-83af-4b84-9704-a63381f34c96-kube-api-access-djmc7\") pod \"572abc7f-83af-4b84-9704-a63381f34c96\" (UID: \"572abc7f-83af-4b84-9704-a63381f34c96\") " Oct 14 10:16:03 crc kubenswrapper[4698]: I1014 10:16:03.224191 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/572abc7f-83af-4b84-9704-a63381f34c96-sg-core-conf-yaml\") pod \"572abc7f-83af-4b84-9704-a63381f34c96\" (UID: 
\"572abc7f-83af-4b84-9704-a63381f34c96\") " Oct 14 10:16:03 crc kubenswrapper[4698]: I1014 10:16:03.224231 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/572abc7f-83af-4b84-9704-a63381f34c96-scripts\") pod \"572abc7f-83af-4b84-9704-a63381f34c96\" (UID: \"572abc7f-83af-4b84-9704-a63381f34c96\") " Oct 14 10:16:03 crc kubenswrapper[4698]: I1014 10:16:03.224246 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/572abc7f-83af-4b84-9704-a63381f34c96-config-data\") pod \"572abc7f-83af-4b84-9704-a63381f34c96\" (UID: \"572abc7f-83af-4b84-9704-a63381f34c96\") " Oct 14 10:16:03 crc kubenswrapper[4698]: I1014 10:16:03.224474 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/572abc7f-83af-4b84-9704-a63381f34c96-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "572abc7f-83af-4b84-9704-a63381f34c96" (UID: "572abc7f-83af-4b84-9704-a63381f34c96"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 10:16:03 crc kubenswrapper[4698]: I1014 10:16:03.224980 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/572abc7f-83af-4b84-9704-a63381f34c96-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "572abc7f-83af-4b84-9704-a63381f34c96" (UID: "572abc7f-83af-4b84-9704-a63381f34c96"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 10:16:03 crc kubenswrapper[4698]: I1014 10:16:03.225377 4698 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/572abc7f-83af-4b84-9704-a63381f34c96-run-httpd\") on node \"crc\" DevicePath \"\"" Oct 14 10:16:03 crc kubenswrapper[4698]: I1014 10:16:03.225398 4698 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/572abc7f-83af-4b84-9704-a63381f34c96-log-httpd\") on node \"crc\" DevicePath \"\"" Oct 14 10:16:03 crc kubenswrapper[4698]: I1014 10:16:03.231519 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/572abc7f-83af-4b84-9704-a63381f34c96-scripts" (OuterVolumeSpecName: "scripts") pod "572abc7f-83af-4b84-9704-a63381f34c96" (UID: "572abc7f-83af-4b84-9704-a63381f34c96"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 10:16:03 crc kubenswrapper[4698]: I1014 10:16:03.233993 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/572abc7f-83af-4b84-9704-a63381f34c96-kube-api-access-djmc7" (OuterVolumeSpecName: "kube-api-access-djmc7") pod "572abc7f-83af-4b84-9704-a63381f34c96" (UID: "572abc7f-83af-4b84-9704-a63381f34c96"). InnerVolumeSpecName "kube-api-access-djmc7". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 10:16:03 crc kubenswrapper[4698]: I1014 10:16:03.257305 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/572abc7f-83af-4b84-9704-a63381f34c96-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "572abc7f-83af-4b84-9704-a63381f34c96" (UID: "572abc7f-83af-4b84-9704-a63381f34c96"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 10:16:03 crc kubenswrapper[4698]: I1014 10:16:03.325398 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/572abc7f-83af-4b84-9704-a63381f34c96-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "572abc7f-83af-4b84-9704-a63381f34c96" (UID: "572abc7f-83af-4b84-9704-a63381f34c96"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 10:16:03 crc kubenswrapper[4698]: I1014 10:16:03.327850 4698 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/572abc7f-83af-4b84-9704-a63381f34c96-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 14 10:16:03 crc kubenswrapper[4698]: I1014 10:16:03.327883 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-djmc7\" (UniqueName: \"kubernetes.io/projected/572abc7f-83af-4b84-9704-a63381f34c96-kube-api-access-djmc7\") on node \"crc\" DevicePath \"\"" Oct 14 10:16:03 crc kubenswrapper[4698]: I1014 10:16:03.327898 4698 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/572abc7f-83af-4b84-9704-a63381f34c96-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Oct 14 10:16:03 crc kubenswrapper[4698]: I1014 10:16:03.327910 4698 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/572abc7f-83af-4b84-9704-a63381f34c96-scripts\") on node \"crc\" DevicePath \"\"" Oct 14 10:16:03 crc kubenswrapper[4698]: I1014 10:16:03.364458 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/572abc7f-83af-4b84-9704-a63381f34c96-config-data" (OuterVolumeSpecName: "config-data") pod "572abc7f-83af-4b84-9704-a63381f34c96" (UID: "572abc7f-83af-4b84-9704-a63381f34c96"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 10:16:03 crc kubenswrapper[4698]: I1014 10:16:03.430267 4698 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/572abc7f-83af-4b84-9704-a63381f34c96-config-data\") on node \"crc\" DevicePath \"\"" Oct 14 10:16:03 crc kubenswrapper[4698]: I1014 10:16:03.861303 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"572abc7f-83af-4b84-9704-a63381f34c96","Type":"ContainerDied","Data":"f8f7c6d9c9973697f08ad9fb591856fe9fe6b159fe23429da6f2968656a3b289"} Oct 14 10:16:03 crc kubenswrapper[4698]: I1014 10:16:03.861388 4698 scope.go:117] "RemoveContainer" containerID="0cc5323bc512981ac58e7b40e976b197214c69101a7c4dc8812b8415adc805f5" Oct 14 10:16:03 crc kubenswrapper[4698]: I1014 10:16:03.861334 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Oct 14 10:16:03 crc kubenswrapper[4698]: I1014 10:16:03.861891 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Oct 14 10:16:03 crc kubenswrapper[4698]: I1014 10:16:03.862119 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Oct 14 10:16:03 crc kubenswrapper[4698]: I1014 10:16:03.886970 4698 scope.go:117] "RemoveContainer" containerID="99cbcb5f515575ea03913c612458652256ed2daedaccf0c471f30ed1976f0147" Oct 14 10:16:03 crc kubenswrapper[4698]: I1014 10:16:03.911507 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Oct 14 10:16:03 crc kubenswrapper[4698]: I1014 10:16:03.916041 4698 scope.go:117] "RemoveContainer" containerID="0889fa9d3cd1d8b9c3e019368297b4ee4abb8517b9e93f6c12649b80301ed67f" Oct 14 10:16:03 crc kubenswrapper[4698]: I1014 10:16:03.920196 4698 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Oct 14 10:16:03 crc 
kubenswrapper[4698]: I1014 10:16:03.945927 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Oct 14 10:16:03 crc kubenswrapper[4698]: E1014 10:16:03.946562 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="572abc7f-83af-4b84-9704-a63381f34c96" containerName="sg-core" Oct 14 10:16:03 crc kubenswrapper[4698]: I1014 10:16:03.946590 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="572abc7f-83af-4b84-9704-a63381f34c96" containerName="sg-core" Oct 14 10:16:03 crc kubenswrapper[4698]: E1014 10:16:03.946617 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="572abc7f-83af-4b84-9704-a63381f34c96" containerName="ceilometer-central-agent" Oct 14 10:16:03 crc kubenswrapper[4698]: I1014 10:16:03.946626 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="572abc7f-83af-4b84-9704-a63381f34c96" containerName="ceilometer-central-agent" Oct 14 10:16:03 crc kubenswrapper[4698]: E1014 10:16:03.946664 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="572abc7f-83af-4b84-9704-a63381f34c96" containerName="proxy-httpd" Oct 14 10:16:03 crc kubenswrapper[4698]: I1014 10:16:03.946674 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="572abc7f-83af-4b84-9704-a63381f34c96" containerName="proxy-httpd" Oct 14 10:16:03 crc kubenswrapper[4698]: E1014 10:16:03.946684 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="572abc7f-83af-4b84-9704-a63381f34c96" containerName="ceilometer-notification-agent" Oct 14 10:16:03 crc kubenswrapper[4698]: I1014 10:16:03.946692 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="572abc7f-83af-4b84-9704-a63381f34c96" containerName="ceilometer-notification-agent" Oct 14 10:16:03 crc kubenswrapper[4698]: I1014 10:16:03.946951 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="572abc7f-83af-4b84-9704-a63381f34c96" containerName="proxy-httpd" Oct 14 10:16:03 crc kubenswrapper[4698]: I1014 10:16:03.946985 
4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="572abc7f-83af-4b84-9704-a63381f34c96" containerName="ceilometer-central-agent" Oct 14 10:16:03 crc kubenswrapper[4698]: I1014 10:16:03.947000 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="572abc7f-83af-4b84-9704-a63381f34c96" containerName="sg-core" Oct 14 10:16:03 crc kubenswrapper[4698]: I1014 10:16:03.947010 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="572abc7f-83af-4b84-9704-a63381f34c96" containerName="ceilometer-notification-agent" Oct 14 10:16:03 crc kubenswrapper[4698]: I1014 10:16:03.949205 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Oct 14 10:16:03 crc kubenswrapper[4698]: I1014 10:16:03.952328 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Oct 14 10:16:03 crc kubenswrapper[4698]: I1014 10:16:03.952421 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Oct 14 10:16:03 crc kubenswrapper[4698]: I1014 10:16:03.973863 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Oct 14 10:16:03 crc kubenswrapper[4698]: I1014 10:16:03.981912 4698 scope.go:117] "RemoveContainer" containerID="45b6d06f83f5fc461ad2c021f83caad75ec89ea6d60308ea20119be9e6e87f7f" Oct 14 10:16:04 crc kubenswrapper[4698]: I1014 10:16:04.153835 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e0c0309d-223a-4cf9-85e3-837ee78bd32a-log-httpd\") pod \"ceilometer-0\" (UID: \"e0c0309d-223a-4cf9-85e3-837ee78bd32a\") " pod="openstack/ceilometer-0" Oct 14 10:16:04 crc kubenswrapper[4698]: I1014 10:16:04.153918 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: 
\"kubernetes.io/secret/e0c0309d-223a-4cf9-85e3-837ee78bd32a-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"e0c0309d-223a-4cf9-85e3-837ee78bd32a\") " pod="openstack/ceilometer-0" Oct 14 10:16:04 crc kubenswrapper[4698]: I1014 10:16:04.154005 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e0c0309d-223a-4cf9-85e3-837ee78bd32a-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"e0c0309d-223a-4cf9-85e3-837ee78bd32a\") " pod="openstack/ceilometer-0" Oct 14 10:16:04 crc kubenswrapper[4698]: I1014 10:16:04.154040 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e0c0309d-223a-4cf9-85e3-837ee78bd32a-scripts\") pod \"ceilometer-0\" (UID: \"e0c0309d-223a-4cf9-85e3-837ee78bd32a\") " pod="openstack/ceilometer-0" Oct 14 10:16:04 crc kubenswrapper[4698]: I1014 10:16:04.154067 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-btrtb\" (UniqueName: \"kubernetes.io/projected/e0c0309d-223a-4cf9-85e3-837ee78bd32a-kube-api-access-btrtb\") pod \"ceilometer-0\" (UID: \"e0c0309d-223a-4cf9-85e3-837ee78bd32a\") " pod="openstack/ceilometer-0" Oct 14 10:16:04 crc kubenswrapper[4698]: I1014 10:16:04.154103 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e0c0309d-223a-4cf9-85e3-837ee78bd32a-run-httpd\") pod \"ceilometer-0\" (UID: \"e0c0309d-223a-4cf9-85e3-837ee78bd32a\") " pod="openstack/ceilometer-0" Oct 14 10:16:04 crc kubenswrapper[4698]: I1014 10:16:04.154132 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e0c0309d-223a-4cf9-85e3-837ee78bd32a-config-data\") pod \"ceilometer-0\" (UID: 
\"e0c0309d-223a-4cf9-85e3-837ee78bd32a\") " pod="openstack/ceilometer-0" Oct 14 10:16:04 crc kubenswrapper[4698]: I1014 10:16:04.256107 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e0c0309d-223a-4cf9-85e3-837ee78bd32a-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"e0c0309d-223a-4cf9-85e3-837ee78bd32a\") " pod="openstack/ceilometer-0" Oct 14 10:16:04 crc kubenswrapper[4698]: I1014 10:16:04.256171 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e0c0309d-223a-4cf9-85e3-837ee78bd32a-scripts\") pod \"ceilometer-0\" (UID: \"e0c0309d-223a-4cf9-85e3-837ee78bd32a\") " pod="openstack/ceilometer-0" Oct 14 10:16:04 crc kubenswrapper[4698]: I1014 10:16:04.256199 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-btrtb\" (UniqueName: \"kubernetes.io/projected/e0c0309d-223a-4cf9-85e3-837ee78bd32a-kube-api-access-btrtb\") pod \"ceilometer-0\" (UID: \"e0c0309d-223a-4cf9-85e3-837ee78bd32a\") " pod="openstack/ceilometer-0" Oct 14 10:16:04 crc kubenswrapper[4698]: I1014 10:16:04.256231 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e0c0309d-223a-4cf9-85e3-837ee78bd32a-run-httpd\") pod \"ceilometer-0\" (UID: \"e0c0309d-223a-4cf9-85e3-837ee78bd32a\") " pod="openstack/ceilometer-0" Oct 14 10:16:04 crc kubenswrapper[4698]: I1014 10:16:04.256262 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e0c0309d-223a-4cf9-85e3-837ee78bd32a-config-data\") pod \"ceilometer-0\" (UID: \"e0c0309d-223a-4cf9-85e3-837ee78bd32a\") " pod="openstack/ceilometer-0" Oct 14 10:16:04 crc kubenswrapper[4698]: I1014 10:16:04.256290 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" 
(UniqueName: \"kubernetes.io/empty-dir/e0c0309d-223a-4cf9-85e3-837ee78bd32a-log-httpd\") pod \"ceilometer-0\" (UID: \"e0c0309d-223a-4cf9-85e3-837ee78bd32a\") " pod="openstack/ceilometer-0" Oct 14 10:16:04 crc kubenswrapper[4698]: I1014 10:16:04.256329 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e0c0309d-223a-4cf9-85e3-837ee78bd32a-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"e0c0309d-223a-4cf9-85e3-837ee78bd32a\") " pod="openstack/ceilometer-0" Oct 14 10:16:04 crc kubenswrapper[4698]: I1014 10:16:04.258230 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e0c0309d-223a-4cf9-85e3-837ee78bd32a-run-httpd\") pod \"ceilometer-0\" (UID: \"e0c0309d-223a-4cf9-85e3-837ee78bd32a\") " pod="openstack/ceilometer-0" Oct 14 10:16:04 crc kubenswrapper[4698]: I1014 10:16:04.258365 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e0c0309d-223a-4cf9-85e3-837ee78bd32a-log-httpd\") pod \"ceilometer-0\" (UID: \"e0c0309d-223a-4cf9-85e3-837ee78bd32a\") " pod="openstack/ceilometer-0" Oct 14 10:16:04 crc kubenswrapper[4698]: I1014 10:16:04.297058 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e0c0309d-223a-4cf9-85e3-837ee78bd32a-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"e0c0309d-223a-4cf9-85e3-837ee78bd32a\") " pod="openstack/ceilometer-0" Oct 14 10:16:04 crc kubenswrapper[4698]: I1014 10:16:04.297725 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e0c0309d-223a-4cf9-85e3-837ee78bd32a-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"e0c0309d-223a-4cf9-85e3-837ee78bd32a\") " pod="openstack/ceilometer-0" Oct 14 10:16:04 crc kubenswrapper[4698]: I1014 10:16:04.303221 
4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e0c0309d-223a-4cf9-85e3-837ee78bd32a-scripts\") pod \"ceilometer-0\" (UID: \"e0c0309d-223a-4cf9-85e3-837ee78bd32a\") " pod="openstack/ceilometer-0" Oct 14 10:16:04 crc kubenswrapper[4698]: I1014 10:16:04.303360 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e0c0309d-223a-4cf9-85e3-837ee78bd32a-config-data\") pod \"ceilometer-0\" (UID: \"e0c0309d-223a-4cf9-85e3-837ee78bd32a\") " pod="openstack/ceilometer-0" Oct 14 10:16:04 crc kubenswrapper[4698]: I1014 10:16:04.314513 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-btrtb\" (UniqueName: \"kubernetes.io/projected/e0c0309d-223a-4cf9-85e3-837ee78bd32a-kube-api-access-btrtb\") pod \"ceilometer-0\" (UID: \"e0c0309d-223a-4cf9-85e3-837ee78bd32a\") " pod="openstack/ceilometer-0" Oct 14 10:16:04 crc kubenswrapper[4698]: I1014 10:16:04.587883 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Oct 14 10:16:04 crc kubenswrapper[4698]: I1014 10:16:04.604581 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Oct 14 10:16:04 crc kubenswrapper[4698]: I1014 10:16:04.960565 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/manila-scheduler-0" Oct 14 10:16:05 crc kubenswrapper[4698]: I1014 10:16:05.026921 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="572abc7f-83af-4b84-9704-a63381f34c96" path="/var/lib/kubelet/pods/572abc7f-83af-4b84-9704-a63381f34c96/volumes" Oct 14 10:16:05 crc kubenswrapper[4698]: W1014 10:16:05.167051 4698 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode0c0309d_223a_4cf9_85e3_837ee78bd32a.slice/crio-0f70ff452f039b2ed72f1e186d22ee6978ac3109b28c0e49f6c9471e8eb68794 WatchSource:0}: Error finding container 0f70ff452f039b2ed72f1e186d22ee6978ac3109b28c0e49f6c9471e8eb68794: Status 404 returned error can't find the container with id 0f70ff452f039b2ed72f1e186d22ee6978ac3109b28c0e49f6c9471e8eb68794 Oct 14 10:16:05 crc kubenswrapper[4698]: I1014 10:16:05.175297 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Oct 14 10:16:05 crc kubenswrapper[4698]: I1014 10:16:05.550461 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/manila-share-share1-0" Oct 14 10:16:05 crc kubenswrapper[4698]: I1014 10:16:05.635319 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/manila-share-share1-0"] Oct 14 10:16:05 crc kubenswrapper[4698]: I1014 10:16:05.885833 4698 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/manila-share-share1-0" podUID="e1230245-6b92-4e01-bc07-043a24a9edd3" containerName="manila-share" containerID="cri-o://2c91f8b61ae8fb6ad29050152b1146da8c33b19aa59b735ae1551b69b4849ec7" gracePeriod=30 
Oct 14 10:16:05 crc kubenswrapper[4698]: I1014 10:16:05.886067 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e0c0309d-223a-4cf9-85e3-837ee78bd32a","Type":"ContainerStarted","Data":"0f70ff452f039b2ed72f1e186d22ee6978ac3109b28c0e49f6c9471e8eb68794"} Oct 14 10:16:05 crc kubenswrapper[4698]: I1014 10:16:05.886232 4698 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/manila-share-share1-0" podUID="e1230245-6b92-4e01-bc07-043a24a9edd3" containerName="probe" containerID="cri-o://315a8c2799b9e235fc1ebfb1b5cc09ae89a07755913d57d50dd8a060545e915a" gracePeriod=30 Oct 14 10:16:06 crc kubenswrapper[4698]: I1014 10:16:06.083012 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Oct 14 10:16:06 crc kubenswrapper[4698]: I1014 10:16:06.083063 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Oct 14 10:16:06 crc kubenswrapper[4698]: I1014 10:16:06.130521 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Oct 14 10:16:06 crc kubenswrapper[4698]: I1014 10:16:06.160666 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Oct 14 10:16:06 crc kubenswrapper[4698]: I1014 10:16:06.558229 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Oct 14 10:16:06 crc kubenswrapper[4698]: I1014 10:16:06.558525 4698 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Oct 14 10:16:06 crc kubenswrapper[4698]: I1014 10:16:06.563592 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Oct 14 10:16:06 crc kubenswrapper[4698]: I1014 10:16:06.923425 4698 generic.go:334] "Generic (PLEG): container 
finished" podID="e1230245-6b92-4e01-bc07-043a24a9edd3" containerID="315a8c2799b9e235fc1ebfb1b5cc09ae89a07755913d57d50dd8a060545e915a" exitCode=0 Oct 14 10:16:06 crc kubenswrapper[4698]: I1014 10:16:06.923751 4698 generic.go:334] "Generic (PLEG): container finished" podID="e1230245-6b92-4e01-bc07-043a24a9edd3" containerID="2c91f8b61ae8fb6ad29050152b1146da8c33b19aa59b735ae1551b69b4849ec7" exitCode=1 Oct 14 10:16:06 crc kubenswrapper[4698]: I1014 10:16:06.923495 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-share-share1-0" event={"ID":"e1230245-6b92-4e01-bc07-043a24a9edd3","Type":"ContainerDied","Data":"315a8c2799b9e235fc1ebfb1b5cc09ae89a07755913d57d50dd8a060545e915a"} Oct 14 10:16:06 crc kubenswrapper[4698]: I1014 10:16:06.923896 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-share-share1-0" event={"ID":"e1230245-6b92-4e01-bc07-043a24a9edd3","Type":"ContainerDied","Data":"2c91f8b61ae8fb6ad29050152b1146da8c33b19aa59b735ae1551b69b4849ec7"} Oct 14 10:16:06 crc kubenswrapper[4698]: I1014 10:16:06.929767 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e0c0309d-223a-4cf9-85e3-837ee78bd32a","Type":"ContainerStarted","Data":"2312d763653e8eead2e2b582edea00f5d5846d3c457e87be6d3a7acc8d275d77"} Oct 14 10:16:06 crc kubenswrapper[4698]: I1014 10:16:06.929845 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e0c0309d-223a-4cf9-85e3-837ee78bd32a","Type":"ContainerStarted","Data":"7dff834e84fd8bba9fa2d567584a358d2de241f9cfd5fa6498fd9f61b38b7d0d"} Oct 14 10:16:06 crc kubenswrapper[4698]: I1014 10:16:06.930837 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Oct 14 10:16:06 crc kubenswrapper[4698]: I1014 10:16:06.930861 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Oct 14 10:16:07 crc 
kubenswrapper[4698]: I1014 10:16:07.119680 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/manila-share-share1-0" Oct 14 10:16:07 crc kubenswrapper[4698]: I1014 10:16:07.270702 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e1230245-6b92-4e01-bc07-043a24a9edd3-config-data\") pod \"e1230245-6b92-4e01-bc07-043a24a9edd3\" (UID: \"e1230245-6b92-4e01-bc07-043a24a9edd3\") " Oct 14 10:16:07 crc kubenswrapper[4698]: I1014 10:16:07.271215 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/e1230245-6b92-4e01-bc07-043a24a9edd3-ceph\") pod \"e1230245-6b92-4e01-bc07-043a24a9edd3\" (UID: \"e1230245-6b92-4e01-bc07-043a24a9edd3\") " Oct 14 10:16:07 crc kubenswrapper[4698]: I1014 10:16:07.271253 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e1230245-6b92-4e01-bc07-043a24a9edd3-combined-ca-bundle\") pod \"e1230245-6b92-4e01-bc07-043a24a9edd3\" (UID: \"e1230245-6b92-4e01-bc07-043a24a9edd3\") " Oct 14 10:16:07 crc kubenswrapper[4698]: I1014 10:16:07.271326 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-manila\" (UniqueName: \"kubernetes.io/host-path/e1230245-6b92-4e01-bc07-043a24a9edd3-var-lib-manila\") pod \"e1230245-6b92-4e01-bc07-043a24a9edd3\" (UID: \"e1230245-6b92-4e01-bc07-043a24a9edd3\") " Oct 14 10:16:07 crc kubenswrapper[4698]: I1014 10:16:07.271470 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e1230245-6b92-4e01-bc07-043a24a9edd3-config-data-custom\") pod \"e1230245-6b92-4e01-bc07-043a24a9edd3\" (UID: \"e1230245-6b92-4e01-bc07-043a24a9edd3\") " Oct 14 10:16:07 crc kubenswrapper[4698]: I1014 10:16:07.271574 4698 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b2d8p\" (UniqueName: \"kubernetes.io/projected/e1230245-6b92-4e01-bc07-043a24a9edd3-kube-api-access-b2d8p\") pod \"e1230245-6b92-4e01-bc07-043a24a9edd3\" (UID: \"e1230245-6b92-4e01-bc07-043a24a9edd3\") " Oct 14 10:16:07 crc kubenswrapper[4698]: I1014 10:16:07.271641 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/e1230245-6b92-4e01-bc07-043a24a9edd3-etc-machine-id\") pod \"e1230245-6b92-4e01-bc07-043a24a9edd3\" (UID: \"e1230245-6b92-4e01-bc07-043a24a9edd3\") " Oct 14 10:16:07 crc kubenswrapper[4698]: I1014 10:16:07.271755 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e1230245-6b92-4e01-bc07-043a24a9edd3-scripts\") pod \"e1230245-6b92-4e01-bc07-043a24a9edd3\" (UID: \"e1230245-6b92-4e01-bc07-043a24a9edd3\") " Oct 14 10:16:07 crc kubenswrapper[4698]: I1014 10:16:07.271938 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e1230245-6b92-4e01-bc07-043a24a9edd3-var-lib-manila" (OuterVolumeSpecName: "var-lib-manila") pod "e1230245-6b92-4e01-bc07-043a24a9edd3" (UID: "e1230245-6b92-4e01-bc07-043a24a9edd3"). InnerVolumeSpecName "var-lib-manila". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 14 10:16:07 crc kubenswrapper[4698]: I1014 10:16:07.272333 4698 reconciler_common.go:293] "Volume detached for volume \"var-lib-manila\" (UniqueName: \"kubernetes.io/host-path/e1230245-6b92-4e01-bc07-043a24a9edd3-var-lib-manila\") on node \"crc\" DevicePath \"\"" Oct 14 10:16:07 crc kubenswrapper[4698]: I1014 10:16:07.273430 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e1230245-6b92-4e01-bc07-043a24a9edd3-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "e1230245-6b92-4e01-bc07-043a24a9edd3" (UID: "e1230245-6b92-4e01-bc07-043a24a9edd3"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 14 10:16:07 crc kubenswrapper[4698]: I1014 10:16:07.293041 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e1230245-6b92-4e01-bc07-043a24a9edd3-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "e1230245-6b92-4e01-bc07-043a24a9edd3" (UID: "e1230245-6b92-4e01-bc07-043a24a9edd3"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 10:16:07 crc kubenswrapper[4698]: I1014 10:16:07.293215 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e1230245-6b92-4e01-bc07-043a24a9edd3-ceph" (OuterVolumeSpecName: "ceph") pod "e1230245-6b92-4e01-bc07-043a24a9edd3" (UID: "e1230245-6b92-4e01-bc07-043a24a9edd3"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 10:16:07 crc kubenswrapper[4698]: I1014 10:16:07.297257 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e1230245-6b92-4e01-bc07-043a24a9edd3-kube-api-access-b2d8p" (OuterVolumeSpecName: "kube-api-access-b2d8p") pod "e1230245-6b92-4e01-bc07-043a24a9edd3" (UID: "e1230245-6b92-4e01-bc07-043a24a9edd3"). 
InnerVolumeSpecName "kube-api-access-b2d8p". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 10:16:07 crc kubenswrapper[4698]: I1014 10:16:07.299976 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e1230245-6b92-4e01-bc07-043a24a9edd3-scripts" (OuterVolumeSpecName: "scripts") pod "e1230245-6b92-4e01-bc07-043a24a9edd3" (UID: "e1230245-6b92-4e01-bc07-043a24a9edd3"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 10:16:07 crc kubenswrapper[4698]: I1014 10:16:07.327169 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e1230245-6b92-4e01-bc07-043a24a9edd3-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e1230245-6b92-4e01-bc07-043a24a9edd3" (UID: "e1230245-6b92-4e01-bc07-043a24a9edd3"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 10:16:07 crc kubenswrapper[4698]: I1014 10:16:07.374995 4698 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e1230245-6b92-4e01-bc07-043a24a9edd3-config-data-custom\") on node \"crc\" DevicePath \"\"" Oct 14 10:16:07 crc kubenswrapper[4698]: I1014 10:16:07.375140 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b2d8p\" (UniqueName: \"kubernetes.io/projected/e1230245-6b92-4e01-bc07-043a24a9edd3-kube-api-access-b2d8p\") on node \"crc\" DevicePath \"\"" Oct 14 10:16:07 crc kubenswrapper[4698]: I1014 10:16:07.375230 4698 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/e1230245-6b92-4e01-bc07-043a24a9edd3-etc-machine-id\") on node \"crc\" DevicePath \"\"" Oct 14 10:16:07 crc kubenswrapper[4698]: I1014 10:16:07.375313 4698 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/e1230245-6b92-4e01-bc07-043a24a9edd3-scripts\") on node \"crc\" DevicePath \"\"" Oct 14 10:16:07 crc kubenswrapper[4698]: I1014 10:16:07.375418 4698 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/e1230245-6b92-4e01-bc07-043a24a9edd3-ceph\") on node \"crc\" DevicePath \"\"" Oct 14 10:16:07 crc kubenswrapper[4698]: I1014 10:16:07.375510 4698 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e1230245-6b92-4e01-bc07-043a24a9edd3-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 14 10:16:07 crc kubenswrapper[4698]: I1014 10:16:07.401944 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e1230245-6b92-4e01-bc07-043a24a9edd3-config-data" (OuterVolumeSpecName: "config-data") pod "e1230245-6b92-4e01-bc07-043a24a9edd3" (UID: "e1230245-6b92-4e01-bc07-043a24a9edd3"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 10:16:07 crc kubenswrapper[4698]: I1014 10:16:07.477887 4698 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e1230245-6b92-4e01-bc07-043a24a9edd3-config-data\") on node \"crc\" DevicePath \"\"" Oct 14 10:16:07 crc kubenswrapper[4698]: I1014 10:16:07.940244 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-share-share1-0" event={"ID":"e1230245-6b92-4e01-bc07-043a24a9edd3","Type":"ContainerDied","Data":"972f1ca10b061eb6f2a1c3058fb5cc87bf89f33441aa3146ea7635439502a171"} Oct 14 10:16:07 crc kubenswrapper[4698]: I1014 10:16:07.940308 4698 scope.go:117] "RemoveContainer" containerID="315a8c2799b9e235fc1ebfb1b5cc09ae89a07755913d57d50dd8a060545e915a" Oct 14 10:16:07 crc kubenswrapper[4698]: I1014 10:16:07.940343 4698 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-share-share1-0" Oct 14 10:16:07 crc kubenswrapper[4698]: I1014 10:16:07.948226 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e0c0309d-223a-4cf9-85e3-837ee78bd32a","Type":"ContainerStarted","Data":"0347f305f1905c4c7e3c8ac244bffb66f15f524953c6083fc4b910c412cfdbd6"} Oct 14 10:16:07 crc kubenswrapper[4698]: I1014 10:16:07.963688 4698 scope.go:117] "RemoveContainer" containerID="2c91f8b61ae8fb6ad29050152b1146da8c33b19aa59b735ae1551b69b4849ec7" Oct 14 10:16:08 crc kubenswrapper[4698]: I1014 10:16:08.006850 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/manila-share-share1-0"] Oct 14 10:16:08 crc kubenswrapper[4698]: I1014 10:16:08.013330 4698 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/manila-share-share1-0"] Oct 14 10:16:08 crc kubenswrapper[4698]: I1014 10:16:08.025647 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/manila-share-share1-0"] Oct 14 10:16:08 crc kubenswrapper[4698]: E1014 10:16:08.031249 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e1230245-6b92-4e01-bc07-043a24a9edd3" containerName="manila-share" Oct 14 10:16:08 crc kubenswrapper[4698]: I1014 10:16:08.031280 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="e1230245-6b92-4e01-bc07-043a24a9edd3" containerName="manila-share" Oct 14 10:16:08 crc kubenswrapper[4698]: E1014 10:16:08.031327 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e1230245-6b92-4e01-bc07-043a24a9edd3" containerName="probe" Oct 14 10:16:08 crc kubenswrapper[4698]: I1014 10:16:08.031333 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="e1230245-6b92-4e01-bc07-043a24a9edd3" containerName="probe" Oct 14 10:16:08 crc kubenswrapper[4698]: I1014 10:16:08.031523 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="e1230245-6b92-4e01-bc07-043a24a9edd3" containerName="manila-share" Oct 14 10:16:08 crc 
kubenswrapper[4698]: I1014 10:16:08.031538 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="e1230245-6b92-4e01-bc07-043a24a9edd3" containerName="probe" Oct 14 10:16:08 crc kubenswrapper[4698]: I1014 10:16:08.032589 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-share-share1-0" Oct 14 10:16:08 crc kubenswrapper[4698]: I1014 10:16:08.041388 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-share-share1-config-data" Oct 14 10:16:08 crc kubenswrapper[4698]: I1014 10:16:08.049845 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-share-share1-0"] Oct 14 10:16:08 crc kubenswrapper[4698]: I1014 10:16:08.191017 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-manila\" (UniqueName: \"kubernetes.io/host-path/ba3b2198-b260-4fe9-ad12-dd23c9e9d4ce-var-lib-manila\") pod \"manila-share-share1-0\" (UID: \"ba3b2198-b260-4fe9-ad12-dd23c9e9d4ce\") " pod="openstack/manila-share-share1-0" Oct 14 10:16:08 crc kubenswrapper[4698]: I1014 10:16:08.191088 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ba3b2198-b260-4fe9-ad12-dd23c9e9d4ce-combined-ca-bundle\") pod \"manila-share-share1-0\" (UID: \"ba3b2198-b260-4fe9-ad12-dd23c9e9d4ce\") " pod="openstack/manila-share-share1-0" Oct 14 10:16:08 crc kubenswrapper[4698]: I1014 10:16:08.191109 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ba3b2198-b260-4fe9-ad12-dd23c9e9d4ce-config-data-custom\") pod \"manila-share-share1-0\" (UID: \"ba3b2198-b260-4fe9-ad12-dd23c9e9d4ce\") " pod="openstack/manila-share-share1-0" Oct 14 10:16:08 crc kubenswrapper[4698]: I1014 10:16:08.191158 4698 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ba3b2198-b260-4fe9-ad12-dd23c9e9d4ce-config-data\") pod \"manila-share-share1-0\" (UID: \"ba3b2198-b260-4fe9-ad12-dd23c9e9d4ce\") " pod="openstack/manila-share-share1-0" Oct 14 10:16:08 crc kubenswrapper[4698]: I1014 10:16:08.191177 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ba3b2198-b260-4fe9-ad12-dd23c9e9d4ce-scripts\") pod \"manila-share-share1-0\" (UID: \"ba3b2198-b260-4fe9-ad12-dd23c9e9d4ce\") " pod="openstack/manila-share-share1-0" Oct 14 10:16:08 crc kubenswrapper[4698]: I1014 10:16:08.191210 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rfxfh\" (UniqueName: \"kubernetes.io/projected/ba3b2198-b260-4fe9-ad12-dd23c9e9d4ce-kube-api-access-rfxfh\") pod \"manila-share-share1-0\" (UID: \"ba3b2198-b260-4fe9-ad12-dd23c9e9d4ce\") " pod="openstack/manila-share-share1-0" Oct 14 10:16:08 crc kubenswrapper[4698]: I1014 10:16:08.191228 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/ba3b2198-b260-4fe9-ad12-dd23c9e9d4ce-ceph\") pod \"manila-share-share1-0\" (UID: \"ba3b2198-b260-4fe9-ad12-dd23c9e9d4ce\") " pod="openstack/manila-share-share1-0" Oct 14 10:16:08 crc kubenswrapper[4698]: I1014 10:16:08.191264 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/ba3b2198-b260-4fe9-ad12-dd23c9e9d4ce-etc-machine-id\") pod \"manila-share-share1-0\" (UID: \"ba3b2198-b260-4fe9-ad12-dd23c9e9d4ce\") " pod="openstack/manila-share-share1-0" Oct 14 10:16:08 crc kubenswrapper[4698]: I1014 10:16:08.292565 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-rfxfh\" (UniqueName: \"kubernetes.io/projected/ba3b2198-b260-4fe9-ad12-dd23c9e9d4ce-kube-api-access-rfxfh\") pod \"manila-share-share1-0\" (UID: \"ba3b2198-b260-4fe9-ad12-dd23c9e9d4ce\") " pod="openstack/manila-share-share1-0" Oct 14 10:16:08 crc kubenswrapper[4698]: I1014 10:16:08.292615 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/ba3b2198-b260-4fe9-ad12-dd23c9e9d4ce-ceph\") pod \"manila-share-share1-0\" (UID: \"ba3b2198-b260-4fe9-ad12-dd23c9e9d4ce\") " pod="openstack/manila-share-share1-0" Oct 14 10:16:08 crc kubenswrapper[4698]: I1014 10:16:08.292660 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/ba3b2198-b260-4fe9-ad12-dd23c9e9d4ce-etc-machine-id\") pod \"manila-share-share1-0\" (UID: \"ba3b2198-b260-4fe9-ad12-dd23c9e9d4ce\") " pod="openstack/manila-share-share1-0" Oct 14 10:16:08 crc kubenswrapper[4698]: I1014 10:16:08.292737 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-manila\" (UniqueName: \"kubernetes.io/host-path/ba3b2198-b260-4fe9-ad12-dd23c9e9d4ce-var-lib-manila\") pod \"manila-share-share1-0\" (UID: \"ba3b2198-b260-4fe9-ad12-dd23c9e9d4ce\") " pod="openstack/manila-share-share1-0" Oct 14 10:16:08 crc kubenswrapper[4698]: I1014 10:16:08.292772 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ba3b2198-b260-4fe9-ad12-dd23c9e9d4ce-combined-ca-bundle\") pod \"manila-share-share1-0\" (UID: \"ba3b2198-b260-4fe9-ad12-dd23c9e9d4ce\") " pod="openstack/manila-share-share1-0" Oct 14 10:16:08 crc kubenswrapper[4698]: I1014 10:16:08.292802 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ba3b2198-b260-4fe9-ad12-dd23c9e9d4ce-config-data-custom\") pod 
\"manila-share-share1-0\" (UID: \"ba3b2198-b260-4fe9-ad12-dd23c9e9d4ce\") " pod="openstack/manila-share-share1-0" Oct 14 10:16:08 crc kubenswrapper[4698]: I1014 10:16:08.292849 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ba3b2198-b260-4fe9-ad12-dd23c9e9d4ce-config-data\") pod \"manila-share-share1-0\" (UID: \"ba3b2198-b260-4fe9-ad12-dd23c9e9d4ce\") " pod="openstack/manila-share-share1-0" Oct 14 10:16:08 crc kubenswrapper[4698]: I1014 10:16:08.292863 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ba3b2198-b260-4fe9-ad12-dd23c9e9d4ce-scripts\") pod \"manila-share-share1-0\" (UID: \"ba3b2198-b260-4fe9-ad12-dd23c9e9d4ce\") " pod="openstack/manila-share-share1-0" Oct 14 10:16:08 crc kubenswrapper[4698]: I1014 10:16:08.293255 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-manila\" (UniqueName: \"kubernetes.io/host-path/ba3b2198-b260-4fe9-ad12-dd23c9e9d4ce-var-lib-manila\") pod \"manila-share-share1-0\" (UID: \"ba3b2198-b260-4fe9-ad12-dd23c9e9d4ce\") " pod="openstack/manila-share-share1-0" Oct 14 10:16:08 crc kubenswrapper[4698]: I1014 10:16:08.293553 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/ba3b2198-b260-4fe9-ad12-dd23c9e9d4ce-etc-machine-id\") pod \"manila-share-share1-0\" (UID: \"ba3b2198-b260-4fe9-ad12-dd23c9e9d4ce\") " pod="openstack/manila-share-share1-0" Oct 14 10:16:08 crc kubenswrapper[4698]: I1014 10:16:08.299105 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/ba3b2198-b260-4fe9-ad12-dd23c9e9d4ce-ceph\") pod \"manila-share-share1-0\" (UID: \"ba3b2198-b260-4fe9-ad12-dd23c9e9d4ce\") " pod="openstack/manila-share-share1-0" Oct 14 10:16:08 crc kubenswrapper[4698]: I1014 10:16:08.300001 4698 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ba3b2198-b260-4fe9-ad12-dd23c9e9d4ce-config-data\") pod \"manila-share-share1-0\" (UID: \"ba3b2198-b260-4fe9-ad12-dd23c9e9d4ce\") " pod="openstack/manila-share-share1-0" Oct 14 10:16:08 crc kubenswrapper[4698]: I1014 10:16:08.302180 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ba3b2198-b260-4fe9-ad12-dd23c9e9d4ce-scripts\") pod \"manila-share-share1-0\" (UID: \"ba3b2198-b260-4fe9-ad12-dd23c9e9d4ce\") " pod="openstack/manila-share-share1-0" Oct 14 10:16:08 crc kubenswrapper[4698]: I1014 10:16:08.302373 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ba3b2198-b260-4fe9-ad12-dd23c9e9d4ce-combined-ca-bundle\") pod \"manila-share-share1-0\" (UID: \"ba3b2198-b260-4fe9-ad12-dd23c9e9d4ce\") " pod="openstack/manila-share-share1-0" Oct 14 10:16:08 crc kubenswrapper[4698]: I1014 10:16:08.303288 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ba3b2198-b260-4fe9-ad12-dd23c9e9d4ce-config-data-custom\") pod \"manila-share-share1-0\" (UID: \"ba3b2198-b260-4fe9-ad12-dd23c9e9d4ce\") " pod="openstack/manila-share-share1-0" Oct 14 10:16:08 crc kubenswrapper[4698]: I1014 10:16:08.322698 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rfxfh\" (UniqueName: \"kubernetes.io/projected/ba3b2198-b260-4fe9-ad12-dd23c9e9d4ce-kube-api-access-rfxfh\") pod \"manila-share-share1-0\" (UID: \"ba3b2198-b260-4fe9-ad12-dd23c9e9d4ce\") " pod="openstack/manila-share-share1-0" Oct 14 10:16:08 crc kubenswrapper[4698]: I1014 10:16:08.357064 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-share-share1-0" Oct 14 10:16:08 crc kubenswrapper[4698]: I1014 10:16:08.958519 4698 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Oct 14 10:16:08 crc kubenswrapper[4698]: I1014 10:16:08.958940 4698 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Oct 14 10:16:09 crc kubenswrapper[4698]: I1014 10:16:09.028475 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e1230245-6b92-4e01-bc07-043a24a9edd3" path="/var/lib/kubelet/pods/e1230245-6b92-4e01-bc07-043a24a9edd3/volumes" Oct 14 10:16:09 crc kubenswrapper[4698]: I1014 10:16:09.126148 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-share-share1-0"] Oct 14 10:16:09 crc kubenswrapper[4698]: I1014 10:16:09.776195 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Oct 14 10:16:09 crc kubenswrapper[4698]: I1014 10:16:09.782970 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Oct 14 10:16:09 crc kubenswrapper[4698]: I1014 10:16:09.980975 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-share-share1-0" event={"ID":"ba3b2198-b260-4fe9-ad12-dd23c9e9d4ce","Type":"ContainerStarted","Data":"3f9b0cbdba0ed62642a7f332f444a173b7881d64d178da86e48096140be21ebb"} Oct 14 10:16:09 crc kubenswrapper[4698]: I1014 10:16:09.981021 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-share-share1-0" event={"ID":"ba3b2198-b260-4fe9-ad12-dd23c9e9d4ce","Type":"ContainerStarted","Data":"60d8331bd0913342b2dfaa156987ed7046393a49c9a462ed9cb368db58ab54da"} Oct 14 10:16:09 crc kubenswrapper[4698]: I1014 10:16:09.995997 4698 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="e0c0309d-223a-4cf9-85e3-837ee78bd32a" containerName="ceilometer-central-agent" 
containerID="cri-o://2312d763653e8eead2e2b582edea00f5d5846d3c457e87be6d3a7acc8d275d77" gracePeriod=30 Oct 14 10:16:09 crc kubenswrapper[4698]: I1014 10:16:09.996557 4698 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="e0c0309d-223a-4cf9-85e3-837ee78bd32a" containerName="proxy-httpd" containerID="cri-o://13f4c9a760cdb6f348511bfe6dbeb5c0807ba6584f7e735594f94b38602d52cc" gracePeriod=30 Oct 14 10:16:09 crc kubenswrapper[4698]: I1014 10:16:09.996640 4698 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="e0c0309d-223a-4cf9-85e3-837ee78bd32a" containerName="sg-core" containerID="cri-o://0347f305f1905c4c7e3c8ac244bffb66f15f524953c6083fc4b910c412cfdbd6" gracePeriod=30 Oct 14 10:16:09 crc kubenswrapper[4698]: I1014 10:16:09.997008 4698 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="e0c0309d-223a-4cf9-85e3-837ee78bd32a" containerName="ceilometer-notification-agent" containerID="cri-o://7dff834e84fd8bba9fa2d567584a358d2de241f9cfd5fa6498fd9f61b38b7d0d" gracePeriod=30 Oct 14 10:16:09 crc kubenswrapper[4698]: I1014 10:16:09.997395 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e0c0309d-223a-4cf9-85e3-837ee78bd32a","Type":"ContainerStarted","Data":"13f4c9a760cdb6f348511bfe6dbeb5c0807ba6584f7e735594f94b38602d52cc"} Oct 14 10:16:09 crc kubenswrapper[4698]: I1014 10:16:09.997450 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Oct 14 10:16:10 crc kubenswrapper[4698]: I1014 10:16:10.031797 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=3.501049009 podStartE2EDuration="7.031762743s" podCreationTimestamp="2025-10-14 10:16:03 +0000 UTC" firstStartedPulling="2025-10-14 10:16:05.169422153 +0000 UTC m=+1146.866721569" 
lastFinishedPulling="2025-10-14 10:16:08.700135887 +0000 UTC m=+1150.397435303" observedRunningTime="2025-10-14 10:16:10.02333232 +0000 UTC m=+1151.720631736" watchObservedRunningTime="2025-10-14 10:16:10.031762743 +0000 UTC m=+1151.729062159" Oct 14 10:16:11 crc kubenswrapper[4698]: I1014 10:16:11.006924 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-share-share1-0" event={"ID":"ba3b2198-b260-4fe9-ad12-dd23c9e9d4ce","Type":"ContainerStarted","Data":"06b8e91b8b7ad4aa5d2b8d0e87ff7fafd7c98a6fbe9d45c893bb11e2bd0f7b68"} Oct 14 10:16:11 crc kubenswrapper[4698]: I1014 10:16:11.011075 4698 generic.go:334] "Generic (PLEG): container finished" podID="e0c0309d-223a-4cf9-85e3-837ee78bd32a" containerID="13f4c9a760cdb6f348511bfe6dbeb5c0807ba6584f7e735594f94b38602d52cc" exitCode=0 Oct 14 10:16:11 crc kubenswrapper[4698]: I1014 10:16:11.011099 4698 generic.go:334] "Generic (PLEG): container finished" podID="e0c0309d-223a-4cf9-85e3-837ee78bd32a" containerID="0347f305f1905c4c7e3c8ac244bffb66f15f524953c6083fc4b910c412cfdbd6" exitCode=2 Oct 14 10:16:11 crc kubenswrapper[4698]: I1014 10:16:11.011107 4698 generic.go:334] "Generic (PLEG): container finished" podID="e0c0309d-223a-4cf9-85e3-837ee78bd32a" containerID="7dff834e84fd8bba9fa2d567584a358d2de241f9cfd5fa6498fd9f61b38b7d0d" exitCode=0 Oct 14 10:16:11 crc kubenswrapper[4698]: I1014 10:16:11.011602 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e0c0309d-223a-4cf9-85e3-837ee78bd32a","Type":"ContainerDied","Data":"13f4c9a760cdb6f348511bfe6dbeb5c0807ba6584f7e735594f94b38602d52cc"} Oct 14 10:16:11 crc kubenswrapper[4698]: I1014 10:16:11.011629 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e0c0309d-223a-4cf9-85e3-837ee78bd32a","Type":"ContainerDied","Data":"0347f305f1905c4c7e3c8ac244bffb66f15f524953c6083fc4b910c412cfdbd6"} Oct 14 10:16:11 crc kubenswrapper[4698]: I1014 10:16:11.011639 4698 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e0c0309d-223a-4cf9-85e3-837ee78bd32a","Type":"ContainerDied","Data":"7dff834e84fd8bba9fa2d567584a358d2de241f9cfd5fa6498fd9f61b38b7d0d"} Oct 14 10:16:11 crc kubenswrapper[4698]: I1014 10:16:11.038623 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/manila-share-share1-0" podStartSLOduration=3.038605743 podStartE2EDuration="3.038605743s" podCreationTimestamp="2025-10-14 10:16:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 10:16:11.034189606 +0000 UTC m=+1152.731489022" watchObservedRunningTime="2025-10-14 10:16:11.038605743 +0000 UTC m=+1152.735905159" Oct 14 10:16:13 crc kubenswrapper[4698]: I1014 10:16:13.038938 4698 generic.go:334] "Generic (PLEG): container finished" podID="e0c0309d-223a-4cf9-85e3-837ee78bd32a" containerID="2312d763653e8eead2e2b582edea00f5d5846d3c457e87be6d3a7acc8d275d77" exitCode=0 Oct 14 10:16:13 crc kubenswrapper[4698]: I1014 10:16:13.039043 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e0c0309d-223a-4cf9-85e3-837ee78bd32a","Type":"ContainerDied","Data":"2312d763653e8eead2e2b582edea00f5d5846d3c457e87be6d3a7acc8d275d77"} Oct 14 10:16:13 crc kubenswrapper[4698]: I1014 10:16:13.220252 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-db-create-7v7zn"] Oct 14 10:16:13 crc kubenswrapper[4698]: I1014 10:16:13.223431 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-db-create-7v7zn" Oct 14 10:16:13 crc kubenswrapper[4698]: I1014 10:16:13.258157 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-7v7zn"] Oct 14 10:16:13 crc kubenswrapper[4698]: I1014 10:16:13.323762 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rv84j\" (UniqueName: \"kubernetes.io/projected/92d5ff6a-b4d9-4d75-ab8b-1ab6808bb45e-kube-api-access-rv84j\") pod \"nova-api-db-create-7v7zn\" (UID: \"92d5ff6a-b4d9-4d75-ab8b-1ab6808bb45e\") " pod="openstack/nova-api-db-create-7v7zn" Oct 14 10:16:13 crc kubenswrapper[4698]: I1014 10:16:13.406496 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-db-create-kfnck"] Oct 14 10:16:13 crc kubenswrapper[4698]: I1014 10:16:13.407962 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-kfnck" Oct 14 10:16:13 crc kubenswrapper[4698]: I1014 10:16:13.415877 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-kfnck"] Oct 14 10:16:13 crc kubenswrapper[4698]: I1014 10:16:13.425923 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rv84j\" (UniqueName: \"kubernetes.io/projected/92d5ff6a-b4d9-4d75-ab8b-1ab6808bb45e-kube-api-access-rv84j\") pod \"nova-api-db-create-7v7zn\" (UID: \"92d5ff6a-b4d9-4d75-ab8b-1ab6808bb45e\") " pod="openstack/nova-api-db-create-7v7zn" Oct 14 10:16:13 crc kubenswrapper[4698]: I1014 10:16:13.445467 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rv84j\" (UniqueName: \"kubernetes.io/projected/92d5ff6a-b4d9-4d75-ab8b-1ab6808bb45e-kube-api-access-rv84j\") pod \"nova-api-db-create-7v7zn\" (UID: \"92d5ff6a-b4d9-4d75-ab8b-1ab6808bb45e\") " pod="openstack/nova-api-db-create-7v7zn" Oct 14 10:16:13 crc kubenswrapper[4698]: I1014 10:16:13.504988 4698 
kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-db-create-crcb9"] Oct 14 10:16:13 crc kubenswrapper[4698]: I1014 10:16:13.506327 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-crcb9" Oct 14 10:16:13 crc kubenswrapper[4698]: I1014 10:16:13.526057 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Oct 14 10:16:13 crc kubenswrapper[4698]: I1014 10:16:13.527598 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ljmt7\" (UniqueName: \"kubernetes.io/projected/874aab62-ca3a-45a9-9e34-5527a0c2ee80-kube-api-access-ljmt7\") pod \"nova-cell0-db-create-kfnck\" (UID: \"874aab62-ca3a-45a9-9e34-5527a0c2ee80\") " pod="openstack/nova-cell0-db-create-kfnck" Oct 14 10:16:13 crc kubenswrapper[4698]: I1014 10:16:13.530474 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-crcb9"] Oct 14 10:16:13 crc kubenswrapper[4698]: I1014 10:16:13.559768 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-db-create-7v7zn" Oct 14 10:16:13 crc kubenswrapper[4698]: I1014 10:16:13.629479 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e0c0309d-223a-4cf9-85e3-837ee78bd32a-combined-ca-bundle\") pod \"e0c0309d-223a-4cf9-85e3-837ee78bd32a\" (UID: \"e0c0309d-223a-4cf9-85e3-837ee78bd32a\") " Oct 14 10:16:13 crc kubenswrapper[4698]: I1014 10:16:13.629664 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-btrtb\" (UniqueName: \"kubernetes.io/projected/e0c0309d-223a-4cf9-85e3-837ee78bd32a-kube-api-access-btrtb\") pod \"e0c0309d-223a-4cf9-85e3-837ee78bd32a\" (UID: \"e0c0309d-223a-4cf9-85e3-837ee78bd32a\") " Oct 14 10:16:13 crc kubenswrapper[4698]: I1014 10:16:13.629769 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e0c0309d-223a-4cf9-85e3-837ee78bd32a-scripts\") pod \"e0c0309d-223a-4cf9-85e3-837ee78bd32a\" (UID: \"e0c0309d-223a-4cf9-85e3-837ee78bd32a\") " Oct 14 10:16:13 crc kubenswrapper[4698]: I1014 10:16:13.629829 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e0c0309d-223a-4cf9-85e3-837ee78bd32a-log-httpd\") pod \"e0c0309d-223a-4cf9-85e3-837ee78bd32a\" (UID: \"e0c0309d-223a-4cf9-85e3-837ee78bd32a\") " Oct 14 10:16:13 crc kubenswrapper[4698]: I1014 10:16:13.629896 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e0c0309d-223a-4cf9-85e3-837ee78bd32a-run-httpd\") pod \"e0c0309d-223a-4cf9-85e3-837ee78bd32a\" (UID: \"e0c0309d-223a-4cf9-85e3-837ee78bd32a\") " Oct 14 10:16:13 crc kubenswrapper[4698]: I1014 10:16:13.629969 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/e0c0309d-223a-4cf9-85e3-837ee78bd32a-config-data\") pod \"e0c0309d-223a-4cf9-85e3-837ee78bd32a\" (UID: \"e0c0309d-223a-4cf9-85e3-837ee78bd32a\") " Oct 14 10:16:13 crc kubenswrapper[4698]: I1014 10:16:13.629992 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e0c0309d-223a-4cf9-85e3-837ee78bd32a-sg-core-conf-yaml\") pod \"e0c0309d-223a-4cf9-85e3-837ee78bd32a\" (UID: \"e0c0309d-223a-4cf9-85e3-837ee78bd32a\") " Oct 14 10:16:13 crc kubenswrapper[4698]: I1014 10:16:13.630298 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8jvdh\" (UniqueName: \"kubernetes.io/projected/67152ffa-66bb-42a2-b1f9-1e350372431b-kube-api-access-8jvdh\") pod \"nova-cell1-db-create-crcb9\" (UID: \"67152ffa-66bb-42a2-b1f9-1e350372431b\") " pod="openstack/nova-cell1-db-create-crcb9" Oct 14 10:16:13 crc kubenswrapper[4698]: I1014 10:16:13.630497 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ljmt7\" (UniqueName: \"kubernetes.io/projected/874aab62-ca3a-45a9-9e34-5527a0c2ee80-kube-api-access-ljmt7\") pod \"nova-cell0-db-create-kfnck\" (UID: \"874aab62-ca3a-45a9-9e34-5527a0c2ee80\") " pod="openstack/nova-cell0-db-create-kfnck" Oct 14 10:16:13 crc kubenswrapper[4698]: I1014 10:16:13.630594 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e0c0309d-223a-4cf9-85e3-837ee78bd32a-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "e0c0309d-223a-4cf9-85e3-837ee78bd32a" (UID: "e0c0309d-223a-4cf9-85e3-837ee78bd32a"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 10:16:13 crc kubenswrapper[4698]: I1014 10:16:13.631903 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e0c0309d-223a-4cf9-85e3-837ee78bd32a-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "e0c0309d-223a-4cf9-85e3-837ee78bd32a" (UID: "e0c0309d-223a-4cf9-85e3-837ee78bd32a"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 10:16:13 crc kubenswrapper[4698]: I1014 10:16:13.635204 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e0c0309d-223a-4cf9-85e3-837ee78bd32a-scripts" (OuterVolumeSpecName: "scripts") pod "e0c0309d-223a-4cf9-85e3-837ee78bd32a" (UID: "e0c0309d-223a-4cf9-85e3-837ee78bd32a"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 10:16:13 crc kubenswrapper[4698]: I1014 10:16:13.639042 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e0c0309d-223a-4cf9-85e3-837ee78bd32a-kube-api-access-btrtb" (OuterVolumeSpecName: "kube-api-access-btrtb") pod "e0c0309d-223a-4cf9-85e3-837ee78bd32a" (UID: "e0c0309d-223a-4cf9-85e3-837ee78bd32a"). InnerVolumeSpecName "kube-api-access-btrtb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 10:16:13 crc kubenswrapper[4698]: I1014 10:16:13.647141 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ljmt7\" (UniqueName: \"kubernetes.io/projected/874aab62-ca3a-45a9-9e34-5527a0c2ee80-kube-api-access-ljmt7\") pod \"nova-cell0-db-create-kfnck\" (UID: \"874aab62-ca3a-45a9-9e34-5527a0c2ee80\") " pod="openstack/nova-cell0-db-create-kfnck" Oct 14 10:16:13 crc kubenswrapper[4698]: I1014 10:16:13.661189 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e0c0309d-223a-4cf9-85e3-837ee78bd32a-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "e0c0309d-223a-4cf9-85e3-837ee78bd32a" (UID: "e0c0309d-223a-4cf9-85e3-837ee78bd32a"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 10:16:13 crc kubenswrapper[4698]: I1014 10:16:13.733029 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8jvdh\" (UniqueName: \"kubernetes.io/projected/67152ffa-66bb-42a2-b1f9-1e350372431b-kube-api-access-8jvdh\") pod \"nova-cell1-db-create-crcb9\" (UID: \"67152ffa-66bb-42a2-b1f9-1e350372431b\") " pod="openstack/nova-cell1-db-create-crcb9" Oct 14 10:16:13 crc kubenswrapper[4698]: I1014 10:16:13.733128 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-btrtb\" (UniqueName: \"kubernetes.io/projected/e0c0309d-223a-4cf9-85e3-837ee78bd32a-kube-api-access-btrtb\") on node \"crc\" DevicePath \"\"" Oct 14 10:16:13 crc kubenswrapper[4698]: I1014 10:16:13.733144 4698 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e0c0309d-223a-4cf9-85e3-837ee78bd32a-scripts\") on node \"crc\" DevicePath \"\"" Oct 14 10:16:13 crc kubenswrapper[4698]: I1014 10:16:13.733160 4698 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/e0c0309d-223a-4cf9-85e3-837ee78bd32a-log-httpd\") on node \"crc\" DevicePath \"\"" Oct 14 10:16:13 crc kubenswrapper[4698]: I1014 10:16:13.733173 4698 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e0c0309d-223a-4cf9-85e3-837ee78bd32a-run-httpd\") on node \"crc\" DevicePath \"\"" Oct 14 10:16:13 crc kubenswrapper[4698]: I1014 10:16:13.733184 4698 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e0c0309d-223a-4cf9-85e3-837ee78bd32a-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Oct 14 10:16:13 crc kubenswrapper[4698]: I1014 10:16:13.738226 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e0c0309d-223a-4cf9-85e3-837ee78bd32a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e0c0309d-223a-4cf9-85e3-837ee78bd32a" (UID: "e0c0309d-223a-4cf9-85e3-837ee78bd32a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 10:16:13 crc kubenswrapper[4698]: I1014 10:16:13.755546 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8jvdh\" (UniqueName: \"kubernetes.io/projected/67152ffa-66bb-42a2-b1f9-1e350372431b-kube-api-access-8jvdh\") pod \"nova-cell1-db-create-crcb9\" (UID: \"67152ffa-66bb-42a2-b1f9-1e350372431b\") " pod="openstack/nova-cell1-db-create-crcb9" Oct 14 10:16:13 crc kubenswrapper[4698]: I1014 10:16:13.764215 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e0c0309d-223a-4cf9-85e3-837ee78bd32a-config-data" (OuterVolumeSpecName: "config-data") pod "e0c0309d-223a-4cf9-85e3-837ee78bd32a" (UID: "e0c0309d-223a-4cf9-85e3-837ee78bd32a"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 10:16:13 crc kubenswrapper[4698]: I1014 10:16:13.820962 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-kfnck" Oct 14 10:16:13 crc kubenswrapper[4698]: I1014 10:16:13.835146 4698 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e0c0309d-223a-4cf9-85e3-837ee78bd32a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 14 10:16:13 crc kubenswrapper[4698]: I1014 10:16:13.835194 4698 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e0c0309d-223a-4cf9-85e3-837ee78bd32a-config-data\") on node \"crc\" DevicePath \"\"" Oct 14 10:16:13 crc kubenswrapper[4698]: I1014 10:16:13.837174 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-crcb9" Oct 14 10:16:14 crc kubenswrapper[4698]: I1014 10:16:14.014335 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-7v7zn"] Oct 14 10:16:14 crc kubenswrapper[4698]: I1014 10:16:14.079530 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e0c0309d-223a-4cf9-85e3-837ee78bd32a","Type":"ContainerDied","Data":"0f70ff452f039b2ed72f1e186d22ee6978ac3109b28c0e49f6c9471e8eb68794"} Oct 14 10:16:14 crc kubenswrapper[4698]: I1014 10:16:14.079593 4698 scope.go:117] "RemoveContainer" containerID="13f4c9a760cdb6f348511bfe6dbeb5c0807ba6584f7e735594f94b38602d52cc" Oct 14 10:16:14 crc kubenswrapper[4698]: I1014 10:16:14.079837 4698 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Oct 14 10:16:14 crc kubenswrapper[4698]: I1014 10:16:14.180837 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Oct 14 10:16:14 crc kubenswrapper[4698]: I1014 10:16:14.187938 4698 scope.go:117] "RemoveContainer" containerID="0347f305f1905c4c7e3c8ac244bffb66f15f524953c6083fc4b910c412cfdbd6" Oct 14 10:16:14 crc kubenswrapper[4698]: I1014 10:16:14.194784 4698 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Oct 14 10:16:14 crc kubenswrapper[4698]: I1014 10:16:14.231270 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Oct 14 10:16:14 crc kubenswrapper[4698]: E1014 10:16:14.231826 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e0c0309d-223a-4cf9-85e3-837ee78bd32a" containerName="ceilometer-notification-agent" Oct 14 10:16:14 crc kubenswrapper[4698]: I1014 10:16:14.231847 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="e0c0309d-223a-4cf9-85e3-837ee78bd32a" containerName="ceilometer-notification-agent" Oct 14 10:16:14 crc kubenswrapper[4698]: E1014 10:16:14.231864 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e0c0309d-223a-4cf9-85e3-837ee78bd32a" containerName="proxy-httpd" Oct 14 10:16:14 crc kubenswrapper[4698]: I1014 10:16:14.231869 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="e0c0309d-223a-4cf9-85e3-837ee78bd32a" containerName="proxy-httpd" Oct 14 10:16:14 crc kubenswrapper[4698]: E1014 10:16:14.231881 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e0c0309d-223a-4cf9-85e3-837ee78bd32a" containerName="sg-core" Oct 14 10:16:14 crc kubenswrapper[4698]: I1014 10:16:14.231887 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="e0c0309d-223a-4cf9-85e3-837ee78bd32a" containerName="sg-core" Oct 14 10:16:14 crc kubenswrapper[4698]: E1014 10:16:14.231910 4698 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="e0c0309d-223a-4cf9-85e3-837ee78bd32a" containerName="ceilometer-central-agent" Oct 14 10:16:14 crc kubenswrapper[4698]: I1014 10:16:14.231917 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="e0c0309d-223a-4cf9-85e3-837ee78bd32a" containerName="ceilometer-central-agent" Oct 14 10:16:14 crc kubenswrapper[4698]: I1014 10:16:14.232144 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="e0c0309d-223a-4cf9-85e3-837ee78bd32a" containerName="proxy-httpd" Oct 14 10:16:14 crc kubenswrapper[4698]: I1014 10:16:14.232163 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="e0c0309d-223a-4cf9-85e3-837ee78bd32a" containerName="sg-core" Oct 14 10:16:14 crc kubenswrapper[4698]: I1014 10:16:14.232176 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="e0c0309d-223a-4cf9-85e3-837ee78bd32a" containerName="ceilometer-central-agent" Oct 14 10:16:14 crc kubenswrapper[4698]: I1014 10:16:14.232185 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="e0c0309d-223a-4cf9-85e3-837ee78bd32a" containerName="ceilometer-notification-agent" Oct 14 10:16:14 crc kubenswrapper[4698]: I1014 10:16:14.234297 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Oct 14 10:16:14 crc kubenswrapper[4698]: I1014 10:16:14.240655 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Oct 14 10:16:14 crc kubenswrapper[4698]: I1014 10:16:14.240940 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Oct 14 10:16:14 crc kubenswrapper[4698]: I1014 10:16:14.242915 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Oct 14 10:16:14 crc kubenswrapper[4698]: I1014 10:16:14.252373 4698 scope.go:117] "RemoveContainer" containerID="7dff834e84fd8bba9fa2d567584a358d2de241f9cfd5fa6498fd9f61b38b7d0d" Oct 14 10:16:14 crc kubenswrapper[4698]: I1014 10:16:14.340181 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-kfnck"] Oct 14 10:16:14 crc kubenswrapper[4698]: I1014 10:16:14.341037 4698 scope.go:117] "RemoveContainer" containerID="2312d763653e8eead2e2b582edea00f5d5846d3c457e87be6d3a7acc8d275d77" Oct 14 10:16:14 crc kubenswrapper[4698]: I1014 10:16:14.347450 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b1ea6d75-caa9-42db-a423-330363435900-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"b1ea6d75-caa9-42db-a423-330363435900\") " pod="openstack/ceilometer-0" Oct 14 10:16:14 crc kubenswrapper[4698]: I1014 10:16:14.347506 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b1ea6d75-caa9-42db-a423-330363435900-log-httpd\") pod \"ceilometer-0\" (UID: \"b1ea6d75-caa9-42db-a423-330363435900\") " pod="openstack/ceilometer-0" Oct 14 10:16:14 crc kubenswrapper[4698]: I1014 10:16:14.347534 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/b1ea6d75-caa9-42db-a423-330363435900-run-httpd\") pod \"ceilometer-0\" (UID: \"b1ea6d75-caa9-42db-a423-330363435900\") " pod="openstack/ceilometer-0" Oct 14 10:16:14 crc kubenswrapper[4698]: I1014 10:16:14.347552 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7hwl4\" (UniqueName: \"kubernetes.io/projected/b1ea6d75-caa9-42db-a423-330363435900-kube-api-access-7hwl4\") pod \"ceilometer-0\" (UID: \"b1ea6d75-caa9-42db-a423-330363435900\") " pod="openstack/ceilometer-0" Oct 14 10:16:14 crc kubenswrapper[4698]: I1014 10:16:14.347601 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b1ea6d75-caa9-42db-a423-330363435900-config-data\") pod \"ceilometer-0\" (UID: \"b1ea6d75-caa9-42db-a423-330363435900\") " pod="openstack/ceilometer-0" Oct 14 10:16:14 crc kubenswrapper[4698]: I1014 10:16:14.347628 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b1ea6d75-caa9-42db-a423-330363435900-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"b1ea6d75-caa9-42db-a423-330363435900\") " pod="openstack/ceilometer-0" Oct 14 10:16:14 crc kubenswrapper[4698]: I1014 10:16:14.347647 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b1ea6d75-caa9-42db-a423-330363435900-scripts\") pod \"ceilometer-0\" (UID: \"b1ea6d75-caa9-42db-a423-330363435900\") " pod="openstack/ceilometer-0" Oct 14 10:16:14 crc kubenswrapper[4698]: I1014 10:16:14.449147 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b1ea6d75-caa9-42db-a423-330363435900-config-data\") pod \"ceilometer-0\" (UID: \"b1ea6d75-caa9-42db-a423-330363435900\") " 
pod="openstack/ceilometer-0" Oct 14 10:16:14 crc kubenswrapper[4698]: I1014 10:16:14.449527 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b1ea6d75-caa9-42db-a423-330363435900-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"b1ea6d75-caa9-42db-a423-330363435900\") " pod="openstack/ceilometer-0" Oct 14 10:16:14 crc kubenswrapper[4698]: I1014 10:16:14.449551 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b1ea6d75-caa9-42db-a423-330363435900-scripts\") pod \"ceilometer-0\" (UID: \"b1ea6d75-caa9-42db-a423-330363435900\") " pod="openstack/ceilometer-0" Oct 14 10:16:14 crc kubenswrapper[4698]: I1014 10:16:14.449680 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b1ea6d75-caa9-42db-a423-330363435900-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"b1ea6d75-caa9-42db-a423-330363435900\") " pod="openstack/ceilometer-0" Oct 14 10:16:14 crc kubenswrapper[4698]: I1014 10:16:14.449727 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b1ea6d75-caa9-42db-a423-330363435900-log-httpd\") pod \"ceilometer-0\" (UID: \"b1ea6d75-caa9-42db-a423-330363435900\") " pod="openstack/ceilometer-0" Oct 14 10:16:14 crc kubenswrapper[4698]: I1014 10:16:14.449750 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b1ea6d75-caa9-42db-a423-330363435900-run-httpd\") pod \"ceilometer-0\" (UID: \"b1ea6d75-caa9-42db-a423-330363435900\") " pod="openstack/ceilometer-0" Oct 14 10:16:14 crc kubenswrapper[4698]: I1014 10:16:14.449793 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7hwl4\" (UniqueName: 
\"kubernetes.io/projected/b1ea6d75-caa9-42db-a423-330363435900-kube-api-access-7hwl4\") pod \"ceilometer-0\" (UID: \"b1ea6d75-caa9-42db-a423-330363435900\") " pod="openstack/ceilometer-0" Oct 14 10:16:14 crc kubenswrapper[4698]: I1014 10:16:14.451263 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b1ea6d75-caa9-42db-a423-330363435900-log-httpd\") pod \"ceilometer-0\" (UID: \"b1ea6d75-caa9-42db-a423-330363435900\") " pod="openstack/ceilometer-0" Oct 14 10:16:14 crc kubenswrapper[4698]: I1014 10:16:14.453087 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b1ea6d75-caa9-42db-a423-330363435900-run-httpd\") pod \"ceilometer-0\" (UID: \"b1ea6d75-caa9-42db-a423-330363435900\") " pod="openstack/ceilometer-0" Oct 14 10:16:14 crc kubenswrapper[4698]: I1014 10:16:14.456540 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b1ea6d75-caa9-42db-a423-330363435900-config-data\") pod \"ceilometer-0\" (UID: \"b1ea6d75-caa9-42db-a423-330363435900\") " pod="openstack/ceilometer-0" Oct 14 10:16:14 crc kubenswrapper[4698]: I1014 10:16:14.456587 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b1ea6d75-caa9-42db-a423-330363435900-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"b1ea6d75-caa9-42db-a423-330363435900\") " pod="openstack/ceilometer-0" Oct 14 10:16:14 crc kubenswrapper[4698]: I1014 10:16:14.456701 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b1ea6d75-caa9-42db-a423-330363435900-scripts\") pod \"ceilometer-0\" (UID: \"b1ea6d75-caa9-42db-a423-330363435900\") " pod="openstack/ceilometer-0" Oct 14 10:16:14 crc kubenswrapper[4698]: I1014 10:16:14.460409 4698 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b1ea6d75-caa9-42db-a423-330363435900-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"b1ea6d75-caa9-42db-a423-330363435900\") " pod="openstack/ceilometer-0" Oct 14 10:16:14 crc kubenswrapper[4698]: I1014 10:16:14.472935 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7hwl4\" (UniqueName: \"kubernetes.io/projected/b1ea6d75-caa9-42db-a423-330363435900-kube-api-access-7hwl4\") pod \"ceilometer-0\" (UID: \"b1ea6d75-caa9-42db-a423-330363435900\") " pod="openstack/ceilometer-0" Oct 14 10:16:14 crc kubenswrapper[4698]: W1014 10:16:14.476896 4698 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod67152ffa_66bb_42a2_b1f9_1e350372431b.slice/crio-f395526d9c135ee2dbeb914cfe9747129a1ba1d86d7c1bde0ba196e2d0901d89 WatchSource:0}: Error finding container f395526d9c135ee2dbeb914cfe9747129a1ba1d86d7c1bde0ba196e2d0901d89: Status 404 returned error can't find the container with id f395526d9c135ee2dbeb914cfe9747129a1ba1d86d7c1bde0ba196e2d0901d89 Oct 14 10:16:14 crc kubenswrapper[4698]: I1014 10:16:14.478551 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-crcb9"] Oct 14 10:16:14 crc kubenswrapper[4698]: I1014 10:16:14.594287 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Oct 14 10:16:15 crc kubenswrapper[4698]: I1014 10:16:15.038738 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e0c0309d-223a-4cf9-85e3-837ee78bd32a" path="/var/lib/kubelet/pods/e0c0309d-223a-4cf9-85e3-837ee78bd32a/volumes" Oct 14 10:16:15 crc kubenswrapper[4698]: I1014 10:16:15.094271 4698 generic.go:334] "Generic (PLEG): container finished" podID="67152ffa-66bb-42a2-b1f9-1e350372431b" containerID="a8ce9e0c5b06287b9c791ea486587d18fb2bf613b1875753a8b2b8eb22a93538" exitCode=0 Oct 14 10:16:15 crc kubenswrapper[4698]: I1014 10:16:15.094375 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-crcb9" event={"ID":"67152ffa-66bb-42a2-b1f9-1e350372431b","Type":"ContainerDied","Data":"a8ce9e0c5b06287b9c791ea486587d18fb2bf613b1875753a8b2b8eb22a93538"} Oct 14 10:16:15 crc kubenswrapper[4698]: I1014 10:16:15.094405 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-crcb9" event={"ID":"67152ffa-66bb-42a2-b1f9-1e350372431b","Type":"ContainerStarted","Data":"f395526d9c135ee2dbeb914cfe9747129a1ba1d86d7c1bde0ba196e2d0901d89"} Oct 14 10:16:15 crc kubenswrapper[4698]: I1014 10:16:15.095645 4698 generic.go:334] "Generic (PLEG): container finished" podID="92d5ff6a-b4d9-4d75-ab8b-1ab6808bb45e" containerID="154ad7e03ab5e052963bdd224a7429e79f33c4338f9986ef44c43612e30c5334" exitCode=0 Oct 14 10:16:15 crc kubenswrapper[4698]: I1014 10:16:15.095707 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-7v7zn" event={"ID":"92d5ff6a-b4d9-4d75-ab8b-1ab6808bb45e","Type":"ContainerDied","Data":"154ad7e03ab5e052963bdd224a7429e79f33c4338f9986ef44c43612e30c5334"} Oct 14 10:16:15 crc kubenswrapper[4698]: I1014 10:16:15.095735 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-7v7zn" 
event={"ID":"92d5ff6a-b4d9-4d75-ab8b-1ab6808bb45e","Type":"ContainerStarted","Data":"87c5e7dd55271a31e6d68eb3383a4e3bb3166a0aeb3675d8b3a3e653084e723e"} Oct 14 10:16:15 crc kubenswrapper[4698]: I1014 10:16:15.097002 4698 generic.go:334] "Generic (PLEG): container finished" podID="874aab62-ca3a-45a9-9e34-5527a0c2ee80" containerID="2985d6c6d98ec7a0d58b99ee1eb3c7ee8b4e401f34d6694992bbc5fdb7d331f0" exitCode=0 Oct 14 10:16:15 crc kubenswrapper[4698]: I1014 10:16:15.097053 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-kfnck" event={"ID":"874aab62-ca3a-45a9-9e34-5527a0c2ee80","Type":"ContainerDied","Data":"2985d6c6d98ec7a0d58b99ee1eb3c7ee8b4e401f34d6694992bbc5fdb7d331f0"} Oct 14 10:16:15 crc kubenswrapper[4698]: I1014 10:16:15.097081 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-kfnck" event={"ID":"874aab62-ca3a-45a9-9e34-5527a0c2ee80","Type":"ContainerStarted","Data":"60a4630276b360a8ddf2845aef628aa7eefd393f7f9baa813b1394fd077a0d65"} Oct 14 10:16:15 crc kubenswrapper[4698]: I1014 10:16:15.224441 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Oct 14 10:16:16 crc kubenswrapper[4698]: I1014 10:16:16.110897 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b1ea6d75-caa9-42db-a423-330363435900","Type":"ContainerStarted","Data":"a35dfcfab1c2d9f50b0b8c89bdb25dabe478f454e53a95bdfec779f7e3e42032"} Oct 14 10:16:16 crc kubenswrapper[4698]: I1014 10:16:16.538920 4698 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-db-create-crcb9"
Oct 14 10:16:16 crc kubenswrapper[4698]: I1014 10:16:16.596972 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/manila-scheduler-0"
Oct 14 10:16:16 crc kubenswrapper[4698]: I1014 10:16:16.607193 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8jvdh\" (UniqueName: \"kubernetes.io/projected/67152ffa-66bb-42a2-b1f9-1e350372431b-kube-api-access-8jvdh\") pod \"67152ffa-66bb-42a2-b1f9-1e350372431b\" (UID: \"67152ffa-66bb-42a2-b1f9-1e350372431b\") "
Oct 14 10:16:16 crc kubenswrapper[4698]: I1014 10:16:16.615959 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/67152ffa-66bb-42a2-b1f9-1e350372431b-kube-api-access-8jvdh" (OuterVolumeSpecName: "kube-api-access-8jvdh") pod "67152ffa-66bb-42a2-b1f9-1e350372431b" (UID: "67152ffa-66bb-42a2-b1f9-1e350372431b"). InnerVolumeSpecName "kube-api-access-8jvdh". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 14 10:16:16 crc kubenswrapper[4698]: I1014 10:16:16.710604 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8jvdh\" (UniqueName: \"kubernetes.io/projected/67152ffa-66bb-42a2-b1f9-1e350372431b-kube-api-access-8jvdh\") on node \"crc\" DevicePath \"\""
Oct 14 10:16:16 crc kubenswrapper[4698]: I1014 10:16:16.730888 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-kfnck"
Oct 14 10:16:16 crc kubenswrapper[4698]: I1014 10:16:16.736026 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-7v7zn"
Oct 14 10:16:16 crc kubenswrapper[4698]: I1014 10:16:16.811818 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rv84j\" (UniqueName: \"kubernetes.io/projected/92d5ff6a-b4d9-4d75-ab8b-1ab6808bb45e-kube-api-access-rv84j\") pod \"92d5ff6a-b4d9-4d75-ab8b-1ab6808bb45e\" (UID: \"92d5ff6a-b4d9-4d75-ab8b-1ab6808bb45e\") "
Oct 14 10:16:16 crc kubenswrapper[4698]: I1014 10:16:16.812099 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ljmt7\" (UniqueName: \"kubernetes.io/projected/874aab62-ca3a-45a9-9e34-5527a0c2ee80-kube-api-access-ljmt7\") pod \"874aab62-ca3a-45a9-9e34-5527a0c2ee80\" (UID: \"874aab62-ca3a-45a9-9e34-5527a0c2ee80\") "
Oct 14 10:16:16 crc kubenswrapper[4698]: I1014 10:16:16.815003 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/92d5ff6a-b4d9-4d75-ab8b-1ab6808bb45e-kube-api-access-rv84j" (OuterVolumeSpecName: "kube-api-access-rv84j") pod "92d5ff6a-b4d9-4d75-ab8b-1ab6808bb45e" (UID: "92d5ff6a-b4d9-4d75-ab8b-1ab6808bb45e"). InnerVolumeSpecName "kube-api-access-rv84j". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 14 10:16:16 crc kubenswrapper[4698]: I1014 10:16:16.816344 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/874aab62-ca3a-45a9-9e34-5527a0c2ee80-kube-api-access-ljmt7" (OuterVolumeSpecName: "kube-api-access-ljmt7") pod "874aab62-ca3a-45a9-9e34-5527a0c2ee80" (UID: "874aab62-ca3a-45a9-9e34-5527a0c2ee80"). InnerVolumeSpecName "kube-api-access-ljmt7". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 14 10:16:16 crc kubenswrapper[4698]: I1014 10:16:16.915265 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ljmt7\" (UniqueName: \"kubernetes.io/projected/874aab62-ca3a-45a9-9e34-5527a0c2ee80-kube-api-access-ljmt7\") on node \"crc\" DevicePath \"\""
Oct 14 10:16:16 crc kubenswrapper[4698]: I1014 10:16:16.915309 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rv84j\" (UniqueName: \"kubernetes.io/projected/92d5ff6a-b4d9-4d75-ab8b-1ab6808bb45e-kube-api-access-rv84j\") on node \"crc\" DevicePath \"\""
Oct 14 10:16:17 crc kubenswrapper[4698]: I1014 10:16:17.123826 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-crcb9" event={"ID":"67152ffa-66bb-42a2-b1f9-1e350372431b","Type":"ContainerDied","Data":"f395526d9c135ee2dbeb914cfe9747129a1ba1d86d7c1bde0ba196e2d0901d89"}
Oct 14 10:16:17 crc kubenswrapper[4698]: I1014 10:16:17.124138 4698 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f395526d9c135ee2dbeb914cfe9747129a1ba1d86d7c1bde0ba196e2d0901d89"
Oct 14 10:16:17 crc kubenswrapper[4698]: I1014 10:16:17.124047 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-crcb9"
Oct 14 10:16:17 crc kubenswrapper[4698]: I1014 10:16:17.128048 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b1ea6d75-caa9-42db-a423-330363435900","Type":"ContainerStarted","Data":"10f76ba44b7503b470f0c691fc3da7c5f8beed8d8b9b0b361e0603dbe3a14cab"}
Oct 14 10:16:17 crc kubenswrapper[4698]: I1014 10:16:17.128595 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b1ea6d75-caa9-42db-a423-330363435900","Type":"ContainerStarted","Data":"0e267cacdf9602d8239f4e1ecd038b2224edc23fdcc78e6bbbdc755711521c74"}
Oct 14 10:16:17 crc kubenswrapper[4698]: I1014 10:16:17.130100 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-7v7zn"
Oct 14 10:16:17 crc kubenswrapper[4698]: I1014 10:16:17.130958 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-7v7zn" event={"ID":"92d5ff6a-b4d9-4d75-ab8b-1ab6808bb45e","Type":"ContainerDied","Data":"87c5e7dd55271a31e6d68eb3383a4e3bb3166a0aeb3675d8b3a3e653084e723e"}
Oct 14 10:16:17 crc kubenswrapper[4698]: I1014 10:16:17.130991 4698 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="87c5e7dd55271a31e6d68eb3383a4e3bb3166a0aeb3675d8b3a3e653084e723e"
Oct 14 10:16:17 crc kubenswrapper[4698]: I1014 10:16:17.132560 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-kfnck" event={"ID":"874aab62-ca3a-45a9-9e34-5527a0c2ee80","Type":"ContainerDied","Data":"60a4630276b360a8ddf2845aef628aa7eefd393f7f9baa813b1394fd077a0d65"}
Oct 14 10:16:17 crc kubenswrapper[4698]: I1014 10:16:17.132581 4698 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="60a4630276b360a8ddf2845aef628aa7eefd393f7f9baa813b1394fd077a0d65"
Oct 14 10:16:17 crc kubenswrapper[4698]: I1014 10:16:17.132637 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-kfnck"
Oct 14 10:16:18 crc kubenswrapper[4698]: I1014 10:16:18.159075 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b1ea6d75-caa9-42db-a423-330363435900","Type":"ContainerStarted","Data":"5201744bd85834481bd6ab5201d741a43b482e61f53b6f265b0c15fc0f92837e"}
Oct 14 10:16:18 crc kubenswrapper[4698]: I1014 10:16:18.357894 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/manila-share-share1-0"
Oct 14 10:16:19 crc kubenswrapper[4698]: I1014 10:16:19.186729 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b1ea6d75-caa9-42db-a423-330363435900","Type":"ContainerStarted","Data":"009c61c99a39530c758cb7c57451abdf561a5943d56e84516f92deea1b7afc90"}
Oct 14 10:16:19 crc kubenswrapper[4698]: I1014 10:16:19.187249 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0"
Oct 14 10:16:19 crc kubenswrapper[4698]: I1014 10:16:19.208983 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=1.587678617 podStartE2EDuration="5.208967205s" podCreationTimestamp="2025-10-14 10:16:14 +0000 UTC" firstStartedPulling="2025-10-14 10:16:15.217810826 +0000 UTC m=+1156.915110242" lastFinishedPulling="2025-10-14 10:16:18.839099414 +0000 UTC m=+1160.536398830" observedRunningTime="2025-10-14 10:16:19.208212234 +0000 UTC m=+1160.905511650" watchObservedRunningTime="2025-10-14 10:16:19.208967205 +0000 UTC m=+1160.906266621"
Oct 14 10:16:21 crc kubenswrapper[4698]: I1014 10:16:21.696644 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Oct 14 10:16:21 crc kubenswrapper[4698]: I1014 10:16:21.697267 4698 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="b1ea6d75-caa9-42db-a423-330363435900" containerName="ceilometer-central-agent" containerID="cri-o://0e267cacdf9602d8239f4e1ecd038b2224edc23fdcc78e6bbbdc755711521c74" gracePeriod=30
Oct 14 10:16:21 crc kubenswrapper[4698]: I1014 10:16:21.697358 4698 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="b1ea6d75-caa9-42db-a423-330363435900" containerName="sg-core" containerID="cri-o://5201744bd85834481bd6ab5201d741a43b482e61f53b6f265b0c15fc0f92837e" gracePeriod=30
Oct 14 10:16:21 crc kubenswrapper[4698]: I1014 10:16:21.697441 4698 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="b1ea6d75-caa9-42db-a423-330363435900" containerName="ceilometer-notification-agent" containerID="cri-o://10f76ba44b7503b470f0c691fc3da7c5f8beed8d8b9b0b361e0603dbe3a14cab" gracePeriod=30
Oct 14 10:16:21 crc kubenswrapper[4698]: I1014 10:16:21.697997 4698 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="b1ea6d75-caa9-42db-a423-330363435900" containerName="proxy-httpd" containerID="cri-o://009c61c99a39530c758cb7c57451abdf561a5943d56e84516f92deea1b7afc90" gracePeriod=30
Oct 14 10:16:22 crc kubenswrapper[4698]: I1014 10:16:22.230810 4698 generic.go:334] "Generic (PLEG): container finished" podID="b1ea6d75-caa9-42db-a423-330363435900" containerID="009c61c99a39530c758cb7c57451abdf561a5943d56e84516f92deea1b7afc90" exitCode=0
Oct 14 10:16:22 crc kubenswrapper[4698]: I1014 10:16:22.231470 4698 generic.go:334] "Generic (PLEG): container finished" podID="b1ea6d75-caa9-42db-a423-330363435900" containerID="5201744bd85834481bd6ab5201d741a43b482e61f53b6f265b0c15fc0f92837e" exitCode=2
Oct 14 10:16:22 crc kubenswrapper[4698]: I1014 10:16:22.231581 4698 generic.go:334] "Generic (PLEG): container finished" podID="b1ea6d75-caa9-42db-a423-330363435900" containerID="10f76ba44b7503b470f0c691fc3da7c5f8beed8d8b9b0b361e0603dbe3a14cab" exitCode=0
Oct 14 10:16:22 crc kubenswrapper[4698]: I1014 10:16:22.230875 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b1ea6d75-caa9-42db-a423-330363435900","Type":"ContainerDied","Data":"009c61c99a39530c758cb7c57451abdf561a5943d56e84516f92deea1b7afc90"}
Oct 14 10:16:22 crc kubenswrapper[4698]: I1014 10:16:22.231746 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b1ea6d75-caa9-42db-a423-330363435900","Type":"ContainerDied","Data":"5201744bd85834481bd6ab5201d741a43b482e61f53b6f265b0c15fc0f92837e"}
Oct 14 10:16:22 crc kubenswrapper[4698]: I1014 10:16:22.231845 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b1ea6d75-caa9-42db-a423-330363435900","Type":"ContainerDied","Data":"10f76ba44b7503b470f0c691fc3da7c5f8beed8d8b9b0b361e0603dbe3a14cab"}
Oct 14 10:16:23 crc kubenswrapper[4698]: I1014 10:16:23.455576 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-11fc-account-create-fk5wc"]
Oct 14 10:16:23 crc kubenswrapper[4698]: E1014 10:16:23.456380 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="874aab62-ca3a-45a9-9e34-5527a0c2ee80" containerName="mariadb-database-create"
Oct 14 10:16:23 crc kubenswrapper[4698]: I1014 10:16:23.456398 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="874aab62-ca3a-45a9-9e34-5527a0c2ee80" containerName="mariadb-database-create"
Oct 14 10:16:23 crc kubenswrapper[4698]: E1014 10:16:23.456435 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="67152ffa-66bb-42a2-b1f9-1e350372431b" containerName="mariadb-database-create"
Oct 14 10:16:23 crc kubenswrapper[4698]: I1014 10:16:23.456442 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="67152ffa-66bb-42a2-b1f9-1e350372431b" containerName="mariadb-database-create"
Oct 14 10:16:23 crc kubenswrapper[4698]: E1014 10:16:23.456483 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="92d5ff6a-b4d9-4d75-ab8b-1ab6808bb45e" containerName="mariadb-database-create"
Oct 14 10:16:23 crc kubenswrapper[4698]: I1014 10:16:23.456492 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="92d5ff6a-b4d9-4d75-ab8b-1ab6808bb45e" containerName="mariadb-database-create"
Oct 14 10:16:23 crc kubenswrapper[4698]: I1014 10:16:23.456668 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="67152ffa-66bb-42a2-b1f9-1e350372431b" containerName="mariadb-database-create"
Oct 14 10:16:23 crc kubenswrapper[4698]: I1014 10:16:23.456686 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="874aab62-ca3a-45a9-9e34-5527a0c2ee80" containerName="mariadb-database-create"
Oct 14 10:16:23 crc kubenswrapper[4698]: I1014 10:16:23.456722 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="92d5ff6a-b4d9-4d75-ab8b-1ab6808bb45e" containerName="mariadb-database-create"
Oct 14 10:16:23 crc kubenswrapper[4698]: I1014 10:16:23.457434 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-11fc-account-create-fk5wc"
Oct 14 10:16:23 crc kubenswrapper[4698]: I1014 10:16:23.459855 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-db-secret"
Oct 14 10:16:23 crc kubenswrapper[4698]: I1014 10:16:23.467138 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-11fc-account-create-fk5wc"]
Oct 14 10:16:23 crc kubenswrapper[4698]: I1014 10:16:23.576329 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4tjw4\" (UniqueName: \"kubernetes.io/projected/9adb550b-8d23-4ab3-b120-7640a29e36a5-kube-api-access-4tjw4\") pod \"nova-api-11fc-account-create-fk5wc\" (UID: \"9adb550b-8d23-4ab3-b120-7640a29e36a5\") " pod="openstack/nova-api-11fc-account-create-fk5wc"
Oct 14 10:16:23 crc kubenswrapper[4698]: I1014 10:16:23.658571 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-4c9f-account-create-7bztt"]
Oct 14 10:16:23 crc kubenswrapper[4698]: I1014 10:16:23.660314 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-4c9f-account-create-7bztt"
Oct 14 10:16:23 crc kubenswrapper[4698]: I1014 10:16:23.667837 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-db-secret"
Oct 14 10:16:23 crc kubenswrapper[4698]: I1014 10:16:23.678132 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4tjw4\" (UniqueName: \"kubernetes.io/projected/9adb550b-8d23-4ab3-b120-7640a29e36a5-kube-api-access-4tjw4\") pod \"nova-api-11fc-account-create-fk5wc\" (UID: \"9adb550b-8d23-4ab3-b120-7640a29e36a5\") " pod="openstack/nova-api-11fc-account-create-fk5wc"
Oct 14 10:16:23 crc kubenswrapper[4698]: I1014 10:16:23.686927 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-4c9f-account-create-7bztt"]
Oct 14 10:16:23 crc kubenswrapper[4698]: I1014 10:16:23.726202 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4tjw4\" (UniqueName: \"kubernetes.io/projected/9adb550b-8d23-4ab3-b120-7640a29e36a5-kube-api-access-4tjw4\") pod \"nova-api-11fc-account-create-fk5wc\" (UID: \"9adb550b-8d23-4ab3-b120-7640a29e36a5\") " pod="openstack/nova-api-11fc-account-create-fk5wc"
Oct 14 10:16:23 crc kubenswrapper[4698]: I1014 10:16:23.779728 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-11fc-account-create-fk5wc"
Oct 14 10:16:23 crc kubenswrapper[4698]: I1014 10:16:23.780679 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k4td9\" (UniqueName: \"kubernetes.io/projected/8246a862-8262-4666-a47d-02815d416c74-kube-api-access-k4td9\") pod \"nova-cell0-4c9f-account-create-7bztt\" (UID: \"8246a862-8262-4666-a47d-02815d416c74\") " pod="openstack/nova-cell0-4c9f-account-create-7bztt"
Oct 14 10:16:23 crc kubenswrapper[4698]: I1014 10:16:23.867898 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-4705-account-create-6pcpq"]
Oct 14 10:16:23 crc kubenswrapper[4698]: I1014 10:16:23.869544 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-4705-account-create-6pcpq"
Oct 14 10:16:23 crc kubenswrapper[4698]: I1014 10:16:23.871843 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-db-secret"
Oct 14 10:16:23 crc kubenswrapper[4698]: I1014 10:16:23.876139 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-4705-account-create-6pcpq"]
Oct 14 10:16:23 crc kubenswrapper[4698]: I1014 10:16:23.883861 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k4td9\" (UniqueName: \"kubernetes.io/projected/8246a862-8262-4666-a47d-02815d416c74-kube-api-access-k4td9\") pod \"nova-cell0-4c9f-account-create-7bztt\" (UID: \"8246a862-8262-4666-a47d-02815d416c74\") " pod="openstack/nova-cell0-4c9f-account-create-7bztt"
Oct 14 10:16:23 crc kubenswrapper[4698]: I1014 10:16:23.901173 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k4td9\" (UniqueName: \"kubernetes.io/projected/8246a862-8262-4666-a47d-02815d416c74-kube-api-access-k4td9\") pod \"nova-cell0-4c9f-account-create-7bztt\" (UID: \"8246a862-8262-4666-a47d-02815d416c74\") " pod="openstack/nova-cell0-4c9f-account-create-7bztt"
Oct 14 10:16:23 crc kubenswrapper[4698]: I1014 10:16:23.981403 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-4c9f-account-create-7bztt"
Oct 14 10:16:23 crc kubenswrapper[4698]: I1014 10:16:23.985526 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gv2b2\" (UniqueName: \"kubernetes.io/projected/d60a1dd8-3277-4ce0-8c70-706171496794-kube-api-access-gv2b2\") pod \"nova-cell1-4705-account-create-6pcpq\" (UID: \"d60a1dd8-3277-4ce0-8c70-706171496794\") " pod="openstack/nova-cell1-4705-account-create-6pcpq"
Oct 14 10:16:24 crc kubenswrapper[4698]: I1014 10:16:24.087243 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gv2b2\" (UniqueName: \"kubernetes.io/projected/d60a1dd8-3277-4ce0-8c70-706171496794-kube-api-access-gv2b2\") pod \"nova-cell1-4705-account-create-6pcpq\" (UID: \"d60a1dd8-3277-4ce0-8c70-706171496794\") " pod="openstack/nova-cell1-4705-account-create-6pcpq"
Oct 14 10:16:24 crc kubenswrapper[4698]: I1014 10:16:24.111627 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gv2b2\" (UniqueName: \"kubernetes.io/projected/d60a1dd8-3277-4ce0-8c70-706171496794-kube-api-access-gv2b2\") pod \"nova-cell1-4705-account-create-6pcpq\" (UID: \"d60a1dd8-3277-4ce0-8c70-706171496794\") " pod="openstack/nova-cell1-4705-account-create-6pcpq"
Oct 14 10:16:24 crc kubenswrapper[4698]: I1014 10:16:24.310607 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-4705-account-create-6pcpq"
Oct 14 10:16:24 crc kubenswrapper[4698]: I1014 10:16:24.327992 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-11fc-account-create-fk5wc"]
Oct 14 10:16:24 crc kubenswrapper[4698]: I1014 10:16:24.468948 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-4c9f-account-create-7bztt"]
Oct 14 10:16:24 crc kubenswrapper[4698]: I1014 10:16:24.757328 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-4705-account-create-6pcpq"]
Oct 14 10:16:24 crc kubenswrapper[4698]: W1014 10:16:24.763573 4698 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd60a1dd8_3277_4ce0_8c70_706171496794.slice/crio-cb0bebc2c248e5e8740099df0d295a9aefeba216e6026d27eacea6c14537aa22 WatchSource:0}: Error finding container cb0bebc2c248e5e8740099df0d295a9aefeba216e6026d27eacea6c14537aa22: Status 404 returned error can't find the container with id cb0bebc2c248e5e8740099df0d295a9aefeba216e6026d27eacea6c14537aa22
Oct 14 10:16:25 crc kubenswrapper[4698]: I1014 10:16:25.279717 4698 generic.go:334] "Generic (PLEG): container finished" podID="d60a1dd8-3277-4ce0-8c70-706171496794" containerID="f7fa8866ebf19f71ff0cd89e6833d8a263efa5ee516dfa3dbc8377962bc320c1" exitCode=0
Oct 14 10:16:25 crc kubenswrapper[4698]: I1014 10:16:25.279881 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-4705-account-create-6pcpq" event={"ID":"d60a1dd8-3277-4ce0-8c70-706171496794","Type":"ContainerDied","Data":"f7fa8866ebf19f71ff0cd89e6833d8a263efa5ee516dfa3dbc8377962bc320c1"}
Oct 14 10:16:25 crc kubenswrapper[4698]: I1014 10:16:25.279982 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-4705-account-create-6pcpq" event={"ID":"d60a1dd8-3277-4ce0-8c70-706171496794","Type":"ContainerStarted","Data":"cb0bebc2c248e5e8740099df0d295a9aefeba216e6026d27eacea6c14537aa22"}
Oct 14 10:16:25 crc kubenswrapper[4698]: I1014 10:16:25.282301 4698 generic.go:334] "Generic (PLEG): container finished" podID="9adb550b-8d23-4ab3-b120-7640a29e36a5" containerID="d1372aed996a809cb62d5d55afbb796697fddd14ec5345d0ec5231e3afbb8435" exitCode=0
Oct 14 10:16:25 crc kubenswrapper[4698]: I1014 10:16:25.282392 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-11fc-account-create-fk5wc" event={"ID":"9adb550b-8d23-4ab3-b120-7640a29e36a5","Type":"ContainerDied","Data":"d1372aed996a809cb62d5d55afbb796697fddd14ec5345d0ec5231e3afbb8435"}
Oct 14 10:16:25 crc kubenswrapper[4698]: I1014 10:16:25.282427 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-11fc-account-create-fk5wc" event={"ID":"9adb550b-8d23-4ab3-b120-7640a29e36a5","Type":"ContainerStarted","Data":"86103aedcfbc225548bf06e158ad5e045e230a07ca01032a58b86e252a4f68d2"}
Oct 14 10:16:25 crc kubenswrapper[4698]: I1014 10:16:25.284092 4698 generic.go:334] "Generic (PLEG): container finished" podID="8246a862-8262-4666-a47d-02815d416c74" containerID="97c4e4d29c7413be4428f6be1cdcf31582102328cea19b8d6e75c02aa4027b8d" exitCode=0
Oct 14 10:16:25 crc kubenswrapper[4698]: I1014 10:16:25.284129 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-4c9f-account-create-7bztt" event={"ID":"8246a862-8262-4666-a47d-02815d416c74","Type":"ContainerDied","Data":"97c4e4d29c7413be4428f6be1cdcf31582102328cea19b8d6e75c02aa4027b8d"}
Oct 14 10:16:25 crc kubenswrapper[4698]: I1014 10:16:25.284152 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-4c9f-account-create-7bztt" event={"ID":"8246a862-8262-4666-a47d-02815d416c74","Type":"ContainerStarted","Data":"88d57914ee1909d9d9b644f2b719284f3f9531b0a85e7bf096e2a8cb8aa9f747"}
Oct 14 10:16:26 crc kubenswrapper[4698]: I1014 10:16:26.302552 4698 generic.go:334] "Generic (PLEG): container finished" podID="b1ea6d75-caa9-42db-a423-330363435900" containerID="0e267cacdf9602d8239f4e1ecd038b2224edc23fdcc78e6bbbdc755711521c74" exitCode=0
Oct 14 10:16:26 crc kubenswrapper[4698]: I1014 10:16:26.302639 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b1ea6d75-caa9-42db-a423-330363435900","Type":"ContainerDied","Data":"0e267cacdf9602d8239f4e1ecd038b2224edc23fdcc78e6bbbdc755711521c74"}
Oct 14 10:16:26 crc kubenswrapper[4698]: I1014 10:16:26.437862 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Oct 14 10:16:26 crc kubenswrapper[4698]: I1014 10:16:26.547985 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7hwl4\" (UniqueName: \"kubernetes.io/projected/b1ea6d75-caa9-42db-a423-330363435900-kube-api-access-7hwl4\") pod \"b1ea6d75-caa9-42db-a423-330363435900\" (UID: \"b1ea6d75-caa9-42db-a423-330363435900\") "
Oct 14 10:16:26 crc kubenswrapper[4698]: I1014 10:16:26.548194 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b1ea6d75-caa9-42db-a423-330363435900-sg-core-conf-yaml\") pod \"b1ea6d75-caa9-42db-a423-330363435900\" (UID: \"b1ea6d75-caa9-42db-a423-330363435900\") "
Oct 14 10:16:26 crc kubenswrapper[4698]: I1014 10:16:26.548305 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b1ea6d75-caa9-42db-a423-330363435900-config-data\") pod \"b1ea6d75-caa9-42db-a423-330363435900\" (UID: \"b1ea6d75-caa9-42db-a423-330363435900\") "
Oct 14 10:16:26 crc kubenswrapper[4698]: I1014 10:16:26.548367 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b1ea6d75-caa9-42db-a423-330363435900-combined-ca-bundle\") pod \"b1ea6d75-caa9-42db-a423-330363435900\" (UID: \"b1ea6d75-caa9-42db-a423-330363435900\") "
Oct 14 10:16:26 crc kubenswrapper[4698]: I1014 10:16:26.548394 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b1ea6d75-caa9-42db-a423-330363435900-scripts\") pod \"b1ea6d75-caa9-42db-a423-330363435900\" (UID: \"b1ea6d75-caa9-42db-a423-330363435900\") "
Oct 14 10:16:26 crc kubenswrapper[4698]: I1014 10:16:26.548471 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b1ea6d75-caa9-42db-a423-330363435900-log-httpd\") pod \"b1ea6d75-caa9-42db-a423-330363435900\" (UID: \"b1ea6d75-caa9-42db-a423-330363435900\") "
Oct 14 10:16:26 crc kubenswrapper[4698]: I1014 10:16:26.548495 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b1ea6d75-caa9-42db-a423-330363435900-run-httpd\") pod \"b1ea6d75-caa9-42db-a423-330363435900\" (UID: \"b1ea6d75-caa9-42db-a423-330363435900\") "
Oct 14 10:16:26 crc kubenswrapper[4698]: I1014 10:16:26.549252 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b1ea6d75-caa9-42db-a423-330363435900-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "b1ea6d75-caa9-42db-a423-330363435900" (UID: "b1ea6d75-caa9-42db-a423-330363435900"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Oct 14 10:16:26 crc kubenswrapper[4698]: I1014 10:16:26.556265 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b1ea6d75-caa9-42db-a423-330363435900-scripts" (OuterVolumeSpecName: "scripts") pod "b1ea6d75-caa9-42db-a423-330363435900" (UID: "b1ea6d75-caa9-42db-a423-330363435900"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 14 10:16:26 crc kubenswrapper[4698]: I1014 10:16:26.556527 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b1ea6d75-caa9-42db-a423-330363435900-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "b1ea6d75-caa9-42db-a423-330363435900" (UID: "b1ea6d75-caa9-42db-a423-330363435900"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Oct 14 10:16:26 crc kubenswrapper[4698]: I1014 10:16:26.562112 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b1ea6d75-caa9-42db-a423-330363435900-kube-api-access-7hwl4" (OuterVolumeSpecName: "kube-api-access-7hwl4") pod "b1ea6d75-caa9-42db-a423-330363435900" (UID: "b1ea6d75-caa9-42db-a423-330363435900"). InnerVolumeSpecName "kube-api-access-7hwl4". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 14 10:16:26 crc kubenswrapper[4698]: I1014 10:16:26.585729 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b1ea6d75-caa9-42db-a423-330363435900-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "b1ea6d75-caa9-42db-a423-330363435900" (UID: "b1ea6d75-caa9-42db-a423-330363435900"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 14 10:16:26 crc kubenswrapper[4698]: I1014 10:16:26.650972 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7hwl4\" (UniqueName: \"kubernetes.io/projected/b1ea6d75-caa9-42db-a423-330363435900-kube-api-access-7hwl4\") on node \"crc\" DevicePath \"\""
Oct 14 10:16:26 crc kubenswrapper[4698]: I1014 10:16:26.651015 4698 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b1ea6d75-caa9-42db-a423-330363435900-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\""
Oct 14 10:16:26 crc kubenswrapper[4698]: I1014 10:16:26.651031 4698 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b1ea6d75-caa9-42db-a423-330363435900-scripts\") on node \"crc\" DevicePath \"\""
Oct 14 10:16:26 crc kubenswrapper[4698]: I1014 10:16:26.651042 4698 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b1ea6d75-caa9-42db-a423-330363435900-log-httpd\") on node \"crc\" DevicePath \"\""
Oct 14 10:16:26 crc kubenswrapper[4698]: I1014 10:16:26.651054 4698 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b1ea6d75-caa9-42db-a423-330363435900-run-httpd\") on node \"crc\" DevicePath \"\""
Oct 14 10:16:26 crc kubenswrapper[4698]: I1014 10:16:26.691204 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b1ea6d75-caa9-42db-a423-330363435900-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b1ea6d75-caa9-42db-a423-330363435900" (UID: "b1ea6d75-caa9-42db-a423-330363435900"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 14 10:16:26 crc kubenswrapper[4698]: I1014 10:16:26.709747 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-11fc-account-create-fk5wc"
Oct 14 10:16:26 crc kubenswrapper[4698]: I1014 10:16:26.734141 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b1ea6d75-caa9-42db-a423-330363435900-config-data" (OuterVolumeSpecName: "config-data") pod "b1ea6d75-caa9-42db-a423-330363435900" (UID: "b1ea6d75-caa9-42db-a423-330363435900"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 14 10:16:26 crc kubenswrapper[4698]: I1014 10:16:26.740731 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-4c9f-account-create-7bztt"
Oct 14 10:16:26 crc kubenswrapper[4698]: I1014 10:16:26.747384 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-4705-account-create-6pcpq"
Oct 14 10:16:26 crc kubenswrapper[4698]: I1014 10:16:26.757608 4698 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b1ea6d75-caa9-42db-a423-330363435900-config-data\") on node \"crc\" DevicePath \"\""
Oct 14 10:16:26 crc kubenswrapper[4698]: I1014 10:16:26.757642 4698 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b1ea6d75-caa9-42db-a423-330363435900-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Oct 14 10:16:26 crc kubenswrapper[4698]: I1014 10:16:26.859470 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k4td9\" (UniqueName: \"kubernetes.io/projected/8246a862-8262-4666-a47d-02815d416c74-kube-api-access-k4td9\") pod \"8246a862-8262-4666-a47d-02815d416c74\" (UID: \"8246a862-8262-4666-a47d-02815d416c74\") "
Oct 14 10:16:26 crc kubenswrapper[4698]: I1014 10:16:26.859698 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gv2b2\" (UniqueName: \"kubernetes.io/projected/d60a1dd8-3277-4ce0-8c70-706171496794-kube-api-access-gv2b2\") pod \"d60a1dd8-3277-4ce0-8c70-706171496794\" (UID: \"d60a1dd8-3277-4ce0-8c70-706171496794\") "
Oct 14 10:16:26 crc kubenswrapper[4698]: I1014 10:16:26.859868 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4tjw4\" (UniqueName: \"kubernetes.io/projected/9adb550b-8d23-4ab3-b120-7640a29e36a5-kube-api-access-4tjw4\") pod \"9adb550b-8d23-4ab3-b120-7640a29e36a5\" (UID: \"9adb550b-8d23-4ab3-b120-7640a29e36a5\") "
Oct 14 10:16:26 crc kubenswrapper[4698]: I1014 10:16:26.863571 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d60a1dd8-3277-4ce0-8c70-706171496794-kube-api-access-gv2b2" (OuterVolumeSpecName: "kube-api-access-gv2b2") pod "d60a1dd8-3277-4ce0-8c70-706171496794" (UID: "d60a1dd8-3277-4ce0-8c70-706171496794"). InnerVolumeSpecName "kube-api-access-gv2b2". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 14 10:16:26 crc kubenswrapper[4698]: I1014 10:16:26.863634 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9adb550b-8d23-4ab3-b120-7640a29e36a5-kube-api-access-4tjw4" (OuterVolumeSpecName: "kube-api-access-4tjw4") pod "9adb550b-8d23-4ab3-b120-7640a29e36a5" (UID: "9adb550b-8d23-4ab3-b120-7640a29e36a5"). InnerVolumeSpecName "kube-api-access-4tjw4". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 14 10:16:26 crc kubenswrapper[4698]: I1014 10:16:26.863971 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8246a862-8262-4666-a47d-02815d416c74-kube-api-access-k4td9" (OuterVolumeSpecName: "kube-api-access-k4td9") pod "8246a862-8262-4666-a47d-02815d416c74" (UID: "8246a862-8262-4666-a47d-02815d416c74"). InnerVolumeSpecName "kube-api-access-k4td9". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 14 10:16:26 crc kubenswrapper[4698]: I1014 10:16:26.963751 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gv2b2\" (UniqueName: \"kubernetes.io/projected/d60a1dd8-3277-4ce0-8c70-706171496794-kube-api-access-gv2b2\") on node \"crc\" DevicePath \"\""
Oct 14 10:16:26 crc kubenswrapper[4698]: I1014 10:16:26.963821 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4tjw4\" (UniqueName: \"kubernetes.io/projected/9adb550b-8d23-4ab3-b120-7640a29e36a5-kube-api-access-4tjw4\") on node \"crc\" DevicePath \"\""
Oct 14 10:16:26 crc kubenswrapper[4698]: I1014 10:16:26.963835 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k4td9\" (UniqueName: \"kubernetes.io/projected/8246a862-8262-4666-a47d-02815d416c74-kube-api-access-k4td9\") on node \"crc\" DevicePath \"\""
Oct 14 10:16:27 crc kubenswrapper[4698]: I1014 10:16:27.313444 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-11fc-account-create-fk5wc"
Oct 14 10:16:27 crc kubenswrapper[4698]: I1014 10:16:27.313439 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-11fc-account-create-fk5wc" event={"ID":"9adb550b-8d23-4ab3-b120-7640a29e36a5","Type":"ContainerDied","Data":"86103aedcfbc225548bf06e158ad5e045e230a07ca01032a58b86e252a4f68d2"}
Oct 14 10:16:27 crc kubenswrapper[4698]: I1014 10:16:27.313667 4698 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="86103aedcfbc225548bf06e158ad5e045e230a07ca01032a58b86e252a4f68d2"
Oct 14 10:16:27 crc kubenswrapper[4698]: I1014 10:16:27.315135 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-4c9f-account-create-7bztt" event={"ID":"8246a862-8262-4666-a47d-02815d416c74","Type":"ContainerDied","Data":"88d57914ee1909d9d9b644f2b719284f3f9531b0a85e7bf096e2a8cb8aa9f747"}
Oct 14 10:16:27 crc kubenswrapper[4698]: I1014 10:16:27.315173 4698 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="88d57914ee1909d9d9b644f2b719284f3f9531b0a85e7bf096e2a8cb8aa9f747"
Oct 14 10:16:27 crc kubenswrapper[4698]: I1014 10:16:27.315218 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-4c9f-account-create-7bztt"
Oct 14 10:16:27 crc kubenswrapper[4698]: I1014 10:16:27.320330 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b1ea6d75-caa9-42db-a423-330363435900","Type":"ContainerDied","Data":"a35dfcfab1c2d9f50b0b8c89bdb25dabe478f454e53a95bdfec779f7e3e42032"}
Oct 14 10:16:27 crc kubenswrapper[4698]: I1014 10:16:27.320401 4698 scope.go:117] "RemoveContainer" containerID="009c61c99a39530c758cb7c57451abdf561a5943d56e84516f92deea1b7afc90"
Oct 14 10:16:27 crc kubenswrapper[4698]: I1014 10:16:27.320347 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Oct 14 10:16:27 crc kubenswrapper[4698]: I1014 10:16:27.324127 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-4705-account-create-6pcpq" event={"ID":"d60a1dd8-3277-4ce0-8c70-706171496794","Type":"ContainerDied","Data":"cb0bebc2c248e5e8740099df0d295a9aefeba216e6026d27eacea6c14537aa22"}
Oct 14 10:16:27 crc kubenswrapper[4698]: I1014 10:16:27.324172 4698 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cb0bebc2c248e5e8740099df0d295a9aefeba216e6026d27eacea6c14537aa22"
Oct 14 10:16:27 crc kubenswrapper[4698]: I1014 10:16:27.324216 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-4705-account-create-6pcpq"
Oct 14 10:16:27 crc kubenswrapper[4698]: I1014 10:16:27.373506 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Oct 14 10:16:27 crc kubenswrapper[4698]: I1014 10:16:27.381985 4698 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"]
Oct 14 10:16:27 crc kubenswrapper[4698]: I1014 10:16:27.394311 4698 scope.go:117] "RemoveContainer" containerID="5201744bd85834481bd6ab5201d741a43b482e61f53b6f265b0c15fc0f92837e"
Oct 14 10:16:27 crc kubenswrapper[4698]: I1014 10:16:27.418833 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"]
Oct 14 10:16:27 crc kubenswrapper[4698]: E1014 10:16:27.419328 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9adb550b-8d23-4ab3-b120-7640a29e36a5" containerName="mariadb-account-create"
Oct 14 10:16:27 crc kubenswrapper[4698]: I1014 10:16:27.419341 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="9adb550b-8d23-4ab3-b120-7640a29e36a5" containerName="mariadb-account-create"
Oct 14 10:16:27 crc kubenswrapper[4698]: E1014 10:16:27.419359 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b1ea6d75-caa9-42db-a423-330363435900"
containerName="sg-core" Oct 14 10:16:27 crc kubenswrapper[4698]: I1014 10:16:27.419365 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="b1ea6d75-caa9-42db-a423-330363435900" containerName="sg-core" Oct 14 10:16:27 crc kubenswrapper[4698]: E1014 10:16:27.419373 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8246a862-8262-4666-a47d-02815d416c74" containerName="mariadb-account-create" Oct 14 10:16:27 crc kubenswrapper[4698]: I1014 10:16:27.419379 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="8246a862-8262-4666-a47d-02815d416c74" containerName="mariadb-account-create" Oct 14 10:16:27 crc kubenswrapper[4698]: E1014 10:16:27.419400 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b1ea6d75-caa9-42db-a423-330363435900" containerName="ceilometer-central-agent" Oct 14 10:16:27 crc kubenswrapper[4698]: I1014 10:16:27.419405 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="b1ea6d75-caa9-42db-a423-330363435900" containerName="ceilometer-central-agent" Oct 14 10:16:27 crc kubenswrapper[4698]: E1014 10:16:27.419424 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d60a1dd8-3277-4ce0-8c70-706171496794" containerName="mariadb-account-create" Oct 14 10:16:27 crc kubenswrapper[4698]: I1014 10:16:27.419431 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="d60a1dd8-3277-4ce0-8c70-706171496794" containerName="mariadb-account-create" Oct 14 10:16:27 crc kubenswrapper[4698]: E1014 10:16:27.419445 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b1ea6d75-caa9-42db-a423-330363435900" containerName="proxy-httpd" Oct 14 10:16:27 crc kubenswrapper[4698]: I1014 10:16:27.419451 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="b1ea6d75-caa9-42db-a423-330363435900" containerName="proxy-httpd" Oct 14 10:16:27 crc kubenswrapper[4698]: E1014 10:16:27.419466 4698 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="b1ea6d75-caa9-42db-a423-330363435900" containerName="ceilometer-notification-agent" Oct 14 10:16:27 crc kubenswrapper[4698]: I1014 10:16:27.419471 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="b1ea6d75-caa9-42db-a423-330363435900" containerName="ceilometer-notification-agent" Oct 14 10:16:27 crc kubenswrapper[4698]: I1014 10:16:27.419671 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="b1ea6d75-caa9-42db-a423-330363435900" containerName="ceilometer-notification-agent" Oct 14 10:16:27 crc kubenswrapper[4698]: I1014 10:16:27.419696 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="b1ea6d75-caa9-42db-a423-330363435900" containerName="proxy-httpd" Oct 14 10:16:27 crc kubenswrapper[4698]: I1014 10:16:27.419704 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="b1ea6d75-caa9-42db-a423-330363435900" containerName="ceilometer-central-agent" Oct 14 10:16:27 crc kubenswrapper[4698]: I1014 10:16:27.419714 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="9adb550b-8d23-4ab3-b120-7640a29e36a5" containerName="mariadb-account-create" Oct 14 10:16:27 crc kubenswrapper[4698]: I1014 10:16:27.419723 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="b1ea6d75-caa9-42db-a423-330363435900" containerName="sg-core" Oct 14 10:16:27 crc kubenswrapper[4698]: I1014 10:16:27.419733 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="d60a1dd8-3277-4ce0-8c70-706171496794" containerName="mariadb-account-create" Oct 14 10:16:27 crc kubenswrapper[4698]: I1014 10:16:27.419742 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="8246a862-8262-4666-a47d-02815d416c74" containerName="mariadb-account-create" Oct 14 10:16:27 crc kubenswrapper[4698]: I1014 10:16:27.426821 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Oct 14 10:16:27 crc kubenswrapper[4698]: I1014 10:16:27.431735 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Oct 14 10:16:27 crc kubenswrapper[4698]: I1014 10:16:27.437266 4698 scope.go:117] "RemoveContainer" containerID="10f76ba44b7503b470f0c691fc3da7c5f8beed8d8b9b0b361e0603dbe3a14cab" Oct 14 10:16:27 crc kubenswrapper[4698]: I1014 10:16:27.437726 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Oct 14 10:16:27 crc kubenswrapper[4698]: I1014 10:16:27.463315 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Oct 14 10:16:27 crc kubenswrapper[4698]: I1014 10:16:27.508967 4698 scope.go:117] "RemoveContainer" containerID="0e267cacdf9602d8239f4e1ecd038b2224edc23fdcc78e6bbbdc755711521c74" Oct 14 10:16:27 crc kubenswrapper[4698]: I1014 10:16:27.581840 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/dab75fc7-2444-45a1-89c3-36aa14697520-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"dab75fc7-2444-45a1-89c3-36aa14697520\") " pod="openstack/ceilometer-0" Oct 14 10:16:27 crc kubenswrapper[4698]: I1014 10:16:27.581948 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dab75fc7-2444-45a1-89c3-36aa14697520-scripts\") pod \"ceilometer-0\" (UID: \"dab75fc7-2444-45a1-89c3-36aa14697520\") " pod="openstack/ceilometer-0" Oct 14 10:16:27 crc kubenswrapper[4698]: I1014 10:16:27.582107 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/dab75fc7-2444-45a1-89c3-36aa14697520-run-httpd\") pod \"ceilometer-0\" (UID: \"dab75fc7-2444-45a1-89c3-36aa14697520\") " pod="openstack/ceilometer-0" Oct 
14 10:16:27 crc kubenswrapper[4698]: I1014 10:16:27.582161 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dab75fc7-2444-45a1-89c3-36aa14697520-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"dab75fc7-2444-45a1-89c3-36aa14697520\") " pod="openstack/ceilometer-0" Oct 14 10:16:27 crc kubenswrapper[4698]: I1014 10:16:27.582202 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/dab75fc7-2444-45a1-89c3-36aa14697520-log-httpd\") pod \"ceilometer-0\" (UID: \"dab75fc7-2444-45a1-89c3-36aa14697520\") " pod="openstack/ceilometer-0" Oct 14 10:16:27 crc kubenswrapper[4698]: I1014 10:16:27.582281 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dab75fc7-2444-45a1-89c3-36aa14697520-config-data\") pod \"ceilometer-0\" (UID: \"dab75fc7-2444-45a1-89c3-36aa14697520\") " pod="openstack/ceilometer-0" Oct 14 10:16:27 crc kubenswrapper[4698]: I1014 10:16:27.582422 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ztzln\" (UniqueName: \"kubernetes.io/projected/dab75fc7-2444-45a1-89c3-36aa14697520-kube-api-access-ztzln\") pod \"ceilometer-0\" (UID: \"dab75fc7-2444-45a1-89c3-36aa14697520\") " pod="openstack/ceilometer-0" Oct 14 10:16:27 crc kubenswrapper[4698]: I1014 10:16:27.692629 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/dab75fc7-2444-45a1-89c3-36aa14697520-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"dab75fc7-2444-45a1-89c3-36aa14697520\") " pod="openstack/ceilometer-0" Oct 14 10:16:27 crc kubenswrapper[4698]: I1014 10:16:27.694039 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dab75fc7-2444-45a1-89c3-36aa14697520-scripts\") pod \"ceilometer-0\" (UID: \"dab75fc7-2444-45a1-89c3-36aa14697520\") " pod="openstack/ceilometer-0" Oct 14 10:16:27 crc kubenswrapper[4698]: I1014 10:16:27.694127 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/dab75fc7-2444-45a1-89c3-36aa14697520-run-httpd\") pod \"ceilometer-0\" (UID: \"dab75fc7-2444-45a1-89c3-36aa14697520\") " pod="openstack/ceilometer-0" Oct 14 10:16:27 crc kubenswrapper[4698]: I1014 10:16:27.694180 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dab75fc7-2444-45a1-89c3-36aa14697520-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"dab75fc7-2444-45a1-89c3-36aa14697520\") " pod="openstack/ceilometer-0" Oct 14 10:16:27 crc kubenswrapper[4698]: I1014 10:16:27.694244 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/dab75fc7-2444-45a1-89c3-36aa14697520-log-httpd\") pod \"ceilometer-0\" (UID: \"dab75fc7-2444-45a1-89c3-36aa14697520\") " pod="openstack/ceilometer-0" Oct 14 10:16:27 crc kubenswrapper[4698]: I1014 10:16:27.694319 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dab75fc7-2444-45a1-89c3-36aa14697520-config-data\") pod \"ceilometer-0\" (UID: \"dab75fc7-2444-45a1-89c3-36aa14697520\") " pod="openstack/ceilometer-0" Oct 14 10:16:27 crc kubenswrapper[4698]: I1014 10:16:27.694441 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ztzln\" (UniqueName: \"kubernetes.io/projected/dab75fc7-2444-45a1-89c3-36aa14697520-kube-api-access-ztzln\") pod \"ceilometer-0\" (UID: \"dab75fc7-2444-45a1-89c3-36aa14697520\") " pod="openstack/ceilometer-0" Oct 14 10:16:27 crc 
kubenswrapper[4698]: I1014 10:16:27.699518 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/dab75fc7-2444-45a1-89c3-36aa14697520-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"dab75fc7-2444-45a1-89c3-36aa14697520\") " pod="openstack/ceilometer-0" Oct 14 10:16:27 crc kubenswrapper[4698]: I1014 10:16:27.703483 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dab75fc7-2444-45a1-89c3-36aa14697520-scripts\") pod \"ceilometer-0\" (UID: \"dab75fc7-2444-45a1-89c3-36aa14697520\") " pod="openstack/ceilometer-0" Oct 14 10:16:27 crc kubenswrapper[4698]: I1014 10:16:27.703790 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/dab75fc7-2444-45a1-89c3-36aa14697520-log-httpd\") pod \"ceilometer-0\" (UID: \"dab75fc7-2444-45a1-89c3-36aa14697520\") " pod="openstack/ceilometer-0" Oct 14 10:16:27 crc kubenswrapper[4698]: I1014 10:16:27.704132 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/dab75fc7-2444-45a1-89c3-36aa14697520-run-httpd\") pod \"ceilometer-0\" (UID: \"dab75fc7-2444-45a1-89c3-36aa14697520\") " pod="openstack/ceilometer-0" Oct 14 10:16:27 crc kubenswrapper[4698]: I1014 10:16:27.707482 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dab75fc7-2444-45a1-89c3-36aa14697520-config-data\") pod \"ceilometer-0\" (UID: \"dab75fc7-2444-45a1-89c3-36aa14697520\") " pod="openstack/ceilometer-0" Oct 14 10:16:27 crc kubenswrapper[4698]: I1014 10:16:27.713790 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dab75fc7-2444-45a1-89c3-36aa14697520-combined-ca-bundle\") pod \"ceilometer-0\" (UID: 
\"dab75fc7-2444-45a1-89c3-36aa14697520\") " pod="openstack/ceilometer-0" Oct 14 10:16:27 crc kubenswrapper[4698]: I1014 10:16:27.717636 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ztzln\" (UniqueName: \"kubernetes.io/projected/dab75fc7-2444-45a1-89c3-36aa14697520-kube-api-access-ztzln\") pod \"ceilometer-0\" (UID: \"dab75fc7-2444-45a1-89c3-36aa14697520\") " pod="openstack/ceilometer-0" Oct 14 10:16:27 crc kubenswrapper[4698]: I1014 10:16:27.779213 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Oct 14 10:16:28 crc kubenswrapper[4698]: I1014 10:16:28.255325 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Oct 14 10:16:28 crc kubenswrapper[4698]: I1014 10:16:28.262103 4698 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Oct 14 10:16:28 crc kubenswrapper[4698]: I1014 10:16:28.338529 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"dab75fc7-2444-45a1-89c3-36aa14697520","Type":"ContainerStarted","Data":"87c35b7a6280920475a2945638b1e6ba958d1af70a6dc26aedfc4ea245854a21"} Oct 14 10:16:28 crc kubenswrapper[4698]: I1014 10:16:28.891225 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-db-sync-dhgcp"] Oct 14 10:16:28 crc kubenswrapper[4698]: I1014 10:16:28.893169 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-dhgcp" Oct 14 10:16:28 crc kubenswrapper[4698]: I1014 10:16:28.899106 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-dhgcp"] Oct 14 10:16:28 crc kubenswrapper[4698]: I1014 10:16:28.924545 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-scripts" Oct 14 10:16:28 crc kubenswrapper[4698]: I1014 10:16:28.925201 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Oct 14 10:16:28 crc kubenswrapper[4698]: I1014 10:16:28.925725 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-rkg48" Oct 14 10:16:29 crc kubenswrapper[4698]: I1014 10:16:29.024090 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bgqfg\" (UniqueName: \"kubernetes.io/projected/53f7e7ff-7e35-433a-a39f-556b716eaf21-kube-api-access-bgqfg\") pod \"nova-cell0-conductor-db-sync-dhgcp\" (UID: \"53f7e7ff-7e35-433a-a39f-556b716eaf21\") " pod="openstack/nova-cell0-conductor-db-sync-dhgcp" Oct 14 10:16:29 crc kubenswrapper[4698]: I1014 10:16:29.024576 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/53f7e7ff-7e35-433a-a39f-556b716eaf21-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-dhgcp\" (UID: \"53f7e7ff-7e35-433a-a39f-556b716eaf21\") " pod="openstack/nova-cell0-conductor-db-sync-dhgcp" Oct 14 10:16:29 crc kubenswrapper[4698]: I1014 10:16:29.024700 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/53f7e7ff-7e35-433a-a39f-556b716eaf21-scripts\") pod \"nova-cell0-conductor-db-sync-dhgcp\" (UID: \"53f7e7ff-7e35-433a-a39f-556b716eaf21\") " 
pod="openstack/nova-cell0-conductor-db-sync-dhgcp" Oct 14 10:16:29 crc kubenswrapper[4698]: I1014 10:16:29.024788 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/53f7e7ff-7e35-433a-a39f-556b716eaf21-config-data\") pod \"nova-cell0-conductor-db-sync-dhgcp\" (UID: \"53f7e7ff-7e35-433a-a39f-556b716eaf21\") " pod="openstack/nova-cell0-conductor-db-sync-dhgcp" Oct 14 10:16:29 crc kubenswrapper[4698]: I1014 10:16:29.035379 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b1ea6d75-caa9-42db-a423-330363435900" path="/var/lib/kubelet/pods/b1ea6d75-caa9-42db-a423-330363435900/volumes" Oct 14 10:16:29 crc kubenswrapper[4698]: I1014 10:16:29.126816 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/53f7e7ff-7e35-433a-a39f-556b716eaf21-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-dhgcp\" (UID: \"53f7e7ff-7e35-433a-a39f-556b716eaf21\") " pod="openstack/nova-cell0-conductor-db-sync-dhgcp" Oct 14 10:16:29 crc kubenswrapper[4698]: I1014 10:16:29.126888 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/53f7e7ff-7e35-433a-a39f-556b716eaf21-scripts\") pod \"nova-cell0-conductor-db-sync-dhgcp\" (UID: \"53f7e7ff-7e35-433a-a39f-556b716eaf21\") " pod="openstack/nova-cell0-conductor-db-sync-dhgcp" Oct 14 10:16:29 crc kubenswrapper[4698]: I1014 10:16:29.126920 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/53f7e7ff-7e35-433a-a39f-556b716eaf21-config-data\") pod \"nova-cell0-conductor-db-sync-dhgcp\" (UID: \"53f7e7ff-7e35-433a-a39f-556b716eaf21\") " pod="openstack/nova-cell0-conductor-db-sync-dhgcp" Oct 14 10:16:29 crc kubenswrapper[4698]: I1014 10:16:29.126994 4698 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-bgqfg\" (UniqueName: \"kubernetes.io/projected/53f7e7ff-7e35-433a-a39f-556b716eaf21-kube-api-access-bgqfg\") pod \"nova-cell0-conductor-db-sync-dhgcp\" (UID: \"53f7e7ff-7e35-433a-a39f-556b716eaf21\") " pod="openstack/nova-cell0-conductor-db-sync-dhgcp" Oct 14 10:16:29 crc kubenswrapper[4698]: I1014 10:16:29.133393 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/53f7e7ff-7e35-433a-a39f-556b716eaf21-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-dhgcp\" (UID: \"53f7e7ff-7e35-433a-a39f-556b716eaf21\") " pod="openstack/nova-cell0-conductor-db-sync-dhgcp" Oct 14 10:16:29 crc kubenswrapper[4698]: I1014 10:16:29.133887 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/53f7e7ff-7e35-433a-a39f-556b716eaf21-scripts\") pod \"nova-cell0-conductor-db-sync-dhgcp\" (UID: \"53f7e7ff-7e35-433a-a39f-556b716eaf21\") " pod="openstack/nova-cell0-conductor-db-sync-dhgcp" Oct 14 10:16:29 crc kubenswrapper[4698]: I1014 10:16:29.138724 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/53f7e7ff-7e35-433a-a39f-556b716eaf21-config-data\") pod \"nova-cell0-conductor-db-sync-dhgcp\" (UID: \"53f7e7ff-7e35-433a-a39f-556b716eaf21\") " pod="openstack/nova-cell0-conductor-db-sync-dhgcp" Oct 14 10:16:29 crc kubenswrapper[4698]: I1014 10:16:29.149714 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bgqfg\" (UniqueName: \"kubernetes.io/projected/53f7e7ff-7e35-433a-a39f-556b716eaf21-kube-api-access-bgqfg\") pod \"nova-cell0-conductor-db-sync-dhgcp\" (UID: \"53f7e7ff-7e35-433a-a39f-556b716eaf21\") " pod="openstack/nova-cell0-conductor-db-sync-dhgcp" Oct 14 10:16:29 crc kubenswrapper[4698]: I1014 10:16:29.262264 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-dhgcp" Oct 14 10:16:29 crc kubenswrapper[4698]: I1014 10:16:29.767615 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-dhgcp"] Oct 14 10:16:30 crc kubenswrapper[4698]: I1014 10:16:30.253280 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/manila-share-share1-0" Oct 14 10:16:30 crc kubenswrapper[4698]: I1014 10:16:30.372050 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"dab75fc7-2444-45a1-89c3-36aa14697520","Type":"ContainerStarted","Data":"70a28226fe6142c9103ef6f7e138e399148d2012273cde9a0bd2764c9ae49f2e"} Oct 14 10:16:30 crc kubenswrapper[4698]: I1014 10:16:30.372500 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"dab75fc7-2444-45a1-89c3-36aa14697520","Type":"ContainerStarted","Data":"fddae4fa34ea2f5d2e915e4d998e0d4a93bc698506cec4fa4279342d9f92d913"} Oct 14 10:16:30 crc kubenswrapper[4698]: I1014 10:16:30.373081 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-dhgcp" event={"ID":"53f7e7ff-7e35-433a-a39f-556b716eaf21","Type":"ContainerStarted","Data":"5a0702ecc458735b8127c0de69018cbb560aed29f908c76ee8f6e577df2e6d7a"} Oct 14 10:16:31 crc kubenswrapper[4698]: I1014 10:16:31.397920 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"dab75fc7-2444-45a1-89c3-36aa14697520","Type":"ContainerStarted","Data":"547a9301662b97ec076acd5de55e87004d2bcf7f856e69b52ad23858bed94252"} Oct 14 10:16:33 crc kubenswrapper[4698]: I1014 10:16:33.429270 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"dab75fc7-2444-45a1-89c3-36aa14697520","Type":"ContainerStarted","Data":"7342987f15772bc9cda72e3b2eba7e4d19f494567d17d2fdedeb083ee629ce91"} Oct 14 10:16:33 crc kubenswrapper[4698]: I1014 
10:16:33.430186 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Oct 14 10:16:33 crc kubenswrapper[4698]: I1014 10:16:33.478094 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.584942669 podStartE2EDuration="6.478066348s" podCreationTimestamp="2025-10-14 10:16:27 +0000 UTC" firstStartedPulling="2025-10-14 10:16:28.261585983 +0000 UTC m=+1169.958885439" lastFinishedPulling="2025-10-14 10:16:32.154709692 +0000 UTC m=+1173.852009118" observedRunningTime="2025-10-14 10:16:33.465573073 +0000 UTC m=+1175.162872499" watchObservedRunningTime="2025-10-14 10:16:33.478066348 +0000 UTC m=+1175.175365764" Oct 14 10:16:38 crc kubenswrapper[4698]: I1014 10:16:38.491136 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-dhgcp" event={"ID":"53f7e7ff-7e35-433a-a39f-556b716eaf21","Type":"ContainerStarted","Data":"3a9edb6b1991449d9568664cd3002f692cc3dc57da901729986975ec1b9d5367"} Oct 14 10:16:38 crc kubenswrapper[4698]: I1014 10:16:38.534431 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-db-sync-dhgcp" podStartSLOduration=2.905836856 podStartE2EDuration="10.534402015s" podCreationTimestamp="2025-10-14 10:16:28 +0000 UTC" firstStartedPulling="2025-10-14 10:16:29.779986162 +0000 UTC m=+1171.477285618" lastFinishedPulling="2025-10-14 10:16:37.408551341 +0000 UTC m=+1179.105850777" observedRunningTime="2025-10-14 10:16:38.518528613 +0000 UTC m=+1180.215828079" watchObservedRunningTime="2025-10-14 10:16:38.534402015 +0000 UTC m=+1180.231701461" Oct 14 10:16:47 crc kubenswrapper[4698]: I1014 10:16:47.610111 4698 generic.go:334] "Generic (PLEG): container finished" podID="53f7e7ff-7e35-433a-a39f-556b716eaf21" containerID="3a9edb6b1991449d9568664cd3002f692cc3dc57da901729986975ec1b9d5367" exitCode=0 Oct 14 10:16:47 crc kubenswrapper[4698]: I1014 10:16:47.610201 
4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-dhgcp" event={"ID":"53f7e7ff-7e35-433a-a39f-556b716eaf21","Type":"ContainerDied","Data":"3a9edb6b1991449d9568664cd3002f692cc3dc57da901729986975ec1b9d5367"} Oct 14 10:16:48 crc kubenswrapper[4698]: I1014 10:16:48.971667 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-dhgcp" Oct 14 10:16:49 crc kubenswrapper[4698]: I1014 10:16:49.023497 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bgqfg\" (UniqueName: \"kubernetes.io/projected/53f7e7ff-7e35-433a-a39f-556b716eaf21-kube-api-access-bgqfg\") pod \"53f7e7ff-7e35-433a-a39f-556b716eaf21\" (UID: \"53f7e7ff-7e35-433a-a39f-556b716eaf21\") " Oct 14 10:16:49 crc kubenswrapper[4698]: I1014 10:16:49.023553 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/53f7e7ff-7e35-433a-a39f-556b716eaf21-combined-ca-bundle\") pod \"53f7e7ff-7e35-433a-a39f-556b716eaf21\" (UID: \"53f7e7ff-7e35-433a-a39f-556b716eaf21\") " Oct 14 10:16:49 crc kubenswrapper[4698]: I1014 10:16:49.023819 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/53f7e7ff-7e35-433a-a39f-556b716eaf21-scripts\") pod \"53f7e7ff-7e35-433a-a39f-556b716eaf21\" (UID: \"53f7e7ff-7e35-433a-a39f-556b716eaf21\") " Oct 14 10:16:49 crc kubenswrapper[4698]: I1014 10:16:49.023884 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/53f7e7ff-7e35-433a-a39f-556b716eaf21-config-data\") pod \"53f7e7ff-7e35-433a-a39f-556b716eaf21\" (UID: \"53f7e7ff-7e35-433a-a39f-556b716eaf21\") " Oct 14 10:16:49 crc kubenswrapper[4698]: I1014 10:16:49.030391 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/53f7e7ff-7e35-433a-a39f-556b716eaf21-kube-api-access-bgqfg" (OuterVolumeSpecName: "kube-api-access-bgqfg") pod "53f7e7ff-7e35-433a-a39f-556b716eaf21" (UID: "53f7e7ff-7e35-433a-a39f-556b716eaf21"). InnerVolumeSpecName "kube-api-access-bgqfg". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 10:16:49 crc kubenswrapper[4698]: I1014 10:16:49.033995 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/53f7e7ff-7e35-433a-a39f-556b716eaf21-scripts" (OuterVolumeSpecName: "scripts") pod "53f7e7ff-7e35-433a-a39f-556b716eaf21" (UID: "53f7e7ff-7e35-433a-a39f-556b716eaf21"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 10:16:49 crc kubenswrapper[4698]: I1014 10:16:49.055496 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/53f7e7ff-7e35-433a-a39f-556b716eaf21-config-data" (OuterVolumeSpecName: "config-data") pod "53f7e7ff-7e35-433a-a39f-556b716eaf21" (UID: "53f7e7ff-7e35-433a-a39f-556b716eaf21"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 10:16:49 crc kubenswrapper[4698]: I1014 10:16:49.065754 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/53f7e7ff-7e35-433a-a39f-556b716eaf21-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "53f7e7ff-7e35-433a-a39f-556b716eaf21" (UID: "53f7e7ff-7e35-433a-a39f-556b716eaf21"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 10:16:49 crc kubenswrapper[4698]: I1014 10:16:49.126532 4698 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/53f7e7ff-7e35-433a-a39f-556b716eaf21-scripts\") on node \"crc\" DevicePath \"\"" Oct 14 10:16:49 crc kubenswrapper[4698]: I1014 10:16:49.126570 4698 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/53f7e7ff-7e35-433a-a39f-556b716eaf21-config-data\") on node \"crc\" DevicePath \"\"" Oct 14 10:16:49 crc kubenswrapper[4698]: I1014 10:16:49.126580 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bgqfg\" (UniqueName: \"kubernetes.io/projected/53f7e7ff-7e35-433a-a39f-556b716eaf21-kube-api-access-bgqfg\") on node \"crc\" DevicePath \"\"" Oct 14 10:16:49 crc kubenswrapper[4698]: I1014 10:16:49.126592 4698 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/53f7e7ff-7e35-433a-a39f-556b716eaf21-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 14 10:16:49 crc kubenswrapper[4698]: I1014 10:16:49.633909 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-dhgcp" event={"ID":"53f7e7ff-7e35-433a-a39f-556b716eaf21","Type":"ContainerDied","Data":"5a0702ecc458735b8127c0de69018cbb560aed29f908c76ee8f6e577df2e6d7a"} Oct 14 10:16:49 crc kubenswrapper[4698]: I1014 10:16:49.633974 4698 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-dhgcp" Oct 14 10:16:49 crc kubenswrapper[4698]: I1014 10:16:49.634005 4698 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5a0702ecc458735b8127c0de69018cbb560aed29f908c76ee8f6e577df2e6d7a" Oct 14 10:16:49 crc kubenswrapper[4698]: I1014 10:16:49.759273 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-0"] Oct 14 10:16:49 crc kubenswrapper[4698]: E1014 10:16:49.761458 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="53f7e7ff-7e35-433a-a39f-556b716eaf21" containerName="nova-cell0-conductor-db-sync" Oct 14 10:16:49 crc kubenswrapper[4698]: I1014 10:16:49.761488 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="53f7e7ff-7e35-433a-a39f-556b716eaf21" containerName="nova-cell0-conductor-db-sync" Oct 14 10:16:49 crc kubenswrapper[4698]: I1014 10:16:49.761686 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="53f7e7ff-7e35-433a-a39f-556b716eaf21" containerName="nova-cell0-conductor-db-sync" Oct 14 10:16:49 crc kubenswrapper[4698]: I1014 10:16:49.762617 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-0" Oct 14 10:16:49 crc kubenswrapper[4698]: I1014 10:16:49.770005 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Oct 14 10:16:49 crc kubenswrapper[4698]: I1014 10:16:49.770113 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-rkg48" Oct 14 10:16:49 crc kubenswrapper[4698]: I1014 10:16:49.774691 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Oct 14 10:16:49 crc kubenswrapper[4698]: I1014 10:16:49.840029 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/636dfb32-7180-4af9-9de0-57745de8c7e7-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"636dfb32-7180-4af9-9de0-57745de8c7e7\") " pod="openstack/nova-cell0-conductor-0" Oct 14 10:16:49 crc kubenswrapper[4698]: I1014 10:16:49.840133 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d7g9x\" (UniqueName: \"kubernetes.io/projected/636dfb32-7180-4af9-9de0-57745de8c7e7-kube-api-access-d7g9x\") pod \"nova-cell0-conductor-0\" (UID: \"636dfb32-7180-4af9-9de0-57745de8c7e7\") " pod="openstack/nova-cell0-conductor-0" Oct 14 10:16:49 crc kubenswrapper[4698]: I1014 10:16:49.840214 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/636dfb32-7180-4af9-9de0-57745de8c7e7-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"636dfb32-7180-4af9-9de0-57745de8c7e7\") " pod="openstack/nova-cell0-conductor-0" Oct 14 10:16:49 crc kubenswrapper[4698]: I1014 10:16:49.942728 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d7g9x\" (UniqueName: 
\"kubernetes.io/projected/636dfb32-7180-4af9-9de0-57745de8c7e7-kube-api-access-d7g9x\") pod \"nova-cell0-conductor-0\" (UID: \"636dfb32-7180-4af9-9de0-57745de8c7e7\") " pod="openstack/nova-cell0-conductor-0" Oct 14 10:16:49 crc kubenswrapper[4698]: I1014 10:16:49.942870 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/636dfb32-7180-4af9-9de0-57745de8c7e7-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"636dfb32-7180-4af9-9de0-57745de8c7e7\") " pod="openstack/nova-cell0-conductor-0" Oct 14 10:16:49 crc kubenswrapper[4698]: I1014 10:16:49.942980 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/636dfb32-7180-4af9-9de0-57745de8c7e7-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"636dfb32-7180-4af9-9de0-57745de8c7e7\") " pod="openstack/nova-cell0-conductor-0" Oct 14 10:16:49 crc kubenswrapper[4698]: I1014 10:16:49.960653 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/636dfb32-7180-4af9-9de0-57745de8c7e7-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"636dfb32-7180-4af9-9de0-57745de8c7e7\") " pod="openstack/nova-cell0-conductor-0" Oct 14 10:16:49 crc kubenswrapper[4698]: I1014 10:16:49.961464 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/636dfb32-7180-4af9-9de0-57745de8c7e7-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"636dfb32-7180-4af9-9de0-57745de8c7e7\") " pod="openstack/nova-cell0-conductor-0" Oct 14 10:16:49 crc kubenswrapper[4698]: I1014 10:16:49.964392 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d7g9x\" (UniqueName: \"kubernetes.io/projected/636dfb32-7180-4af9-9de0-57745de8c7e7-kube-api-access-d7g9x\") pod \"nova-cell0-conductor-0\" (UID: 
\"636dfb32-7180-4af9-9de0-57745de8c7e7\") " pod="openstack/nova-cell0-conductor-0" Oct 14 10:16:50 crc kubenswrapper[4698]: I1014 10:16:50.095127 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Oct 14 10:16:50 crc kubenswrapper[4698]: I1014 10:16:50.622032 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Oct 14 10:16:50 crc kubenswrapper[4698]: W1014 10:16:50.623647 4698 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod636dfb32_7180_4af9_9de0_57745de8c7e7.slice/crio-cd8b0cb6f5edd0fb0be85d95a255978eec6b7966cf1b7e2bbf07c38abad76695 WatchSource:0}: Error finding container cd8b0cb6f5edd0fb0be85d95a255978eec6b7966cf1b7e2bbf07c38abad76695: Status 404 returned error can't find the container with id cd8b0cb6f5edd0fb0be85d95a255978eec6b7966cf1b7e2bbf07c38abad76695 Oct 14 10:16:50 crc kubenswrapper[4698]: I1014 10:16:50.647407 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"636dfb32-7180-4af9-9de0-57745de8c7e7","Type":"ContainerStarted","Data":"cd8b0cb6f5edd0fb0be85d95a255978eec6b7966cf1b7e2bbf07c38abad76695"} Oct 14 10:16:51 crc kubenswrapper[4698]: I1014 10:16:51.659605 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"636dfb32-7180-4af9-9de0-57745de8c7e7","Type":"ContainerStarted","Data":"3cd57bb7bff4a8e48a713c1c27c72bdc96a793d7cd68d90c01fcff542f5c7eca"} Oct 14 10:16:51 crc kubenswrapper[4698]: I1014 10:16:51.660145 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell0-conductor-0" Oct 14 10:16:51 crc kubenswrapper[4698]: I1014 10:16:51.680101 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-0" podStartSLOduration=2.680061899 podStartE2EDuration="2.680061899s" 
podCreationTimestamp="2025-10-14 10:16:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 10:16:51.67799862 +0000 UTC m=+1193.375298046" watchObservedRunningTime="2025-10-14 10:16:51.680061899 +0000 UTC m=+1193.377361365" Oct 14 10:16:53 crc kubenswrapper[4698]: I1014 10:16:53.908718 4698 patch_prober.go:28] interesting pod/machine-config-daemon-lp4sk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 14 10:16:53 crc kubenswrapper[4698]: I1014 10:16:53.909256 4698 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-lp4sk" podUID="c359a8fc-1e2f-49af-8da2-719d52bd969a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 14 10:16:55 crc kubenswrapper[4698]: I1014 10:16:55.138960 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell0-conductor-0" Oct 14 10:16:55 crc kubenswrapper[4698]: I1014 10:16:55.636149 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-cell-mapping-7xsm4"] Oct 14 10:16:55 crc kubenswrapper[4698]: I1014 10:16:55.638571 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-7xsm4" Oct 14 10:16:55 crc kubenswrapper[4698]: I1014 10:16:55.643166 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-config-data" Oct 14 10:16:55 crc kubenswrapper[4698]: I1014 10:16:55.645813 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-scripts" Oct 14 10:16:55 crc kubenswrapper[4698]: I1014 10:16:55.657746 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-7xsm4"] Oct 14 10:16:55 crc kubenswrapper[4698]: I1014 10:16:55.704726 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cb2cd329-5063-4f5b-8903-02fdfce19aca-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-7xsm4\" (UID: \"cb2cd329-5063-4f5b-8903-02fdfce19aca\") " pod="openstack/nova-cell0-cell-mapping-7xsm4" Oct 14 10:16:55 crc kubenswrapper[4698]: I1014 10:16:55.704813 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cb2cd329-5063-4f5b-8903-02fdfce19aca-scripts\") pod \"nova-cell0-cell-mapping-7xsm4\" (UID: \"cb2cd329-5063-4f5b-8903-02fdfce19aca\") " pod="openstack/nova-cell0-cell-mapping-7xsm4" Oct 14 10:16:55 crc kubenswrapper[4698]: I1014 10:16:55.704848 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4sv4d\" (UniqueName: \"kubernetes.io/projected/cb2cd329-5063-4f5b-8903-02fdfce19aca-kube-api-access-4sv4d\") pod \"nova-cell0-cell-mapping-7xsm4\" (UID: \"cb2cd329-5063-4f5b-8903-02fdfce19aca\") " pod="openstack/nova-cell0-cell-mapping-7xsm4" Oct 14 10:16:55 crc kubenswrapper[4698]: I1014 10:16:55.704934 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" 
(UniqueName: \"kubernetes.io/secret/cb2cd329-5063-4f5b-8903-02fdfce19aca-config-data\") pod \"nova-cell0-cell-mapping-7xsm4\" (UID: \"cb2cd329-5063-4f5b-8903-02fdfce19aca\") " pod="openstack/nova-cell0-cell-mapping-7xsm4" Oct 14 10:16:55 crc kubenswrapper[4698]: I1014 10:16:55.807096 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cb2cd329-5063-4f5b-8903-02fdfce19aca-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-7xsm4\" (UID: \"cb2cd329-5063-4f5b-8903-02fdfce19aca\") " pod="openstack/nova-cell0-cell-mapping-7xsm4" Oct 14 10:16:55 crc kubenswrapper[4698]: I1014 10:16:55.807138 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cb2cd329-5063-4f5b-8903-02fdfce19aca-scripts\") pod \"nova-cell0-cell-mapping-7xsm4\" (UID: \"cb2cd329-5063-4f5b-8903-02fdfce19aca\") " pod="openstack/nova-cell0-cell-mapping-7xsm4" Oct 14 10:16:55 crc kubenswrapper[4698]: I1014 10:16:55.807162 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4sv4d\" (UniqueName: \"kubernetes.io/projected/cb2cd329-5063-4f5b-8903-02fdfce19aca-kube-api-access-4sv4d\") pod \"nova-cell0-cell-mapping-7xsm4\" (UID: \"cb2cd329-5063-4f5b-8903-02fdfce19aca\") " pod="openstack/nova-cell0-cell-mapping-7xsm4" Oct 14 10:16:55 crc kubenswrapper[4698]: I1014 10:16:55.807219 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cb2cd329-5063-4f5b-8903-02fdfce19aca-config-data\") pod \"nova-cell0-cell-mapping-7xsm4\" (UID: \"cb2cd329-5063-4f5b-8903-02fdfce19aca\") " pod="openstack/nova-cell0-cell-mapping-7xsm4" Oct 14 10:16:55 crc kubenswrapper[4698]: I1014 10:16:55.814292 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/cb2cd329-5063-4f5b-8903-02fdfce19aca-scripts\") pod \"nova-cell0-cell-mapping-7xsm4\" (UID: \"cb2cd329-5063-4f5b-8903-02fdfce19aca\") " pod="openstack/nova-cell0-cell-mapping-7xsm4" Oct 14 10:16:55 crc kubenswrapper[4698]: I1014 10:16:55.818220 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cb2cd329-5063-4f5b-8903-02fdfce19aca-config-data\") pod \"nova-cell0-cell-mapping-7xsm4\" (UID: \"cb2cd329-5063-4f5b-8903-02fdfce19aca\") " pod="openstack/nova-cell0-cell-mapping-7xsm4" Oct 14 10:16:55 crc kubenswrapper[4698]: I1014 10:16:55.828031 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cb2cd329-5063-4f5b-8903-02fdfce19aca-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-7xsm4\" (UID: \"cb2cd329-5063-4f5b-8903-02fdfce19aca\") " pod="openstack/nova-cell0-cell-mapping-7xsm4" Oct 14 10:16:55 crc kubenswrapper[4698]: I1014 10:16:55.829911 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4sv4d\" (UniqueName: \"kubernetes.io/projected/cb2cd329-5063-4f5b-8903-02fdfce19aca-kube-api-access-4sv4d\") pod \"nova-cell0-cell-mapping-7xsm4\" (UID: \"cb2cd329-5063-4f5b-8903-02fdfce19aca\") " pod="openstack/nova-cell0-cell-mapping-7xsm4" Oct 14 10:16:55 crc kubenswrapper[4698]: I1014 10:16:55.862900 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Oct 14 10:16:55 crc kubenswrapper[4698]: I1014 10:16:55.864492 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Oct 14 10:16:55 crc kubenswrapper[4698]: I1014 10:16:55.867445 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Oct 14 10:16:55 crc kubenswrapper[4698]: I1014 10:16:55.872718 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Oct 14 10:16:55 crc kubenswrapper[4698]: I1014 10:16:55.909366 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-767bl\" (UniqueName: \"kubernetes.io/projected/79fd1ed1-67b9-4a55-906a-4ae0ec9a42c7-kube-api-access-767bl\") pod \"nova-scheduler-0\" (UID: \"79fd1ed1-67b9-4a55-906a-4ae0ec9a42c7\") " pod="openstack/nova-scheduler-0" Oct 14 10:16:55 crc kubenswrapper[4698]: I1014 10:16:55.909878 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/79fd1ed1-67b9-4a55-906a-4ae0ec9a42c7-config-data\") pod \"nova-scheduler-0\" (UID: \"79fd1ed1-67b9-4a55-906a-4ae0ec9a42c7\") " pod="openstack/nova-scheduler-0" Oct 14 10:16:55 crc kubenswrapper[4698]: I1014 10:16:55.910076 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/79fd1ed1-67b9-4a55-906a-4ae0ec9a42c7-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"79fd1ed1-67b9-4a55-906a-4ae0ec9a42c7\") " pod="openstack/nova-scheduler-0" Oct 14 10:16:55 crc kubenswrapper[4698]: I1014 10:16:55.976176 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-7xsm4" Oct 14 10:16:55 crc kubenswrapper[4698]: I1014 10:16:55.983428 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Oct 14 10:16:55 crc kubenswrapper[4698]: I1014 10:16:55.985839 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Oct 14 10:16:55 crc kubenswrapper[4698]: I1014 10:16:55.997992 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Oct 14 10:16:56 crc kubenswrapper[4698]: I1014 10:16:56.018375 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Oct 14 10:16:56 crc kubenswrapper[4698]: I1014 10:16:56.023670 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/79fd1ed1-67b9-4a55-906a-4ae0ec9a42c7-config-data\") pod \"nova-scheduler-0\" (UID: \"79fd1ed1-67b9-4a55-906a-4ae0ec9a42c7\") " pod="openstack/nova-scheduler-0" Oct 14 10:16:56 crc kubenswrapper[4698]: I1014 10:16:56.023818 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/71b3c137-fbdb-45dc-b8a3-43e5cca369a3-logs\") pod \"nova-metadata-0\" (UID: \"71b3c137-fbdb-45dc-b8a3-43e5cca369a3\") " pod="openstack/nova-metadata-0" Oct 14 10:16:56 crc kubenswrapper[4698]: I1014 10:16:56.024077 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dbkmg\" (UniqueName: \"kubernetes.io/projected/71b3c137-fbdb-45dc-b8a3-43e5cca369a3-kube-api-access-dbkmg\") pod \"nova-metadata-0\" (UID: \"71b3c137-fbdb-45dc-b8a3-43e5cca369a3\") " pod="openstack/nova-metadata-0" Oct 14 10:16:56 crc kubenswrapper[4698]: I1014 10:16:56.024121 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/79fd1ed1-67b9-4a55-906a-4ae0ec9a42c7-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"79fd1ed1-67b9-4a55-906a-4ae0ec9a42c7\") " pod="openstack/nova-scheduler-0" Oct 14 10:16:56 crc kubenswrapper[4698]: I1014 10:16:56.024177 4698 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/71b3c137-fbdb-45dc-b8a3-43e5cca369a3-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"71b3c137-fbdb-45dc-b8a3-43e5cca369a3\") " pod="openstack/nova-metadata-0" Oct 14 10:16:56 crc kubenswrapper[4698]: I1014 10:16:56.024540 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/71b3c137-fbdb-45dc-b8a3-43e5cca369a3-config-data\") pod \"nova-metadata-0\" (UID: \"71b3c137-fbdb-45dc-b8a3-43e5cca369a3\") " pod="openstack/nova-metadata-0" Oct 14 10:16:56 crc kubenswrapper[4698]: I1014 10:16:56.024679 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-767bl\" (UniqueName: \"kubernetes.io/projected/79fd1ed1-67b9-4a55-906a-4ae0ec9a42c7-kube-api-access-767bl\") pod \"nova-scheduler-0\" (UID: \"79fd1ed1-67b9-4a55-906a-4ae0ec9a42c7\") " pod="openstack/nova-scheduler-0" Oct 14 10:16:56 crc kubenswrapper[4698]: I1014 10:16:56.031656 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/79fd1ed1-67b9-4a55-906a-4ae0ec9a42c7-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"79fd1ed1-67b9-4a55-906a-4ae0ec9a42c7\") " pod="openstack/nova-scheduler-0" Oct 14 10:16:56 crc kubenswrapper[4698]: I1014 10:16:56.034569 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Oct 14 10:16:56 crc kubenswrapper[4698]: I1014 10:16:56.037084 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Oct 14 10:16:56 crc kubenswrapper[4698]: I1014 10:16:56.042775 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/79fd1ed1-67b9-4a55-906a-4ae0ec9a42c7-config-data\") pod \"nova-scheduler-0\" (UID: \"79fd1ed1-67b9-4a55-906a-4ae0ec9a42c7\") " pod="openstack/nova-scheduler-0" Oct 14 10:16:56 crc kubenswrapper[4698]: I1014 10:16:56.046206 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-767bl\" (UniqueName: \"kubernetes.io/projected/79fd1ed1-67b9-4a55-906a-4ae0ec9a42c7-kube-api-access-767bl\") pod \"nova-scheduler-0\" (UID: \"79fd1ed1-67b9-4a55-906a-4ae0ec9a42c7\") " pod="openstack/nova-scheduler-0" Oct 14 10:16:56 crc kubenswrapper[4698]: I1014 10:16:56.052749 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Oct 14 10:16:56 crc kubenswrapper[4698]: I1014 10:16:56.055883 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Oct 14 10:16:56 crc kubenswrapper[4698]: I1014 10:16:56.129675 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dd0bc5b3-8722-4208-937a-e3b676267e9a-logs\") pod \"nova-api-0\" (UID: \"dd0bc5b3-8722-4208-937a-e3b676267e9a\") " pod="openstack/nova-api-0" Oct 14 10:16:56 crc kubenswrapper[4698]: I1014 10:16:56.135104 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/71b3c137-fbdb-45dc-b8a3-43e5cca369a3-config-data\") pod \"nova-metadata-0\" (UID: \"71b3c137-fbdb-45dc-b8a3-43e5cca369a3\") " pod="openstack/nova-metadata-0" Oct 14 10:16:56 crc kubenswrapper[4698]: I1014 10:16:56.135221 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/dd0bc5b3-8722-4208-937a-e3b676267e9a-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"dd0bc5b3-8722-4208-937a-e3b676267e9a\") " pod="openstack/nova-api-0" Oct 14 10:16:56 crc kubenswrapper[4698]: I1014 10:16:56.135354 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/71b3c137-fbdb-45dc-b8a3-43e5cca369a3-logs\") pod \"nova-metadata-0\" (UID: \"71b3c137-fbdb-45dc-b8a3-43e5cca369a3\") " pod="openstack/nova-metadata-0" Oct 14 10:16:56 crc kubenswrapper[4698]: I1014 10:16:56.135446 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dd0bc5b3-8722-4208-937a-e3b676267e9a-config-data\") pod \"nova-api-0\" (UID: \"dd0bc5b3-8722-4208-937a-e3b676267e9a\") " pod="openstack/nova-api-0" Oct 14 10:16:56 crc kubenswrapper[4698]: I1014 10:16:56.135563 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-65slt\" (UniqueName: \"kubernetes.io/projected/dd0bc5b3-8722-4208-937a-e3b676267e9a-kube-api-access-65slt\") pod \"nova-api-0\" (UID: \"dd0bc5b3-8722-4208-937a-e3b676267e9a\") " pod="openstack/nova-api-0" Oct 14 10:16:56 crc kubenswrapper[4698]: I1014 10:16:56.135648 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dbkmg\" (UniqueName: \"kubernetes.io/projected/71b3c137-fbdb-45dc-b8a3-43e5cca369a3-kube-api-access-dbkmg\") pod \"nova-metadata-0\" (UID: \"71b3c137-fbdb-45dc-b8a3-43e5cca369a3\") " pod="openstack/nova-metadata-0" Oct 14 10:16:56 crc kubenswrapper[4698]: I1014 10:16:56.135848 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/71b3c137-fbdb-45dc-b8a3-43e5cca369a3-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"71b3c137-fbdb-45dc-b8a3-43e5cca369a3\") " 
pod="openstack/nova-metadata-0" Oct 14 10:16:56 crc kubenswrapper[4698]: I1014 10:16:56.136643 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/71b3c137-fbdb-45dc-b8a3-43e5cca369a3-logs\") pod \"nova-metadata-0\" (UID: \"71b3c137-fbdb-45dc-b8a3-43e5cca369a3\") " pod="openstack/nova-metadata-0" Oct 14 10:16:56 crc kubenswrapper[4698]: I1014 10:16:56.143037 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7d5fbbb8c5-r5hn4"] Oct 14 10:16:56 crc kubenswrapper[4698]: I1014 10:16:56.148091 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7d5fbbb8c5-r5hn4" Oct 14 10:16:56 crc kubenswrapper[4698]: I1014 10:16:56.150647 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/71b3c137-fbdb-45dc-b8a3-43e5cca369a3-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"71b3c137-fbdb-45dc-b8a3-43e5cca369a3\") " pod="openstack/nova-metadata-0" Oct 14 10:16:56 crc kubenswrapper[4698]: I1014 10:16:56.156561 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dbkmg\" (UniqueName: \"kubernetes.io/projected/71b3c137-fbdb-45dc-b8a3-43e5cca369a3-kube-api-access-dbkmg\") pod \"nova-metadata-0\" (UID: \"71b3c137-fbdb-45dc-b8a3-43e5cca369a3\") " pod="openstack/nova-metadata-0" Oct 14 10:16:56 crc kubenswrapper[4698]: I1014 10:16:56.172417 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/71b3c137-fbdb-45dc-b8a3-43e5cca369a3-config-data\") pod \"nova-metadata-0\" (UID: \"71b3c137-fbdb-45dc-b8a3-43e5cca369a3\") " pod="openstack/nova-metadata-0" Oct 14 10:16:56 crc kubenswrapper[4698]: I1014 10:16:56.188955 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7d5fbbb8c5-r5hn4"] Oct 14 10:16:56 crc kubenswrapper[4698]: I1014 
10:16:56.199135 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Oct 14 10:16:56 crc kubenswrapper[4698]: I1014 10:16:56.200726 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Oct 14 10:16:56 crc kubenswrapper[4698]: I1014 10:16:56.203115 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Oct 14 10:16:56 crc kubenswrapper[4698]: I1014 10:16:56.214069 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Oct 14 10:16:56 crc kubenswrapper[4698]: I1014 10:16:56.231285 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Oct 14 10:16:56 crc kubenswrapper[4698]: I1014 10:16:56.237611 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dd0bc5b3-8722-4208-937a-e3b676267e9a-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"dd0bc5b3-8722-4208-937a-e3b676267e9a\") " pod="openstack/nova-api-0" Oct 14 10:16:56 crc kubenswrapper[4698]: I1014 10:16:56.237678 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/26c376ac-2053-48a2-8762-754477dfbdff-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"26c376ac-2053-48a2-8762-754477dfbdff\") " pod="openstack/nova-cell1-novncproxy-0" Oct 14 10:16:56 crc kubenswrapper[4698]: I1014 10:16:56.237715 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bvqkr\" (UniqueName: \"kubernetes.io/projected/1091b708-e8d5-47b7-bc52-dfa7bf55e441-kube-api-access-bvqkr\") pod \"dnsmasq-dns-7d5fbbb8c5-r5hn4\" (UID: \"1091b708-e8d5-47b7-bc52-dfa7bf55e441\") " pod="openstack/dnsmasq-dns-7d5fbbb8c5-r5hn4" Oct 14 10:16:56 crc 
kubenswrapper[4698]: I1014 10:16:56.237751 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1091b708-e8d5-47b7-bc52-dfa7bf55e441-ovsdbserver-nb\") pod \"dnsmasq-dns-7d5fbbb8c5-r5hn4\" (UID: \"1091b708-e8d5-47b7-bc52-dfa7bf55e441\") " pod="openstack/dnsmasq-dns-7d5fbbb8c5-r5hn4" Oct 14 10:16:56 crc kubenswrapper[4698]: I1014 10:16:56.237799 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1091b708-e8d5-47b7-bc52-dfa7bf55e441-ovsdbserver-sb\") pod \"dnsmasq-dns-7d5fbbb8c5-r5hn4\" (UID: \"1091b708-e8d5-47b7-bc52-dfa7bf55e441\") " pod="openstack/dnsmasq-dns-7d5fbbb8c5-r5hn4" Oct 14 10:16:56 crc kubenswrapper[4698]: I1014 10:16:56.237823 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dd0bc5b3-8722-4208-937a-e3b676267e9a-config-data\") pod \"nova-api-0\" (UID: \"dd0bc5b3-8722-4208-937a-e3b676267e9a\") " pod="openstack/nova-api-0" Oct 14 10:16:56 crc kubenswrapper[4698]: I1014 10:16:56.237841 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1091b708-e8d5-47b7-bc52-dfa7bf55e441-dns-svc\") pod \"dnsmasq-dns-7d5fbbb8c5-r5hn4\" (UID: \"1091b708-e8d5-47b7-bc52-dfa7bf55e441\") " pod="openstack/dnsmasq-dns-7d5fbbb8c5-r5hn4" Oct 14 10:16:56 crc kubenswrapper[4698]: I1014 10:16:56.237891 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-65slt\" (UniqueName: \"kubernetes.io/projected/dd0bc5b3-8722-4208-937a-e3b676267e9a-kube-api-access-65slt\") pod \"nova-api-0\" (UID: \"dd0bc5b3-8722-4208-937a-e3b676267e9a\") " pod="openstack/nova-api-0" Oct 14 10:16:56 crc kubenswrapper[4698]: I1014 10:16:56.237914 4698 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6w5m6\" (UniqueName: \"kubernetes.io/projected/26c376ac-2053-48a2-8762-754477dfbdff-kube-api-access-6w5m6\") pod \"nova-cell1-novncproxy-0\" (UID: \"26c376ac-2053-48a2-8762-754477dfbdff\") " pod="openstack/nova-cell1-novncproxy-0" Oct 14 10:16:56 crc kubenswrapper[4698]: I1014 10:16:56.237948 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1091b708-e8d5-47b7-bc52-dfa7bf55e441-dns-swift-storage-0\") pod \"dnsmasq-dns-7d5fbbb8c5-r5hn4\" (UID: \"1091b708-e8d5-47b7-bc52-dfa7bf55e441\") " pod="openstack/dnsmasq-dns-7d5fbbb8c5-r5hn4" Oct 14 10:16:56 crc kubenswrapper[4698]: I1014 10:16:56.238010 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1091b708-e8d5-47b7-bc52-dfa7bf55e441-config\") pod \"dnsmasq-dns-7d5fbbb8c5-r5hn4\" (UID: \"1091b708-e8d5-47b7-bc52-dfa7bf55e441\") " pod="openstack/dnsmasq-dns-7d5fbbb8c5-r5hn4" Oct 14 10:16:56 crc kubenswrapper[4698]: I1014 10:16:56.238139 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dd0bc5b3-8722-4208-937a-e3b676267e9a-logs\") pod \"nova-api-0\" (UID: \"dd0bc5b3-8722-4208-937a-e3b676267e9a\") " pod="openstack/nova-api-0" Oct 14 10:16:56 crc kubenswrapper[4698]: I1014 10:16:56.238192 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/26c376ac-2053-48a2-8762-754477dfbdff-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"26c376ac-2053-48a2-8762-754477dfbdff\") " pod="openstack/nova-cell1-novncproxy-0" Oct 14 10:16:56 crc kubenswrapper[4698]: I1014 10:16:56.243691 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"config-data\" (UniqueName: \"kubernetes.io/secret/dd0bc5b3-8722-4208-937a-e3b676267e9a-config-data\") pod \"nova-api-0\" (UID: \"dd0bc5b3-8722-4208-937a-e3b676267e9a\") " pod="openstack/nova-api-0" Oct 14 10:16:56 crc kubenswrapper[4698]: I1014 10:16:56.243818 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dd0bc5b3-8722-4208-937a-e3b676267e9a-logs\") pod \"nova-api-0\" (UID: \"dd0bc5b3-8722-4208-937a-e3b676267e9a\") " pod="openstack/nova-api-0" Oct 14 10:16:56 crc kubenswrapper[4698]: I1014 10:16:56.266002 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-65slt\" (UniqueName: \"kubernetes.io/projected/dd0bc5b3-8722-4208-937a-e3b676267e9a-kube-api-access-65slt\") pod \"nova-api-0\" (UID: \"dd0bc5b3-8722-4208-937a-e3b676267e9a\") " pod="openstack/nova-api-0" Oct 14 10:16:56 crc kubenswrapper[4698]: I1014 10:16:56.278403 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dd0bc5b3-8722-4208-937a-e3b676267e9a-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"dd0bc5b3-8722-4208-937a-e3b676267e9a\") " pod="openstack/nova-api-0" Oct 14 10:16:56 crc kubenswrapper[4698]: I1014 10:16:56.342662 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/26c376ac-2053-48a2-8762-754477dfbdff-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"26c376ac-2053-48a2-8762-754477dfbdff\") " pod="openstack/nova-cell1-novncproxy-0" Oct 14 10:16:56 crc kubenswrapper[4698]: I1014 10:16:56.342719 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bvqkr\" (UniqueName: \"kubernetes.io/projected/1091b708-e8d5-47b7-bc52-dfa7bf55e441-kube-api-access-bvqkr\") pod \"dnsmasq-dns-7d5fbbb8c5-r5hn4\" (UID: \"1091b708-e8d5-47b7-bc52-dfa7bf55e441\") " 
pod="openstack/dnsmasq-dns-7d5fbbb8c5-r5hn4" Oct 14 10:16:56 crc kubenswrapper[4698]: I1014 10:16:56.342752 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1091b708-e8d5-47b7-bc52-dfa7bf55e441-ovsdbserver-nb\") pod \"dnsmasq-dns-7d5fbbb8c5-r5hn4\" (UID: \"1091b708-e8d5-47b7-bc52-dfa7bf55e441\") " pod="openstack/dnsmasq-dns-7d5fbbb8c5-r5hn4" Oct 14 10:16:56 crc kubenswrapper[4698]: I1014 10:16:56.342786 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1091b708-e8d5-47b7-bc52-dfa7bf55e441-ovsdbserver-sb\") pod \"dnsmasq-dns-7d5fbbb8c5-r5hn4\" (UID: \"1091b708-e8d5-47b7-bc52-dfa7bf55e441\") " pod="openstack/dnsmasq-dns-7d5fbbb8c5-r5hn4" Oct 14 10:16:56 crc kubenswrapper[4698]: I1014 10:16:56.342803 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1091b708-e8d5-47b7-bc52-dfa7bf55e441-dns-svc\") pod \"dnsmasq-dns-7d5fbbb8c5-r5hn4\" (UID: \"1091b708-e8d5-47b7-bc52-dfa7bf55e441\") " pod="openstack/dnsmasq-dns-7d5fbbb8c5-r5hn4" Oct 14 10:16:56 crc kubenswrapper[4698]: I1014 10:16:56.342854 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6w5m6\" (UniqueName: \"kubernetes.io/projected/26c376ac-2053-48a2-8762-754477dfbdff-kube-api-access-6w5m6\") pod \"nova-cell1-novncproxy-0\" (UID: \"26c376ac-2053-48a2-8762-754477dfbdff\") " pod="openstack/nova-cell1-novncproxy-0" Oct 14 10:16:56 crc kubenswrapper[4698]: I1014 10:16:56.342885 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1091b708-e8d5-47b7-bc52-dfa7bf55e441-dns-swift-storage-0\") pod \"dnsmasq-dns-7d5fbbb8c5-r5hn4\" (UID: \"1091b708-e8d5-47b7-bc52-dfa7bf55e441\") " pod="openstack/dnsmasq-dns-7d5fbbb8c5-r5hn4" Oct 14 
10:16:56 crc kubenswrapper[4698]: I1014 10:16:56.342923 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1091b708-e8d5-47b7-bc52-dfa7bf55e441-config\") pod \"dnsmasq-dns-7d5fbbb8c5-r5hn4\" (UID: \"1091b708-e8d5-47b7-bc52-dfa7bf55e441\") " pod="openstack/dnsmasq-dns-7d5fbbb8c5-r5hn4" Oct 14 10:16:56 crc kubenswrapper[4698]: I1014 10:16:56.342973 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/26c376ac-2053-48a2-8762-754477dfbdff-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"26c376ac-2053-48a2-8762-754477dfbdff\") " pod="openstack/nova-cell1-novncproxy-0" Oct 14 10:16:56 crc kubenswrapper[4698]: I1014 10:16:56.345677 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1091b708-e8d5-47b7-bc52-dfa7bf55e441-ovsdbserver-sb\") pod \"dnsmasq-dns-7d5fbbb8c5-r5hn4\" (UID: \"1091b708-e8d5-47b7-bc52-dfa7bf55e441\") " pod="openstack/dnsmasq-dns-7d5fbbb8c5-r5hn4" Oct 14 10:16:56 crc kubenswrapper[4698]: I1014 10:16:56.348150 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1091b708-e8d5-47b7-bc52-dfa7bf55e441-ovsdbserver-nb\") pod \"dnsmasq-dns-7d5fbbb8c5-r5hn4\" (UID: \"1091b708-e8d5-47b7-bc52-dfa7bf55e441\") " pod="openstack/dnsmasq-dns-7d5fbbb8c5-r5hn4" Oct 14 10:16:56 crc kubenswrapper[4698]: I1014 10:16:56.349081 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1091b708-e8d5-47b7-bc52-dfa7bf55e441-dns-svc\") pod \"dnsmasq-dns-7d5fbbb8c5-r5hn4\" (UID: \"1091b708-e8d5-47b7-bc52-dfa7bf55e441\") " pod="openstack/dnsmasq-dns-7d5fbbb8c5-r5hn4" Oct 14 10:16:56 crc kubenswrapper[4698]: I1014 10:16:56.349715 4698 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1091b708-e8d5-47b7-bc52-dfa7bf55e441-dns-swift-storage-0\") pod \"dnsmasq-dns-7d5fbbb8c5-r5hn4\" (UID: \"1091b708-e8d5-47b7-bc52-dfa7bf55e441\") " pod="openstack/dnsmasq-dns-7d5fbbb8c5-r5hn4" Oct 14 10:16:56 crc kubenswrapper[4698]: I1014 10:16:56.350551 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1091b708-e8d5-47b7-bc52-dfa7bf55e441-config\") pod \"dnsmasq-dns-7d5fbbb8c5-r5hn4\" (UID: \"1091b708-e8d5-47b7-bc52-dfa7bf55e441\") " pod="openstack/dnsmasq-dns-7d5fbbb8c5-r5hn4" Oct 14 10:16:56 crc kubenswrapper[4698]: I1014 10:16:56.351947 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/26c376ac-2053-48a2-8762-754477dfbdff-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"26c376ac-2053-48a2-8762-754477dfbdff\") " pod="openstack/nova-cell1-novncproxy-0" Oct 14 10:16:56 crc kubenswrapper[4698]: I1014 10:16:56.355341 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/26c376ac-2053-48a2-8762-754477dfbdff-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"26c376ac-2053-48a2-8762-754477dfbdff\") " pod="openstack/nova-cell1-novncproxy-0" Oct 14 10:16:56 crc kubenswrapper[4698]: I1014 10:16:56.368516 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bvqkr\" (UniqueName: \"kubernetes.io/projected/1091b708-e8d5-47b7-bc52-dfa7bf55e441-kube-api-access-bvqkr\") pod \"dnsmasq-dns-7d5fbbb8c5-r5hn4\" (UID: \"1091b708-e8d5-47b7-bc52-dfa7bf55e441\") " pod="openstack/dnsmasq-dns-7d5fbbb8c5-r5hn4" Oct 14 10:16:56 crc kubenswrapper[4698]: I1014 10:16:56.372096 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6w5m6\" (UniqueName: 
\"kubernetes.io/projected/26c376ac-2053-48a2-8762-754477dfbdff-kube-api-access-6w5m6\") pod \"nova-cell1-novncproxy-0\" (UID: \"26c376ac-2053-48a2-8762-754477dfbdff\") " pod="openstack/nova-cell1-novncproxy-0" Oct 14 10:16:56 crc kubenswrapper[4698]: I1014 10:16:56.442437 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Oct 14 10:16:56 crc kubenswrapper[4698]: I1014 10:16:56.501373 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Oct 14 10:16:56 crc kubenswrapper[4698]: I1014 10:16:56.550011 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7d5fbbb8c5-r5hn4" Oct 14 10:16:56 crc kubenswrapper[4698]: I1014 10:16:56.576352 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Oct 14 10:16:56 crc kubenswrapper[4698]: I1014 10:16:56.642255 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-7xsm4"] Oct 14 10:16:56 crc kubenswrapper[4698]: I1014 10:16:56.736370 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-7xsm4" event={"ID":"cb2cd329-5063-4f5b-8903-02fdfce19aca","Type":"ContainerStarted","Data":"06d63b45bbfed1d4fc9076f6f2c07f46647a916b96dde74012eb1adf1d6a4817"} Oct 14 10:16:56 crc kubenswrapper[4698]: I1014 10:16:56.834316 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Oct 14 10:16:56 crc kubenswrapper[4698]: I1014 10:16:56.961681 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-db-sync-z5wlp"] Oct 14 10:16:56 crc kubenswrapper[4698]: I1014 10:16:56.964257 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-z5wlp" Oct 14 10:16:56 crc kubenswrapper[4698]: I1014 10:16:56.968172 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-scripts" Oct 14 10:16:56 crc kubenswrapper[4698]: I1014 10:16:56.968567 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Oct 14 10:16:56 crc kubenswrapper[4698]: I1014 10:16:56.990882 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-z5wlp"] Oct 14 10:16:57 crc kubenswrapper[4698]: I1014 10:16:57.081285 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/80807b13-41b4-4c40-9acb-a84851f3595f-config-data\") pod \"nova-cell1-conductor-db-sync-z5wlp\" (UID: \"80807b13-41b4-4c40-9acb-a84851f3595f\") " pod="openstack/nova-cell1-conductor-db-sync-z5wlp" Oct 14 10:16:57 crc kubenswrapper[4698]: I1014 10:16:57.081400 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7h84v\" (UniqueName: \"kubernetes.io/projected/80807b13-41b4-4c40-9acb-a84851f3595f-kube-api-access-7h84v\") pod \"nova-cell1-conductor-db-sync-z5wlp\" (UID: \"80807b13-41b4-4c40-9acb-a84851f3595f\") " pod="openstack/nova-cell1-conductor-db-sync-z5wlp" Oct 14 10:16:57 crc kubenswrapper[4698]: I1014 10:16:57.081530 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/80807b13-41b4-4c40-9acb-a84851f3595f-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-z5wlp\" (UID: \"80807b13-41b4-4c40-9acb-a84851f3595f\") " pod="openstack/nova-cell1-conductor-db-sync-z5wlp" Oct 14 10:16:57 crc kubenswrapper[4698]: I1014 10:16:57.081604 4698 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/80807b13-41b4-4c40-9acb-a84851f3595f-scripts\") pod \"nova-cell1-conductor-db-sync-z5wlp\" (UID: \"80807b13-41b4-4c40-9acb-a84851f3595f\") " pod="openstack/nova-cell1-conductor-db-sync-z5wlp" Oct 14 10:16:57 crc kubenswrapper[4698]: I1014 10:16:57.088865 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Oct 14 10:16:57 crc kubenswrapper[4698]: I1014 10:16:57.172349 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7d5fbbb8c5-r5hn4"] Oct 14 10:16:57 crc kubenswrapper[4698]: I1014 10:16:57.185370 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/80807b13-41b4-4c40-9acb-a84851f3595f-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-z5wlp\" (UID: \"80807b13-41b4-4c40-9acb-a84851f3595f\") " pod="openstack/nova-cell1-conductor-db-sync-z5wlp" Oct 14 10:16:57 crc kubenswrapper[4698]: I1014 10:16:57.185502 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/80807b13-41b4-4c40-9acb-a84851f3595f-scripts\") pod \"nova-cell1-conductor-db-sync-z5wlp\" (UID: \"80807b13-41b4-4c40-9acb-a84851f3595f\") " pod="openstack/nova-cell1-conductor-db-sync-z5wlp" Oct 14 10:16:57 crc kubenswrapper[4698]: I1014 10:16:57.185549 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/80807b13-41b4-4c40-9acb-a84851f3595f-config-data\") pod \"nova-cell1-conductor-db-sync-z5wlp\" (UID: \"80807b13-41b4-4c40-9acb-a84851f3595f\") " pod="openstack/nova-cell1-conductor-db-sync-z5wlp" Oct 14 10:16:57 crc kubenswrapper[4698]: I1014 10:16:57.185589 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7h84v\" (UniqueName: 
\"kubernetes.io/projected/80807b13-41b4-4c40-9acb-a84851f3595f-kube-api-access-7h84v\") pod \"nova-cell1-conductor-db-sync-z5wlp\" (UID: \"80807b13-41b4-4c40-9acb-a84851f3595f\") " pod="openstack/nova-cell1-conductor-db-sync-z5wlp" Oct 14 10:16:57 crc kubenswrapper[4698]: I1014 10:16:57.193377 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/80807b13-41b4-4c40-9acb-a84851f3595f-scripts\") pod \"nova-cell1-conductor-db-sync-z5wlp\" (UID: \"80807b13-41b4-4c40-9acb-a84851f3595f\") " pod="openstack/nova-cell1-conductor-db-sync-z5wlp" Oct 14 10:16:57 crc kubenswrapper[4698]: I1014 10:16:57.193619 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/80807b13-41b4-4c40-9acb-a84851f3595f-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-z5wlp\" (UID: \"80807b13-41b4-4c40-9acb-a84851f3595f\") " pod="openstack/nova-cell1-conductor-db-sync-z5wlp" Oct 14 10:16:57 crc kubenswrapper[4698]: I1014 10:16:57.199444 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/80807b13-41b4-4c40-9acb-a84851f3595f-config-data\") pod \"nova-cell1-conductor-db-sync-z5wlp\" (UID: \"80807b13-41b4-4c40-9acb-a84851f3595f\") " pod="openstack/nova-cell1-conductor-db-sync-z5wlp" Oct 14 10:16:57 crc kubenswrapper[4698]: I1014 10:16:57.207326 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7h84v\" (UniqueName: \"kubernetes.io/projected/80807b13-41b4-4c40-9acb-a84851f3595f-kube-api-access-7h84v\") pod \"nova-cell1-conductor-db-sync-z5wlp\" (UID: \"80807b13-41b4-4c40-9acb-a84851f3595f\") " pod="openstack/nova-cell1-conductor-db-sync-z5wlp" Oct 14 10:16:57 crc kubenswrapper[4698]: I1014 10:16:57.292045 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-z5wlp" Oct 14 10:16:57 crc kubenswrapper[4698]: I1014 10:16:57.292903 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Oct 14 10:16:57 crc kubenswrapper[4698]: W1014 10:16:57.300739 4698 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod71b3c137_fbdb_45dc_b8a3_43e5cca369a3.slice/crio-4f77ee96a9513b85567559043b561a0d29a2109c07107c1b282e735f09594a04 WatchSource:0}: Error finding container 4f77ee96a9513b85567559043b561a0d29a2109c07107c1b282e735f09594a04: Status 404 returned error can't find the container with id 4f77ee96a9513b85567559043b561a0d29a2109c07107c1b282e735f09594a04 Oct 14 10:16:57 crc kubenswrapper[4698]: I1014 10:16:57.301750 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Oct 14 10:16:57 crc kubenswrapper[4698]: I1014 10:16:57.747389 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"26c376ac-2053-48a2-8762-754477dfbdff","Type":"ContainerStarted","Data":"609c1e43354225cbd2767fb012f7970aa59bb6e84158edc07113cbf0ce8231d4"} Oct 14 10:16:57 crc kubenswrapper[4698]: I1014 10:16:57.749218 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-7xsm4" event={"ID":"cb2cd329-5063-4f5b-8903-02fdfce19aca","Type":"ContainerStarted","Data":"5c1066f88fb585f884296272a7048feeebf77a1f3f7e3dce70e07b04d1be0841"} Oct 14 10:16:57 crc kubenswrapper[4698]: I1014 10:16:57.750333 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"79fd1ed1-67b9-4a55-906a-4ae0ec9a42c7","Type":"ContainerStarted","Data":"2eccdcb9a37e4c7fb7b1d9389e08e8fb8cc4d04c53823d6f4c28140c73f7fc44"} Oct 14 10:16:57 crc kubenswrapper[4698]: I1014 10:16:57.751945 4698 generic.go:334] "Generic (PLEG): container finished" 
podID="1091b708-e8d5-47b7-bc52-dfa7bf55e441" containerID="aa4b7f514ddcca9a15b741fa483e1986c9fe9b7b01aa11654ac6b544fbaf8d97" exitCode=0 Oct 14 10:16:57 crc kubenswrapper[4698]: I1014 10:16:57.752042 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7d5fbbb8c5-r5hn4" event={"ID":"1091b708-e8d5-47b7-bc52-dfa7bf55e441","Type":"ContainerDied","Data":"aa4b7f514ddcca9a15b741fa483e1986c9fe9b7b01aa11654ac6b544fbaf8d97"} Oct 14 10:16:57 crc kubenswrapper[4698]: I1014 10:16:57.752082 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7d5fbbb8c5-r5hn4" event={"ID":"1091b708-e8d5-47b7-bc52-dfa7bf55e441","Type":"ContainerStarted","Data":"a9638fb0a356265e3e504e4fc2b9a3825bc76b1ccfa38496264f088c5097701a"} Oct 14 10:16:57 crc kubenswrapper[4698]: I1014 10:16:57.753309 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"dd0bc5b3-8722-4208-937a-e3b676267e9a","Type":"ContainerStarted","Data":"7270b9ddd547b5ff9094fa46379fb8b17ae6faa9e7f3f1f866f979d471685d00"} Oct 14 10:16:57 crc kubenswrapper[4698]: I1014 10:16:57.757506 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"71b3c137-fbdb-45dc-b8a3-43e5cca369a3","Type":"ContainerStarted","Data":"4f77ee96a9513b85567559043b561a0d29a2109c07107c1b282e735f09594a04"} Oct 14 10:16:57 crc kubenswrapper[4698]: I1014 10:16:57.772088 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-cell-mapping-7xsm4" podStartSLOduration=2.772064242 podStartE2EDuration="2.772064242s" podCreationTimestamp="2025-10-14 10:16:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 10:16:57.761924094 +0000 UTC m=+1199.459223530" watchObservedRunningTime="2025-10-14 10:16:57.772064242 +0000 UTC m=+1199.469363658" Oct 14 10:16:57 crc kubenswrapper[4698]: I1014 10:16:57.795680 4698 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Oct 14 10:16:57 crc kubenswrapper[4698]: I1014 10:16:57.811390 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-z5wlp"] Oct 14 10:16:57 crc kubenswrapper[4698]: W1014 10:16:57.820422 4698 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod80807b13_41b4_4c40_9acb_a84851f3595f.slice/crio-06a7dfa616990041c62371a507469079b68044c138a6f1db431f63d5e28f8865 WatchSource:0}: Error finding container 06a7dfa616990041c62371a507469079b68044c138a6f1db431f63d5e28f8865: Status 404 returned error can't find the container with id 06a7dfa616990041c62371a507469079b68044c138a6f1db431f63d5e28f8865 Oct 14 10:16:58 crc kubenswrapper[4698]: I1014 10:16:58.778627 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-z5wlp" event={"ID":"80807b13-41b4-4c40-9acb-a84851f3595f","Type":"ContainerStarted","Data":"9822da67473007e557874afff68b151887144fb73ce6c21a5dc2d59dedbe6b2a"} Oct 14 10:16:58 crc kubenswrapper[4698]: I1014 10:16:58.779294 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-z5wlp" event={"ID":"80807b13-41b4-4c40-9acb-a84851f3595f","Type":"ContainerStarted","Data":"06a7dfa616990041c62371a507469079b68044c138a6f1db431f63d5e28f8865"} Oct 14 10:16:58 crc kubenswrapper[4698]: I1014 10:16:58.787541 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7d5fbbb8c5-r5hn4" event={"ID":"1091b708-e8d5-47b7-bc52-dfa7bf55e441","Type":"ContainerStarted","Data":"a4dd8177274e715b4a3488d7c4166628c3a6b059a00114fa66bd797cbb6e97b5"} Oct 14 10:16:58 crc kubenswrapper[4698]: I1014 10:16:58.787922 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7d5fbbb8c5-r5hn4" Oct 14 10:16:58 crc kubenswrapper[4698]: I1014 
10:16:58.797431 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-db-sync-z5wlp" podStartSLOduration=2.797410454 podStartE2EDuration="2.797410454s" podCreationTimestamp="2025-10-14 10:16:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 10:16:58.793852832 +0000 UTC m=+1200.491152268" watchObservedRunningTime="2025-10-14 10:16:58.797410454 +0000 UTC m=+1200.494709870" Oct 14 10:16:58 crc kubenswrapper[4698]: I1014 10:16:58.818243 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-7d5fbbb8c5-r5hn4" podStartSLOduration=2.818214326 podStartE2EDuration="2.818214326s" podCreationTimestamp="2025-10-14 10:16:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 10:16:58.814923042 +0000 UTC m=+1200.512222478" watchObservedRunningTime="2025-10-14 10:16:58.818214326 +0000 UTC m=+1200.515513742" Oct 14 10:16:59 crc kubenswrapper[4698]: I1014 10:16:59.694664 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Oct 14 10:16:59 crc kubenswrapper[4698]: I1014 10:16:59.722247 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Oct 14 10:17:00 crc kubenswrapper[4698]: I1014 10:17:00.840863 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"dd0bc5b3-8722-4208-937a-e3b676267e9a","Type":"ContainerStarted","Data":"d858a79ca85125590dfb40bda35a1b1e2ff96dd31458af8988d5931c506f38eb"} Oct 14 10:17:01 crc kubenswrapper[4698]: I1014 10:17:01.867583 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" 
event={"ID":"dd0bc5b3-8722-4208-937a-e3b676267e9a","Type":"ContainerStarted","Data":"3a2c5f4d1ccf972110797939d50dbd040f7f77f2fb2ec41556796cd3a456f674"} Oct 14 10:17:01 crc kubenswrapper[4698]: I1014 10:17:01.878929 4698 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="71b3c137-fbdb-45dc-b8a3-43e5cca369a3" containerName="nova-metadata-log" containerID="cri-o://8e1215293b29e54f7bc64a8eaa2840a06c70508292e8731466c8f7d02d3ff763" gracePeriod=30 Oct 14 10:17:01 crc kubenswrapper[4698]: I1014 10:17:01.878979 4698 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="71b3c137-fbdb-45dc-b8a3-43e5cca369a3" containerName="nova-metadata-metadata" containerID="cri-o://7d5d8e06d4a752da7d47520a9e6e849b0a0700169071b85f21b870e755e5b100" gracePeriod=30 Oct 14 10:17:01 crc kubenswrapper[4698]: I1014 10:17:01.878614 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"71b3c137-fbdb-45dc-b8a3-43e5cca369a3","Type":"ContainerStarted","Data":"8e1215293b29e54f7bc64a8eaa2840a06c70508292e8731466c8f7d02d3ff763"} Oct 14 10:17:01 crc kubenswrapper[4698]: I1014 10:17:01.879077 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"71b3c137-fbdb-45dc-b8a3-43e5cca369a3","Type":"ContainerStarted","Data":"7d5d8e06d4a752da7d47520a9e6e849b0a0700169071b85f21b870e755e5b100"} Oct 14 10:17:01 crc kubenswrapper[4698]: I1014 10:17:01.882419 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"26c376ac-2053-48a2-8762-754477dfbdff","Type":"ContainerStarted","Data":"eec8b575921cb9f43f824df6ae1f964e53cbf06046f6330271131cf196f5c093"} Oct 14 10:17:01 crc kubenswrapper[4698]: I1014 10:17:01.882595 4698 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-novncproxy-0" podUID="26c376ac-2053-48a2-8762-754477dfbdff" 
containerName="nova-cell1-novncproxy-novncproxy" containerID="cri-o://eec8b575921cb9f43f824df6ae1f964e53cbf06046f6330271131cf196f5c093" gracePeriod=30 Oct 14 10:17:01 crc kubenswrapper[4698]: I1014 10:17:01.892717 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"79fd1ed1-67b9-4a55-906a-4ae0ec9a42c7","Type":"ContainerStarted","Data":"3a33c37b57376c7f584176ae16c52574e3c40928a164d83ea656e54a6693a103"} Oct 14 10:17:01 crc kubenswrapper[4698]: I1014 10:17:01.904422 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=3.5631907050000002 podStartE2EDuration="6.904381601s" podCreationTimestamp="2025-10-14 10:16:55 +0000 UTC" firstStartedPulling="2025-10-14 10:16:57.092914856 +0000 UTC m=+1198.790214272" lastFinishedPulling="2025-10-14 10:17:00.434105732 +0000 UTC m=+1202.131405168" observedRunningTime="2025-10-14 10:17:01.895797206 +0000 UTC m=+1203.593096632" watchObservedRunningTime="2025-10-14 10:17:01.904381601 +0000 UTC m=+1203.601681007" Oct 14 10:17:01 crc kubenswrapper[4698]: I1014 10:17:01.929675 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=3.804095744 podStartE2EDuration="6.92965631s" podCreationTimestamp="2025-10-14 10:16:55 +0000 UTC" firstStartedPulling="2025-10-14 10:16:57.308546276 +0000 UTC m=+1199.005845692" lastFinishedPulling="2025-10-14 10:17:00.434106842 +0000 UTC m=+1202.131406258" observedRunningTime="2025-10-14 10:17:01.922427725 +0000 UTC m=+1203.619727181" watchObservedRunningTime="2025-10-14 10:17:01.92965631 +0000 UTC m=+1203.626955726" Oct 14 10:17:01 crc kubenswrapper[4698]: I1014 10:17:01.949494 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=3.366012772 podStartE2EDuration="6.949471355s" podCreationTimestamp="2025-10-14 10:16:55 +0000 UTC" 
firstStartedPulling="2025-10-14 10:16:56.851418521 +0000 UTC m=+1198.548717937" lastFinishedPulling="2025-10-14 10:17:00.434877094 +0000 UTC m=+1202.132176520" observedRunningTime="2025-10-14 10:17:01.942291 +0000 UTC m=+1203.639590406" watchObservedRunningTime="2025-10-14 10:17:01.949471355 +0000 UTC m=+1203.646770761" Oct 14 10:17:01 crc kubenswrapper[4698]: I1014 10:17:01.969579 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.828946291 podStartE2EDuration="5.969552276s" podCreationTimestamp="2025-10-14 10:16:56 +0000 UTC" firstStartedPulling="2025-10-14 10:16:57.292621912 +0000 UTC m=+1198.989921328" lastFinishedPulling="2025-10-14 10:17:00.433227887 +0000 UTC m=+1202.130527313" observedRunningTime="2025-10-14 10:17:01.959875691 +0000 UTC m=+1203.657175117" watchObservedRunningTime="2025-10-14 10:17:01.969552276 +0000 UTC m=+1203.666851692" Oct 14 10:17:02 crc kubenswrapper[4698]: I1014 10:17:02.478779 4698 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Oct 14 10:17:02 crc kubenswrapper[4698]: I1014 10:17:02.555713 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/71b3c137-fbdb-45dc-b8a3-43e5cca369a3-logs\") pod \"71b3c137-fbdb-45dc-b8a3-43e5cca369a3\" (UID: \"71b3c137-fbdb-45dc-b8a3-43e5cca369a3\") " Oct 14 10:17:02 crc kubenswrapper[4698]: I1014 10:17:02.555792 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/71b3c137-fbdb-45dc-b8a3-43e5cca369a3-combined-ca-bundle\") pod \"71b3c137-fbdb-45dc-b8a3-43e5cca369a3\" (UID: \"71b3c137-fbdb-45dc-b8a3-43e5cca369a3\") " Oct 14 10:17:02 crc kubenswrapper[4698]: I1014 10:17:02.555838 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/71b3c137-fbdb-45dc-b8a3-43e5cca369a3-config-data\") pod \"71b3c137-fbdb-45dc-b8a3-43e5cca369a3\" (UID: \"71b3c137-fbdb-45dc-b8a3-43e5cca369a3\") " Oct 14 10:17:02 crc kubenswrapper[4698]: I1014 10:17:02.555884 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dbkmg\" (UniqueName: \"kubernetes.io/projected/71b3c137-fbdb-45dc-b8a3-43e5cca369a3-kube-api-access-dbkmg\") pod \"71b3c137-fbdb-45dc-b8a3-43e5cca369a3\" (UID: \"71b3c137-fbdb-45dc-b8a3-43e5cca369a3\") " Oct 14 10:17:02 crc kubenswrapper[4698]: I1014 10:17:02.555983 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/71b3c137-fbdb-45dc-b8a3-43e5cca369a3-logs" (OuterVolumeSpecName: "logs") pod "71b3c137-fbdb-45dc-b8a3-43e5cca369a3" (UID: "71b3c137-fbdb-45dc-b8a3-43e5cca369a3"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 10:17:02 crc kubenswrapper[4698]: I1014 10:17:02.556866 4698 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/71b3c137-fbdb-45dc-b8a3-43e5cca369a3-logs\") on node \"crc\" DevicePath \"\"" Oct 14 10:17:02 crc kubenswrapper[4698]: I1014 10:17:02.561282 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/71b3c137-fbdb-45dc-b8a3-43e5cca369a3-kube-api-access-dbkmg" (OuterVolumeSpecName: "kube-api-access-dbkmg") pod "71b3c137-fbdb-45dc-b8a3-43e5cca369a3" (UID: "71b3c137-fbdb-45dc-b8a3-43e5cca369a3"). InnerVolumeSpecName "kube-api-access-dbkmg". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 10:17:02 crc kubenswrapper[4698]: I1014 10:17:02.593451 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/71b3c137-fbdb-45dc-b8a3-43e5cca369a3-config-data" (OuterVolumeSpecName: "config-data") pod "71b3c137-fbdb-45dc-b8a3-43e5cca369a3" (UID: "71b3c137-fbdb-45dc-b8a3-43e5cca369a3"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 10:17:02 crc kubenswrapper[4698]: I1014 10:17:02.601808 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/71b3c137-fbdb-45dc-b8a3-43e5cca369a3-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "71b3c137-fbdb-45dc-b8a3-43e5cca369a3" (UID: "71b3c137-fbdb-45dc-b8a3-43e5cca369a3"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 10:17:02 crc kubenswrapper[4698]: I1014 10:17:02.658816 4698 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/71b3c137-fbdb-45dc-b8a3-43e5cca369a3-config-data\") on node \"crc\" DevicePath \"\"" Oct 14 10:17:02 crc kubenswrapper[4698]: I1014 10:17:02.658862 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dbkmg\" (UniqueName: \"kubernetes.io/projected/71b3c137-fbdb-45dc-b8a3-43e5cca369a3-kube-api-access-dbkmg\") on node \"crc\" DevicePath \"\"" Oct 14 10:17:02 crc kubenswrapper[4698]: I1014 10:17:02.658884 4698 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/71b3c137-fbdb-45dc-b8a3-43e5cca369a3-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 14 10:17:02 crc kubenswrapper[4698]: I1014 10:17:02.798695 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Oct 14 10:17:02 crc kubenswrapper[4698]: I1014 10:17:02.799017 4698 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/kube-state-metrics-0" podUID="6702faf6-e3b2-44f8-a033-ba5fd85af368" containerName="kube-state-metrics" containerID="cri-o://ac79af97500d7b1d807b20a73e35f673d5d1b199a4225d89ca856b8133da57d9" gracePeriod=30 Oct 14 10:17:02 crc kubenswrapper[4698]: I1014 10:17:02.910999 4698 generic.go:334] "Generic (PLEG): container finished" podID="71b3c137-fbdb-45dc-b8a3-43e5cca369a3" containerID="7d5d8e06d4a752da7d47520a9e6e849b0a0700169071b85f21b870e755e5b100" exitCode=0 Oct 14 10:17:02 crc kubenswrapper[4698]: I1014 10:17:02.911340 4698 generic.go:334] "Generic (PLEG): container finished" podID="71b3c137-fbdb-45dc-b8a3-43e5cca369a3" containerID="8e1215293b29e54f7bc64a8eaa2840a06c70508292e8731466c8f7d02d3ff763" exitCode=143 Oct 14 10:17:02 crc kubenswrapper[4698]: I1014 10:17:02.911069 4698 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openstack/nova-metadata-0" event={"ID":"71b3c137-fbdb-45dc-b8a3-43e5cca369a3","Type":"ContainerDied","Data":"7d5d8e06d4a752da7d47520a9e6e849b0a0700169071b85f21b870e755e5b100"} Oct 14 10:17:02 crc kubenswrapper[4698]: I1014 10:17:02.911407 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"71b3c137-fbdb-45dc-b8a3-43e5cca369a3","Type":"ContainerDied","Data":"8e1215293b29e54f7bc64a8eaa2840a06c70508292e8731466c8f7d02d3ff763"} Oct 14 10:17:02 crc kubenswrapper[4698]: I1014 10:17:02.911465 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"71b3c137-fbdb-45dc-b8a3-43e5cca369a3","Type":"ContainerDied","Data":"4f77ee96a9513b85567559043b561a0d29a2109c07107c1b282e735f09594a04"} Oct 14 10:17:02 crc kubenswrapper[4698]: I1014 10:17:02.911494 4698 scope.go:117] "RemoveContainer" containerID="7d5d8e06d4a752da7d47520a9e6e849b0a0700169071b85f21b870e755e5b100" Oct 14 10:17:02 crc kubenswrapper[4698]: I1014 10:17:02.911057 4698 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Oct 14 10:17:03 crc kubenswrapper[4698]: I1014 10:17:03.000055 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Oct 14 10:17:03 crc kubenswrapper[4698]: I1014 10:17:03.004937 4698 scope.go:117] "RemoveContainer" containerID="8e1215293b29e54f7bc64a8eaa2840a06c70508292e8731466c8f7d02d3ff763" Oct 14 10:17:03 crc kubenswrapper[4698]: I1014 10:17:03.012031 4698 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Oct 14 10:17:03 crc kubenswrapper[4698]: I1014 10:17:03.070463 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="71b3c137-fbdb-45dc-b8a3-43e5cca369a3" path="/var/lib/kubelet/pods/71b3c137-fbdb-45dc-b8a3-43e5cca369a3/volumes" Oct 14 10:17:03 crc kubenswrapper[4698]: I1014 10:17:03.072601 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Oct 14 10:17:03 crc kubenswrapper[4698]: E1014 10:17:03.083220 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="71b3c137-fbdb-45dc-b8a3-43e5cca369a3" containerName="nova-metadata-log" Oct 14 10:17:03 crc kubenswrapper[4698]: I1014 10:17:03.083264 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="71b3c137-fbdb-45dc-b8a3-43e5cca369a3" containerName="nova-metadata-log" Oct 14 10:17:03 crc kubenswrapper[4698]: E1014 10:17:03.083301 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="71b3c137-fbdb-45dc-b8a3-43e5cca369a3" containerName="nova-metadata-metadata" Oct 14 10:17:03 crc kubenswrapper[4698]: I1014 10:17:03.083311 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="71b3c137-fbdb-45dc-b8a3-43e5cca369a3" containerName="nova-metadata-metadata" Oct 14 10:17:03 crc kubenswrapper[4698]: I1014 10:17:03.083796 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="71b3c137-fbdb-45dc-b8a3-43e5cca369a3" containerName="nova-metadata-log" Oct 14 10:17:03 crc kubenswrapper[4698]: I1014 
10:17:03.083858 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="71b3c137-fbdb-45dc-b8a3-43e5cca369a3" containerName="nova-metadata-metadata" Oct 14 10:17:03 crc kubenswrapper[4698]: I1014 10:17:03.085228 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Oct 14 10:17:03 crc kubenswrapper[4698]: I1014 10:17:03.085366 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Oct 14 10:17:03 crc kubenswrapper[4698]: I1014 10:17:03.088204 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Oct 14 10:17:03 crc kubenswrapper[4698]: I1014 10:17:03.088726 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Oct 14 10:17:03 crc kubenswrapper[4698]: I1014 10:17:03.172048 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/11e29685-d5d9-40ae-8715-6d7a75e252cc-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"11e29685-d5d9-40ae-8715-6d7a75e252cc\") " pod="openstack/nova-metadata-0" Oct 14 10:17:03 crc kubenswrapper[4698]: I1014 10:17:03.172113 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/11e29685-d5d9-40ae-8715-6d7a75e252cc-logs\") pod \"nova-metadata-0\" (UID: \"11e29685-d5d9-40ae-8715-6d7a75e252cc\") " pod="openstack/nova-metadata-0" Oct 14 10:17:03 crc kubenswrapper[4698]: I1014 10:17:03.172164 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/11e29685-d5d9-40ae-8715-6d7a75e252cc-config-data\") pod \"nova-metadata-0\" (UID: \"11e29685-d5d9-40ae-8715-6d7a75e252cc\") " pod="openstack/nova-metadata-0" Oct 14 10:17:03 crc 
kubenswrapper[4698]: I1014 10:17:03.172215 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ngtvf\" (UniqueName: \"kubernetes.io/projected/11e29685-d5d9-40ae-8715-6d7a75e252cc-kube-api-access-ngtvf\") pod \"nova-metadata-0\" (UID: \"11e29685-d5d9-40ae-8715-6d7a75e252cc\") " pod="openstack/nova-metadata-0" Oct 14 10:17:03 crc kubenswrapper[4698]: I1014 10:17:03.172260 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/11e29685-d5d9-40ae-8715-6d7a75e252cc-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"11e29685-d5d9-40ae-8715-6d7a75e252cc\") " pod="openstack/nova-metadata-0" Oct 14 10:17:03 crc kubenswrapper[4698]: I1014 10:17:03.219519 4698 scope.go:117] "RemoveContainer" containerID="7d5d8e06d4a752da7d47520a9e6e849b0a0700169071b85f21b870e755e5b100" Oct 14 10:17:03 crc kubenswrapper[4698]: E1014 10:17:03.222802 4698 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7d5d8e06d4a752da7d47520a9e6e849b0a0700169071b85f21b870e755e5b100\": container with ID starting with 7d5d8e06d4a752da7d47520a9e6e849b0a0700169071b85f21b870e755e5b100 not found: ID does not exist" containerID="7d5d8e06d4a752da7d47520a9e6e849b0a0700169071b85f21b870e755e5b100" Oct 14 10:17:03 crc kubenswrapper[4698]: I1014 10:17:03.222862 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7d5d8e06d4a752da7d47520a9e6e849b0a0700169071b85f21b870e755e5b100"} err="failed to get container status \"7d5d8e06d4a752da7d47520a9e6e849b0a0700169071b85f21b870e755e5b100\": rpc error: code = NotFound desc = could not find container \"7d5d8e06d4a752da7d47520a9e6e849b0a0700169071b85f21b870e755e5b100\": container with ID starting with 7d5d8e06d4a752da7d47520a9e6e849b0a0700169071b85f21b870e755e5b100 not found: ID does not exist" 
Oct 14 10:17:03 crc kubenswrapper[4698]: I1014 10:17:03.222894 4698 scope.go:117] "RemoveContainer" containerID="8e1215293b29e54f7bc64a8eaa2840a06c70508292e8731466c8f7d02d3ff763" Oct 14 10:17:03 crc kubenswrapper[4698]: E1014 10:17:03.226361 4698 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8e1215293b29e54f7bc64a8eaa2840a06c70508292e8731466c8f7d02d3ff763\": container with ID starting with 8e1215293b29e54f7bc64a8eaa2840a06c70508292e8731466c8f7d02d3ff763 not found: ID does not exist" containerID="8e1215293b29e54f7bc64a8eaa2840a06c70508292e8731466c8f7d02d3ff763" Oct 14 10:17:03 crc kubenswrapper[4698]: I1014 10:17:03.226408 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8e1215293b29e54f7bc64a8eaa2840a06c70508292e8731466c8f7d02d3ff763"} err="failed to get container status \"8e1215293b29e54f7bc64a8eaa2840a06c70508292e8731466c8f7d02d3ff763\": rpc error: code = NotFound desc = could not find container \"8e1215293b29e54f7bc64a8eaa2840a06c70508292e8731466c8f7d02d3ff763\": container with ID starting with 8e1215293b29e54f7bc64a8eaa2840a06c70508292e8731466c8f7d02d3ff763 not found: ID does not exist" Oct 14 10:17:03 crc kubenswrapper[4698]: I1014 10:17:03.226441 4698 scope.go:117] "RemoveContainer" containerID="7d5d8e06d4a752da7d47520a9e6e849b0a0700169071b85f21b870e755e5b100" Oct 14 10:17:03 crc kubenswrapper[4698]: I1014 10:17:03.230024 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7d5d8e06d4a752da7d47520a9e6e849b0a0700169071b85f21b870e755e5b100"} err="failed to get container status \"7d5d8e06d4a752da7d47520a9e6e849b0a0700169071b85f21b870e755e5b100\": rpc error: code = NotFound desc = could not find container \"7d5d8e06d4a752da7d47520a9e6e849b0a0700169071b85f21b870e755e5b100\": container with ID starting with 7d5d8e06d4a752da7d47520a9e6e849b0a0700169071b85f21b870e755e5b100 not found: ID does 
not exist" Oct 14 10:17:03 crc kubenswrapper[4698]: I1014 10:17:03.230074 4698 scope.go:117] "RemoveContainer" containerID="8e1215293b29e54f7bc64a8eaa2840a06c70508292e8731466c8f7d02d3ff763" Oct 14 10:17:03 crc kubenswrapper[4698]: I1014 10:17:03.230470 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8e1215293b29e54f7bc64a8eaa2840a06c70508292e8731466c8f7d02d3ff763"} err="failed to get container status \"8e1215293b29e54f7bc64a8eaa2840a06c70508292e8731466c8f7d02d3ff763\": rpc error: code = NotFound desc = could not find container \"8e1215293b29e54f7bc64a8eaa2840a06c70508292e8731466c8f7d02d3ff763\": container with ID starting with 8e1215293b29e54f7bc64a8eaa2840a06c70508292e8731466c8f7d02d3ff763 not found: ID does not exist" Oct 14 10:17:03 crc kubenswrapper[4698]: I1014 10:17:03.273331 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/11e29685-d5d9-40ae-8715-6d7a75e252cc-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"11e29685-d5d9-40ae-8715-6d7a75e252cc\") " pod="openstack/nova-metadata-0" Oct 14 10:17:03 crc kubenswrapper[4698]: I1014 10:17:03.273435 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/11e29685-d5d9-40ae-8715-6d7a75e252cc-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"11e29685-d5d9-40ae-8715-6d7a75e252cc\") " pod="openstack/nova-metadata-0" Oct 14 10:17:03 crc kubenswrapper[4698]: I1014 10:17:03.273474 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/11e29685-d5d9-40ae-8715-6d7a75e252cc-logs\") pod \"nova-metadata-0\" (UID: \"11e29685-d5d9-40ae-8715-6d7a75e252cc\") " pod="openstack/nova-metadata-0" Oct 14 10:17:03 crc kubenswrapper[4698]: I1014 10:17:03.273522 4698 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/11e29685-d5d9-40ae-8715-6d7a75e252cc-config-data\") pod \"nova-metadata-0\" (UID: \"11e29685-d5d9-40ae-8715-6d7a75e252cc\") " pod="openstack/nova-metadata-0" Oct 14 10:17:03 crc kubenswrapper[4698]: I1014 10:17:03.273584 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ngtvf\" (UniqueName: \"kubernetes.io/projected/11e29685-d5d9-40ae-8715-6d7a75e252cc-kube-api-access-ngtvf\") pod \"nova-metadata-0\" (UID: \"11e29685-d5d9-40ae-8715-6d7a75e252cc\") " pod="openstack/nova-metadata-0" Oct 14 10:17:03 crc kubenswrapper[4698]: I1014 10:17:03.276515 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/11e29685-d5d9-40ae-8715-6d7a75e252cc-logs\") pod \"nova-metadata-0\" (UID: \"11e29685-d5d9-40ae-8715-6d7a75e252cc\") " pod="openstack/nova-metadata-0" Oct 14 10:17:03 crc kubenswrapper[4698]: I1014 10:17:03.278352 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/11e29685-d5d9-40ae-8715-6d7a75e252cc-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"11e29685-d5d9-40ae-8715-6d7a75e252cc\") " pod="openstack/nova-metadata-0" Oct 14 10:17:03 crc kubenswrapper[4698]: I1014 10:17:03.279308 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/11e29685-d5d9-40ae-8715-6d7a75e252cc-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"11e29685-d5d9-40ae-8715-6d7a75e252cc\") " pod="openstack/nova-metadata-0" Oct 14 10:17:03 crc kubenswrapper[4698]: I1014 10:17:03.293164 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/11e29685-d5d9-40ae-8715-6d7a75e252cc-config-data\") pod \"nova-metadata-0\" (UID: \"11e29685-d5d9-40ae-8715-6d7a75e252cc\") " 
pod="openstack/nova-metadata-0" Oct 14 10:17:03 crc kubenswrapper[4698]: I1014 10:17:03.294097 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ngtvf\" (UniqueName: \"kubernetes.io/projected/11e29685-d5d9-40ae-8715-6d7a75e252cc-kube-api-access-ngtvf\") pod \"nova-metadata-0\" (UID: \"11e29685-d5d9-40ae-8715-6d7a75e252cc\") " pod="openstack/nova-metadata-0" Oct 14 10:17:03 crc kubenswrapper[4698]: I1014 10:17:03.405725 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Oct 14 10:17:03 crc kubenswrapper[4698]: I1014 10:17:03.408464 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Oct 14 10:17:03 crc kubenswrapper[4698]: I1014 10:17:03.478547 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jmzq9\" (UniqueName: \"kubernetes.io/projected/6702faf6-e3b2-44f8-a033-ba5fd85af368-kube-api-access-jmzq9\") pod \"6702faf6-e3b2-44f8-a033-ba5fd85af368\" (UID: \"6702faf6-e3b2-44f8-a033-ba5fd85af368\") " Oct 14 10:17:03 crc kubenswrapper[4698]: I1014 10:17:03.488188 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6702faf6-e3b2-44f8-a033-ba5fd85af368-kube-api-access-jmzq9" (OuterVolumeSpecName: "kube-api-access-jmzq9") pod "6702faf6-e3b2-44f8-a033-ba5fd85af368" (UID: "6702faf6-e3b2-44f8-a033-ba5fd85af368"). InnerVolumeSpecName "kube-api-access-jmzq9". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 10:17:03 crc kubenswrapper[4698]: I1014 10:17:03.585447 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jmzq9\" (UniqueName: \"kubernetes.io/projected/6702faf6-e3b2-44f8-a033-ba5fd85af368-kube-api-access-jmzq9\") on node \"crc\" DevicePath \"\"" Oct 14 10:17:03 crc kubenswrapper[4698]: I1014 10:17:03.940115 4698 generic.go:334] "Generic (PLEG): container finished" podID="6702faf6-e3b2-44f8-a033-ba5fd85af368" containerID="ac79af97500d7b1d807b20a73e35f673d5d1b199a4225d89ca856b8133da57d9" exitCode=2 Oct 14 10:17:03 crc kubenswrapper[4698]: I1014 10:17:03.940195 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"6702faf6-e3b2-44f8-a033-ba5fd85af368","Type":"ContainerDied","Data":"ac79af97500d7b1d807b20a73e35f673d5d1b199a4225d89ca856b8133da57d9"} Oct 14 10:17:03 crc kubenswrapper[4698]: I1014 10:17:03.940243 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"6702faf6-e3b2-44f8-a033-ba5fd85af368","Type":"ContainerDied","Data":"6838ec9cc63bb63b152ad9e3706cacc4c43f2ff68c440a16808ae272e9331044"} Oct 14 10:17:03 crc kubenswrapper[4698]: I1014 10:17:03.940274 4698 scope.go:117] "RemoveContainer" containerID="ac79af97500d7b1d807b20a73e35f673d5d1b199a4225d89ca856b8133da57d9" Oct 14 10:17:03 crc kubenswrapper[4698]: I1014 10:17:03.940548 4698 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Oct 14 10:17:03 crc kubenswrapper[4698]: W1014 10:17:03.942392 4698 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod11e29685_d5d9_40ae_8715_6d7a75e252cc.slice/crio-61203d04bdfe08a8faa01618f11641a71127f0139548ae7138e833a0b9be83b7 WatchSource:0}: Error finding container 61203d04bdfe08a8faa01618f11641a71127f0139548ae7138e833a0b9be83b7: Status 404 returned error can't find the container with id 61203d04bdfe08a8faa01618f11641a71127f0139548ae7138e833a0b9be83b7 Oct 14 10:17:03 crc kubenswrapper[4698]: I1014 10:17:03.945137 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Oct 14 10:17:04 crc kubenswrapper[4698]: I1014 10:17:04.012925 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Oct 14 10:17:04 crc kubenswrapper[4698]: I1014 10:17:04.014002 4698 scope.go:117] "RemoveContainer" containerID="ac79af97500d7b1d807b20a73e35f673d5d1b199a4225d89ca856b8133da57d9" Oct 14 10:17:04 crc kubenswrapper[4698]: E1014 10:17:04.017583 4698 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ac79af97500d7b1d807b20a73e35f673d5d1b199a4225d89ca856b8133da57d9\": container with ID starting with ac79af97500d7b1d807b20a73e35f673d5d1b199a4225d89ca856b8133da57d9 not found: ID does not exist" containerID="ac79af97500d7b1d807b20a73e35f673d5d1b199a4225d89ca856b8133da57d9" Oct 14 10:17:04 crc kubenswrapper[4698]: I1014 10:17:04.017647 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ac79af97500d7b1d807b20a73e35f673d5d1b199a4225d89ca856b8133da57d9"} err="failed to get container status \"ac79af97500d7b1d807b20a73e35f673d5d1b199a4225d89ca856b8133da57d9\": rpc error: code = NotFound desc = could not find container 
\"ac79af97500d7b1d807b20a73e35f673d5d1b199a4225d89ca856b8133da57d9\": container with ID starting with ac79af97500d7b1d807b20a73e35f673d5d1b199a4225d89ca856b8133da57d9 not found: ID does not exist" Oct 14 10:17:04 crc kubenswrapper[4698]: I1014 10:17:04.033409 4698 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/kube-state-metrics-0"] Oct 14 10:17:04 crc kubenswrapper[4698]: I1014 10:17:04.054933 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Oct 14 10:17:04 crc kubenswrapper[4698]: E1014 10:17:04.055701 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6702faf6-e3b2-44f8-a033-ba5fd85af368" containerName="kube-state-metrics" Oct 14 10:17:04 crc kubenswrapper[4698]: I1014 10:17:04.055874 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="6702faf6-e3b2-44f8-a033-ba5fd85af368" containerName="kube-state-metrics" Oct 14 10:17:04 crc kubenswrapper[4698]: I1014 10:17:04.056208 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="6702faf6-e3b2-44f8-a033-ba5fd85af368" containerName="kube-state-metrics" Oct 14 10:17:04 crc kubenswrapper[4698]: I1014 10:17:04.057182 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Oct 14 10:17:04 crc kubenswrapper[4698]: I1014 10:17:04.060693 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"kube-state-metrics-tls-config" Oct 14 10:17:04 crc kubenswrapper[4698]: I1014 10:17:04.061209 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-kube-state-metrics-svc" Oct 14 10:17:04 crc kubenswrapper[4698]: I1014 10:17:04.082601 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Oct 14 10:17:04 crc kubenswrapper[4698]: I1014 10:17:04.201545 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e4f715a0-2f1f-4831-a8ce-a629264ac73f-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"e4f715a0-2f1f-4831-a8ce-a629264ac73f\") " pod="openstack/kube-state-metrics-0" Oct 14 10:17:04 crc kubenswrapper[4698]: I1014 10:17:04.202285 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/e4f715a0-2f1f-4831-a8ce-a629264ac73f-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"e4f715a0-2f1f-4831-a8ce-a629264ac73f\") " pod="openstack/kube-state-metrics-0" Oct 14 10:17:04 crc kubenswrapper[4698]: I1014 10:17:04.202312 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/e4f715a0-2f1f-4831-a8ce-a629264ac73f-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"e4f715a0-2f1f-4831-a8ce-a629264ac73f\") " pod="openstack/kube-state-metrics-0" Oct 14 10:17:04 crc kubenswrapper[4698]: I1014 10:17:04.202369 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x8kvd\" (UniqueName: 
\"kubernetes.io/projected/e4f715a0-2f1f-4831-a8ce-a629264ac73f-kube-api-access-x8kvd\") pod \"kube-state-metrics-0\" (UID: \"e4f715a0-2f1f-4831-a8ce-a629264ac73f\") " pod="openstack/kube-state-metrics-0" Oct 14 10:17:04 crc kubenswrapper[4698]: I1014 10:17:04.304591 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/e4f715a0-2f1f-4831-a8ce-a629264ac73f-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"e4f715a0-2f1f-4831-a8ce-a629264ac73f\") " pod="openstack/kube-state-metrics-0" Oct 14 10:17:04 crc kubenswrapper[4698]: I1014 10:17:04.304636 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/e4f715a0-2f1f-4831-a8ce-a629264ac73f-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"e4f715a0-2f1f-4831-a8ce-a629264ac73f\") " pod="openstack/kube-state-metrics-0" Oct 14 10:17:04 crc kubenswrapper[4698]: I1014 10:17:04.304673 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x8kvd\" (UniqueName: \"kubernetes.io/projected/e4f715a0-2f1f-4831-a8ce-a629264ac73f-kube-api-access-x8kvd\") pod \"kube-state-metrics-0\" (UID: \"e4f715a0-2f1f-4831-a8ce-a629264ac73f\") " pod="openstack/kube-state-metrics-0" Oct 14 10:17:04 crc kubenswrapper[4698]: I1014 10:17:04.304733 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e4f715a0-2f1f-4831-a8ce-a629264ac73f-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"e4f715a0-2f1f-4831-a8ce-a629264ac73f\") " pod="openstack/kube-state-metrics-0" Oct 14 10:17:04 crc kubenswrapper[4698]: I1014 10:17:04.308545 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/e4f715a0-2f1f-4831-a8ce-a629264ac73f-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"e4f715a0-2f1f-4831-a8ce-a629264ac73f\") " pod="openstack/kube-state-metrics-0" Oct 14 10:17:04 crc kubenswrapper[4698]: I1014 10:17:04.313462 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e4f715a0-2f1f-4831-a8ce-a629264ac73f-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"e4f715a0-2f1f-4831-a8ce-a629264ac73f\") " pod="openstack/kube-state-metrics-0" Oct 14 10:17:04 crc kubenswrapper[4698]: I1014 10:17:04.313860 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/e4f715a0-2f1f-4831-a8ce-a629264ac73f-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"e4f715a0-2f1f-4831-a8ce-a629264ac73f\") " pod="openstack/kube-state-metrics-0" Oct 14 10:17:04 crc kubenswrapper[4698]: I1014 10:17:04.330788 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x8kvd\" (UniqueName: \"kubernetes.io/projected/e4f715a0-2f1f-4831-a8ce-a629264ac73f-kube-api-access-x8kvd\") pod \"kube-state-metrics-0\" (UID: \"e4f715a0-2f1f-4831-a8ce-a629264ac73f\") " pod="openstack/kube-state-metrics-0" Oct 14 10:17:04 crc kubenswrapper[4698]: I1014 10:17:04.456847 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Oct 14 10:17:04 crc kubenswrapper[4698]: I1014 10:17:04.952745 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"11e29685-d5d9-40ae-8715-6d7a75e252cc","Type":"ContainerStarted","Data":"f311368470f6c6d685a689d6c19452d9ca60a550d21a7081f018ce6d3ae67de2"} Oct 14 10:17:04 crc kubenswrapper[4698]: I1014 10:17:04.953579 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"11e29685-d5d9-40ae-8715-6d7a75e252cc","Type":"ContainerStarted","Data":"5b2acaa3d8cce141915f03ba47fcdc2b586236e10428efa48718809475e3b895"} Oct 14 10:17:04 crc kubenswrapper[4698]: I1014 10:17:04.953596 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"11e29685-d5d9-40ae-8715-6d7a75e252cc","Type":"ContainerStarted","Data":"61203d04bdfe08a8faa01618f11641a71127f0139548ae7138e833a0b9be83b7"} Oct 14 10:17:04 crc kubenswrapper[4698]: I1014 10:17:04.969900 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Oct 14 10:17:04 crc kubenswrapper[4698]: W1014 10:17:04.971039 4698 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode4f715a0_2f1f_4831_a8ce_a629264ac73f.slice/crio-cb2bdec65ca66d23f0e308ba22e195df0b3b9634a6c5f2eda8422caffe60a76b WatchSource:0}: Error finding container cb2bdec65ca66d23f0e308ba22e195df0b3b9634a6c5f2eda8422caffe60a76b: Status 404 returned error can't find the container with id cb2bdec65ca66d23f0e308ba22e195df0b3b9634a6c5f2eda8422caffe60a76b Oct 14 10:17:04 crc kubenswrapper[4698]: I1014 10:17:04.991657 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=1.991626787 podStartE2EDuration="1.991626787s" podCreationTimestamp="2025-10-14 10:17:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 
UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 10:17:04.981138049 +0000 UTC m=+1206.678437475" watchObservedRunningTime="2025-10-14 10:17:04.991626787 +0000 UTC m=+1206.688926203" Oct 14 10:17:05 crc kubenswrapper[4698]: I1014 10:17:05.039581 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6702faf6-e3b2-44f8-a033-ba5fd85af368" path="/var/lib/kubelet/pods/6702faf6-e3b2-44f8-a033-ba5fd85af368/volumes" Oct 14 10:17:05 crc kubenswrapper[4698]: I1014 10:17:05.040472 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Oct 14 10:17:05 crc kubenswrapper[4698]: I1014 10:17:05.040965 4698 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="dab75fc7-2444-45a1-89c3-36aa14697520" containerName="sg-core" containerID="cri-o://547a9301662b97ec076acd5de55e87004d2bcf7f856e69b52ad23858bed94252" gracePeriod=30 Oct 14 10:17:05 crc kubenswrapper[4698]: I1014 10:17:05.041059 4698 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="dab75fc7-2444-45a1-89c3-36aa14697520" containerName="proxy-httpd" containerID="cri-o://7342987f15772bc9cda72e3b2eba7e4d19f494567d17d2fdedeb083ee629ce91" gracePeriod=30 Oct 14 10:17:05 crc kubenswrapper[4698]: I1014 10:17:05.041153 4698 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="dab75fc7-2444-45a1-89c3-36aa14697520" containerName="ceilometer-central-agent" containerID="cri-o://70a28226fe6142c9103ef6f7e138e399148d2012273cde9a0bd2764c9ae49f2e" gracePeriod=30 Oct 14 10:17:05 crc kubenswrapper[4698]: I1014 10:17:05.041059 4698 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="dab75fc7-2444-45a1-89c3-36aa14697520" containerName="ceilometer-notification-agent" 
containerID="cri-o://fddae4fa34ea2f5d2e915e4d998e0d4a93bc698506cec4fa4279342d9f92d913" gracePeriod=30 Oct 14 10:17:05 crc kubenswrapper[4698]: I1014 10:17:05.967185 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"e4f715a0-2f1f-4831-a8ce-a629264ac73f","Type":"ContainerStarted","Data":"6c9eb565cf2854e7a855ebf153662b3e5553bd911d105e17ddf6d281fb4a19ae"} Oct 14 10:17:05 crc kubenswrapper[4698]: I1014 10:17:05.967874 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Oct 14 10:17:05 crc kubenswrapper[4698]: I1014 10:17:05.967922 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"e4f715a0-2f1f-4831-a8ce-a629264ac73f","Type":"ContainerStarted","Data":"cb2bdec65ca66d23f0e308ba22e195df0b3b9634a6c5f2eda8422caffe60a76b"} Oct 14 10:17:05 crc kubenswrapper[4698]: I1014 10:17:05.970226 4698 generic.go:334] "Generic (PLEG): container finished" podID="dab75fc7-2444-45a1-89c3-36aa14697520" containerID="7342987f15772bc9cda72e3b2eba7e4d19f494567d17d2fdedeb083ee629ce91" exitCode=0 Oct 14 10:17:05 crc kubenswrapper[4698]: I1014 10:17:05.970265 4698 generic.go:334] "Generic (PLEG): container finished" podID="dab75fc7-2444-45a1-89c3-36aa14697520" containerID="547a9301662b97ec076acd5de55e87004d2bcf7f856e69b52ad23858bed94252" exitCode=2 Oct 14 10:17:05 crc kubenswrapper[4698]: I1014 10:17:05.970280 4698 generic.go:334] "Generic (PLEG): container finished" podID="dab75fc7-2444-45a1-89c3-36aa14697520" containerID="70a28226fe6142c9103ef6f7e138e399148d2012273cde9a0bd2764c9ae49f2e" exitCode=0 Oct 14 10:17:05 crc kubenswrapper[4698]: I1014 10:17:05.970281 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"dab75fc7-2444-45a1-89c3-36aa14697520","Type":"ContainerDied","Data":"7342987f15772bc9cda72e3b2eba7e4d19f494567d17d2fdedeb083ee629ce91"} Oct 14 10:17:05 crc kubenswrapper[4698]: I1014 
10:17:05.970341 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"dab75fc7-2444-45a1-89c3-36aa14697520","Type":"ContainerDied","Data":"547a9301662b97ec076acd5de55e87004d2bcf7f856e69b52ad23858bed94252"} Oct 14 10:17:05 crc kubenswrapper[4698]: I1014 10:17:05.970358 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"dab75fc7-2444-45a1-89c3-36aa14697520","Type":"ContainerDied","Data":"70a28226fe6142c9103ef6f7e138e399148d2012273cde9a0bd2764c9ae49f2e"} Oct 14 10:17:05 crc kubenswrapper[4698]: I1014 10:17:05.972189 4698 generic.go:334] "Generic (PLEG): container finished" podID="cb2cd329-5063-4f5b-8903-02fdfce19aca" containerID="5c1066f88fb585f884296272a7048feeebf77a1f3f7e3dce70e07b04d1be0841" exitCode=0 Oct 14 10:17:05 crc kubenswrapper[4698]: I1014 10:17:05.972283 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-7xsm4" event={"ID":"cb2cd329-5063-4f5b-8903-02fdfce19aca","Type":"ContainerDied","Data":"5c1066f88fb585f884296272a7048feeebf77a1f3f7e3dce70e07b04d1be0841"} Oct 14 10:17:05 crc kubenswrapper[4698]: I1014 10:17:05.975205 4698 generic.go:334] "Generic (PLEG): container finished" podID="80807b13-41b4-4c40-9acb-a84851f3595f" containerID="9822da67473007e557874afff68b151887144fb73ce6c21a5dc2d59dedbe6b2a" exitCode=0 Oct 14 10:17:05 crc kubenswrapper[4698]: I1014 10:17:05.975315 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-z5wlp" event={"ID":"80807b13-41b4-4c40-9acb-a84851f3595f","Type":"ContainerDied","Data":"9822da67473007e557874afff68b151887144fb73ce6c21a5dc2d59dedbe6b2a"} Oct 14 10:17:06 crc kubenswrapper[4698]: I1014 10:17:06.002199 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=1.607471931 podStartE2EDuration="2.002168207s" podCreationTimestamp="2025-10-14 10:17:04 +0000 UTC" 
firstStartedPulling="2025-10-14 10:17:04.980906082 +0000 UTC m=+1206.678205498" lastFinishedPulling="2025-10-14 10:17:05.375602358 +0000 UTC m=+1207.072901774" observedRunningTime="2025-10-14 10:17:05.991658088 +0000 UTC m=+1207.688957544" watchObservedRunningTime="2025-10-14 10:17:06.002168207 +0000 UTC m=+1207.699467623" Oct 14 10:17:06 crc kubenswrapper[4698]: I1014 10:17:06.232335 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Oct 14 10:17:06 crc kubenswrapper[4698]: I1014 10:17:06.232381 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Oct 14 10:17:06 crc kubenswrapper[4698]: I1014 10:17:06.261638 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Oct 14 10:17:06 crc kubenswrapper[4698]: I1014 10:17:06.487124 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Oct 14 10:17:06 crc kubenswrapper[4698]: I1014 10:17:06.503713 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Oct 14 10:17:06 crc kubenswrapper[4698]: I1014 10:17:06.503826 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Oct 14 10:17:06 crc kubenswrapper[4698]: I1014 10:17:06.552887 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-7d5fbbb8c5-r5hn4" Oct 14 10:17:06 crc kubenswrapper[4698]: I1014 10:17:06.562871 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dab75fc7-2444-45a1-89c3-36aa14697520-scripts\") pod \"dab75fc7-2444-45a1-89c3-36aa14697520\" (UID: \"dab75fc7-2444-45a1-89c3-36aa14697520\") " Oct 14 10:17:06 crc kubenswrapper[4698]: I1014 10:17:06.562999 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"kube-api-access-ztzln\" (UniqueName: \"kubernetes.io/projected/dab75fc7-2444-45a1-89c3-36aa14697520-kube-api-access-ztzln\") pod \"dab75fc7-2444-45a1-89c3-36aa14697520\" (UID: \"dab75fc7-2444-45a1-89c3-36aa14697520\") " Oct 14 10:17:06 crc kubenswrapper[4698]: I1014 10:17:06.563036 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dab75fc7-2444-45a1-89c3-36aa14697520-combined-ca-bundle\") pod \"dab75fc7-2444-45a1-89c3-36aa14697520\" (UID: \"dab75fc7-2444-45a1-89c3-36aa14697520\") " Oct 14 10:17:06 crc kubenswrapper[4698]: I1014 10:17:06.563097 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/dab75fc7-2444-45a1-89c3-36aa14697520-run-httpd\") pod \"dab75fc7-2444-45a1-89c3-36aa14697520\" (UID: \"dab75fc7-2444-45a1-89c3-36aa14697520\") " Oct 14 10:17:06 crc kubenswrapper[4698]: I1014 10:17:06.563180 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/dab75fc7-2444-45a1-89c3-36aa14697520-sg-core-conf-yaml\") pod \"dab75fc7-2444-45a1-89c3-36aa14697520\" (UID: \"dab75fc7-2444-45a1-89c3-36aa14697520\") " Oct 14 10:17:06 crc kubenswrapper[4698]: I1014 10:17:06.563194 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/dab75fc7-2444-45a1-89c3-36aa14697520-log-httpd\") pod \"dab75fc7-2444-45a1-89c3-36aa14697520\" (UID: \"dab75fc7-2444-45a1-89c3-36aa14697520\") " Oct 14 10:17:06 crc kubenswrapper[4698]: I1014 10:17:06.563211 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dab75fc7-2444-45a1-89c3-36aa14697520-config-data\") pod \"dab75fc7-2444-45a1-89c3-36aa14697520\" (UID: \"dab75fc7-2444-45a1-89c3-36aa14697520\") " Oct 14 10:17:06 crc 
kubenswrapper[4698]: I1014 10:17:06.564607 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dab75fc7-2444-45a1-89c3-36aa14697520-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "dab75fc7-2444-45a1-89c3-36aa14697520" (UID: "dab75fc7-2444-45a1-89c3-36aa14697520"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 10:17:06 crc kubenswrapper[4698]: I1014 10:17:06.565404 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dab75fc7-2444-45a1-89c3-36aa14697520-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "dab75fc7-2444-45a1-89c3-36aa14697520" (UID: "dab75fc7-2444-45a1-89c3-36aa14697520"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 10:17:06 crc kubenswrapper[4698]: I1014 10:17:06.572738 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dab75fc7-2444-45a1-89c3-36aa14697520-scripts" (OuterVolumeSpecName: "scripts") pod "dab75fc7-2444-45a1-89c3-36aa14697520" (UID: "dab75fc7-2444-45a1-89c3-36aa14697520"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 10:17:06 crc kubenswrapper[4698]: I1014 10:17:06.577688 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Oct 14 10:17:06 crc kubenswrapper[4698]: I1014 10:17:06.578106 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dab75fc7-2444-45a1-89c3-36aa14697520-kube-api-access-ztzln" (OuterVolumeSpecName: "kube-api-access-ztzln") pod "dab75fc7-2444-45a1-89c3-36aa14697520" (UID: "dab75fc7-2444-45a1-89c3-36aa14697520"). InnerVolumeSpecName "kube-api-access-ztzln". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 10:17:06 crc kubenswrapper[4698]: I1014 10:17:06.629001 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dab75fc7-2444-45a1-89c3-36aa14697520-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "dab75fc7-2444-45a1-89c3-36aa14697520" (UID: "dab75fc7-2444-45a1-89c3-36aa14697520"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 10:17:06 crc kubenswrapper[4698]: I1014 10:17:06.631240 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5865f9d689-hbs4g"] Oct 14 10:17:06 crc kubenswrapper[4698]: I1014 10:17:06.632848 4698 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5865f9d689-hbs4g" podUID="a3ae3570-e56e-4c49-ad97-56e83b3f9d01" containerName="dnsmasq-dns" containerID="cri-o://63b423609078a3f518917b93787cc20d0955c23665e5aaa1cd8fb65bb08f8d5a" gracePeriod=10 Oct 14 10:17:06 crc kubenswrapper[4698]: I1014 10:17:06.664431 4698 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/dab75fc7-2444-45a1-89c3-36aa14697520-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Oct 14 10:17:06 crc kubenswrapper[4698]: I1014 10:17:06.664463 4698 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/dab75fc7-2444-45a1-89c3-36aa14697520-log-httpd\") on node \"crc\" DevicePath \"\"" Oct 14 10:17:06 crc kubenswrapper[4698]: I1014 10:17:06.664475 4698 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dab75fc7-2444-45a1-89c3-36aa14697520-scripts\") on node \"crc\" DevicePath \"\"" Oct 14 10:17:06 crc kubenswrapper[4698]: I1014 10:17:06.664484 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ztzln\" (UniqueName: 
\"kubernetes.io/projected/dab75fc7-2444-45a1-89c3-36aa14697520-kube-api-access-ztzln\") on node \"crc\" DevicePath \"\"" Oct 14 10:17:06 crc kubenswrapper[4698]: I1014 10:17:06.664494 4698 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/dab75fc7-2444-45a1-89c3-36aa14697520-run-httpd\") on node \"crc\" DevicePath \"\"" Oct 14 10:17:06 crc kubenswrapper[4698]: I1014 10:17:06.693943 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dab75fc7-2444-45a1-89c3-36aa14697520-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "dab75fc7-2444-45a1-89c3-36aa14697520" (UID: "dab75fc7-2444-45a1-89c3-36aa14697520"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 10:17:06 crc kubenswrapper[4698]: I1014 10:17:06.772378 4698 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dab75fc7-2444-45a1-89c3-36aa14697520-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 14 10:17:06 crc kubenswrapper[4698]: I1014 10:17:06.857967 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dab75fc7-2444-45a1-89c3-36aa14697520-config-data" (OuterVolumeSpecName: "config-data") pod "dab75fc7-2444-45a1-89c3-36aa14697520" (UID: "dab75fc7-2444-45a1-89c3-36aa14697520"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 10:17:06 crc kubenswrapper[4698]: I1014 10:17:06.879239 4698 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dab75fc7-2444-45a1-89c3-36aa14697520-config-data\") on node \"crc\" DevicePath \"\"" Oct 14 10:17:07 crc kubenswrapper[4698]: I1014 10:17:07.090425 4698 generic.go:334] "Generic (PLEG): container finished" podID="dab75fc7-2444-45a1-89c3-36aa14697520" containerID="fddae4fa34ea2f5d2e915e4d998e0d4a93bc698506cec4fa4279342d9f92d913" exitCode=0 Oct 14 10:17:07 crc kubenswrapper[4698]: I1014 10:17:07.090608 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"dab75fc7-2444-45a1-89c3-36aa14697520","Type":"ContainerDied","Data":"fddae4fa34ea2f5d2e915e4d998e0d4a93bc698506cec4fa4279342d9f92d913"} Oct 14 10:17:07 crc kubenswrapper[4698]: I1014 10:17:07.090672 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Oct 14 10:17:07 crc kubenswrapper[4698]: I1014 10:17:07.090697 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"dab75fc7-2444-45a1-89c3-36aa14697520","Type":"ContainerDied","Data":"87c35b7a6280920475a2945638b1e6ba958d1af70a6dc26aedfc4ea245854a21"} Oct 14 10:17:07 crc kubenswrapper[4698]: I1014 10:17:07.090719 4698 scope.go:117] "RemoveContainer" containerID="7342987f15772bc9cda72e3b2eba7e4d19f494567d17d2fdedeb083ee629ce91" Oct 14 10:17:07 crc kubenswrapper[4698]: I1014 10:17:07.104089 4698 generic.go:334] "Generic (PLEG): container finished" podID="a3ae3570-e56e-4c49-ad97-56e83b3f9d01" containerID="63b423609078a3f518917b93787cc20d0955c23665e5aaa1cd8fb65bb08f8d5a" exitCode=0 Oct 14 10:17:07 crc kubenswrapper[4698]: I1014 10:17:07.105128 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5865f9d689-hbs4g" 
event={"ID":"a3ae3570-e56e-4c49-ad97-56e83b3f9d01","Type":"ContainerDied","Data":"63b423609078a3f518917b93787cc20d0955c23665e5aaa1cd8fb65bb08f8d5a"} Oct 14 10:17:07 crc kubenswrapper[4698]: I1014 10:17:07.135901 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Oct 14 10:17:07 crc kubenswrapper[4698]: I1014 10:17:07.136236 4698 scope.go:117] "RemoveContainer" containerID="547a9301662b97ec076acd5de55e87004d2bcf7f856e69b52ad23858bed94252" Oct 14 10:17:07 crc kubenswrapper[4698]: I1014 10:17:07.156740 4698 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Oct 14 10:17:07 crc kubenswrapper[4698]: I1014 10:17:07.165225 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Oct 14 10:17:07 crc kubenswrapper[4698]: I1014 10:17:07.168257 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Oct 14 10:17:07 crc kubenswrapper[4698]: E1014 10:17:07.168742 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dab75fc7-2444-45a1-89c3-36aa14697520" containerName="sg-core" Oct 14 10:17:07 crc kubenswrapper[4698]: I1014 10:17:07.168772 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="dab75fc7-2444-45a1-89c3-36aa14697520" containerName="sg-core" Oct 14 10:17:07 crc kubenswrapper[4698]: E1014 10:17:07.168794 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dab75fc7-2444-45a1-89c3-36aa14697520" containerName="proxy-httpd" Oct 14 10:17:07 crc kubenswrapper[4698]: I1014 10:17:07.168801 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="dab75fc7-2444-45a1-89c3-36aa14697520" containerName="proxy-httpd" Oct 14 10:17:07 crc kubenswrapper[4698]: E1014 10:17:07.168839 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dab75fc7-2444-45a1-89c3-36aa14697520" containerName="ceilometer-central-agent" Oct 14 10:17:07 crc kubenswrapper[4698]: I1014 10:17:07.168845 4698 
state_mem.go:107] "Deleted CPUSet assignment" podUID="dab75fc7-2444-45a1-89c3-36aa14697520" containerName="ceilometer-central-agent" Oct 14 10:17:07 crc kubenswrapper[4698]: E1014 10:17:07.168863 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dab75fc7-2444-45a1-89c3-36aa14697520" containerName="ceilometer-notification-agent" Oct 14 10:17:07 crc kubenswrapper[4698]: I1014 10:17:07.168869 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="dab75fc7-2444-45a1-89c3-36aa14697520" containerName="ceilometer-notification-agent" Oct 14 10:17:07 crc kubenswrapper[4698]: I1014 10:17:07.169050 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="dab75fc7-2444-45a1-89c3-36aa14697520" containerName="ceilometer-central-agent" Oct 14 10:17:07 crc kubenswrapper[4698]: I1014 10:17:07.169068 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="dab75fc7-2444-45a1-89c3-36aa14697520" containerName="sg-core" Oct 14 10:17:07 crc kubenswrapper[4698]: I1014 10:17:07.169084 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="dab75fc7-2444-45a1-89c3-36aa14697520" containerName="proxy-httpd" Oct 14 10:17:07 crc kubenswrapper[4698]: I1014 10:17:07.169103 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="dab75fc7-2444-45a1-89c3-36aa14697520" containerName="ceilometer-notification-agent" Oct 14 10:17:07 crc kubenswrapper[4698]: I1014 10:17:07.175234 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Oct 14 10:17:07 crc kubenswrapper[4698]: I1014 10:17:07.177133 4698 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5865f9d689-hbs4g" Oct 14 10:17:07 crc kubenswrapper[4698]: I1014 10:17:07.178068 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Oct 14 10:17:07 crc kubenswrapper[4698]: I1014 10:17:07.181808 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Oct 14 10:17:07 crc kubenswrapper[4698]: I1014 10:17:07.178741 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Oct 14 10:17:07 crc kubenswrapper[4698]: I1014 10:17:07.181831 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Oct 14 10:17:07 crc kubenswrapper[4698]: I1014 10:17:07.211115 4698 scope.go:117] "RemoveContainer" containerID="fddae4fa34ea2f5d2e915e4d998e0d4a93bc698506cec4fa4279342d9f92d913" Oct 14 10:17:07 crc kubenswrapper[4698]: I1014 10:17:07.297425 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q5rqs\" (UniqueName: \"kubernetes.io/projected/a3ae3570-e56e-4c49-ad97-56e83b3f9d01-kube-api-access-q5rqs\") pod \"a3ae3570-e56e-4c49-ad97-56e83b3f9d01\" (UID: \"a3ae3570-e56e-4c49-ad97-56e83b3f9d01\") " Oct 14 10:17:07 crc kubenswrapper[4698]: I1014 10:17:07.297561 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a3ae3570-e56e-4c49-ad97-56e83b3f9d01-dns-svc\") pod \"a3ae3570-e56e-4c49-ad97-56e83b3f9d01\" (UID: \"a3ae3570-e56e-4c49-ad97-56e83b3f9d01\") " Oct 14 10:17:07 crc kubenswrapper[4698]: I1014 10:17:07.297653 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a3ae3570-e56e-4c49-ad97-56e83b3f9d01-ovsdbserver-nb\") pod \"a3ae3570-e56e-4c49-ad97-56e83b3f9d01\" (UID: \"a3ae3570-e56e-4c49-ad97-56e83b3f9d01\") " Oct 14 10:17:07 crc 
kubenswrapper[4698]: I1014 10:17:07.297681 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a3ae3570-e56e-4c49-ad97-56e83b3f9d01-dns-swift-storage-0\") pod \"a3ae3570-e56e-4c49-ad97-56e83b3f9d01\" (UID: \"a3ae3570-e56e-4c49-ad97-56e83b3f9d01\") " Oct 14 10:17:07 crc kubenswrapper[4698]: I1014 10:17:07.297717 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a3ae3570-e56e-4c49-ad97-56e83b3f9d01-ovsdbserver-sb\") pod \"a3ae3570-e56e-4c49-ad97-56e83b3f9d01\" (UID: \"a3ae3570-e56e-4c49-ad97-56e83b3f9d01\") " Oct 14 10:17:07 crc kubenswrapper[4698]: I1014 10:17:07.297908 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a3ae3570-e56e-4c49-ad97-56e83b3f9d01-config\") pod \"a3ae3570-e56e-4c49-ad97-56e83b3f9d01\" (UID: \"a3ae3570-e56e-4c49-ad97-56e83b3f9d01\") " Oct 14 10:17:07 crc kubenswrapper[4698]: I1014 10:17:07.298212 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/aa1c19d3-dc91-4226-9267-8bebfeb5325c-log-httpd\") pod \"ceilometer-0\" (UID: \"aa1c19d3-dc91-4226-9267-8bebfeb5325c\") " pod="openstack/ceilometer-0" Oct 14 10:17:07 crc kubenswrapper[4698]: I1014 10:17:07.298237 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/aa1c19d3-dc91-4226-9267-8bebfeb5325c-run-httpd\") pod \"ceilometer-0\" (UID: \"aa1c19d3-dc91-4226-9267-8bebfeb5325c\") " pod="openstack/ceilometer-0" Oct 14 10:17:07 crc kubenswrapper[4698]: I1014 10:17:07.298260 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: 
\"kubernetes.io/secret/aa1c19d3-dc91-4226-9267-8bebfeb5325c-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"aa1c19d3-dc91-4226-9267-8bebfeb5325c\") " pod="openstack/ceilometer-0" Oct 14 10:17:07 crc kubenswrapper[4698]: I1014 10:17:07.298310 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/aa1c19d3-dc91-4226-9267-8bebfeb5325c-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"aa1c19d3-dc91-4226-9267-8bebfeb5325c\") " pod="openstack/ceilometer-0" Oct 14 10:17:07 crc kubenswrapper[4698]: I1014 10:17:07.298482 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/aa1c19d3-dc91-4226-9267-8bebfeb5325c-scripts\") pod \"ceilometer-0\" (UID: \"aa1c19d3-dc91-4226-9267-8bebfeb5325c\") " pod="openstack/ceilometer-0" Oct 14 10:17:07 crc kubenswrapper[4698]: I1014 10:17:07.298510 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7nj74\" (UniqueName: \"kubernetes.io/projected/aa1c19d3-dc91-4226-9267-8bebfeb5325c-kube-api-access-7nj74\") pod \"ceilometer-0\" (UID: \"aa1c19d3-dc91-4226-9267-8bebfeb5325c\") " pod="openstack/ceilometer-0" Oct 14 10:17:07 crc kubenswrapper[4698]: I1014 10:17:07.298542 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aa1c19d3-dc91-4226-9267-8bebfeb5325c-config-data\") pod \"ceilometer-0\" (UID: \"aa1c19d3-dc91-4226-9267-8bebfeb5325c\") " pod="openstack/ceilometer-0" Oct 14 10:17:07 crc kubenswrapper[4698]: I1014 10:17:07.298558 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aa1c19d3-dc91-4226-9267-8bebfeb5325c-combined-ca-bundle\") pod \"ceilometer-0\" (UID: 
\"aa1c19d3-dc91-4226-9267-8bebfeb5325c\") " pod="openstack/ceilometer-0" Oct 14 10:17:07 crc kubenswrapper[4698]: I1014 10:17:07.298875 4698 scope.go:117] "RemoveContainer" containerID="70a28226fe6142c9103ef6f7e138e399148d2012273cde9a0bd2764c9ae49f2e" Oct 14 10:17:07 crc kubenswrapper[4698]: I1014 10:17:07.303631 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a3ae3570-e56e-4c49-ad97-56e83b3f9d01-kube-api-access-q5rqs" (OuterVolumeSpecName: "kube-api-access-q5rqs") pod "a3ae3570-e56e-4c49-ad97-56e83b3f9d01" (UID: "a3ae3570-e56e-4c49-ad97-56e83b3f9d01"). InnerVolumeSpecName "kube-api-access-q5rqs". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 10:17:07 crc kubenswrapper[4698]: I1014 10:17:07.372869 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a3ae3570-e56e-4c49-ad97-56e83b3f9d01-config" (OuterVolumeSpecName: "config") pod "a3ae3570-e56e-4c49-ad97-56e83b3f9d01" (UID: "a3ae3570-e56e-4c49-ad97-56e83b3f9d01"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 10:17:07 crc kubenswrapper[4698]: I1014 10:17:07.373038 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a3ae3570-e56e-4c49-ad97-56e83b3f9d01-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "a3ae3570-e56e-4c49-ad97-56e83b3f9d01" (UID: "a3ae3570-e56e-4c49-ad97-56e83b3f9d01"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 10:17:07 crc kubenswrapper[4698]: I1014 10:17:07.374969 4698 scope.go:117] "RemoveContainer" containerID="7342987f15772bc9cda72e3b2eba7e4d19f494567d17d2fdedeb083ee629ce91" Oct 14 10:17:07 crc kubenswrapper[4698]: E1014 10:17:07.375939 4698 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7342987f15772bc9cda72e3b2eba7e4d19f494567d17d2fdedeb083ee629ce91\": container with ID starting with 7342987f15772bc9cda72e3b2eba7e4d19f494567d17d2fdedeb083ee629ce91 not found: ID does not exist" containerID="7342987f15772bc9cda72e3b2eba7e4d19f494567d17d2fdedeb083ee629ce91" Oct 14 10:17:07 crc kubenswrapper[4698]: I1014 10:17:07.375979 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7342987f15772bc9cda72e3b2eba7e4d19f494567d17d2fdedeb083ee629ce91"} err="failed to get container status \"7342987f15772bc9cda72e3b2eba7e4d19f494567d17d2fdedeb083ee629ce91\": rpc error: code = NotFound desc = could not find container \"7342987f15772bc9cda72e3b2eba7e4d19f494567d17d2fdedeb083ee629ce91\": container with ID starting with 7342987f15772bc9cda72e3b2eba7e4d19f494567d17d2fdedeb083ee629ce91 not found: ID does not exist" Oct 14 10:17:07 crc kubenswrapper[4698]: I1014 10:17:07.376006 4698 scope.go:117] "RemoveContainer" containerID="547a9301662b97ec076acd5de55e87004d2bcf7f856e69b52ad23858bed94252" Oct 14 10:17:07 crc kubenswrapper[4698]: E1014 10:17:07.376314 4698 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"547a9301662b97ec076acd5de55e87004d2bcf7f856e69b52ad23858bed94252\": container with ID starting with 547a9301662b97ec076acd5de55e87004d2bcf7f856e69b52ad23858bed94252 not found: ID does not exist" containerID="547a9301662b97ec076acd5de55e87004d2bcf7f856e69b52ad23858bed94252" Oct 14 10:17:07 crc kubenswrapper[4698]: I1014 10:17:07.376337 
4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"547a9301662b97ec076acd5de55e87004d2bcf7f856e69b52ad23858bed94252"} err="failed to get container status \"547a9301662b97ec076acd5de55e87004d2bcf7f856e69b52ad23858bed94252\": rpc error: code = NotFound desc = could not find container \"547a9301662b97ec076acd5de55e87004d2bcf7f856e69b52ad23858bed94252\": container with ID starting with 547a9301662b97ec076acd5de55e87004d2bcf7f856e69b52ad23858bed94252 not found: ID does not exist" Oct 14 10:17:07 crc kubenswrapper[4698]: I1014 10:17:07.376352 4698 scope.go:117] "RemoveContainer" containerID="fddae4fa34ea2f5d2e915e4d998e0d4a93bc698506cec4fa4279342d9f92d913" Oct 14 10:17:07 crc kubenswrapper[4698]: E1014 10:17:07.376558 4698 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fddae4fa34ea2f5d2e915e4d998e0d4a93bc698506cec4fa4279342d9f92d913\": container with ID starting with fddae4fa34ea2f5d2e915e4d998e0d4a93bc698506cec4fa4279342d9f92d913 not found: ID does not exist" containerID="fddae4fa34ea2f5d2e915e4d998e0d4a93bc698506cec4fa4279342d9f92d913" Oct 14 10:17:07 crc kubenswrapper[4698]: I1014 10:17:07.376576 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fddae4fa34ea2f5d2e915e4d998e0d4a93bc698506cec4fa4279342d9f92d913"} err="failed to get container status \"fddae4fa34ea2f5d2e915e4d998e0d4a93bc698506cec4fa4279342d9f92d913\": rpc error: code = NotFound desc = could not find container \"fddae4fa34ea2f5d2e915e4d998e0d4a93bc698506cec4fa4279342d9f92d913\": container with ID starting with fddae4fa34ea2f5d2e915e4d998e0d4a93bc698506cec4fa4279342d9f92d913 not found: ID does not exist" Oct 14 10:17:07 crc kubenswrapper[4698]: I1014 10:17:07.376590 4698 scope.go:117] "RemoveContainer" containerID="70a28226fe6142c9103ef6f7e138e399148d2012273cde9a0bd2764c9ae49f2e" Oct 14 10:17:07 crc kubenswrapper[4698]: E1014 
10:17:07.376746 4698 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"70a28226fe6142c9103ef6f7e138e399148d2012273cde9a0bd2764c9ae49f2e\": container with ID starting with 70a28226fe6142c9103ef6f7e138e399148d2012273cde9a0bd2764c9ae49f2e not found: ID does not exist" containerID="70a28226fe6142c9103ef6f7e138e399148d2012273cde9a0bd2764c9ae49f2e" Oct 14 10:17:07 crc kubenswrapper[4698]: I1014 10:17:07.376775 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"70a28226fe6142c9103ef6f7e138e399148d2012273cde9a0bd2764c9ae49f2e"} err="failed to get container status \"70a28226fe6142c9103ef6f7e138e399148d2012273cde9a0bd2764c9ae49f2e\": rpc error: code = NotFound desc = could not find container \"70a28226fe6142c9103ef6f7e138e399148d2012273cde9a0bd2764c9ae49f2e\": container with ID starting with 70a28226fe6142c9103ef6f7e138e399148d2012273cde9a0bd2764c9ae49f2e not found: ID does not exist" Oct 14 10:17:07 crc kubenswrapper[4698]: I1014 10:17:07.402603 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7nj74\" (UniqueName: \"kubernetes.io/projected/aa1c19d3-dc91-4226-9267-8bebfeb5325c-kube-api-access-7nj74\") pod \"ceilometer-0\" (UID: \"aa1c19d3-dc91-4226-9267-8bebfeb5325c\") " pod="openstack/ceilometer-0" Oct 14 10:17:07 crc kubenswrapper[4698]: I1014 10:17:07.402663 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aa1c19d3-dc91-4226-9267-8bebfeb5325c-config-data\") pod \"ceilometer-0\" (UID: \"aa1c19d3-dc91-4226-9267-8bebfeb5325c\") " pod="openstack/ceilometer-0" Oct 14 10:17:07 crc kubenswrapper[4698]: I1014 10:17:07.402679 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aa1c19d3-dc91-4226-9267-8bebfeb5325c-combined-ca-bundle\") pod 
\"ceilometer-0\" (UID: \"aa1c19d3-dc91-4226-9267-8bebfeb5325c\") " pod="openstack/ceilometer-0" Oct 14 10:17:07 crc kubenswrapper[4698]: I1014 10:17:07.402717 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/aa1c19d3-dc91-4226-9267-8bebfeb5325c-log-httpd\") pod \"ceilometer-0\" (UID: \"aa1c19d3-dc91-4226-9267-8bebfeb5325c\") " pod="openstack/ceilometer-0" Oct 14 10:17:07 crc kubenswrapper[4698]: I1014 10:17:07.402733 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/aa1c19d3-dc91-4226-9267-8bebfeb5325c-run-httpd\") pod \"ceilometer-0\" (UID: \"aa1c19d3-dc91-4226-9267-8bebfeb5325c\") " pod="openstack/ceilometer-0" Oct 14 10:17:07 crc kubenswrapper[4698]: I1014 10:17:07.402752 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/aa1c19d3-dc91-4226-9267-8bebfeb5325c-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"aa1c19d3-dc91-4226-9267-8bebfeb5325c\") " pod="openstack/ceilometer-0" Oct 14 10:17:07 crc kubenswrapper[4698]: I1014 10:17:07.402803 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/aa1c19d3-dc91-4226-9267-8bebfeb5325c-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"aa1c19d3-dc91-4226-9267-8bebfeb5325c\") " pod="openstack/ceilometer-0" Oct 14 10:17:07 crc kubenswrapper[4698]: I1014 10:17:07.402910 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/aa1c19d3-dc91-4226-9267-8bebfeb5325c-scripts\") pod \"ceilometer-0\" (UID: \"aa1c19d3-dc91-4226-9267-8bebfeb5325c\") " pod="openstack/ceilometer-0" Oct 14 10:17:07 crc kubenswrapper[4698]: I1014 10:17:07.402966 4698 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" 
(UniqueName: \"kubernetes.io/configmap/a3ae3570-e56e-4c49-ad97-56e83b3f9d01-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Oct 14 10:17:07 crc kubenswrapper[4698]: I1014 10:17:07.402981 4698 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a3ae3570-e56e-4c49-ad97-56e83b3f9d01-config\") on node \"crc\" DevicePath \"\"" Oct 14 10:17:07 crc kubenswrapper[4698]: I1014 10:17:07.402992 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q5rqs\" (UniqueName: \"kubernetes.io/projected/a3ae3570-e56e-4c49-ad97-56e83b3f9d01-kube-api-access-q5rqs\") on node \"crc\" DevicePath \"\"" Oct 14 10:17:07 crc kubenswrapper[4698]: I1014 10:17:07.403409 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/aa1c19d3-dc91-4226-9267-8bebfeb5325c-run-httpd\") pod \"ceilometer-0\" (UID: \"aa1c19d3-dc91-4226-9267-8bebfeb5325c\") " pod="openstack/ceilometer-0" Oct 14 10:17:07 crc kubenswrapper[4698]: I1014 10:17:07.403721 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/aa1c19d3-dc91-4226-9267-8bebfeb5325c-log-httpd\") pod \"ceilometer-0\" (UID: \"aa1c19d3-dc91-4226-9267-8bebfeb5325c\") " pod="openstack/ceilometer-0" Oct 14 10:17:07 crc kubenswrapper[4698]: I1014 10:17:07.410556 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/aa1c19d3-dc91-4226-9267-8bebfeb5325c-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"aa1c19d3-dc91-4226-9267-8bebfeb5325c\") " pod="openstack/ceilometer-0" Oct 14 10:17:07 crc kubenswrapper[4698]: I1014 10:17:07.410585 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aa1c19d3-dc91-4226-9267-8bebfeb5325c-combined-ca-bundle\") pod \"ceilometer-0\" (UID: 
\"aa1c19d3-dc91-4226-9267-8bebfeb5325c\") " pod="openstack/ceilometer-0" Oct 14 10:17:07 crc kubenswrapper[4698]: I1014 10:17:07.410609 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/aa1c19d3-dc91-4226-9267-8bebfeb5325c-scripts\") pod \"ceilometer-0\" (UID: \"aa1c19d3-dc91-4226-9267-8bebfeb5325c\") " pod="openstack/ceilometer-0" Oct 14 10:17:07 crc kubenswrapper[4698]: I1014 10:17:07.410680 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aa1c19d3-dc91-4226-9267-8bebfeb5325c-config-data\") pod \"ceilometer-0\" (UID: \"aa1c19d3-dc91-4226-9267-8bebfeb5325c\") " pod="openstack/ceilometer-0" Oct 14 10:17:07 crc kubenswrapper[4698]: I1014 10:17:07.412288 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/aa1c19d3-dc91-4226-9267-8bebfeb5325c-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"aa1c19d3-dc91-4226-9267-8bebfeb5325c\") " pod="openstack/ceilometer-0" Oct 14 10:17:07 crc kubenswrapper[4698]: I1014 10:17:07.420545 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7nj74\" (UniqueName: \"kubernetes.io/projected/aa1c19d3-dc91-4226-9267-8bebfeb5325c-kube-api-access-7nj74\") pod \"ceilometer-0\" (UID: \"aa1c19d3-dc91-4226-9267-8bebfeb5325c\") " pod="openstack/ceilometer-0" Oct 14 10:17:07 crc kubenswrapper[4698]: I1014 10:17:07.429534 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a3ae3570-e56e-4c49-ad97-56e83b3f9d01-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "a3ae3570-e56e-4c49-ad97-56e83b3f9d01" (UID: "a3ae3570-e56e-4c49-ad97-56e83b3f9d01"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 10:17:07 crc kubenswrapper[4698]: I1014 10:17:07.473921 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a3ae3570-e56e-4c49-ad97-56e83b3f9d01-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "a3ae3570-e56e-4c49-ad97-56e83b3f9d01" (UID: "a3ae3570-e56e-4c49-ad97-56e83b3f9d01"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 10:17:07 crc kubenswrapper[4698]: I1014 10:17:07.475008 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a3ae3570-e56e-4c49-ad97-56e83b3f9d01-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "a3ae3570-e56e-4c49-ad97-56e83b3f9d01" (UID: "a3ae3570-e56e-4c49-ad97-56e83b3f9d01"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 10:17:07 crc kubenswrapper[4698]: I1014 10:17:07.476792 4698 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-7xsm4" Oct 14 10:17:07 crc kubenswrapper[4698]: I1014 10:17:07.504826 4698 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a3ae3570-e56e-4c49-ad97-56e83b3f9d01-dns-svc\") on node \"crc\" DevicePath \"\"" Oct 14 10:17:07 crc kubenswrapper[4698]: I1014 10:17:07.504879 4698 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a3ae3570-e56e-4c49-ad97-56e83b3f9d01-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Oct 14 10:17:07 crc kubenswrapper[4698]: I1014 10:17:07.504899 4698 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a3ae3570-e56e-4c49-ad97-56e83b3f9d01-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Oct 14 10:17:07 crc kubenswrapper[4698]: I1014 10:17:07.514740 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Oct 14 10:17:07 crc kubenswrapper[4698]: I1014 10:17:07.545798 4698 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="dd0bc5b3-8722-4208-937a-e3b676267e9a" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.204:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Oct 14 10:17:07 crc kubenswrapper[4698]: I1014 10:17:07.589126 4698 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="dd0bc5b3-8722-4208-937a-e3b676267e9a" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.204:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Oct 14 10:17:07 crc kubenswrapper[4698]: I1014 10:17:07.606352 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/cb2cd329-5063-4f5b-8903-02fdfce19aca-config-data\") pod \"cb2cd329-5063-4f5b-8903-02fdfce19aca\" (UID: \"cb2cd329-5063-4f5b-8903-02fdfce19aca\") " Oct 14 10:17:07 crc kubenswrapper[4698]: I1014 10:17:07.606497 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cb2cd329-5063-4f5b-8903-02fdfce19aca-combined-ca-bundle\") pod \"cb2cd329-5063-4f5b-8903-02fdfce19aca\" (UID: \"cb2cd329-5063-4f5b-8903-02fdfce19aca\") " Oct 14 10:17:07 crc kubenswrapper[4698]: I1014 10:17:07.606659 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4sv4d\" (UniqueName: \"kubernetes.io/projected/cb2cd329-5063-4f5b-8903-02fdfce19aca-kube-api-access-4sv4d\") pod \"cb2cd329-5063-4f5b-8903-02fdfce19aca\" (UID: \"cb2cd329-5063-4f5b-8903-02fdfce19aca\") " Oct 14 10:17:07 crc kubenswrapper[4698]: I1014 10:17:07.606746 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cb2cd329-5063-4f5b-8903-02fdfce19aca-scripts\") pod \"cb2cd329-5063-4f5b-8903-02fdfce19aca\" (UID: \"cb2cd329-5063-4f5b-8903-02fdfce19aca\") " Oct 14 10:17:07 crc kubenswrapper[4698]: I1014 10:17:07.611533 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cb2cd329-5063-4f5b-8903-02fdfce19aca-scripts" (OuterVolumeSpecName: "scripts") pod "cb2cd329-5063-4f5b-8903-02fdfce19aca" (UID: "cb2cd329-5063-4f5b-8903-02fdfce19aca"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 10:17:07 crc kubenswrapper[4698]: I1014 10:17:07.616816 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cb2cd329-5063-4f5b-8903-02fdfce19aca-kube-api-access-4sv4d" (OuterVolumeSpecName: "kube-api-access-4sv4d") pod "cb2cd329-5063-4f5b-8903-02fdfce19aca" (UID: "cb2cd329-5063-4f5b-8903-02fdfce19aca"). InnerVolumeSpecName "kube-api-access-4sv4d". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 10:17:07 crc kubenswrapper[4698]: I1014 10:17:07.658334 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cb2cd329-5063-4f5b-8903-02fdfce19aca-config-data" (OuterVolumeSpecName: "config-data") pod "cb2cd329-5063-4f5b-8903-02fdfce19aca" (UID: "cb2cd329-5063-4f5b-8903-02fdfce19aca"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 10:17:07 crc kubenswrapper[4698]: I1014 10:17:07.696412 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cb2cd329-5063-4f5b-8903-02fdfce19aca-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "cb2cd329-5063-4f5b-8903-02fdfce19aca" (UID: "cb2cd329-5063-4f5b-8903-02fdfce19aca"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 10:17:07 crc kubenswrapper[4698]: I1014 10:17:07.710842 4698 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cb2cd329-5063-4f5b-8903-02fdfce19aca-config-data\") on node \"crc\" DevicePath \"\"" Oct 14 10:17:07 crc kubenswrapper[4698]: I1014 10:17:07.710871 4698 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cb2cd329-5063-4f5b-8903-02fdfce19aca-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 14 10:17:07 crc kubenswrapper[4698]: I1014 10:17:07.710891 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4sv4d\" (UniqueName: \"kubernetes.io/projected/cb2cd329-5063-4f5b-8903-02fdfce19aca-kube-api-access-4sv4d\") on node \"crc\" DevicePath \"\"" Oct 14 10:17:07 crc kubenswrapper[4698]: I1014 10:17:07.710901 4698 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cb2cd329-5063-4f5b-8903-02fdfce19aca-scripts\") on node \"crc\" DevicePath \"\"" Oct 14 10:17:07 crc kubenswrapper[4698]: I1014 10:17:07.793646 4698 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-z5wlp" Oct 14 10:17:07 crc kubenswrapper[4698]: I1014 10:17:07.917673 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/80807b13-41b4-4c40-9acb-a84851f3595f-config-data\") pod \"80807b13-41b4-4c40-9acb-a84851f3595f\" (UID: \"80807b13-41b4-4c40-9acb-a84851f3595f\") " Oct 14 10:17:07 crc kubenswrapper[4698]: I1014 10:17:07.917723 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/80807b13-41b4-4c40-9acb-a84851f3595f-scripts\") pod \"80807b13-41b4-4c40-9acb-a84851f3595f\" (UID: \"80807b13-41b4-4c40-9acb-a84851f3595f\") " Oct 14 10:17:07 crc kubenswrapper[4698]: I1014 10:17:07.917843 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/80807b13-41b4-4c40-9acb-a84851f3595f-combined-ca-bundle\") pod \"80807b13-41b4-4c40-9acb-a84851f3595f\" (UID: \"80807b13-41b4-4c40-9acb-a84851f3595f\") " Oct 14 10:17:07 crc kubenswrapper[4698]: I1014 10:17:07.917920 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7h84v\" (UniqueName: \"kubernetes.io/projected/80807b13-41b4-4c40-9acb-a84851f3595f-kube-api-access-7h84v\") pod \"80807b13-41b4-4c40-9acb-a84851f3595f\" (UID: \"80807b13-41b4-4c40-9acb-a84851f3595f\") " Oct 14 10:17:07 crc kubenswrapper[4698]: I1014 10:17:07.922903 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/80807b13-41b4-4c40-9acb-a84851f3595f-kube-api-access-7h84v" (OuterVolumeSpecName: "kube-api-access-7h84v") pod "80807b13-41b4-4c40-9acb-a84851f3595f" (UID: "80807b13-41b4-4c40-9acb-a84851f3595f"). InnerVolumeSpecName "kube-api-access-7h84v". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 10:17:07 crc kubenswrapper[4698]: I1014 10:17:07.932885 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/80807b13-41b4-4c40-9acb-a84851f3595f-scripts" (OuterVolumeSpecName: "scripts") pod "80807b13-41b4-4c40-9acb-a84851f3595f" (UID: "80807b13-41b4-4c40-9acb-a84851f3595f"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 10:17:07 crc kubenswrapper[4698]: I1014 10:17:07.953833 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/80807b13-41b4-4c40-9acb-a84851f3595f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "80807b13-41b4-4c40-9acb-a84851f3595f" (UID: "80807b13-41b4-4c40-9acb-a84851f3595f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 10:17:07 crc kubenswrapper[4698]: I1014 10:17:07.985439 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/80807b13-41b4-4c40-9acb-a84851f3595f-config-data" (OuterVolumeSpecName: "config-data") pod "80807b13-41b4-4c40-9acb-a84851f3595f" (UID: "80807b13-41b4-4c40-9acb-a84851f3595f"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 10:17:08 crc kubenswrapper[4698]: I1014 10:17:08.019970 4698 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/80807b13-41b4-4c40-9acb-a84851f3595f-config-data\") on node \"crc\" DevicePath \"\"" Oct 14 10:17:08 crc kubenswrapper[4698]: I1014 10:17:08.020003 4698 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/80807b13-41b4-4c40-9acb-a84851f3595f-scripts\") on node \"crc\" DevicePath \"\"" Oct 14 10:17:08 crc kubenswrapper[4698]: I1014 10:17:08.020013 4698 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/80807b13-41b4-4c40-9acb-a84851f3595f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 14 10:17:08 crc kubenswrapper[4698]: I1014 10:17:08.020025 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7h84v\" (UniqueName: \"kubernetes.io/projected/80807b13-41b4-4c40-9acb-a84851f3595f-kube-api-access-7h84v\") on node \"crc\" DevicePath \"\"" Oct 14 10:17:08 crc kubenswrapper[4698]: W1014 10:17:08.026127 4698 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podaa1c19d3_dc91_4226_9267_8bebfeb5325c.slice/crio-9b5ebfaef9e9fdc0ef98b94e2b01ff3a8df31c15ebef0350663040477cddfbf8 WatchSource:0}: Error finding container 9b5ebfaef9e9fdc0ef98b94e2b01ff3a8df31c15ebef0350663040477cddfbf8: Status 404 returned error can't find the container with id 9b5ebfaef9e9fdc0ef98b94e2b01ff3a8df31c15ebef0350663040477cddfbf8 Oct 14 10:17:08 crc kubenswrapper[4698]: I1014 10:17:08.027084 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Oct 14 10:17:08 crc kubenswrapper[4698]: I1014 10:17:08.126049 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-z5wlp" 
event={"ID":"80807b13-41b4-4c40-9acb-a84851f3595f","Type":"ContainerDied","Data":"06a7dfa616990041c62371a507469079b68044c138a6f1db431f63d5e28f8865"} Oct 14 10:17:08 crc kubenswrapper[4698]: I1014 10:17:08.126096 4698 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="06a7dfa616990041c62371a507469079b68044c138a6f1db431f63d5e28f8865" Oct 14 10:17:08 crc kubenswrapper[4698]: I1014 10:17:08.126170 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-z5wlp" Oct 14 10:17:08 crc kubenswrapper[4698]: I1014 10:17:08.134138 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-0"] Oct 14 10:17:08 crc kubenswrapper[4698]: E1014 10:17:08.134674 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a3ae3570-e56e-4c49-ad97-56e83b3f9d01" containerName="init" Oct 14 10:17:08 crc kubenswrapper[4698]: I1014 10:17:08.134694 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="a3ae3570-e56e-4c49-ad97-56e83b3f9d01" containerName="init" Oct 14 10:17:08 crc kubenswrapper[4698]: E1014 10:17:08.134721 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cb2cd329-5063-4f5b-8903-02fdfce19aca" containerName="nova-manage" Oct 14 10:17:08 crc kubenswrapper[4698]: I1014 10:17:08.134727 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="cb2cd329-5063-4f5b-8903-02fdfce19aca" containerName="nova-manage" Oct 14 10:17:08 crc kubenswrapper[4698]: E1014 10:17:08.134736 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="80807b13-41b4-4c40-9acb-a84851f3595f" containerName="nova-cell1-conductor-db-sync" Oct 14 10:17:08 crc kubenswrapper[4698]: I1014 10:17:08.134742 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="80807b13-41b4-4c40-9acb-a84851f3595f" containerName="nova-cell1-conductor-db-sync" Oct 14 10:17:08 crc kubenswrapper[4698]: E1014 10:17:08.134785 4698 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="a3ae3570-e56e-4c49-ad97-56e83b3f9d01" containerName="dnsmasq-dns" Oct 14 10:17:08 crc kubenswrapper[4698]: I1014 10:17:08.134794 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="a3ae3570-e56e-4c49-ad97-56e83b3f9d01" containerName="dnsmasq-dns" Oct 14 10:17:08 crc kubenswrapper[4698]: I1014 10:17:08.135007 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="cb2cd329-5063-4f5b-8903-02fdfce19aca" containerName="nova-manage" Oct 14 10:17:08 crc kubenswrapper[4698]: I1014 10:17:08.135036 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="a3ae3570-e56e-4c49-ad97-56e83b3f9d01" containerName="dnsmasq-dns" Oct 14 10:17:08 crc kubenswrapper[4698]: I1014 10:17:08.135054 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="80807b13-41b4-4c40-9acb-a84851f3595f" containerName="nova-cell1-conductor-db-sync" Oct 14 10:17:08 crc kubenswrapper[4698]: I1014 10:17:08.135928 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Oct 14 10:17:08 crc kubenswrapper[4698]: I1014 10:17:08.137834 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-7xsm4" event={"ID":"cb2cd329-5063-4f5b-8903-02fdfce19aca","Type":"ContainerDied","Data":"06d63b45bbfed1d4fc9076f6f2c07f46647a916b96dde74012eb1adf1d6a4817"} Oct 14 10:17:08 crc kubenswrapper[4698]: I1014 10:17:08.137876 4698 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="06d63b45bbfed1d4fc9076f6f2c07f46647a916b96dde74012eb1adf1d6a4817" Oct 14 10:17:08 crc kubenswrapper[4698]: I1014 10:17:08.137969 4698 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-7xsm4" Oct 14 10:17:08 crc kubenswrapper[4698]: I1014 10:17:08.141058 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"aa1c19d3-dc91-4226-9267-8bebfeb5325c","Type":"ContainerStarted","Data":"9b5ebfaef9e9fdc0ef98b94e2b01ff3a8df31c15ebef0350663040477cddfbf8"} Oct 14 10:17:08 crc kubenswrapper[4698]: I1014 10:17:08.154579 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Oct 14 10:17:08 crc kubenswrapper[4698]: I1014 10:17:08.156608 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5865f9d689-hbs4g" Oct 14 10:17:08 crc kubenswrapper[4698]: I1014 10:17:08.156725 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5865f9d689-hbs4g" event={"ID":"a3ae3570-e56e-4c49-ad97-56e83b3f9d01","Type":"ContainerDied","Data":"8615d86778f8674450679f42b414dd4556bfeb521e23c6b895ee2e7000774ccf"} Oct 14 10:17:08 crc kubenswrapper[4698]: I1014 10:17:08.156772 4698 scope.go:117] "RemoveContainer" containerID="63b423609078a3f518917b93787cc20d0955c23665e5aaa1cd8fb65bb08f8d5a" Oct 14 10:17:08 crc kubenswrapper[4698]: I1014 10:17:08.250722 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5865f9d689-hbs4g"] Oct 14 10:17:08 crc kubenswrapper[4698]: I1014 10:17:08.253717 4698 scope.go:117] "RemoveContainer" containerID="065f36881f9a5193382c59d72d246a875b15685c3336208ee962e71910090f9e" Oct 14 10:17:08 crc kubenswrapper[4698]: I1014 10:17:08.259179 4698 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5865f9d689-hbs4g"] Oct 14 10:17:08 crc kubenswrapper[4698]: I1014 10:17:08.324941 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Oct 14 10:17:08 crc kubenswrapper[4698]: I1014 10:17:08.325698 4698 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openstack/nova-api-0" podUID="dd0bc5b3-8722-4208-937a-e3b676267e9a" containerName="nova-api-log" containerID="cri-o://d858a79ca85125590dfb40bda35a1b1e2ff96dd31458af8988d5931c506f38eb" gracePeriod=30 Oct 14 10:17:08 crc kubenswrapper[4698]: I1014 10:17:08.325780 4698 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="dd0bc5b3-8722-4208-937a-e3b676267e9a" containerName="nova-api-api" containerID="cri-o://3a2c5f4d1ccf972110797939d50dbd040f7f77f2fb2ec41556796cd3a456f674" gracePeriod=30 Oct 14 10:17:08 crc kubenswrapper[4698]: I1014 10:17:08.337262 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Oct 14 10:17:08 crc kubenswrapper[4698]: I1014 10:17:08.346711 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qtr6z\" (UniqueName: \"kubernetes.io/projected/e4bcef82-1d46-45b0-b831-7c575c80b1f4-kube-api-access-qtr6z\") pod \"nova-cell1-conductor-0\" (UID: \"e4bcef82-1d46-45b0-b831-7c575c80b1f4\") " pod="openstack/nova-cell1-conductor-0" Oct 14 10:17:08 crc kubenswrapper[4698]: I1014 10:17:08.347855 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e4bcef82-1d46-45b0-b831-7c575c80b1f4-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"e4bcef82-1d46-45b0-b831-7c575c80b1f4\") " pod="openstack/nova-cell1-conductor-0" Oct 14 10:17:08 crc kubenswrapper[4698]: I1014 10:17:08.347924 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e4bcef82-1d46-45b0-b831-7c575c80b1f4-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"e4bcef82-1d46-45b0-b831-7c575c80b1f4\") " pod="openstack/nova-cell1-conductor-0" Oct 14 10:17:08 crc kubenswrapper[4698]: I1014 10:17:08.368305 4698 kubelet.go:2437] "SyncLoop DELETE" 
source="api" pods=["openstack/nova-metadata-0"] Oct 14 10:17:08 crc kubenswrapper[4698]: I1014 10:17:08.368604 4698 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="11e29685-d5d9-40ae-8715-6d7a75e252cc" containerName="nova-metadata-log" containerID="cri-o://5b2acaa3d8cce141915f03ba47fcdc2b586236e10428efa48718809475e3b895" gracePeriod=30 Oct 14 10:17:08 crc kubenswrapper[4698]: I1014 10:17:08.368720 4698 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="11e29685-d5d9-40ae-8715-6d7a75e252cc" containerName="nova-metadata-metadata" containerID="cri-o://f311368470f6c6d685a689d6c19452d9ca60a550d21a7081f018ce6d3ae67de2" gracePeriod=30 Oct 14 10:17:08 crc kubenswrapper[4698]: I1014 10:17:08.406734 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Oct 14 10:17:08 crc kubenswrapper[4698]: I1014 10:17:08.406813 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Oct 14 10:17:08 crc kubenswrapper[4698]: I1014 10:17:08.450082 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qtr6z\" (UniqueName: \"kubernetes.io/projected/e4bcef82-1d46-45b0-b831-7c575c80b1f4-kube-api-access-qtr6z\") pod \"nova-cell1-conductor-0\" (UID: \"e4bcef82-1d46-45b0-b831-7c575c80b1f4\") " pod="openstack/nova-cell1-conductor-0" Oct 14 10:17:08 crc kubenswrapper[4698]: I1014 10:17:08.450455 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e4bcef82-1d46-45b0-b831-7c575c80b1f4-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"e4bcef82-1d46-45b0-b831-7c575c80b1f4\") " pod="openstack/nova-cell1-conductor-0" Oct 14 10:17:08 crc kubenswrapper[4698]: I1014 10:17:08.450549 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e4bcef82-1d46-45b0-b831-7c575c80b1f4-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"e4bcef82-1d46-45b0-b831-7c575c80b1f4\") " pod="openstack/nova-cell1-conductor-0" Oct 14 10:17:08 crc kubenswrapper[4698]: I1014 10:17:08.456707 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e4bcef82-1d46-45b0-b831-7c575c80b1f4-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"e4bcef82-1d46-45b0-b831-7c575c80b1f4\") " pod="openstack/nova-cell1-conductor-0" Oct 14 10:17:08 crc kubenswrapper[4698]: I1014 10:17:08.458568 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e4bcef82-1d46-45b0-b831-7c575c80b1f4-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"e4bcef82-1d46-45b0-b831-7c575c80b1f4\") " pod="openstack/nova-cell1-conductor-0" Oct 14 10:17:08 crc kubenswrapper[4698]: I1014 10:17:08.468212 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qtr6z\" (UniqueName: \"kubernetes.io/projected/e4bcef82-1d46-45b0-b831-7c575c80b1f4-kube-api-access-qtr6z\") pod \"nova-cell1-conductor-0\" (UID: \"e4bcef82-1d46-45b0-b831-7c575c80b1f4\") " pod="openstack/nova-cell1-conductor-0" Oct 14 10:17:08 crc kubenswrapper[4698]: I1014 10:17:08.519102 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-0" Oct 14 10:17:09 crc kubenswrapper[4698]: I1014 10:17:09.037103 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a3ae3570-e56e-4c49-ad97-56e83b3f9d01" path="/var/lib/kubelet/pods/a3ae3570-e56e-4c49-ad97-56e83b3f9d01/volumes" Oct 14 10:17:09 crc kubenswrapper[4698]: I1014 10:17:09.041633 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dab75fc7-2444-45a1-89c3-36aa14697520" path="/var/lib/kubelet/pods/dab75fc7-2444-45a1-89c3-36aa14697520/volumes" Oct 14 10:17:09 crc kubenswrapper[4698]: I1014 10:17:09.042786 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Oct 14 10:17:09 crc kubenswrapper[4698]: W1014 10:17:09.068221 4698 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode4bcef82_1d46_45b0_b831_7c575c80b1f4.slice/crio-1d8a57289502bce637988b181f9177e61377b889fe63f7833bac1234dec9839c WatchSource:0}: Error finding container 1d8a57289502bce637988b181f9177e61377b889fe63f7833bac1234dec9839c: Status 404 returned error can't find the container with id 1d8a57289502bce637988b181f9177e61377b889fe63f7833bac1234dec9839c Oct 14 10:17:09 crc kubenswrapper[4698]: I1014 10:17:09.083916 4698 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Oct 14 10:17:09 crc kubenswrapper[4698]: I1014 10:17:09.172067 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/11e29685-d5d9-40ae-8715-6d7a75e252cc-combined-ca-bundle\") pod \"11e29685-d5d9-40ae-8715-6d7a75e252cc\" (UID: \"11e29685-d5d9-40ae-8715-6d7a75e252cc\") " Oct 14 10:17:09 crc kubenswrapper[4698]: I1014 10:17:09.172295 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/11e29685-d5d9-40ae-8715-6d7a75e252cc-logs\") pod \"11e29685-d5d9-40ae-8715-6d7a75e252cc\" (UID: \"11e29685-d5d9-40ae-8715-6d7a75e252cc\") " Oct 14 10:17:09 crc kubenswrapper[4698]: I1014 10:17:09.172373 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/11e29685-d5d9-40ae-8715-6d7a75e252cc-config-data\") pod \"11e29685-d5d9-40ae-8715-6d7a75e252cc\" (UID: \"11e29685-d5d9-40ae-8715-6d7a75e252cc\") " Oct 14 10:17:09 crc kubenswrapper[4698]: I1014 10:17:09.172407 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngtvf\" (UniqueName: \"kubernetes.io/projected/11e29685-d5d9-40ae-8715-6d7a75e252cc-kube-api-access-ngtvf\") pod \"11e29685-d5d9-40ae-8715-6d7a75e252cc\" (UID: \"11e29685-d5d9-40ae-8715-6d7a75e252cc\") " Oct 14 10:17:09 crc kubenswrapper[4698]: I1014 10:17:09.172567 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/11e29685-d5d9-40ae-8715-6d7a75e252cc-nova-metadata-tls-certs\") pod \"11e29685-d5d9-40ae-8715-6d7a75e252cc\" (UID: \"11e29685-d5d9-40ae-8715-6d7a75e252cc\") " Oct 14 10:17:09 crc kubenswrapper[4698]: I1014 10:17:09.172887 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/11e29685-d5d9-40ae-8715-6d7a75e252cc-logs" (OuterVolumeSpecName: "logs") pod "11e29685-d5d9-40ae-8715-6d7a75e252cc" (UID: "11e29685-d5d9-40ae-8715-6d7a75e252cc"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 10:17:09 crc kubenswrapper[4698]: I1014 10:17:09.173302 4698 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/11e29685-d5d9-40ae-8715-6d7a75e252cc-logs\") on node \"crc\" DevicePath \"\"" Oct 14 10:17:09 crc kubenswrapper[4698]: I1014 10:17:09.179221 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"e4bcef82-1d46-45b0-b831-7c575c80b1f4","Type":"ContainerStarted","Data":"1d8a57289502bce637988b181f9177e61377b889fe63f7833bac1234dec9839c"} Oct 14 10:17:09 crc kubenswrapper[4698]: I1014 10:17:09.182984 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/11e29685-d5d9-40ae-8715-6d7a75e252cc-kube-api-access-ngtvf" (OuterVolumeSpecName: "kube-api-access-ngtvf") pod "11e29685-d5d9-40ae-8715-6d7a75e252cc" (UID: "11e29685-d5d9-40ae-8715-6d7a75e252cc"). InnerVolumeSpecName "kube-api-access-ngtvf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 10:17:09 crc kubenswrapper[4698]: I1014 10:17:09.194469 4698 generic.go:334] "Generic (PLEG): container finished" podID="11e29685-d5d9-40ae-8715-6d7a75e252cc" containerID="f311368470f6c6d685a689d6c19452d9ca60a550d21a7081f018ce6d3ae67de2" exitCode=0 Oct 14 10:17:09 crc kubenswrapper[4698]: I1014 10:17:09.194502 4698 generic.go:334] "Generic (PLEG): container finished" podID="11e29685-d5d9-40ae-8715-6d7a75e252cc" containerID="5b2acaa3d8cce141915f03ba47fcdc2b586236e10428efa48718809475e3b895" exitCode=143 Oct 14 10:17:09 crc kubenswrapper[4698]: I1014 10:17:09.194539 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"11e29685-d5d9-40ae-8715-6d7a75e252cc","Type":"ContainerDied","Data":"f311368470f6c6d685a689d6c19452d9ca60a550d21a7081f018ce6d3ae67de2"} Oct 14 10:17:09 crc kubenswrapper[4698]: I1014 10:17:09.194565 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"11e29685-d5d9-40ae-8715-6d7a75e252cc","Type":"ContainerDied","Data":"5b2acaa3d8cce141915f03ba47fcdc2b586236e10428efa48718809475e3b895"} Oct 14 10:17:09 crc kubenswrapper[4698]: I1014 10:17:09.194574 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"11e29685-d5d9-40ae-8715-6d7a75e252cc","Type":"ContainerDied","Data":"61203d04bdfe08a8faa01618f11641a71127f0139548ae7138e833a0b9be83b7"} Oct 14 10:17:09 crc kubenswrapper[4698]: I1014 10:17:09.194589 4698 scope.go:117] "RemoveContainer" containerID="f311368470f6c6d685a689d6c19452d9ca60a550d21a7081f018ce6d3ae67de2" Oct 14 10:17:09 crc kubenswrapper[4698]: I1014 10:17:09.194696 4698 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Oct 14 10:17:09 crc kubenswrapper[4698]: I1014 10:17:09.198337 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"aa1c19d3-dc91-4226-9267-8bebfeb5325c","Type":"ContainerStarted","Data":"308fa651c565b74ee1558a6de54d5d2bb3d29a3deb1747aead3492cdf133c5c4"} Oct 14 10:17:09 crc kubenswrapper[4698]: I1014 10:17:09.201870 4698 generic.go:334] "Generic (PLEG): container finished" podID="dd0bc5b3-8722-4208-937a-e3b676267e9a" containerID="d858a79ca85125590dfb40bda35a1b1e2ff96dd31458af8988d5931c506f38eb" exitCode=143 Oct 14 10:17:09 crc kubenswrapper[4698]: I1014 10:17:09.201961 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"dd0bc5b3-8722-4208-937a-e3b676267e9a","Type":"ContainerDied","Data":"d858a79ca85125590dfb40bda35a1b1e2ff96dd31458af8988d5931c506f38eb"} Oct 14 10:17:09 crc kubenswrapper[4698]: I1014 10:17:09.202040 4698 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="79fd1ed1-67b9-4a55-906a-4ae0ec9a42c7" containerName="nova-scheduler-scheduler" containerID="cri-o://3a33c37b57376c7f584176ae16c52574e3c40928a164d83ea656e54a6693a103" gracePeriod=30 Oct 14 10:17:09 crc kubenswrapper[4698]: I1014 10:17:09.222645 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/11e29685-d5d9-40ae-8715-6d7a75e252cc-config-data" (OuterVolumeSpecName: "config-data") pod "11e29685-d5d9-40ae-8715-6d7a75e252cc" (UID: "11e29685-d5d9-40ae-8715-6d7a75e252cc"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 10:17:09 crc kubenswrapper[4698]: I1014 10:17:09.226509 4698 scope.go:117] "RemoveContainer" containerID="5b2acaa3d8cce141915f03ba47fcdc2b586236e10428efa48718809475e3b895" Oct 14 10:17:09 crc kubenswrapper[4698]: I1014 10:17:09.227293 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/11e29685-d5d9-40ae-8715-6d7a75e252cc-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "11e29685-d5d9-40ae-8715-6d7a75e252cc" (UID: "11e29685-d5d9-40ae-8715-6d7a75e252cc"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 10:17:09 crc kubenswrapper[4698]: I1014 10:17:09.249641 4698 scope.go:117] "RemoveContainer" containerID="f311368470f6c6d685a689d6c19452d9ca60a550d21a7081f018ce6d3ae67de2" Oct 14 10:17:09 crc kubenswrapper[4698]: E1014 10:17:09.255155 4698 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f311368470f6c6d685a689d6c19452d9ca60a550d21a7081f018ce6d3ae67de2\": container with ID starting with f311368470f6c6d685a689d6c19452d9ca60a550d21a7081f018ce6d3ae67de2 not found: ID does not exist" containerID="f311368470f6c6d685a689d6c19452d9ca60a550d21a7081f018ce6d3ae67de2" Oct 14 10:17:09 crc kubenswrapper[4698]: I1014 10:17:09.255191 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f311368470f6c6d685a689d6c19452d9ca60a550d21a7081f018ce6d3ae67de2"} err="failed to get container status \"f311368470f6c6d685a689d6c19452d9ca60a550d21a7081f018ce6d3ae67de2\": rpc error: code = NotFound desc = could not find container \"f311368470f6c6d685a689d6c19452d9ca60a550d21a7081f018ce6d3ae67de2\": container with ID starting with f311368470f6c6d685a689d6c19452d9ca60a550d21a7081f018ce6d3ae67de2 not found: ID does not exist" Oct 14 10:17:09 crc kubenswrapper[4698]: I1014 10:17:09.255213 4698 scope.go:117] 
"RemoveContainer" containerID="5b2acaa3d8cce141915f03ba47fcdc2b586236e10428efa48718809475e3b895" Oct 14 10:17:09 crc kubenswrapper[4698]: E1014 10:17:09.255494 4698 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5b2acaa3d8cce141915f03ba47fcdc2b586236e10428efa48718809475e3b895\": container with ID starting with 5b2acaa3d8cce141915f03ba47fcdc2b586236e10428efa48718809475e3b895 not found: ID does not exist" containerID="5b2acaa3d8cce141915f03ba47fcdc2b586236e10428efa48718809475e3b895" Oct 14 10:17:09 crc kubenswrapper[4698]: I1014 10:17:09.255517 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5b2acaa3d8cce141915f03ba47fcdc2b586236e10428efa48718809475e3b895"} err="failed to get container status \"5b2acaa3d8cce141915f03ba47fcdc2b586236e10428efa48718809475e3b895\": rpc error: code = NotFound desc = could not find container \"5b2acaa3d8cce141915f03ba47fcdc2b586236e10428efa48718809475e3b895\": container with ID starting with 5b2acaa3d8cce141915f03ba47fcdc2b586236e10428efa48718809475e3b895 not found: ID does not exist" Oct 14 10:17:09 crc kubenswrapper[4698]: I1014 10:17:09.255530 4698 scope.go:117] "RemoveContainer" containerID="f311368470f6c6d685a689d6c19452d9ca60a550d21a7081f018ce6d3ae67de2" Oct 14 10:17:09 crc kubenswrapper[4698]: I1014 10:17:09.255713 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f311368470f6c6d685a689d6c19452d9ca60a550d21a7081f018ce6d3ae67de2"} err="failed to get container status \"f311368470f6c6d685a689d6c19452d9ca60a550d21a7081f018ce6d3ae67de2\": rpc error: code = NotFound desc = could not find container \"f311368470f6c6d685a689d6c19452d9ca60a550d21a7081f018ce6d3ae67de2\": container with ID starting with f311368470f6c6d685a689d6c19452d9ca60a550d21a7081f018ce6d3ae67de2 not found: ID does not exist" Oct 14 10:17:09 crc kubenswrapper[4698]: I1014 10:17:09.255733 4698 
scope.go:117] "RemoveContainer" containerID="5b2acaa3d8cce141915f03ba47fcdc2b586236e10428efa48718809475e3b895" Oct 14 10:17:09 crc kubenswrapper[4698]: I1014 10:17:09.256526 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5b2acaa3d8cce141915f03ba47fcdc2b586236e10428efa48718809475e3b895"} err="failed to get container status \"5b2acaa3d8cce141915f03ba47fcdc2b586236e10428efa48718809475e3b895\": rpc error: code = NotFound desc = could not find container \"5b2acaa3d8cce141915f03ba47fcdc2b586236e10428efa48718809475e3b895\": container with ID starting with 5b2acaa3d8cce141915f03ba47fcdc2b586236e10428efa48718809475e3b895 not found: ID does not exist" Oct 14 10:17:09 crc kubenswrapper[4698]: I1014 10:17:09.271064 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/11e29685-d5d9-40ae-8715-6d7a75e252cc-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "11e29685-d5d9-40ae-8715-6d7a75e252cc" (UID: "11e29685-d5d9-40ae-8715-6d7a75e252cc"). InnerVolumeSpecName "nova-metadata-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 10:17:09 crc kubenswrapper[4698]: I1014 10:17:09.275345 4698 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/11e29685-d5d9-40ae-8715-6d7a75e252cc-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 14 10:17:09 crc kubenswrapper[4698]: I1014 10:17:09.275581 4698 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/11e29685-d5d9-40ae-8715-6d7a75e252cc-config-data\") on node \"crc\" DevicePath \"\"" Oct 14 10:17:09 crc kubenswrapper[4698]: I1014 10:17:09.275591 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ngtvf\" (UniqueName: \"kubernetes.io/projected/11e29685-d5d9-40ae-8715-6d7a75e252cc-kube-api-access-ngtvf\") on node \"crc\" DevicePath \"\"" Oct 14 10:17:09 crc kubenswrapper[4698]: I1014 10:17:09.275601 4698 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/11e29685-d5d9-40ae-8715-6d7a75e252cc-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Oct 14 10:17:09 crc kubenswrapper[4698]: I1014 10:17:09.554885 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Oct 14 10:17:09 crc kubenswrapper[4698]: I1014 10:17:09.565215 4698 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Oct 14 10:17:09 crc kubenswrapper[4698]: I1014 10:17:09.597337 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Oct 14 10:17:09 crc kubenswrapper[4698]: E1014 10:17:09.597913 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="11e29685-d5d9-40ae-8715-6d7a75e252cc" containerName="nova-metadata-log" Oct 14 10:17:09 crc kubenswrapper[4698]: I1014 10:17:09.597935 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="11e29685-d5d9-40ae-8715-6d7a75e252cc" 
containerName="nova-metadata-log" Oct 14 10:17:09 crc kubenswrapper[4698]: E1014 10:17:09.597959 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="11e29685-d5d9-40ae-8715-6d7a75e252cc" containerName="nova-metadata-metadata" Oct 14 10:17:09 crc kubenswrapper[4698]: I1014 10:17:09.597967 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="11e29685-d5d9-40ae-8715-6d7a75e252cc" containerName="nova-metadata-metadata" Oct 14 10:17:09 crc kubenswrapper[4698]: I1014 10:17:09.598166 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="11e29685-d5d9-40ae-8715-6d7a75e252cc" containerName="nova-metadata-metadata" Oct 14 10:17:09 crc kubenswrapper[4698]: I1014 10:17:09.598195 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="11e29685-d5d9-40ae-8715-6d7a75e252cc" containerName="nova-metadata-log" Oct 14 10:17:09 crc kubenswrapper[4698]: I1014 10:17:09.600059 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Oct 14 10:17:09 crc kubenswrapper[4698]: I1014 10:17:09.603649 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Oct 14 10:17:09 crc kubenswrapper[4698]: I1014 10:17:09.603915 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Oct 14 10:17:09 crc kubenswrapper[4698]: I1014 10:17:09.610256 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Oct 14 10:17:09 crc kubenswrapper[4698]: I1014 10:17:09.687916 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/385a7b7c-0ec6-4d79-8c3c-91e78ab81c3e-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"385a7b7c-0ec6-4d79-8c3c-91e78ab81c3e\") " pod="openstack/nova-metadata-0" Oct 14 10:17:09 crc kubenswrapper[4698]: I1014 10:17:09.688160 4698 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/385a7b7c-0ec6-4d79-8c3c-91e78ab81c3e-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"385a7b7c-0ec6-4d79-8c3c-91e78ab81c3e\") " pod="openstack/nova-metadata-0" Oct 14 10:17:09 crc kubenswrapper[4698]: I1014 10:17:09.688187 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/385a7b7c-0ec6-4d79-8c3c-91e78ab81c3e-config-data\") pod \"nova-metadata-0\" (UID: \"385a7b7c-0ec6-4d79-8c3c-91e78ab81c3e\") " pod="openstack/nova-metadata-0" Oct 14 10:17:09 crc kubenswrapper[4698]: I1014 10:17:09.688238 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7xbzk\" (UniqueName: \"kubernetes.io/projected/385a7b7c-0ec6-4d79-8c3c-91e78ab81c3e-kube-api-access-7xbzk\") pod \"nova-metadata-0\" (UID: \"385a7b7c-0ec6-4d79-8c3c-91e78ab81c3e\") " pod="openstack/nova-metadata-0" Oct 14 10:17:09 crc kubenswrapper[4698]: I1014 10:17:09.688545 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/385a7b7c-0ec6-4d79-8c3c-91e78ab81c3e-logs\") pod \"nova-metadata-0\" (UID: \"385a7b7c-0ec6-4d79-8c3c-91e78ab81c3e\") " pod="openstack/nova-metadata-0" Oct 14 10:17:09 crc kubenswrapper[4698]: I1014 10:17:09.791418 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/385a7b7c-0ec6-4d79-8c3c-91e78ab81c3e-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"385a7b7c-0ec6-4d79-8c3c-91e78ab81c3e\") " pod="openstack/nova-metadata-0" Oct 14 10:17:09 crc kubenswrapper[4698]: I1014 10:17:09.791747 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" 
(UniqueName: \"kubernetes.io/secret/385a7b7c-0ec6-4d79-8c3c-91e78ab81c3e-config-data\") pod \"nova-metadata-0\" (UID: \"385a7b7c-0ec6-4d79-8c3c-91e78ab81c3e\") " pod="openstack/nova-metadata-0" Oct 14 10:17:09 crc kubenswrapper[4698]: I1014 10:17:09.791805 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7xbzk\" (UniqueName: \"kubernetes.io/projected/385a7b7c-0ec6-4d79-8c3c-91e78ab81c3e-kube-api-access-7xbzk\") pod \"nova-metadata-0\" (UID: \"385a7b7c-0ec6-4d79-8c3c-91e78ab81c3e\") " pod="openstack/nova-metadata-0" Oct 14 10:17:09 crc kubenswrapper[4698]: I1014 10:17:09.791872 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/385a7b7c-0ec6-4d79-8c3c-91e78ab81c3e-logs\") pod \"nova-metadata-0\" (UID: \"385a7b7c-0ec6-4d79-8c3c-91e78ab81c3e\") " pod="openstack/nova-metadata-0" Oct 14 10:17:09 crc kubenswrapper[4698]: I1014 10:17:09.791906 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/385a7b7c-0ec6-4d79-8c3c-91e78ab81c3e-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"385a7b7c-0ec6-4d79-8c3c-91e78ab81c3e\") " pod="openstack/nova-metadata-0" Oct 14 10:17:09 crc kubenswrapper[4698]: I1014 10:17:09.797327 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/385a7b7c-0ec6-4d79-8c3c-91e78ab81c3e-logs\") pod \"nova-metadata-0\" (UID: \"385a7b7c-0ec6-4d79-8c3c-91e78ab81c3e\") " pod="openstack/nova-metadata-0" Oct 14 10:17:09 crc kubenswrapper[4698]: I1014 10:17:09.797671 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/385a7b7c-0ec6-4d79-8c3c-91e78ab81c3e-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"385a7b7c-0ec6-4d79-8c3c-91e78ab81c3e\") " pod="openstack/nova-metadata-0" Oct 14 10:17:09 crc 
kubenswrapper[4698]: I1014 10:17:09.797702 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/385a7b7c-0ec6-4d79-8c3c-91e78ab81c3e-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"385a7b7c-0ec6-4d79-8c3c-91e78ab81c3e\") " pod="openstack/nova-metadata-0" Oct 14 10:17:09 crc kubenswrapper[4698]: I1014 10:17:09.812782 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/385a7b7c-0ec6-4d79-8c3c-91e78ab81c3e-config-data\") pod \"nova-metadata-0\" (UID: \"385a7b7c-0ec6-4d79-8c3c-91e78ab81c3e\") " pod="openstack/nova-metadata-0" Oct 14 10:17:09 crc kubenswrapper[4698]: I1014 10:17:09.817696 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7xbzk\" (UniqueName: \"kubernetes.io/projected/385a7b7c-0ec6-4d79-8c3c-91e78ab81c3e-kube-api-access-7xbzk\") pod \"nova-metadata-0\" (UID: \"385a7b7c-0ec6-4d79-8c3c-91e78ab81c3e\") " pod="openstack/nova-metadata-0" Oct 14 10:17:10 crc kubenswrapper[4698]: I1014 10:17:10.051962 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Oct 14 10:17:10 crc kubenswrapper[4698]: I1014 10:17:10.232976 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"aa1c19d3-dc91-4226-9267-8bebfeb5325c","Type":"ContainerStarted","Data":"da089eabfa61bede962c3ef889dddc8b452d9d1673a1bdc9c61c67ce4b933fb5"} Oct 14 10:17:10 crc kubenswrapper[4698]: I1014 10:17:10.248791 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"e4bcef82-1d46-45b0-b831-7c575c80b1f4","Type":"ContainerStarted","Data":"9b00753dc243d3354d6b405fdf6b08cfbb313508498291b885d1a57af0de4def"} Oct 14 10:17:10 crc kubenswrapper[4698]: I1014 10:17:10.248939 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-conductor-0" Oct 14 10:17:10 crc kubenswrapper[4698]: I1014 10:17:10.273459 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-0" podStartSLOduration=2.273436702 podStartE2EDuration="2.273436702s" podCreationTimestamp="2025-10-14 10:17:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 10:17:10.263607292 +0000 UTC m=+1211.960906718" watchObservedRunningTime="2025-10-14 10:17:10.273436702 +0000 UTC m=+1211.970736118" Oct 14 10:17:10 crc kubenswrapper[4698]: I1014 10:17:10.548229 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Oct 14 10:17:11 crc kubenswrapper[4698]: I1014 10:17:11.031571 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="11e29685-d5d9-40ae-8715-6d7a75e252cc" path="/var/lib/kubelet/pods/11e29685-d5d9-40ae-8715-6d7a75e252cc/volumes" Oct 14 10:17:11 crc kubenswrapper[4698]: E1014 10:17:11.238784 4698 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec 
PID: container is stopping, stdout: , stderr: , exit code -1" containerID="3a33c37b57376c7f584176ae16c52574e3c40928a164d83ea656e54a6693a103" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Oct 14 10:17:11 crc kubenswrapper[4698]: E1014 10:17:11.245137 4698 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="3a33c37b57376c7f584176ae16c52574e3c40928a164d83ea656e54a6693a103" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Oct 14 10:17:11 crc kubenswrapper[4698]: E1014 10:17:11.246919 4698 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="3a33c37b57376c7f584176ae16c52574e3c40928a164d83ea656e54a6693a103" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Oct 14 10:17:11 crc kubenswrapper[4698]: E1014 10:17:11.247097 4698 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="79fd1ed1-67b9-4a55-906a-4ae0ec9a42c7" containerName="nova-scheduler-scheduler" Oct 14 10:17:11 crc kubenswrapper[4698]: I1014 10:17:11.275857 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"aa1c19d3-dc91-4226-9267-8bebfeb5325c","Type":"ContainerStarted","Data":"77f6da376763b4ec53309eb5bb30efab4557b289b0f66326fa424c0e5b46c53d"} Oct 14 10:17:11 crc kubenswrapper[4698]: I1014 10:17:11.279272 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"385a7b7c-0ec6-4d79-8c3c-91e78ab81c3e","Type":"ContainerStarted","Data":"f7c060b244edf49eb0104cfa42c402d6652df14feb4c6e3661e9949aa6a9a7a2"} Oct 14 10:17:11 crc kubenswrapper[4698]: 
I1014 10:17:11.279327 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"385a7b7c-0ec6-4d79-8c3c-91e78ab81c3e","Type":"ContainerStarted","Data":"3935a4647153bc0d9928a41636d2f10a30ede7283415e2943abdd752df3027ad"} Oct 14 10:17:11 crc kubenswrapper[4698]: I1014 10:17:11.279341 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"385a7b7c-0ec6-4d79-8c3c-91e78ab81c3e","Type":"ContainerStarted","Data":"22bc988ebdd52b6ef66e951182f192ca425b58e7532093db48b8cbcd97bc7303"} Oct 14 10:17:11 crc kubenswrapper[4698]: I1014 10:17:11.308296 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.308270554 podStartE2EDuration="2.308270554s" podCreationTimestamp="2025-10-14 10:17:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 10:17:11.295947684 +0000 UTC m=+1212.993247110" watchObservedRunningTime="2025-10-14 10:17:11.308270554 +0000 UTC m=+1213.005569970" Oct 14 10:17:12 crc kubenswrapper[4698]: I1014 10:17:12.294900 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"aa1c19d3-dc91-4226-9267-8bebfeb5325c","Type":"ContainerStarted","Data":"c3f0570190b0bb4c7fd9b59341e2e2d809c4ea528d5264d93a0c779500067839"} Oct 14 10:17:12 crc kubenswrapper[4698]: I1014 10:17:12.295562 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Oct 14 10:17:12 crc kubenswrapper[4698]: I1014 10:17:12.326861 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=1.6293576440000002 podStartE2EDuration="5.326842983s" podCreationTimestamp="2025-10-14 10:17:07 +0000 UTC" firstStartedPulling="2025-10-14 10:17:08.030020611 +0000 UTC m=+1209.727320027" lastFinishedPulling="2025-10-14 
10:17:11.72750595 +0000 UTC m=+1213.424805366" observedRunningTime="2025-10-14 10:17:12.317201738 +0000 UTC m=+1214.014501154" watchObservedRunningTime="2025-10-14 10:17:12.326842983 +0000 UTC m=+1214.024142399" Oct 14 10:17:13 crc kubenswrapper[4698]: I1014 10:17:13.125451 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Oct 14 10:17:13 crc kubenswrapper[4698]: I1014 10:17:13.170318 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/79fd1ed1-67b9-4a55-906a-4ae0ec9a42c7-config-data\") pod \"79fd1ed1-67b9-4a55-906a-4ae0ec9a42c7\" (UID: \"79fd1ed1-67b9-4a55-906a-4ae0ec9a42c7\") " Oct 14 10:17:13 crc kubenswrapper[4698]: I1014 10:17:13.170401 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-767bl\" (UniqueName: \"kubernetes.io/projected/79fd1ed1-67b9-4a55-906a-4ae0ec9a42c7-kube-api-access-767bl\") pod \"79fd1ed1-67b9-4a55-906a-4ae0ec9a42c7\" (UID: \"79fd1ed1-67b9-4a55-906a-4ae0ec9a42c7\") " Oct 14 10:17:13 crc kubenswrapper[4698]: I1014 10:17:13.170533 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/79fd1ed1-67b9-4a55-906a-4ae0ec9a42c7-combined-ca-bundle\") pod \"79fd1ed1-67b9-4a55-906a-4ae0ec9a42c7\" (UID: \"79fd1ed1-67b9-4a55-906a-4ae0ec9a42c7\") " Oct 14 10:17:13 crc kubenswrapper[4698]: I1014 10:17:13.178924 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/79fd1ed1-67b9-4a55-906a-4ae0ec9a42c7-kube-api-access-767bl" (OuterVolumeSpecName: "kube-api-access-767bl") pod "79fd1ed1-67b9-4a55-906a-4ae0ec9a42c7" (UID: "79fd1ed1-67b9-4a55-906a-4ae0ec9a42c7"). InnerVolumeSpecName "kube-api-access-767bl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 10:17:13 crc kubenswrapper[4698]: I1014 10:17:13.282513 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/79fd1ed1-67b9-4a55-906a-4ae0ec9a42c7-config-data" (OuterVolumeSpecName: "config-data") pod "79fd1ed1-67b9-4a55-906a-4ae0ec9a42c7" (UID: "79fd1ed1-67b9-4a55-906a-4ae0ec9a42c7"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 10:17:13 crc kubenswrapper[4698]: I1014 10:17:13.299034 4698 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/79fd1ed1-67b9-4a55-906a-4ae0ec9a42c7-config-data\") on node \"crc\" DevicePath \"\"" Oct 14 10:17:13 crc kubenswrapper[4698]: I1014 10:17:13.299067 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-767bl\" (UniqueName: \"kubernetes.io/projected/79fd1ed1-67b9-4a55-906a-4ae0ec9a42c7-kube-api-access-767bl\") on node \"crc\" DevicePath \"\"" Oct 14 10:17:13 crc kubenswrapper[4698]: I1014 10:17:13.331956 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/79fd1ed1-67b9-4a55-906a-4ae0ec9a42c7-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "79fd1ed1-67b9-4a55-906a-4ae0ec9a42c7" (UID: "79fd1ed1-67b9-4a55-906a-4ae0ec9a42c7"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 10:17:13 crc kubenswrapper[4698]: I1014 10:17:13.357083 4698 generic.go:334] "Generic (PLEG): container finished" podID="79fd1ed1-67b9-4a55-906a-4ae0ec9a42c7" containerID="3a33c37b57376c7f584176ae16c52574e3c40928a164d83ea656e54a6693a103" exitCode=0 Oct 14 10:17:13 crc kubenswrapper[4698]: I1014 10:17:13.358397 4698 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Oct 14 10:17:13 crc kubenswrapper[4698]: I1014 10:17:13.361100 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"79fd1ed1-67b9-4a55-906a-4ae0ec9a42c7","Type":"ContainerDied","Data":"3a33c37b57376c7f584176ae16c52574e3c40928a164d83ea656e54a6693a103"} Oct 14 10:17:13 crc kubenswrapper[4698]: I1014 10:17:13.361154 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"79fd1ed1-67b9-4a55-906a-4ae0ec9a42c7","Type":"ContainerDied","Data":"2eccdcb9a37e4c7fb7b1d9389e08e8fb8cc4d04c53823d6f4c28140c73f7fc44"} Oct 14 10:17:13 crc kubenswrapper[4698]: I1014 10:17:13.361171 4698 scope.go:117] "RemoveContainer" containerID="3a33c37b57376c7f584176ae16c52574e3c40928a164d83ea656e54a6693a103" Oct 14 10:17:13 crc kubenswrapper[4698]: I1014 10:17:13.403957 4698 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/79fd1ed1-67b9-4a55-906a-4ae0ec9a42c7-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 14 10:17:13 crc kubenswrapper[4698]: I1014 10:17:13.407889 4698 scope.go:117] "RemoveContainer" containerID="3a33c37b57376c7f584176ae16c52574e3c40928a164d83ea656e54a6693a103" Oct 14 10:17:13 crc kubenswrapper[4698]: I1014 10:17:13.409841 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Oct 14 10:17:13 crc kubenswrapper[4698]: E1014 10:17:13.410213 4698 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3a33c37b57376c7f584176ae16c52574e3c40928a164d83ea656e54a6693a103\": container with ID starting with 3a33c37b57376c7f584176ae16c52574e3c40928a164d83ea656e54a6693a103 not found: ID does not exist" containerID="3a33c37b57376c7f584176ae16c52574e3c40928a164d83ea656e54a6693a103" Oct 14 10:17:13 crc kubenswrapper[4698]: I1014 10:17:13.410244 4698 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3a33c37b57376c7f584176ae16c52574e3c40928a164d83ea656e54a6693a103"} err="failed to get container status \"3a33c37b57376c7f584176ae16c52574e3c40928a164d83ea656e54a6693a103\": rpc error: code = NotFound desc = could not find container \"3a33c37b57376c7f584176ae16c52574e3c40928a164d83ea656e54a6693a103\": container with ID starting with 3a33c37b57376c7f584176ae16c52574e3c40928a164d83ea656e54a6693a103 not found: ID does not exist" Oct 14 10:17:13 crc kubenswrapper[4698]: I1014 10:17:13.426728 4698 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Oct 14 10:17:13 crc kubenswrapper[4698]: I1014 10:17:13.436927 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Oct 14 10:17:13 crc kubenswrapper[4698]: E1014 10:17:13.437398 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="79fd1ed1-67b9-4a55-906a-4ae0ec9a42c7" containerName="nova-scheduler-scheduler" Oct 14 10:17:13 crc kubenswrapper[4698]: I1014 10:17:13.437419 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="79fd1ed1-67b9-4a55-906a-4ae0ec9a42c7" containerName="nova-scheduler-scheduler" Oct 14 10:17:13 crc kubenswrapper[4698]: I1014 10:17:13.437683 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="79fd1ed1-67b9-4a55-906a-4ae0ec9a42c7" containerName="nova-scheduler-scheduler" Oct 14 10:17:13 crc kubenswrapper[4698]: I1014 10:17:13.438464 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Oct 14 10:17:13 crc kubenswrapper[4698]: I1014 10:17:13.442716 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Oct 14 10:17:13 crc kubenswrapper[4698]: I1014 10:17:13.446011 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Oct 14 10:17:13 crc kubenswrapper[4698]: I1014 10:17:13.505984 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eca9c9b4-7969-459b-87e1-66966d94f354-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"eca9c9b4-7969-459b-87e1-66966d94f354\") " pod="openstack/nova-scheduler-0" Oct 14 10:17:13 crc kubenswrapper[4698]: I1014 10:17:13.506062 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-86phm\" (UniqueName: \"kubernetes.io/projected/eca9c9b4-7969-459b-87e1-66966d94f354-kube-api-access-86phm\") pod \"nova-scheduler-0\" (UID: \"eca9c9b4-7969-459b-87e1-66966d94f354\") " pod="openstack/nova-scheduler-0" Oct 14 10:17:13 crc kubenswrapper[4698]: I1014 10:17:13.506101 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eca9c9b4-7969-459b-87e1-66966d94f354-config-data\") pod \"nova-scheduler-0\" (UID: \"eca9c9b4-7969-459b-87e1-66966d94f354\") " pod="openstack/nova-scheduler-0" Oct 14 10:17:13 crc kubenswrapper[4698]: I1014 10:17:13.608531 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eca9c9b4-7969-459b-87e1-66966d94f354-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"eca9c9b4-7969-459b-87e1-66966d94f354\") " pod="openstack/nova-scheduler-0" Oct 14 10:17:13 crc kubenswrapper[4698]: I1014 10:17:13.608668 4698 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-86phm\" (UniqueName: \"kubernetes.io/projected/eca9c9b4-7969-459b-87e1-66966d94f354-kube-api-access-86phm\") pod \"nova-scheduler-0\" (UID: \"eca9c9b4-7969-459b-87e1-66966d94f354\") " pod="openstack/nova-scheduler-0" Oct 14 10:17:13 crc kubenswrapper[4698]: I1014 10:17:13.608702 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eca9c9b4-7969-459b-87e1-66966d94f354-config-data\") pod \"nova-scheduler-0\" (UID: \"eca9c9b4-7969-459b-87e1-66966d94f354\") " pod="openstack/nova-scheduler-0" Oct 14 10:17:13 crc kubenswrapper[4698]: I1014 10:17:13.613355 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eca9c9b4-7969-459b-87e1-66966d94f354-config-data\") pod \"nova-scheduler-0\" (UID: \"eca9c9b4-7969-459b-87e1-66966d94f354\") " pod="openstack/nova-scheduler-0" Oct 14 10:17:13 crc kubenswrapper[4698]: I1014 10:17:13.613372 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eca9c9b4-7969-459b-87e1-66966d94f354-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"eca9c9b4-7969-459b-87e1-66966d94f354\") " pod="openstack/nova-scheduler-0" Oct 14 10:17:13 crc kubenswrapper[4698]: I1014 10:17:13.629442 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-86phm\" (UniqueName: \"kubernetes.io/projected/eca9c9b4-7969-459b-87e1-66966d94f354-kube-api-access-86phm\") pod \"nova-scheduler-0\" (UID: \"eca9c9b4-7969-459b-87e1-66966d94f354\") " pod="openstack/nova-scheduler-0" Oct 14 10:17:13 crc kubenswrapper[4698]: I1014 10:17:13.766737 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Oct 14 10:17:14 crc kubenswrapper[4698]: I1014 10:17:14.144460 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Oct 14 10:17:14 crc kubenswrapper[4698]: I1014 10:17:14.223092 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dd0bc5b3-8722-4208-937a-e3b676267e9a-config-data\") pod \"dd0bc5b3-8722-4208-937a-e3b676267e9a\" (UID: \"dd0bc5b3-8722-4208-937a-e3b676267e9a\") " Oct 14 10:17:14 crc kubenswrapper[4698]: I1014 10:17:14.223160 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dd0bc5b3-8722-4208-937a-e3b676267e9a-combined-ca-bundle\") pod \"dd0bc5b3-8722-4208-937a-e3b676267e9a\" (UID: \"dd0bc5b3-8722-4208-937a-e3b676267e9a\") " Oct 14 10:17:14 crc kubenswrapper[4698]: I1014 10:17:14.223183 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dd0bc5b3-8722-4208-937a-e3b676267e9a-logs\") pod \"dd0bc5b3-8722-4208-937a-e3b676267e9a\" (UID: \"dd0bc5b3-8722-4208-937a-e3b676267e9a\") " Oct 14 10:17:14 crc kubenswrapper[4698]: I1014 10:17:14.223290 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-65slt\" (UniqueName: \"kubernetes.io/projected/dd0bc5b3-8722-4208-937a-e3b676267e9a-kube-api-access-65slt\") pod \"dd0bc5b3-8722-4208-937a-e3b676267e9a\" (UID: \"dd0bc5b3-8722-4208-937a-e3b676267e9a\") " Oct 14 10:17:14 crc kubenswrapper[4698]: I1014 10:17:14.224111 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dd0bc5b3-8722-4208-937a-e3b676267e9a-logs" (OuterVolumeSpecName: "logs") pod "dd0bc5b3-8722-4208-937a-e3b676267e9a" (UID: "dd0bc5b3-8722-4208-937a-e3b676267e9a"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 10:17:14 crc kubenswrapper[4698]: I1014 10:17:14.233046 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dd0bc5b3-8722-4208-937a-e3b676267e9a-kube-api-access-65slt" (OuterVolumeSpecName: "kube-api-access-65slt") pod "dd0bc5b3-8722-4208-937a-e3b676267e9a" (UID: "dd0bc5b3-8722-4208-937a-e3b676267e9a"). InnerVolumeSpecName "kube-api-access-65slt". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 10:17:14 crc kubenswrapper[4698]: I1014 10:17:14.262891 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dd0bc5b3-8722-4208-937a-e3b676267e9a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "dd0bc5b3-8722-4208-937a-e3b676267e9a" (UID: "dd0bc5b3-8722-4208-937a-e3b676267e9a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 10:17:14 crc kubenswrapper[4698]: I1014 10:17:14.266379 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dd0bc5b3-8722-4208-937a-e3b676267e9a-config-data" (OuterVolumeSpecName: "config-data") pod "dd0bc5b3-8722-4208-937a-e3b676267e9a" (UID: "dd0bc5b3-8722-4208-937a-e3b676267e9a"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 10:17:14 crc kubenswrapper[4698]: I1014 10:17:14.303151 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Oct 14 10:17:14 crc kubenswrapper[4698]: W1014 10:17:14.316046 4698 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podeca9c9b4_7969_459b_87e1_66966d94f354.slice/crio-0572dc402fd856f0e485dd60ea6a299672f188b5dcc721bc6a353c30c80dfb02 WatchSource:0}: Error finding container 0572dc402fd856f0e485dd60ea6a299672f188b5dcc721bc6a353c30c80dfb02: Status 404 returned error can't find the container with id 0572dc402fd856f0e485dd60ea6a299672f188b5dcc721bc6a353c30c80dfb02 Oct 14 10:17:14 crc kubenswrapper[4698]: I1014 10:17:14.325395 4698 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dd0bc5b3-8722-4208-937a-e3b676267e9a-config-data\") on node \"crc\" DevicePath \"\"" Oct 14 10:17:14 crc kubenswrapper[4698]: I1014 10:17:14.325430 4698 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dd0bc5b3-8722-4208-937a-e3b676267e9a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 14 10:17:14 crc kubenswrapper[4698]: I1014 10:17:14.325444 4698 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dd0bc5b3-8722-4208-937a-e3b676267e9a-logs\") on node \"crc\" DevicePath \"\"" Oct 14 10:17:14 crc kubenswrapper[4698]: I1014 10:17:14.325453 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-65slt\" (UniqueName: \"kubernetes.io/projected/dd0bc5b3-8722-4208-937a-e3b676267e9a-kube-api-access-65slt\") on node \"crc\" DevicePath \"\"" Oct 14 10:17:14 crc kubenswrapper[4698]: I1014 10:17:14.408936 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" 
event={"ID":"eca9c9b4-7969-459b-87e1-66966d94f354","Type":"ContainerStarted","Data":"0572dc402fd856f0e485dd60ea6a299672f188b5dcc721bc6a353c30c80dfb02"} Oct 14 10:17:14 crc kubenswrapper[4698]: I1014 10:17:14.414155 4698 generic.go:334] "Generic (PLEG): container finished" podID="dd0bc5b3-8722-4208-937a-e3b676267e9a" containerID="3a2c5f4d1ccf972110797939d50dbd040f7f77f2fb2ec41556796cd3a456f674" exitCode=0 Oct 14 10:17:14 crc kubenswrapper[4698]: I1014 10:17:14.414349 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Oct 14 10:17:14 crc kubenswrapper[4698]: I1014 10:17:14.415011 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"dd0bc5b3-8722-4208-937a-e3b676267e9a","Type":"ContainerDied","Data":"3a2c5f4d1ccf972110797939d50dbd040f7f77f2fb2ec41556796cd3a456f674"} Oct 14 10:17:14 crc kubenswrapper[4698]: I1014 10:17:14.415036 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"dd0bc5b3-8722-4208-937a-e3b676267e9a","Type":"ContainerDied","Data":"7270b9ddd547b5ff9094fa46379fb8b17ae6faa9e7f3f1f866f979d471685d00"} Oct 14 10:17:14 crc kubenswrapper[4698]: I1014 10:17:14.415053 4698 scope.go:117] "RemoveContainer" containerID="3a2c5f4d1ccf972110797939d50dbd040f7f77f2fb2ec41556796cd3a456f674" Oct 14 10:17:14 crc kubenswrapper[4698]: I1014 10:17:14.463897 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Oct 14 10:17:14 crc kubenswrapper[4698]: I1014 10:17:14.466257 4698 scope.go:117] "RemoveContainer" containerID="d858a79ca85125590dfb40bda35a1b1e2ff96dd31458af8988d5931c506f38eb" Oct 14 10:17:14 crc kubenswrapper[4698]: I1014 10:17:14.474949 4698 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Oct 14 10:17:14 crc kubenswrapper[4698]: I1014 10:17:14.490941 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Oct 14 10:17:14 crc 
kubenswrapper[4698]: E1014 10:17:14.491457 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dd0bc5b3-8722-4208-937a-e3b676267e9a" containerName="nova-api-log" Oct 14 10:17:14 crc kubenswrapper[4698]: I1014 10:17:14.491476 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="dd0bc5b3-8722-4208-937a-e3b676267e9a" containerName="nova-api-log" Oct 14 10:17:14 crc kubenswrapper[4698]: E1014 10:17:14.491497 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dd0bc5b3-8722-4208-937a-e3b676267e9a" containerName="nova-api-api" Oct 14 10:17:14 crc kubenswrapper[4698]: I1014 10:17:14.491503 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="dd0bc5b3-8722-4208-937a-e3b676267e9a" containerName="nova-api-api" Oct 14 10:17:14 crc kubenswrapper[4698]: I1014 10:17:14.491691 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="dd0bc5b3-8722-4208-937a-e3b676267e9a" containerName="nova-api-log" Oct 14 10:17:14 crc kubenswrapper[4698]: I1014 10:17:14.491718 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="dd0bc5b3-8722-4208-937a-e3b676267e9a" containerName="nova-api-api" Oct 14 10:17:14 crc kubenswrapper[4698]: I1014 10:17:14.492875 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Oct 14 10:17:14 crc kubenswrapper[4698]: I1014 10:17:14.496287 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Oct 14 10:17:14 crc kubenswrapper[4698]: I1014 10:17:14.497986 4698 scope.go:117] "RemoveContainer" containerID="3a2c5f4d1ccf972110797939d50dbd040f7f77f2fb2ec41556796cd3a456f674" Oct 14 10:17:14 crc kubenswrapper[4698]: I1014 10:17:14.498371 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Oct 14 10:17:14 crc kubenswrapper[4698]: I1014 10:17:14.498579 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Oct 14 10:17:14 crc kubenswrapper[4698]: E1014 10:17:14.503938 4698 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3a2c5f4d1ccf972110797939d50dbd040f7f77f2fb2ec41556796cd3a456f674\": container with ID starting with 3a2c5f4d1ccf972110797939d50dbd040f7f77f2fb2ec41556796cd3a456f674 not found: ID does not exist" containerID="3a2c5f4d1ccf972110797939d50dbd040f7f77f2fb2ec41556796cd3a456f674" Oct 14 10:17:14 crc kubenswrapper[4698]: I1014 10:17:14.503993 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3a2c5f4d1ccf972110797939d50dbd040f7f77f2fb2ec41556796cd3a456f674"} err="failed to get container status \"3a2c5f4d1ccf972110797939d50dbd040f7f77f2fb2ec41556796cd3a456f674\": rpc error: code = NotFound desc = could not find container \"3a2c5f4d1ccf972110797939d50dbd040f7f77f2fb2ec41556796cd3a456f674\": container with ID starting with 3a2c5f4d1ccf972110797939d50dbd040f7f77f2fb2ec41556796cd3a456f674 not found: ID does not exist" Oct 14 10:17:14 crc kubenswrapper[4698]: I1014 10:17:14.504022 4698 scope.go:117] "RemoveContainer" containerID="d858a79ca85125590dfb40bda35a1b1e2ff96dd31458af8988d5931c506f38eb" Oct 14 10:17:14 crc kubenswrapper[4698]: E1014 
10:17:14.504527 4698 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d858a79ca85125590dfb40bda35a1b1e2ff96dd31458af8988d5931c506f38eb\": container with ID starting with d858a79ca85125590dfb40bda35a1b1e2ff96dd31458af8988d5931c506f38eb not found: ID does not exist" containerID="d858a79ca85125590dfb40bda35a1b1e2ff96dd31458af8988d5931c506f38eb" Oct 14 10:17:14 crc kubenswrapper[4698]: I1014 10:17:14.504584 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d858a79ca85125590dfb40bda35a1b1e2ff96dd31458af8988d5931c506f38eb"} err="failed to get container status \"d858a79ca85125590dfb40bda35a1b1e2ff96dd31458af8988d5931c506f38eb\": rpc error: code = NotFound desc = could not find container \"d858a79ca85125590dfb40bda35a1b1e2ff96dd31458af8988d5931c506f38eb\": container with ID starting with d858a79ca85125590dfb40bda35a1b1e2ff96dd31458af8988d5931c506f38eb not found: ID does not exist" Oct 14 10:17:14 crc kubenswrapper[4698]: I1014 10:17:14.538334 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/abbde2a0-bf38-441b-bac9-e9e3efe41cf2-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"abbde2a0-bf38-441b-bac9-e9e3efe41cf2\") " pod="openstack/nova-api-0" Oct 14 10:17:14 crc kubenswrapper[4698]: I1014 10:17:14.538395 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/abbde2a0-bf38-441b-bac9-e9e3efe41cf2-logs\") pod \"nova-api-0\" (UID: \"abbde2a0-bf38-441b-bac9-e9e3efe41cf2\") " pod="openstack/nova-api-0" Oct 14 10:17:14 crc kubenswrapper[4698]: I1014 10:17:14.538422 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k6xrl\" (UniqueName: 
\"kubernetes.io/projected/abbde2a0-bf38-441b-bac9-e9e3efe41cf2-kube-api-access-k6xrl\") pod \"nova-api-0\" (UID: \"abbde2a0-bf38-441b-bac9-e9e3efe41cf2\") " pod="openstack/nova-api-0" Oct 14 10:17:14 crc kubenswrapper[4698]: I1014 10:17:14.538671 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/abbde2a0-bf38-441b-bac9-e9e3efe41cf2-config-data\") pod \"nova-api-0\" (UID: \"abbde2a0-bf38-441b-bac9-e9e3efe41cf2\") " pod="openstack/nova-api-0" Oct 14 10:17:14 crc kubenswrapper[4698]: I1014 10:17:14.650481 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/abbde2a0-bf38-441b-bac9-e9e3efe41cf2-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"abbde2a0-bf38-441b-bac9-e9e3efe41cf2\") " pod="openstack/nova-api-0" Oct 14 10:17:14 crc kubenswrapper[4698]: I1014 10:17:14.650532 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/abbde2a0-bf38-441b-bac9-e9e3efe41cf2-logs\") pod \"nova-api-0\" (UID: \"abbde2a0-bf38-441b-bac9-e9e3efe41cf2\") " pod="openstack/nova-api-0" Oct 14 10:17:14 crc kubenswrapper[4698]: I1014 10:17:14.650570 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k6xrl\" (UniqueName: \"kubernetes.io/projected/abbde2a0-bf38-441b-bac9-e9e3efe41cf2-kube-api-access-k6xrl\") pod \"nova-api-0\" (UID: \"abbde2a0-bf38-441b-bac9-e9e3efe41cf2\") " pod="openstack/nova-api-0" Oct 14 10:17:14 crc kubenswrapper[4698]: I1014 10:17:14.650810 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/abbde2a0-bf38-441b-bac9-e9e3efe41cf2-config-data\") pod \"nova-api-0\" (UID: \"abbde2a0-bf38-441b-bac9-e9e3efe41cf2\") " pod="openstack/nova-api-0" Oct 14 10:17:14 crc kubenswrapper[4698]: I1014 
10:17:14.651170 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/abbde2a0-bf38-441b-bac9-e9e3efe41cf2-logs\") pod \"nova-api-0\" (UID: \"abbde2a0-bf38-441b-bac9-e9e3efe41cf2\") " pod="openstack/nova-api-0" Oct 14 10:17:14 crc kubenswrapper[4698]: I1014 10:17:14.656316 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/abbde2a0-bf38-441b-bac9-e9e3efe41cf2-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"abbde2a0-bf38-441b-bac9-e9e3efe41cf2\") " pod="openstack/nova-api-0" Oct 14 10:17:14 crc kubenswrapper[4698]: I1014 10:17:14.656538 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/abbde2a0-bf38-441b-bac9-e9e3efe41cf2-config-data\") pod \"nova-api-0\" (UID: \"abbde2a0-bf38-441b-bac9-e9e3efe41cf2\") " pod="openstack/nova-api-0" Oct 14 10:17:14 crc kubenswrapper[4698]: I1014 10:17:14.666406 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k6xrl\" (UniqueName: \"kubernetes.io/projected/abbde2a0-bf38-441b-bac9-e9e3efe41cf2-kube-api-access-k6xrl\") pod \"nova-api-0\" (UID: \"abbde2a0-bf38-441b-bac9-e9e3efe41cf2\") " pod="openstack/nova-api-0" Oct 14 10:17:14 crc kubenswrapper[4698]: I1014 10:17:14.812294 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Oct 14 10:17:15 crc kubenswrapper[4698]: I1014 10:17:15.030977 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="79fd1ed1-67b9-4a55-906a-4ae0ec9a42c7" path="/var/lib/kubelet/pods/79fd1ed1-67b9-4a55-906a-4ae0ec9a42c7/volumes" Oct 14 10:17:15 crc kubenswrapper[4698]: I1014 10:17:15.032266 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dd0bc5b3-8722-4208-937a-e3b676267e9a" path="/var/lib/kubelet/pods/dd0bc5b3-8722-4208-937a-e3b676267e9a/volumes" Oct 14 10:17:15 crc kubenswrapper[4698]: I1014 10:17:15.052707 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Oct 14 10:17:15 crc kubenswrapper[4698]: I1014 10:17:15.052789 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Oct 14 10:17:15 crc kubenswrapper[4698]: I1014 10:17:15.264666 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Oct 14 10:17:15 crc kubenswrapper[4698]: I1014 10:17:15.426298 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"abbde2a0-bf38-441b-bac9-e9e3efe41cf2","Type":"ContainerStarted","Data":"b44fec9e5c40163fb18182f3e5ecf9dd6027ba1f6a91b35cc807601d4bb2e2c7"} Oct 14 10:17:15 crc kubenswrapper[4698]: I1014 10:17:15.429219 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"eca9c9b4-7969-459b-87e1-66966d94f354","Type":"ContainerStarted","Data":"d7fa515d6973d454755b6a4d4711cc0a7b41d911337ea9ff97a0f4a8f03c0b25"} Oct 14 10:17:15 crc kubenswrapper[4698]: I1014 10:17:15.451343 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.451324549 podStartE2EDuration="2.451324549s" podCreationTimestamp="2025-10-14 10:17:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 
00:00:00 +0000 UTC" observedRunningTime="2025-10-14 10:17:15.451235587 +0000 UTC m=+1217.148535013" watchObservedRunningTime="2025-10-14 10:17:15.451324549 +0000 UTC m=+1217.148623965" Oct 14 10:17:16 crc kubenswrapper[4698]: I1014 10:17:16.442571 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"abbde2a0-bf38-441b-bac9-e9e3efe41cf2","Type":"ContainerStarted","Data":"e84dca44b2b22b3ddb5080e8b8be59fc55a030f67609980bcd2e7dbd0f8e437c"} Oct 14 10:17:16 crc kubenswrapper[4698]: I1014 10:17:16.442918 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"abbde2a0-bf38-441b-bac9-e9e3efe41cf2","Type":"ContainerStarted","Data":"d47789081b487177c263a3ae323f29c1994eacc177751f75a2e8e2e7751a33ee"} Oct 14 10:17:16 crc kubenswrapper[4698]: I1014 10:17:16.471012 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.470988569 podStartE2EDuration="2.470988569s" podCreationTimestamp="2025-10-14 10:17:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 10:17:16.4622503 +0000 UTC m=+1218.159549726" watchObservedRunningTime="2025-10-14 10:17:16.470988569 +0000 UTC m=+1218.168287985" Oct 14 10:17:18 crc kubenswrapper[4698]: I1014 10:17:18.571479 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-conductor-0" Oct 14 10:17:18 crc kubenswrapper[4698]: I1014 10:17:18.768380 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Oct 14 10:17:20 crc kubenswrapper[4698]: I1014 10:17:20.052366 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Oct 14 10:17:20 crc kubenswrapper[4698]: I1014 10:17:20.052845 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openstack/nova-metadata-0" Oct 14 10:17:21 crc kubenswrapper[4698]: I1014 10:17:21.081016 4698 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="385a7b7c-0ec6-4d79-8c3c-91e78ab81c3e" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.212:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Oct 14 10:17:21 crc kubenswrapper[4698]: I1014 10:17:21.081042 4698 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="385a7b7c-0ec6-4d79-8c3c-91e78ab81c3e" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.212:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Oct 14 10:17:23 crc kubenswrapper[4698]: I1014 10:17:23.766994 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Oct 14 10:17:23 crc kubenswrapper[4698]: I1014 10:17:23.802882 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Oct 14 10:17:23 crc kubenswrapper[4698]: I1014 10:17:23.908484 4698 patch_prober.go:28] interesting pod/machine-config-daemon-lp4sk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 14 10:17:23 crc kubenswrapper[4698]: I1014 10:17:23.908844 4698 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-lp4sk" podUID="c359a8fc-1e2f-49af-8da2-719d52bd969a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 14 10:17:24 crc kubenswrapper[4698]: I1014 10:17:24.574677 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="ready" pod="openstack/nova-scheduler-0" Oct 14 10:17:24 crc kubenswrapper[4698]: I1014 10:17:24.813530 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Oct 14 10:17:24 crc kubenswrapper[4698]: I1014 10:17:24.813601 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Oct 14 10:17:25 crc kubenswrapper[4698]: I1014 10:17:25.855163 4698 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="abbde2a0-bf38-441b-bac9-e9e3efe41cf2" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.214:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Oct 14 10:17:25 crc kubenswrapper[4698]: I1014 10:17:25.855163 4698 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="abbde2a0-bf38-441b-bac9-e9e3efe41cf2" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.214:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Oct 14 10:17:30 crc kubenswrapper[4698]: I1014 10:17:30.060509 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Oct 14 10:17:30 crc kubenswrapper[4698]: I1014 10:17:30.068153 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Oct 14 10:17:30 crc kubenswrapper[4698]: I1014 10:17:30.072942 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Oct 14 10:17:30 crc kubenswrapper[4698]: I1014 10:17:30.605026 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Oct 14 10:17:32 crc kubenswrapper[4698]: I1014 10:17:32.381241 4698 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Oct 14 10:17:32 crc kubenswrapper[4698]: I1014 10:17:32.514318 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/26c376ac-2053-48a2-8762-754477dfbdff-combined-ca-bundle\") pod \"26c376ac-2053-48a2-8762-754477dfbdff\" (UID: \"26c376ac-2053-48a2-8762-754477dfbdff\") " Oct 14 10:17:32 crc kubenswrapper[4698]: I1014 10:17:32.515230 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6w5m6\" (UniqueName: \"kubernetes.io/projected/26c376ac-2053-48a2-8762-754477dfbdff-kube-api-access-6w5m6\") pod \"26c376ac-2053-48a2-8762-754477dfbdff\" (UID: \"26c376ac-2053-48a2-8762-754477dfbdff\") " Oct 14 10:17:32 crc kubenswrapper[4698]: I1014 10:17:32.515293 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/26c376ac-2053-48a2-8762-754477dfbdff-config-data\") pod \"26c376ac-2053-48a2-8762-754477dfbdff\" (UID: \"26c376ac-2053-48a2-8762-754477dfbdff\") " Oct 14 10:17:32 crc kubenswrapper[4698]: I1014 10:17:32.528608 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/26c376ac-2053-48a2-8762-754477dfbdff-kube-api-access-6w5m6" (OuterVolumeSpecName: "kube-api-access-6w5m6") pod "26c376ac-2053-48a2-8762-754477dfbdff" (UID: "26c376ac-2053-48a2-8762-754477dfbdff"). InnerVolumeSpecName "kube-api-access-6w5m6". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 10:17:32 crc kubenswrapper[4698]: I1014 10:17:32.552222 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/26c376ac-2053-48a2-8762-754477dfbdff-config-data" (OuterVolumeSpecName: "config-data") pod "26c376ac-2053-48a2-8762-754477dfbdff" (UID: "26c376ac-2053-48a2-8762-754477dfbdff"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 10:17:32 crc kubenswrapper[4698]: I1014 10:17:32.567695 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/26c376ac-2053-48a2-8762-754477dfbdff-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "26c376ac-2053-48a2-8762-754477dfbdff" (UID: "26c376ac-2053-48a2-8762-754477dfbdff"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 10:17:32 crc kubenswrapper[4698]: I1014 10:17:32.619198 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6w5m6\" (UniqueName: \"kubernetes.io/projected/26c376ac-2053-48a2-8762-754477dfbdff-kube-api-access-6w5m6\") on node \"crc\" DevicePath \"\"" Oct 14 10:17:32 crc kubenswrapper[4698]: I1014 10:17:32.619564 4698 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/26c376ac-2053-48a2-8762-754477dfbdff-config-data\") on node \"crc\" DevicePath \"\"" Oct 14 10:17:32 crc kubenswrapper[4698]: I1014 10:17:32.619574 4698 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/26c376ac-2053-48a2-8762-754477dfbdff-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 14 10:17:32 crc kubenswrapper[4698]: I1014 10:17:32.623810 4698 generic.go:334] "Generic (PLEG): container finished" podID="26c376ac-2053-48a2-8762-754477dfbdff" containerID="eec8b575921cb9f43f824df6ae1f964e53cbf06046f6330271131cf196f5c093" exitCode=137 Oct 14 10:17:32 crc kubenswrapper[4698]: I1014 10:17:32.624296 4698 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Oct 14 10:17:32 crc kubenswrapper[4698]: I1014 10:17:32.625530 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"26c376ac-2053-48a2-8762-754477dfbdff","Type":"ContainerDied","Data":"eec8b575921cb9f43f824df6ae1f964e53cbf06046f6330271131cf196f5c093"} Oct 14 10:17:32 crc kubenswrapper[4698]: I1014 10:17:32.625574 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"26c376ac-2053-48a2-8762-754477dfbdff","Type":"ContainerDied","Data":"609c1e43354225cbd2767fb012f7970aa59bb6e84158edc07113cbf0ce8231d4"} Oct 14 10:17:32 crc kubenswrapper[4698]: I1014 10:17:32.625597 4698 scope.go:117] "RemoveContainer" containerID="eec8b575921cb9f43f824df6ae1f964e53cbf06046f6330271131cf196f5c093" Oct 14 10:17:32 crc kubenswrapper[4698]: I1014 10:17:32.670131 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Oct 14 10:17:32 crc kubenswrapper[4698]: I1014 10:17:32.673555 4698 scope.go:117] "RemoveContainer" containerID="eec8b575921cb9f43f824df6ae1f964e53cbf06046f6330271131cf196f5c093" Oct 14 10:17:32 crc kubenswrapper[4698]: E1014 10:17:32.674129 4698 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"eec8b575921cb9f43f824df6ae1f964e53cbf06046f6330271131cf196f5c093\": container with ID starting with eec8b575921cb9f43f824df6ae1f964e53cbf06046f6330271131cf196f5c093 not found: ID does not exist" containerID="eec8b575921cb9f43f824df6ae1f964e53cbf06046f6330271131cf196f5c093" Oct 14 10:17:32 crc kubenswrapper[4698]: I1014 10:17:32.674179 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"eec8b575921cb9f43f824df6ae1f964e53cbf06046f6330271131cf196f5c093"} err="failed to get container status \"eec8b575921cb9f43f824df6ae1f964e53cbf06046f6330271131cf196f5c093\": 
rpc error: code = NotFound desc = could not find container \"eec8b575921cb9f43f824df6ae1f964e53cbf06046f6330271131cf196f5c093\": container with ID starting with eec8b575921cb9f43f824df6ae1f964e53cbf06046f6330271131cf196f5c093 not found: ID does not exist" Oct 14 10:17:32 crc kubenswrapper[4698]: I1014 10:17:32.688809 4698 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Oct 14 10:17:32 crc kubenswrapper[4698]: I1014 10:17:32.706123 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Oct 14 10:17:32 crc kubenswrapper[4698]: E1014 10:17:32.706998 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="26c376ac-2053-48a2-8762-754477dfbdff" containerName="nova-cell1-novncproxy-novncproxy" Oct 14 10:17:32 crc kubenswrapper[4698]: I1014 10:17:32.707034 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="26c376ac-2053-48a2-8762-754477dfbdff" containerName="nova-cell1-novncproxy-novncproxy" Oct 14 10:17:32 crc kubenswrapper[4698]: I1014 10:17:32.707409 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="26c376ac-2053-48a2-8762-754477dfbdff" containerName="nova-cell1-novncproxy-novncproxy" Oct 14 10:17:32 crc kubenswrapper[4698]: I1014 10:17:32.708661 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Oct 14 10:17:32 crc kubenswrapper[4698]: I1014 10:17:32.710978 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Oct 14 10:17:32 crc kubenswrapper[4698]: I1014 10:17:32.711729 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-vencrypt" Oct 14 10:17:32 crc kubenswrapper[4698]: I1014 10:17:32.712638 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-public-svc" Oct 14 10:17:32 crc kubenswrapper[4698]: I1014 10:17:32.735753 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Oct 14 10:17:32 crc kubenswrapper[4698]: I1014 10:17:32.824099 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5qt2t\" (UniqueName: \"kubernetes.io/projected/f114cc4a-8234-441d-926f-83ac36f9ff5b-kube-api-access-5qt2t\") pod \"nova-cell1-novncproxy-0\" (UID: \"f114cc4a-8234-441d-926f-83ac36f9ff5b\") " pod="openstack/nova-cell1-novncproxy-0" Oct 14 10:17:32 crc kubenswrapper[4698]: I1014 10:17:32.824234 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/f114cc4a-8234-441d-926f-83ac36f9ff5b-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"f114cc4a-8234-441d-926f-83ac36f9ff5b\") " pod="openstack/nova-cell1-novncproxy-0" Oct 14 10:17:32 crc kubenswrapper[4698]: I1014 10:17:32.824306 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/f114cc4a-8234-441d-926f-83ac36f9ff5b-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"f114cc4a-8234-441d-926f-83ac36f9ff5b\") " pod="openstack/nova-cell1-novncproxy-0" Oct 
14 10:17:32 crc kubenswrapper[4698]: I1014 10:17:32.824343 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f114cc4a-8234-441d-926f-83ac36f9ff5b-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"f114cc4a-8234-441d-926f-83ac36f9ff5b\") " pod="openstack/nova-cell1-novncproxy-0"
Oct 14 10:17:32 crc kubenswrapper[4698]: I1014 10:17:32.824811 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f114cc4a-8234-441d-926f-83ac36f9ff5b-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"f114cc4a-8234-441d-926f-83ac36f9ff5b\") " pod="openstack/nova-cell1-novncproxy-0"
Oct 14 10:17:32 crc kubenswrapper[4698]: I1014 10:17:32.926616 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/f114cc4a-8234-441d-926f-83ac36f9ff5b-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"f114cc4a-8234-441d-926f-83ac36f9ff5b\") " pod="openstack/nova-cell1-novncproxy-0"
Oct 14 10:17:32 crc kubenswrapper[4698]: I1014 10:17:32.926689 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f114cc4a-8234-441d-926f-83ac36f9ff5b-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"f114cc4a-8234-441d-926f-83ac36f9ff5b\") " pod="openstack/nova-cell1-novncproxy-0"
Oct 14 10:17:32 crc kubenswrapper[4698]: I1014 10:17:32.926788 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f114cc4a-8234-441d-926f-83ac36f9ff5b-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"f114cc4a-8234-441d-926f-83ac36f9ff5b\") " pod="openstack/nova-cell1-novncproxy-0"
Oct 14 10:17:32 crc kubenswrapper[4698]: I1014 10:17:32.926814 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5qt2t\" (UniqueName: \"kubernetes.io/projected/f114cc4a-8234-441d-926f-83ac36f9ff5b-kube-api-access-5qt2t\") pod \"nova-cell1-novncproxy-0\" (UID: \"f114cc4a-8234-441d-926f-83ac36f9ff5b\") " pod="openstack/nova-cell1-novncproxy-0"
Oct 14 10:17:32 crc kubenswrapper[4698]: I1014 10:17:32.926857 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/f114cc4a-8234-441d-926f-83ac36f9ff5b-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"f114cc4a-8234-441d-926f-83ac36f9ff5b\") " pod="openstack/nova-cell1-novncproxy-0"
Oct 14 10:17:32 crc kubenswrapper[4698]: I1014 10:17:32.932455 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/f114cc4a-8234-441d-926f-83ac36f9ff5b-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"f114cc4a-8234-441d-926f-83ac36f9ff5b\") " pod="openstack/nova-cell1-novncproxy-0"
Oct 14 10:17:32 crc kubenswrapper[4698]: I1014 10:17:32.933454 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/f114cc4a-8234-441d-926f-83ac36f9ff5b-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"f114cc4a-8234-441d-926f-83ac36f9ff5b\") " pod="openstack/nova-cell1-novncproxy-0"
Oct 14 10:17:32 crc kubenswrapper[4698]: I1014 10:17:32.934446 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f114cc4a-8234-441d-926f-83ac36f9ff5b-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"f114cc4a-8234-441d-926f-83ac36f9ff5b\") " pod="openstack/nova-cell1-novncproxy-0"
Oct 14 10:17:32 crc kubenswrapper[4698]: I1014 10:17:32.935485 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f114cc4a-8234-441d-926f-83ac36f9ff5b-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"f114cc4a-8234-441d-926f-83ac36f9ff5b\") " pod="openstack/nova-cell1-novncproxy-0"
Oct 14 10:17:32 crc kubenswrapper[4698]: I1014 10:17:32.947699 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5qt2t\" (UniqueName: \"kubernetes.io/projected/f114cc4a-8234-441d-926f-83ac36f9ff5b-kube-api-access-5qt2t\") pod \"nova-cell1-novncproxy-0\" (UID: \"f114cc4a-8234-441d-926f-83ac36f9ff5b\") " pod="openstack/nova-cell1-novncproxy-0"
Oct 14 10:17:33 crc kubenswrapper[4698]: I1014 10:17:33.032388 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0"
Oct 14 10:17:33 crc kubenswrapper[4698]: I1014 10:17:33.039917 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="26c376ac-2053-48a2-8762-754477dfbdff" path="/var/lib/kubelet/pods/26c376ac-2053-48a2-8762-754477dfbdff/volumes"
Oct 14 10:17:33 crc kubenswrapper[4698]: W1014 10:17:33.581723 4698 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf114cc4a_8234_441d_926f_83ac36f9ff5b.slice/crio-92926fba16b7cc9ce4795daaa36c0a5c51c7cdfa9721ffcc6f754121a2adfef2 WatchSource:0}: Error finding container 92926fba16b7cc9ce4795daaa36c0a5c51c7cdfa9721ffcc6f754121a2adfef2: Status 404 returned error can't find the container with id 92926fba16b7cc9ce4795daaa36c0a5c51c7cdfa9721ffcc6f754121a2adfef2
Oct 14 10:17:33 crc kubenswrapper[4698]: I1014 10:17:33.584552 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"]
Oct 14 10:17:33 crc kubenswrapper[4698]: I1014 10:17:33.638227 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"f114cc4a-8234-441d-926f-83ac36f9ff5b","Type":"ContainerStarted","Data":"92926fba16b7cc9ce4795daaa36c0a5c51c7cdfa9721ffcc6f754121a2adfef2"}
Oct 14 10:17:34 crc kubenswrapper[4698]: I1014 10:17:34.650981 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"f114cc4a-8234-441d-926f-83ac36f9ff5b","Type":"ContainerStarted","Data":"c75a6c5840c2169d6af3257e2356389a4e42053ff32d72e5d4569692cdb47c23"}
Oct 14 10:17:34 crc kubenswrapper[4698]: I1014 10:17:34.686755 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.686732451 podStartE2EDuration="2.686732451s" podCreationTimestamp="2025-10-14 10:17:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 10:17:34.68387326 +0000 UTC m=+1236.381172686" watchObservedRunningTime="2025-10-14 10:17:34.686732451 +0000 UTC m=+1236.384031867"
Oct 14 10:17:34 crc kubenswrapper[4698]: I1014 10:17:34.818325 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0"
Oct 14 10:17:34 crc kubenswrapper[4698]: I1014 10:17:34.819934 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0"
Oct 14 10:17:34 crc kubenswrapper[4698]: I1014 10:17:34.821704 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0"
Oct 14 10:17:34 crc kubenswrapper[4698]: I1014 10:17:34.821844 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0"
Oct 14 10:17:35 crc kubenswrapper[4698]: I1014 10:17:35.660095 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0"
Oct 14 10:17:35 crc kubenswrapper[4698]: I1014 10:17:35.663701 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0"
Oct 14 10:17:35 crc kubenswrapper[4698]: I1014 10:17:35.910045 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6559f4fbd7-rflwl"]
Oct 14 10:17:35 crc kubenswrapper[4698]: I1014 10:17:35.912367 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6559f4fbd7-rflwl"
Oct 14 10:17:35 crc kubenswrapper[4698]: I1014 10:17:35.946728 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6559f4fbd7-rflwl"]
Oct 14 10:17:35 crc kubenswrapper[4698]: I1014 10:17:35.996126 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6b55aeb5-b167-40ac-8e38-f4acf42352ef-dns-svc\") pod \"dnsmasq-dns-6559f4fbd7-rflwl\" (UID: \"6b55aeb5-b167-40ac-8e38-f4acf42352ef\") " pod="openstack/dnsmasq-dns-6559f4fbd7-rflwl"
Oct 14 10:17:35 crc kubenswrapper[4698]: I1014 10:17:35.996482 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dqrk9\" (UniqueName: \"kubernetes.io/projected/6b55aeb5-b167-40ac-8e38-f4acf42352ef-kube-api-access-dqrk9\") pod \"dnsmasq-dns-6559f4fbd7-rflwl\" (UID: \"6b55aeb5-b167-40ac-8e38-f4acf42352ef\") " pod="openstack/dnsmasq-dns-6559f4fbd7-rflwl"
Oct 14 10:17:35 crc kubenswrapper[4698]: I1014 10:17:35.996551 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/6b55aeb5-b167-40ac-8e38-f4acf42352ef-dns-swift-storage-0\") pod \"dnsmasq-dns-6559f4fbd7-rflwl\" (UID: \"6b55aeb5-b167-40ac-8e38-f4acf42352ef\") " pod="openstack/dnsmasq-dns-6559f4fbd7-rflwl"
Oct 14 10:17:35 crc kubenswrapper[4698]: I1014 10:17:35.996611 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6b55aeb5-b167-40ac-8e38-f4acf42352ef-config\") pod \"dnsmasq-dns-6559f4fbd7-rflwl\" (UID: \"6b55aeb5-b167-40ac-8e38-f4acf42352ef\") " pod="openstack/dnsmasq-dns-6559f4fbd7-rflwl"
Oct 14 10:17:35 crc kubenswrapper[4698]: I1014 10:17:35.996678 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6b55aeb5-b167-40ac-8e38-f4acf42352ef-ovsdbserver-nb\") pod \"dnsmasq-dns-6559f4fbd7-rflwl\" (UID: \"6b55aeb5-b167-40ac-8e38-f4acf42352ef\") " pod="openstack/dnsmasq-dns-6559f4fbd7-rflwl"
Oct 14 10:17:35 crc kubenswrapper[4698]: I1014 10:17:35.996696 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6b55aeb5-b167-40ac-8e38-f4acf42352ef-ovsdbserver-sb\") pod \"dnsmasq-dns-6559f4fbd7-rflwl\" (UID: \"6b55aeb5-b167-40ac-8e38-f4acf42352ef\") " pod="openstack/dnsmasq-dns-6559f4fbd7-rflwl"
Oct 14 10:17:36 crc kubenswrapper[4698]: I1014 10:17:36.098213 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/6b55aeb5-b167-40ac-8e38-f4acf42352ef-dns-swift-storage-0\") pod \"dnsmasq-dns-6559f4fbd7-rflwl\" (UID: \"6b55aeb5-b167-40ac-8e38-f4acf42352ef\") " pod="openstack/dnsmasq-dns-6559f4fbd7-rflwl"
Oct 14 10:17:36 crc kubenswrapper[4698]: I1014 10:17:36.098265 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6b55aeb5-b167-40ac-8e38-f4acf42352ef-config\") pod \"dnsmasq-dns-6559f4fbd7-rflwl\" (UID: \"6b55aeb5-b167-40ac-8e38-f4acf42352ef\") " pod="openstack/dnsmasq-dns-6559f4fbd7-rflwl"
Oct 14 10:17:36 crc kubenswrapper[4698]: I1014 10:17:36.098318 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6b55aeb5-b167-40ac-8e38-f4acf42352ef-ovsdbserver-nb\") pod \"dnsmasq-dns-6559f4fbd7-rflwl\" (UID: \"6b55aeb5-b167-40ac-8e38-f4acf42352ef\") " pod="openstack/dnsmasq-dns-6559f4fbd7-rflwl"
Oct 14 10:17:36 crc kubenswrapper[4698]: I1014 10:17:36.098362 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6b55aeb5-b167-40ac-8e38-f4acf42352ef-ovsdbserver-sb\") pod \"dnsmasq-dns-6559f4fbd7-rflwl\" (UID: \"6b55aeb5-b167-40ac-8e38-f4acf42352ef\") " pod="openstack/dnsmasq-dns-6559f4fbd7-rflwl"
Oct 14 10:17:36 crc kubenswrapper[4698]: I1014 10:17:36.098428 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6b55aeb5-b167-40ac-8e38-f4acf42352ef-dns-svc\") pod \"dnsmasq-dns-6559f4fbd7-rflwl\" (UID: \"6b55aeb5-b167-40ac-8e38-f4acf42352ef\") " pod="openstack/dnsmasq-dns-6559f4fbd7-rflwl"
Oct 14 10:17:36 crc kubenswrapper[4698]: I1014 10:17:36.098470 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dqrk9\" (UniqueName: \"kubernetes.io/projected/6b55aeb5-b167-40ac-8e38-f4acf42352ef-kube-api-access-dqrk9\") pod \"dnsmasq-dns-6559f4fbd7-rflwl\" (UID: \"6b55aeb5-b167-40ac-8e38-f4acf42352ef\") " pod="openstack/dnsmasq-dns-6559f4fbd7-rflwl"
Oct 14 10:17:36 crc kubenswrapper[4698]: I1014 10:17:36.099467 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/6b55aeb5-b167-40ac-8e38-f4acf42352ef-dns-swift-storage-0\") pod \"dnsmasq-dns-6559f4fbd7-rflwl\" (UID: \"6b55aeb5-b167-40ac-8e38-f4acf42352ef\") " pod="openstack/dnsmasq-dns-6559f4fbd7-rflwl"
Oct 14 10:17:36 crc kubenswrapper[4698]: I1014 10:17:36.099659 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6b55aeb5-b167-40ac-8e38-f4acf42352ef-ovsdbserver-nb\") pod \"dnsmasq-dns-6559f4fbd7-rflwl\" (UID: \"6b55aeb5-b167-40ac-8e38-f4acf42352ef\") " pod="openstack/dnsmasq-dns-6559f4fbd7-rflwl"
Oct 14 10:17:36 crc kubenswrapper[4698]: I1014 10:17:36.099676 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6b55aeb5-b167-40ac-8e38-f4acf42352ef-dns-svc\") pod \"dnsmasq-dns-6559f4fbd7-rflwl\" (UID: \"6b55aeb5-b167-40ac-8e38-f4acf42352ef\") " pod="openstack/dnsmasq-dns-6559f4fbd7-rflwl"
Oct 14 10:17:36 crc kubenswrapper[4698]: I1014 10:17:36.099954 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6b55aeb5-b167-40ac-8e38-f4acf42352ef-ovsdbserver-sb\") pod \"dnsmasq-dns-6559f4fbd7-rflwl\" (UID: \"6b55aeb5-b167-40ac-8e38-f4acf42352ef\") " pod="openstack/dnsmasq-dns-6559f4fbd7-rflwl"
Oct 14 10:17:36 crc kubenswrapper[4698]: I1014 10:17:36.100148 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6b55aeb5-b167-40ac-8e38-f4acf42352ef-config\") pod \"dnsmasq-dns-6559f4fbd7-rflwl\" (UID: \"6b55aeb5-b167-40ac-8e38-f4acf42352ef\") " pod="openstack/dnsmasq-dns-6559f4fbd7-rflwl"
Oct 14 10:17:36 crc kubenswrapper[4698]: I1014 10:17:36.118672 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dqrk9\" (UniqueName: \"kubernetes.io/projected/6b55aeb5-b167-40ac-8e38-f4acf42352ef-kube-api-access-dqrk9\") pod \"dnsmasq-dns-6559f4fbd7-rflwl\" (UID: \"6b55aeb5-b167-40ac-8e38-f4acf42352ef\") " pod="openstack/dnsmasq-dns-6559f4fbd7-rflwl"
Oct 14 10:17:36 crc kubenswrapper[4698]: I1014 10:17:36.250550 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6559f4fbd7-rflwl"
Oct 14 10:17:36 crc kubenswrapper[4698]: I1014 10:17:36.747687 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6559f4fbd7-rflwl"]
Oct 14 10:17:36 crc kubenswrapper[4698]: W1014 10:17:36.749952 4698 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6b55aeb5_b167_40ac_8e38_f4acf42352ef.slice/crio-a0299987b07aab6248bc9c9906e19eea7c3db0b3014abe1b7768f880db3a15a9 WatchSource:0}: Error finding container a0299987b07aab6248bc9c9906e19eea7c3db0b3014abe1b7768f880db3a15a9: Status 404 returned error can't find the container with id a0299987b07aab6248bc9c9906e19eea7c3db0b3014abe1b7768f880db3a15a9
Oct 14 10:17:37 crc kubenswrapper[4698]: I1014 10:17:37.526845 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0"
Oct 14 10:17:37 crc kubenswrapper[4698]: I1014 10:17:37.683234 4698 generic.go:334] "Generic (PLEG): container finished" podID="6b55aeb5-b167-40ac-8e38-f4acf42352ef" containerID="6647f935a6ecb36f4dd4cf5ef71be2163ff7d76eda5e648b92756dc97bef0595" exitCode=0
Oct 14 10:17:37 crc kubenswrapper[4698]: I1014 10:17:37.683495 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6559f4fbd7-rflwl" event={"ID":"6b55aeb5-b167-40ac-8e38-f4acf42352ef","Type":"ContainerDied","Data":"6647f935a6ecb36f4dd4cf5ef71be2163ff7d76eda5e648b92756dc97bef0595"}
Oct 14 10:17:37 crc kubenswrapper[4698]: I1014 10:17:37.683543 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6559f4fbd7-rflwl" event={"ID":"6b55aeb5-b167-40ac-8e38-f4acf42352ef","Type":"ContainerStarted","Data":"a0299987b07aab6248bc9c9906e19eea7c3db0b3014abe1b7768f880db3a15a9"}
Oct 14 10:17:37 crc kubenswrapper[4698]: I1014 10:17:37.916908 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Oct 14 10:17:37 crc kubenswrapper[4698]: I1014 10:17:37.917239 4698 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="aa1c19d3-dc91-4226-9267-8bebfeb5325c" containerName="ceilometer-central-agent" containerID="cri-o://308fa651c565b74ee1558a6de54d5d2bb3d29a3deb1747aead3492cdf133c5c4" gracePeriod=30
Oct 14 10:17:37 crc kubenswrapper[4698]: I1014 10:17:37.917276 4698 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="aa1c19d3-dc91-4226-9267-8bebfeb5325c" containerName="proxy-httpd" containerID="cri-o://c3f0570190b0bb4c7fd9b59341e2e2d809c4ea528d5264d93a0c779500067839" gracePeriod=30
Oct 14 10:17:37 crc kubenswrapper[4698]: I1014 10:17:37.917318 4698 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="aa1c19d3-dc91-4226-9267-8bebfeb5325c" containerName="sg-core" containerID="cri-o://77f6da376763b4ec53309eb5bb30efab4557b289b0f66326fa424c0e5b46c53d" gracePeriod=30
Oct 14 10:17:37 crc kubenswrapper[4698]: I1014 10:17:37.917328 4698 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="aa1c19d3-dc91-4226-9267-8bebfeb5325c" containerName="ceilometer-notification-agent" containerID="cri-o://da089eabfa61bede962c3ef889dddc8b452d9d1673a1bdc9c61c67ce4b933fb5" gracePeriod=30
Oct 14 10:17:38 crc kubenswrapper[4698]: I1014 10:17:38.033041 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0"
Oct 14 10:17:38 crc kubenswrapper[4698]: I1014 10:17:38.403919 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"]
Oct 14 10:17:38 crc kubenswrapper[4698]: I1014 10:17:38.696512 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6559f4fbd7-rflwl" event={"ID":"6b55aeb5-b167-40ac-8e38-f4acf42352ef","Type":"ContainerStarted","Data":"bcafaad872e6facf73811549a6bbdba0fcea9ba47e32d3cb85668b451ee112de"}
Oct 14 10:17:38 crc kubenswrapper[4698]: I1014 10:17:38.696713 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6559f4fbd7-rflwl"
Oct 14 10:17:38 crc kubenswrapper[4698]: I1014 10:17:38.700176 4698 generic.go:334] "Generic (PLEG): container finished" podID="aa1c19d3-dc91-4226-9267-8bebfeb5325c" containerID="c3f0570190b0bb4c7fd9b59341e2e2d809c4ea528d5264d93a0c779500067839" exitCode=0
Oct 14 10:17:38 crc kubenswrapper[4698]: I1014 10:17:38.700216 4698 generic.go:334] "Generic (PLEG): container finished" podID="aa1c19d3-dc91-4226-9267-8bebfeb5325c" containerID="77f6da376763b4ec53309eb5bb30efab4557b289b0f66326fa424c0e5b46c53d" exitCode=2
Oct 14 10:17:38 crc kubenswrapper[4698]: I1014 10:17:38.700227 4698 generic.go:334] "Generic (PLEG): container finished" podID="aa1c19d3-dc91-4226-9267-8bebfeb5325c" containerID="308fa651c565b74ee1558a6de54d5d2bb3d29a3deb1747aead3492cdf133c5c4" exitCode=0
Oct 14 10:17:38 crc kubenswrapper[4698]: I1014 10:17:38.700322 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"aa1c19d3-dc91-4226-9267-8bebfeb5325c","Type":"ContainerDied","Data":"c3f0570190b0bb4c7fd9b59341e2e2d809c4ea528d5264d93a0c779500067839"}
Oct 14 10:17:38 crc kubenswrapper[4698]: I1014 10:17:38.700378 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"aa1c19d3-dc91-4226-9267-8bebfeb5325c","Type":"ContainerDied","Data":"77f6da376763b4ec53309eb5bb30efab4557b289b0f66326fa424c0e5b46c53d"}
Oct 14 10:17:38 crc kubenswrapper[4698]: I1014 10:17:38.700394 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"aa1c19d3-dc91-4226-9267-8bebfeb5325c","Type":"ContainerDied","Data":"308fa651c565b74ee1558a6de54d5d2bb3d29a3deb1747aead3492cdf133c5c4"}
Oct 14 10:17:38 crc kubenswrapper[4698]: I1014 10:17:38.700432 4698 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="abbde2a0-bf38-441b-bac9-e9e3efe41cf2" containerName="nova-api-log" containerID="cri-o://d47789081b487177c263a3ae323f29c1994eacc177751f75a2e8e2e7751a33ee" gracePeriod=30
Oct 14 10:17:38 crc kubenswrapper[4698]: I1014 10:17:38.700499 4698 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="abbde2a0-bf38-441b-bac9-e9e3efe41cf2" containerName="nova-api-api" containerID="cri-o://e84dca44b2b22b3ddb5080e8b8be59fc55a030f67609980bcd2e7dbd0f8e437c" gracePeriod=30
Oct 14 10:17:38 crc kubenswrapper[4698]: I1014 10:17:38.724918 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6559f4fbd7-rflwl" podStartSLOduration=3.72489202 podStartE2EDuration="3.72489202s" podCreationTimestamp="2025-10-14 10:17:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 10:17:38.717009345 +0000 UTC m=+1240.414308781" watchObservedRunningTime="2025-10-14 10:17:38.72489202 +0000 UTC m=+1240.422191446"
Oct 14 10:17:39 crc kubenswrapper[4698]: I1014 10:17:39.710879 4698 generic.go:334] "Generic (PLEG): container finished" podID="abbde2a0-bf38-441b-bac9-e9e3efe41cf2" containerID="d47789081b487177c263a3ae323f29c1994eacc177751f75a2e8e2e7751a33ee" exitCode=143
Oct 14 10:17:39 crc kubenswrapper[4698]: I1014 10:17:39.710939 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"abbde2a0-bf38-441b-bac9-e9e3efe41cf2","Type":"ContainerDied","Data":"d47789081b487177c263a3ae323f29c1994eacc177751f75a2e8e2e7751a33ee"}
Oct 14 10:17:41 crc kubenswrapper[4698]: I1014 10:17:41.648725 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Oct 14 10:17:41 crc kubenswrapper[4698]: I1014 10:17:41.732425 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/aa1c19d3-dc91-4226-9267-8bebfeb5325c-run-httpd\") pod \"aa1c19d3-dc91-4226-9267-8bebfeb5325c\" (UID: \"aa1c19d3-dc91-4226-9267-8bebfeb5325c\") "
Oct 14 10:17:41 crc kubenswrapper[4698]: I1014 10:17:41.732545 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7nj74\" (UniqueName: \"kubernetes.io/projected/aa1c19d3-dc91-4226-9267-8bebfeb5325c-kube-api-access-7nj74\") pod \"aa1c19d3-dc91-4226-9267-8bebfeb5325c\" (UID: \"aa1c19d3-dc91-4226-9267-8bebfeb5325c\") "
Oct 14 10:17:41 crc kubenswrapper[4698]: I1014 10:17:41.732662 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/aa1c19d3-dc91-4226-9267-8bebfeb5325c-ceilometer-tls-certs\") pod \"aa1c19d3-dc91-4226-9267-8bebfeb5325c\" (UID: \"aa1c19d3-dc91-4226-9267-8bebfeb5325c\") "
Oct 14 10:17:41 crc kubenswrapper[4698]: I1014 10:17:41.732683 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/aa1c19d3-dc91-4226-9267-8bebfeb5325c-scripts\") pod \"aa1c19d3-dc91-4226-9267-8bebfeb5325c\" (UID: \"aa1c19d3-dc91-4226-9267-8bebfeb5325c\") "
Oct 14 10:17:41 crc kubenswrapper[4698]: I1014 10:17:41.732712 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aa1c19d3-dc91-4226-9267-8bebfeb5325c-combined-ca-bundle\") pod \"aa1c19d3-dc91-4226-9267-8bebfeb5325c\" (UID: \"aa1c19d3-dc91-4226-9267-8bebfeb5325c\") "
Oct 14 10:17:41 crc kubenswrapper[4698]: I1014 10:17:41.732945 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/aa1c19d3-dc91-4226-9267-8bebfeb5325c-log-httpd\") pod \"aa1c19d3-dc91-4226-9267-8bebfeb5325c\" (UID: \"aa1c19d3-dc91-4226-9267-8bebfeb5325c\") "
Oct 14 10:17:41 crc kubenswrapper[4698]: I1014 10:17:41.733162 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/aa1c19d3-dc91-4226-9267-8bebfeb5325c-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "aa1c19d3-dc91-4226-9267-8bebfeb5325c" (UID: "aa1c19d3-dc91-4226-9267-8bebfeb5325c"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Oct 14 10:17:41 crc kubenswrapper[4698]: I1014 10:17:41.734861 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/aa1c19d3-dc91-4226-9267-8bebfeb5325c-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "aa1c19d3-dc91-4226-9267-8bebfeb5325c" (UID: "aa1c19d3-dc91-4226-9267-8bebfeb5325c"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Oct 14 10:17:41 crc kubenswrapper[4698]: I1014 10:17:41.735118 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/aa1c19d3-dc91-4226-9267-8bebfeb5325c-sg-core-conf-yaml\") pod \"aa1c19d3-dc91-4226-9267-8bebfeb5325c\" (UID: \"aa1c19d3-dc91-4226-9267-8bebfeb5325c\") "
Oct 14 10:17:41 crc kubenswrapper[4698]: I1014 10:17:41.735165 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aa1c19d3-dc91-4226-9267-8bebfeb5325c-config-data\") pod \"aa1c19d3-dc91-4226-9267-8bebfeb5325c\" (UID: \"aa1c19d3-dc91-4226-9267-8bebfeb5325c\") "
Oct 14 10:17:41 crc kubenswrapper[4698]: I1014 10:17:41.736413 4698 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/aa1c19d3-dc91-4226-9267-8bebfeb5325c-run-httpd\") on node \"crc\" DevicePath \"\""
Oct 14 10:17:41 crc kubenswrapper[4698]: I1014 10:17:41.736430 4698 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/aa1c19d3-dc91-4226-9267-8bebfeb5325c-log-httpd\") on node \"crc\" DevicePath \"\""
Oct 14 10:17:41 crc kubenswrapper[4698]: I1014 10:17:41.740185 4698 generic.go:334] "Generic (PLEG): container finished" podID="aa1c19d3-dc91-4226-9267-8bebfeb5325c" containerID="da089eabfa61bede962c3ef889dddc8b452d9d1673a1bdc9c61c67ce4b933fb5" exitCode=0
Oct 14 10:17:41 crc kubenswrapper[4698]: I1014 10:17:41.740233 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"aa1c19d3-dc91-4226-9267-8bebfeb5325c","Type":"ContainerDied","Data":"da089eabfa61bede962c3ef889dddc8b452d9d1673a1bdc9c61c67ce4b933fb5"}
Oct 14 10:17:41 crc kubenswrapper[4698]: I1014 10:17:41.740269 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"aa1c19d3-dc91-4226-9267-8bebfeb5325c","Type":"ContainerDied","Data":"9b5ebfaef9e9fdc0ef98b94e2b01ff3a8df31c15ebef0350663040477cddfbf8"}
Oct 14 10:17:41 crc kubenswrapper[4698]: I1014 10:17:41.740288 4698 scope.go:117] "RemoveContainer" containerID="c3f0570190b0bb4c7fd9b59341e2e2d809c4ea528d5264d93a0c779500067839"
Oct 14 10:17:41 crc kubenswrapper[4698]: I1014 10:17:41.740386 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aa1c19d3-dc91-4226-9267-8bebfeb5325c-kube-api-access-7nj74" (OuterVolumeSpecName: "kube-api-access-7nj74") pod "aa1c19d3-dc91-4226-9267-8bebfeb5325c" (UID: "aa1c19d3-dc91-4226-9267-8bebfeb5325c"). InnerVolumeSpecName "kube-api-access-7nj74". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 14 10:17:41 crc kubenswrapper[4698]: I1014 10:17:41.740446 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Oct 14 10:17:41 crc kubenswrapper[4698]: I1014 10:17:41.741682 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aa1c19d3-dc91-4226-9267-8bebfeb5325c-scripts" (OuterVolumeSpecName: "scripts") pod "aa1c19d3-dc91-4226-9267-8bebfeb5325c" (UID: "aa1c19d3-dc91-4226-9267-8bebfeb5325c"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 14 10:17:41 crc kubenswrapper[4698]: I1014 10:17:41.770982 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aa1c19d3-dc91-4226-9267-8bebfeb5325c-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "aa1c19d3-dc91-4226-9267-8bebfeb5325c" (UID: "aa1c19d3-dc91-4226-9267-8bebfeb5325c"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 14 10:17:41 crc kubenswrapper[4698]: I1014 10:17:41.805405 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aa1c19d3-dc91-4226-9267-8bebfeb5325c-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "aa1c19d3-dc91-4226-9267-8bebfeb5325c" (UID: "aa1c19d3-dc91-4226-9267-8bebfeb5325c"). InnerVolumeSpecName "ceilometer-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 14 10:17:41 crc kubenswrapper[4698]: I1014 10:17:41.833247 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aa1c19d3-dc91-4226-9267-8bebfeb5325c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "aa1c19d3-dc91-4226-9267-8bebfeb5325c" (UID: "aa1c19d3-dc91-4226-9267-8bebfeb5325c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 14 10:17:41 crc kubenswrapper[4698]: I1014 10:17:41.838203 4698 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/aa1c19d3-dc91-4226-9267-8bebfeb5325c-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\""
Oct 14 10:17:41 crc kubenswrapper[4698]: I1014 10:17:41.838233 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7nj74\" (UniqueName: \"kubernetes.io/projected/aa1c19d3-dc91-4226-9267-8bebfeb5325c-kube-api-access-7nj74\") on node \"crc\" DevicePath \"\""
Oct 14 10:17:41 crc kubenswrapper[4698]: I1014 10:17:41.838245 4698 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/aa1c19d3-dc91-4226-9267-8bebfeb5325c-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\""
Oct 14 10:17:41 crc kubenswrapper[4698]: I1014 10:17:41.838254 4698 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/aa1c19d3-dc91-4226-9267-8bebfeb5325c-scripts\") on node \"crc\" DevicePath \"\""
Oct 14 10:17:41 crc kubenswrapper[4698]: I1014 10:17:41.838262 4698 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aa1c19d3-dc91-4226-9267-8bebfeb5325c-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Oct 14 10:17:41 crc kubenswrapper[4698]: I1014 10:17:41.853960 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aa1c19d3-dc91-4226-9267-8bebfeb5325c-config-data" (OuterVolumeSpecName: "config-data") pod "aa1c19d3-dc91-4226-9267-8bebfeb5325c" (UID: "aa1c19d3-dc91-4226-9267-8bebfeb5325c"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 14 10:17:41 crc kubenswrapper[4698]: I1014 10:17:41.856658 4698 scope.go:117] "RemoveContainer" containerID="77f6da376763b4ec53309eb5bb30efab4557b289b0f66326fa424c0e5b46c53d"
Oct 14 10:17:41 crc kubenswrapper[4698]: I1014 10:17:41.940744 4698 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aa1c19d3-dc91-4226-9267-8bebfeb5325c-config-data\") on node \"crc\" DevicePath \"\""
Oct 14 10:17:41 crc kubenswrapper[4698]: I1014 10:17:41.943051 4698 scope.go:117] "RemoveContainer" containerID="da089eabfa61bede962c3ef889dddc8b452d9d1673a1bdc9c61c67ce4b933fb5"
Oct 14 10:17:41 crc kubenswrapper[4698]: I1014 10:17:41.977872 4698 scope.go:117] "RemoveContainer" containerID="308fa651c565b74ee1558a6de54d5d2bb3d29a3deb1747aead3492cdf133c5c4"
Oct 14 10:17:42 crc kubenswrapper[4698]: I1014 10:17:42.011475 4698 scope.go:117] "RemoveContainer" containerID="c3f0570190b0bb4c7fd9b59341e2e2d809c4ea528d5264d93a0c779500067839"
Oct 14 10:17:42 crc kubenswrapper[4698]: E1014 10:17:42.011993 4698 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c3f0570190b0bb4c7fd9b59341e2e2d809c4ea528d5264d93a0c779500067839\": container with ID starting with c3f0570190b0bb4c7fd9b59341e2e2d809c4ea528d5264d93a0c779500067839 not found: ID does not exist" containerID="c3f0570190b0bb4c7fd9b59341e2e2d809c4ea528d5264d93a0c779500067839"
Oct 14 10:17:42 crc kubenswrapper[4698]: I1014 10:17:42.012029 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c3f0570190b0bb4c7fd9b59341e2e2d809c4ea528d5264d93a0c779500067839"} err="failed to get container status \"c3f0570190b0bb4c7fd9b59341e2e2d809c4ea528d5264d93a0c779500067839\": rpc error: code = NotFound desc = could not find container \"c3f0570190b0bb4c7fd9b59341e2e2d809c4ea528d5264d93a0c779500067839\": container with ID starting with c3f0570190b0bb4c7fd9b59341e2e2d809c4ea528d5264d93a0c779500067839 not found: ID does not exist"
Oct 14 10:17:42 crc kubenswrapper[4698]: I1014 10:17:42.012051 4698 scope.go:117] "RemoveContainer" containerID="77f6da376763b4ec53309eb5bb30efab4557b289b0f66326fa424c0e5b46c53d"
Oct 14 10:17:42 crc kubenswrapper[4698]: E1014 10:17:42.012570 4698 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"77f6da376763b4ec53309eb5bb30efab4557b289b0f66326fa424c0e5b46c53d\": container with ID starting with 77f6da376763b4ec53309eb5bb30efab4557b289b0f66326fa424c0e5b46c53d not found: ID does not exist" containerID="77f6da376763b4ec53309eb5bb30efab4557b289b0f66326fa424c0e5b46c53d"
Oct 14 10:17:42 crc kubenswrapper[4698]: I1014 10:17:42.012608 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"77f6da376763b4ec53309eb5bb30efab4557b289b0f66326fa424c0e5b46c53d"} err="failed to get container status \"77f6da376763b4ec53309eb5bb30efab4557b289b0f66326fa424c0e5b46c53d\": rpc error: code = NotFound desc = could not find container \"77f6da376763b4ec53309eb5bb30efab4557b289b0f66326fa424c0e5b46c53d\": container with ID starting with 77f6da376763b4ec53309eb5bb30efab4557b289b0f66326fa424c0e5b46c53d not found: ID does not exist"
Oct 14 10:17:42 crc kubenswrapper[4698]: I1014 10:17:42.012620 4698 scope.go:117] "RemoveContainer" containerID="da089eabfa61bede962c3ef889dddc8b452d9d1673a1bdc9c61c67ce4b933fb5"
Oct 14 10:17:42 crc kubenswrapper[4698]: E1014 10:17:42.012890 4698 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"da089eabfa61bede962c3ef889dddc8b452d9d1673a1bdc9c61c67ce4b933fb5\": container with ID starting with da089eabfa61bede962c3ef889dddc8b452d9d1673a1bdc9c61c67ce4b933fb5 not found: ID does not exist" containerID="da089eabfa61bede962c3ef889dddc8b452d9d1673a1bdc9c61c67ce4b933fb5"
Oct 14 10:17:42 crc kubenswrapper[4698]: I1014 10:17:42.012915 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"da089eabfa61bede962c3ef889dddc8b452d9d1673a1bdc9c61c67ce4b933fb5"} err="failed to get container status \"da089eabfa61bede962c3ef889dddc8b452d9d1673a1bdc9c61c67ce4b933fb5\": rpc error: code = NotFound desc = could not find container \"da089eabfa61bede962c3ef889dddc8b452d9d1673a1bdc9c61c67ce4b933fb5\": container with ID starting with da089eabfa61bede962c3ef889dddc8b452d9d1673a1bdc9c61c67ce4b933fb5 not found: ID does not exist"
Oct 14 10:17:42 crc kubenswrapper[4698]: I1014 10:17:42.012932 4698 scope.go:117] "RemoveContainer" containerID="308fa651c565b74ee1558a6de54d5d2bb3d29a3deb1747aead3492cdf133c5c4"
Oct 14 10:17:42 crc kubenswrapper[4698]: E1014 10:17:42.013239 4698 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"308fa651c565b74ee1558a6de54d5d2bb3d29a3deb1747aead3492cdf133c5c4\": container with ID starting with 308fa651c565b74ee1558a6de54d5d2bb3d29a3deb1747aead3492cdf133c5c4 not found: ID does not exist" containerID="308fa651c565b74ee1558a6de54d5d2bb3d29a3deb1747aead3492cdf133c5c4"
Oct 14 10:17:42 crc kubenswrapper[4698]: I1014 10:17:42.013293 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"308fa651c565b74ee1558a6de54d5d2bb3d29a3deb1747aead3492cdf133c5c4"} err="failed to get container status \"308fa651c565b74ee1558a6de54d5d2bb3d29a3deb1747aead3492cdf133c5c4\": rpc error: code = NotFound desc = could not find container \"308fa651c565b74ee1558a6de54d5d2bb3d29a3deb1747aead3492cdf133c5c4\": container with ID starting with 308fa651c565b74ee1558a6de54d5d2bb3d29a3deb1747aead3492cdf133c5c4 not found: ID does not exist"
Oct 14 10:17:42 crc kubenswrapper[4698]: I1014 10:17:42.090788 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Oct 14 10:17:42 crc kubenswrapper[4698]:
I1014 10:17:42.114007 4698 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Oct 14 10:17:42 crc kubenswrapper[4698]: I1014 10:17:42.129476 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Oct 14 10:17:42 crc kubenswrapper[4698]: E1014 10:17:42.130031 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aa1c19d3-dc91-4226-9267-8bebfeb5325c" containerName="proxy-httpd" Oct 14 10:17:42 crc kubenswrapper[4698]: I1014 10:17:42.130049 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="aa1c19d3-dc91-4226-9267-8bebfeb5325c" containerName="proxy-httpd" Oct 14 10:17:42 crc kubenswrapper[4698]: E1014 10:17:42.130061 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aa1c19d3-dc91-4226-9267-8bebfeb5325c" containerName="ceilometer-central-agent" Oct 14 10:17:42 crc kubenswrapper[4698]: I1014 10:17:42.130068 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="aa1c19d3-dc91-4226-9267-8bebfeb5325c" containerName="ceilometer-central-agent" Oct 14 10:17:42 crc kubenswrapper[4698]: E1014 10:17:42.130094 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aa1c19d3-dc91-4226-9267-8bebfeb5325c" containerName="sg-core" Oct 14 10:17:42 crc kubenswrapper[4698]: I1014 10:17:42.130100 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="aa1c19d3-dc91-4226-9267-8bebfeb5325c" containerName="sg-core" Oct 14 10:17:42 crc kubenswrapper[4698]: E1014 10:17:42.130115 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aa1c19d3-dc91-4226-9267-8bebfeb5325c" containerName="ceilometer-notification-agent" Oct 14 10:17:42 crc kubenswrapper[4698]: I1014 10:17:42.130121 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="aa1c19d3-dc91-4226-9267-8bebfeb5325c" containerName="ceilometer-notification-agent" Oct 14 10:17:42 crc kubenswrapper[4698]: I1014 10:17:42.130330 4698 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="aa1c19d3-dc91-4226-9267-8bebfeb5325c" containerName="sg-core" Oct 14 10:17:42 crc kubenswrapper[4698]: I1014 10:17:42.130348 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="aa1c19d3-dc91-4226-9267-8bebfeb5325c" containerName="ceilometer-central-agent" Oct 14 10:17:42 crc kubenswrapper[4698]: I1014 10:17:42.130361 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="aa1c19d3-dc91-4226-9267-8bebfeb5325c" containerName="proxy-httpd" Oct 14 10:17:42 crc kubenswrapper[4698]: I1014 10:17:42.130377 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="aa1c19d3-dc91-4226-9267-8bebfeb5325c" containerName="ceilometer-notification-agent" Oct 14 10:17:42 crc kubenswrapper[4698]: I1014 10:17:42.132414 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Oct 14 10:17:42 crc kubenswrapper[4698]: I1014 10:17:42.135128 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Oct 14 10:17:42 crc kubenswrapper[4698]: I1014 10:17:42.135895 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Oct 14 10:17:42 crc kubenswrapper[4698]: I1014 10:17:42.136015 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Oct 14 10:17:42 crc kubenswrapper[4698]: I1014 10:17:42.144325 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Oct 14 10:17:42 crc kubenswrapper[4698]: I1014 10:17:42.247383 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/ea396a85-5a42-41f7-a75c-1aca7fc4dd37-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"ea396a85-5a42-41f7-a75c-1aca7fc4dd37\") " pod="openstack/ceilometer-0" Oct 14 10:17:42 crc kubenswrapper[4698]: I1014 10:17:42.247429 4698 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8tkcn\" (UniqueName: \"kubernetes.io/projected/ea396a85-5a42-41f7-a75c-1aca7fc4dd37-kube-api-access-8tkcn\") pod \"ceilometer-0\" (UID: \"ea396a85-5a42-41f7-a75c-1aca7fc4dd37\") " pod="openstack/ceilometer-0" Oct 14 10:17:42 crc kubenswrapper[4698]: I1014 10:17:42.247722 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ea396a85-5a42-41f7-a75c-1aca7fc4dd37-log-httpd\") pod \"ceilometer-0\" (UID: \"ea396a85-5a42-41f7-a75c-1aca7fc4dd37\") " pod="openstack/ceilometer-0" Oct 14 10:17:42 crc kubenswrapper[4698]: I1014 10:17:42.247937 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ea396a85-5a42-41f7-a75c-1aca7fc4dd37-config-data\") pod \"ceilometer-0\" (UID: \"ea396a85-5a42-41f7-a75c-1aca7fc4dd37\") " pod="openstack/ceilometer-0" Oct 14 10:17:42 crc kubenswrapper[4698]: I1014 10:17:42.248216 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ea396a85-5a42-41f7-a75c-1aca7fc4dd37-scripts\") pod \"ceilometer-0\" (UID: \"ea396a85-5a42-41f7-a75c-1aca7fc4dd37\") " pod="openstack/ceilometer-0" Oct 14 10:17:42 crc kubenswrapper[4698]: I1014 10:17:42.248445 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ea396a85-5a42-41f7-a75c-1aca7fc4dd37-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"ea396a85-5a42-41f7-a75c-1aca7fc4dd37\") " pod="openstack/ceilometer-0" Oct 14 10:17:42 crc kubenswrapper[4698]: I1014 10:17:42.248469 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/ea396a85-5a42-41f7-a75c-1aca7fc4dd37-run-httpd\") pod \"ceilometer-0\" (UID: \"ea396a85-5a42-41f7-a75c-1aca7fc4dd37\") " pod="openstack/ceilometer-0" Oct 14 10:17:42 crc kubenswrapper[4698]: I1014 10:17:42.248495 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ea396a85-5a42-41f7-a75c-1aca7fc4dd37-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"ea396a85-5a42-41f7-a75c-1aca7fc4dd37\") " pod="openstack/ceilometer-0" Oct 14 10:17:42 crc kubenswrapper[4698]: I1014 10:17:42.308854 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Oct 14 10:17:42 crc kubenswrapper[4698]: I1014 10:17:42.351149 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/ea396a85-5a42-41f7-a75c-1aca7fc4dd37-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"ea396a85-5a42-41f7-a75c-1aca7fc4dd37\") " pod="openstack/ceilometer-0" Oct 14 10:17:42 crc kubenswrapper[4698]: I1014 10:17:42.351214 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8tkcn\" (UniqueName: \"kubernetes.io/projected/ea396a85-5a42-41f7-a75c-1aca7fc4dd37-kube-api-access-8tkcn\") pod \"ceilometer-0\" (UID: \"ea396a85-5a42-41f7-a75c-1aca7fc4dd37\") " pod="openstack/ceilometer-0" Oct 14 10:17:42 crc kubenswrapper[4698]: I1014 10:17:42.351285 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ea396a85-5a42-41f7-a75c-1aca7fc4dd37-log-httpd\") pod \"ceilometer-0\" (UID: \"ea396a85-5a42-41f7-a75c-1aca7fc4dd37\") " pod="openstack/ceilometer-0" Oct 14 10:17:42 crc kubenswrapper[4698]: I1014 10:17:42.351331 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/ea396a85-5a42-41f7-a75c-1aca7fc4dd37-config-data\") pod \"ceilometer-0\" (UID: \"ea396a85-5a42-41f7-a75c-1aca7fc4dd37\") " pod="openstack/ceilometer-0" Oct 14 10:17:42 crc kubenswrapper[4698]: I1014 10:17:42.351383 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ea396a85-5a42-41f7-a75c-1aca7fc4dd37-scripts\") pod \"ceilometer-0\" (UID: \"ea396a85-5a42-41f7-a75c-1aca7fc4dd37\") " pod="openstack/ceilometer-0" Oct 14 10:17:42 crc kubenswrapper[4698]: I1014 10:17:42.351472 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ea396a85-5a42-41f7-a75c-1aca7fc4dd37-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"ea396a85-5a42-41f7-a75c-1aca7fc4dd37\") " pod="openstack/ceilometer-0" Oct 14 10:17:42 crc kubenswrapper[4698]: I1014 10:17:42.351493 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ea396a85-5a42-41f7-a75c-1aca7fc4dd37-run-httpd\") pod \"ceilometer-0\" (UID: \"ea396a85-5a42-41f7-a75c-1aca7fc4dd37\") " pod="openstack/ceilometer-0" Oct 14 10:17:42 crc kubenswrapper[4698]: I1014 10:17:42.351514 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ea396a85-5a42-41f7-a75c-1aca7fc4dd37-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"ea396a85-5a42-41f7-a75c-1aca7fc4dd37\") " pod="openstack/ceilometer-0" Oct 14 10:17:42 crc kubenswrapper[4698]: I1014 10:17:42.352938 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ea396a85-5a42-41f7-a75c-1aca7fc4dd37-run-httpd\") pod \"ceilometer-0\" (UID: \"ea396a85-5a42-41f7-a75c-1aca7fc4dd37\") " pod="openstack/ceilometer-0" Oct 14 10:17:42 crc kubenswrapper[4698]: I1014 10:17:42.353021 4698 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ea396a85-5a42-41f7-a75c-1aca7fc4dd37-log-httpd\") pod \"ceilometer-0\" (UID: \"ea396a85-5a42-41f7-a75c-1aca7fc4dd37\") " pod="openstack/ceilometer-0" Oct 14 10:17:42 crc kubenswrapper[4698]: I1014 10:17:42.360979 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ea396a85-5a42-41f7-a75c-1aca7fc4dd37-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"ea396a85-5a42-41f7-a75c-1aca7fc4dd37\") " pod="openstack/ceilometer-0" Oct 14 10:17:42 crc kubenswrapper[4698]: I1014 10:17:42.364055 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ea396a85-5a42-41f7-a75c-1aca7fc4dd37-config-data\") pod \"ceilometer-0\" (UID: \"ea396a85-5a42-41f7-a75c-1aca7fc4dd37\") " pod="openstack/ceilometer-0" Oct 14 10:17:42 crc kubenswrapper[4698]: I1014 10:17:42.375605 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ea396a85-5a42-41f7-a75c-1aca7fc4dd37-scripts\") pod \"ceilometer-0\" (UID: \"ea396a85-5a42-41f7-a75c-1aca7fc4dd37\") " pod="openstack/ceilometer-0" Oct 14 10:17:42 crc kubenswrapper[4698]: I1014 10:17:42.377959 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/ea396a85-5a42-41f7-a75c-1aca7fc4dd37-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"ea396a85-5a42-41f7-a75c-1aca7fc4dd37\") " pod="openstack/ceilometer-0" Oct 14 10:17:42 crc kubenswrapper[4698]: I1014 10:17:42.380401 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8tkcn\" (UniqueName: \"kubernetes.io/projected/ea396a85-5a42-41f7-a75c-1aca7fc4dd37-kube-api-access-8tkcn\") pod \"ceilometer-0\" (UID: \"ea396a85-5a42-41f7-a75c-1aca7fc4dd37\") " 
pod="openstack/ceilometer-0" Oct 14 10:17:42 crc kubenswrapper[4698]: I1014 10:17:42.384442 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ea396a85-5a42-41f7-a75c-1aca7fc4dd37-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"ea396a85-5a42-41f7-a75c-1aca7fc4dd37\") " pod="openstack/ceilometer-0" Oct 14 10:17:42 crc kubenswrapper[4698]: I1014 10:17:42.452490 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/abbde2a0-bf38-441b-bac9-e9e3efe41cf2-combined-ca-bundle\") pod \"abbde2a0-bf38-441b-bac9-e9e3efe41cf2\" (UID: \"abbde2a0-bf38-441b-bac9-e9e3efe41cf2\") " Oct 14 10:17:42 crc kubenswrapper[4698]: I1014 10:17:42.453631 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/abbde2a0-bf38-441b-bac9-e9e3efe41cf2-logs\") pod \"abbde2a0-bf38-441b-bac9-e9e3efe41cf2\" (UID: \"abbde2a0-bf38-441b-bac9-e9e3efe41cf2\") " Oct 14 10:17:42 crc kubenswrapper[4698]: I1014 10:17:42.453718 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/abbde2a0-bf38-441b-bac9-e9e3efe41cf2-config-data\") pod \"abbde2a0-bf38-441b-bac9-e9e3efe41cf2\" (UID: \"abbde2a0-bf38-441b-bac9-e9e3efe41cf2\") " Oct 14 10:17:42 crc kubenswrapper[4698]: I1014 10:17:42.453800 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k6xrl\" (UniqueName: \"kubernetes.io/projected/abbde2a0-bf38-441b-bac9-e9e3efe41cf2-kube-api-access-k6xrl\") pod \"abbde2a0-bf38-441b-bac9-e9e3efe41cf2\" (UID: \"abbde2a0-bf38-441b-bac9-e9e3efe41cf2\") " Oct 14 10:17:42 crc kubenswrapper[4698]: I1014 10:17:42.454942 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/abbde2a0-bf38-441b-bac9-e9e3efe41cf2-logs" 
(OuterVolumeSpecName: "logs") pod "abbde2a0-bf38-441b-bac9-e9e3efe41cf2" (UID: "abbde2a0-bf38-441b-bac9-e9e3efe41cf2"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 10:17:42 crc kubenswrapper[4698]: I1014 10:17:42.457687 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/abbde2a0-bf38-441b-bac9-e9e3efe41cf2-kube-api-access-k6xrl" (OuterVolumeSpecName: "kube-api-access-k6xrl") pod "abbde2a0-bf38-441b-bac9-e9e3efe41cf2" (UID: "abbde2a0-bf38-441b-bac9-e9e3efe41cf2"). InnerVolumeSpecName "kube-api-access-k6xrl". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 10:17:42 crc kubenswrapper[4698]: I1014 10:17:42.459339 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Oct 14 10:17:42 crc kubenswrapper[4698]: I1014 10:17:42.487054 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/abbde2a0-bf38-441b-bac9-e9e3efe41cf2-config-data" (OuterVolumeSpecName: "config-data") pod "abbde2a0-bf38-441b-bac9-e9e3efe41cf2" (UID: "abbde2a0-bf38-441b-bac9-e9e3efe41cf2"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 10:17:42 crc kubenswrapper[4698]: I1014 10:17:42.490983 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/abbde2a0-bf38-441b-bac9-e9e3efe41cf2-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "abbde2a0-bf38-441b-bac9-e9e3efe41cf2" (UID: "abbde2a0-bf38-441b-bac9-e9e3efe41cf2"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 10:17:42 crc kubenswrapper[4698]: I1014 10:17:42.556369 4698 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/abbde2a0-bf38-441b-bac9-e9e3efe41cf2-logs\") on node \"crc\" DevicePath \"\"" Oct 14 10:17:42 crc kubenswrapper[4698]: I1014 10:17:42.556397 4698 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/abbde2a0-bf38-441b-bac9-e9e3efe41cf2-config-data\") on node \"crc\" DevicePath \"\"" Oct 14 10:17:42 crc kubenswrapper[4698]: I1014 10:17:42.556407 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k6xrl\" (UniqueName: \"kubernetes.io/projected/abbde2a0-bf38-441b-bac9-e9e3efe41cf2-kube-api-access-k6xrl\") on node \"crc\" DevicePath \"\"" Oct 14 10:17:42 crc kubenswrapper[4698]: I1014 10:17:42.556418 4698 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/abbde2a0-bf38-441b-bac9-e9e3efe41cf2-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 14 10:17:42 crc kubenswrapper[4698]: I1014 10:17:42.758433 4698 generic.go:334] "Generic (PLEG): container finished" podID="abbde2a0-bf38-441b-bac9-e9e3efe41cf2" containerID="e84dca44b2b22b3ddb5080e8b8be59fc55a030f67609980bcd2e7dbd0f8e437c" exitCode=0 Oct 14 10:17:42 crc kubenswrapper[4698]: I1014 10:17:42.758807 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"abbde2a0-bf38-441b-bac9-e9e3efe41cf2","Type":"ContainerDied","Data":"e84dca44b2b22b3ddb5080e8b8be59fc55a030f67609980bcd2e7dbd0f8e437c"} Oct 14 10:17:42 crc kubenswrapper[4698]: I1014 10:17:42.758857 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"abbde2a0-bf38-441b-bac9-e9e3efe41cf2","Type":"ContainerDied","Data":"b44fec9e5c40163fb18182f3e5ecf9dd6027ba1f6a91b35cc807601d4bb2e2c7"} Oct 14 10:17:42 crc kubenswrapper[4698]: 
I1014 10:17:42.758879 4698 scope.go:117] "RemoveContainer" containerID="e84dca44b2b22b3ddb5080e8b8be59fc55a030f67609980bcd2e7dbd0f8e437c" Oct 14 10:17:42 crc kubenswrapper[4698]: I1014 10:17:42.759016 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Oct 14 10:17:42 crc kubenswrapper[4698]: I1014 10:17:42.796707 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Oct 14 10:17:42 crc kubenswrapper[4698]: I1014 10:17:42.796997 4698 scope.go:117] "RemoveContainer" containerID="d47789081b487177c263a3ae323f29c1994eacc177751f75a2e8e2e7751a33ee" Oct 14 10:17:42 crc kubenswrapper[4698]: I1014 10:17:42.811927 4698 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Oct 14 10:17:42 crc kubenswrapper[4698]: I1014 10:17:42.822426 4698 scope.go:117] "RemoveContainer" containerID="e84dca44b2b22b3ddb5080e8b8be59fc55a030f67609980bcd2e7dbd0f8e437c" Oct 14 10:17:42 crc kubenswrapper[4698]: E1014 10:17:42.822927 4698 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e84dca44b2b22b3ddb5080e8b8be59fc55a030f67609980bcd2e7dbd0f8e437c\": container with ID starting with e84dca44b2b22b3ddb5080e8b8be59fc55a030f67609980bcd2e7dbd0f8e437c not found: ID does not exist" containerID="e84dca44b2b22b3ddb5080e8b8be59fc55a030f67609980bcd2e7dbd0f8e437c" Oct 14 10:17:42 crc kubenswrapper[4698]: I1014 10:17:42.822963 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e84dca44b2b22b3ddb5080e8b8be59fc55a030f67609980bcd2e7dbd0f8e437c"} err="failed to get container status \"e84dca44b2b22b3ddb5080e8b8be59fc55a030f67609980bcd2e7dbd0f8e437c\": rpc error: code = NotFound desc = could not find container \"e84dca44b2b22b3ddb5080e8b8be59fc55a030f67609980bcd2e7dbd0f8e437c\": container with ID starting with e84dca44b2b22b3ddb5080e8b8be59fc55a030f67609980bcd2e7dbd0f8e437c not found: 
ID does not exist" Oct 14 10:17:42 crc kubenswrapper[4698]: I1014 10:17:42.822986 4698 scope.go:117] "RemoveContainer" containerID="d47789081b487177c263a3ae323f29c1994eacc177751f75a2e8e2e7751a33ee" Oct 14 10:17:42 crc kubenswrapper[4698]: E1014 10:17:42.824146 4698 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d47789081b487177c263a3ae323f29c1994eacc177751f75a2e8e2e7751a33ee\": container with ID starting with d47789081b487177c263a3ae323f29c1994eacc177751f75a2e8e2e7751a33ee not found: ID does not exist" containerID="d47789081b487177c263a3ae323f29c1994eacc177751f75a2e8e2e7751a33ee" Oct 14 10:17:42 crc kubenswrapper[4698]: I1014 10:17:42.824173 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d47789081b487177c263a3ae323f29c1994eacc177751f75a2e8e2e7751a33ee"} err="failed to get container status \"d47789081b487177c263a3ae323f29c1994eacc177751f75a2e8e2e7751a33ee\": rpc error: code = NotFound desc = could not find container \"d47789081b487177c263a3ae323f29c1994eacc177751f75a2e8e2e7751a33ee\": container with ID starting with d47789081b487177c263a3ae323f29c1994eacc177751f75a2e8e2e7751a33ee not found: ID does not exist" Oct 14 10:17:42 crc kubenswrapper[4698]: I1014 10:17:42.824345 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Oct 14 10:17:42 crc kubenswrapper[4698]: E1014 10:17:42.824874 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="abbde2a0-bf38-441b-bac9-e9e3efe41cf2" containerName="nova-api-log" Oct 14 10:17:42 crc kubenswrapper[4698]: I1014 10:17:42.824895 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="abbde2a0-bf38-441b-bac9-e9e3efe41cf2" containerName="nova-api-log" Oct 14 10:17:42 crc kubenswrapper[4698]: E1014 10:17:42.824950 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="abbde2a0-bf38-441b-bac9-e9e3efe41cf2" containerName="nova-api-api" Oct 14 
10:17:42 crc kubenswrapper[4698]: I1014 10:17:42.824958 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="abbde2a0-bf38-441b-bac9-e9e3efe41cf2" containerName="nova-api-api" Oct 14 10:17:42 crc kubenswrapper[4698]: I1014 10:17:42.825170 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="abbde2a0-bf38-441b-bac9-e9e3efe41cf2" containerName="nova-api-api" Oct 14 10:17:42 crc kubenswrapper[4698]: I1014 10:17:42.825191 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="abbde2a0-bf38-441b-bac9-e9e3efe41cf2" containerName="nova-api-log" Oct 14 10:17:42 crc kubenswrapper[4698]: I1014 10:17:42.826278 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Oct 14 10:17:42 crc kubenswrapper[4698]: I1014 10:17:42.829608 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Oct 14 10:17:42 crc kubenswrapper[4698]: I1014 10:17:42.829803 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Oct 14 10:17:42 crc kubenswrapper[4698]: I1014 10:17:42.834392 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Oct 14 10:17:42 crc kubenswrapper[4698]: I1014 10:17:42.842223 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Oct 14 10:17:42 crc kubenswrapper[4698]: I1014 10:17:42.906723 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Oct 14 10:17:42 crc kubenswrapper[4698]: I1014 10:17:42.968281 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5dd9877c-3870-463d-bd81-1dd384c0e7ae-public-tls-certs\") pod \"nova-api-0\" (UID: \"5dd9877c-3870-463d-bd81-1dd384c0e7ae\") " pod="openstack/nova-api-0" Oct 14 10:17:42 crc kubenswrapper[4698]: I1014 10:17:42.968346 4698 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5dd9877c-3870-463d-bd81-1dd384c0e7ae-logs\") pod \"nova-api-0\" (UID: \"5dd9877c-3870-463d-bd81-1dd384c0e7ae\") " pod="openstack/nova-api-0" Oct 14 10:17:42 crc kubenswrapper[4698]: I1014 10:17:42.968369 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5dd9877c-3870-463d-bd81-1dd384c0e7ae-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"5dd9877c-3870-463d-bd81-1dd384c0e7ae\") " pod="openstack/nova-api-0" Oct 14 10:17:42 crc kubenswrapper[4698]: I1014 10:17:42.968460 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qfdpk\" (UniqueName: \"kubernetes.io/projected/5dd9877c-3870-463d-bd81-1dd384c0e7ae-kube-api-access-qfdpk\") pod \"nova-api-0\" (UID: \"5dd9877c-3870-463d-bd81-1dd384c0e7ae\") " pod="openstack/nova-api-0" Oct 14 10:17:42 crc kubenswrapper[4698]: I1014 10:17:42.968516 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5dd9877c-3870-463d-bd81-1dd384c0e7ae-config-data\") pod \"nova-api-0\" (UID: \"5dd9877c-3870-463d-bd81-1dd384c0e7ae\") " pod="openstack/nova-api-0" Oct 14 10:17:42 crc kubenswrapper[4698]: I1014 10:17:42.968572 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5dd9877c-3870-463d-bd81-1dd384c0e7ae-internal-tls-certs\") pod \"nova-api-0\" (UID: \"5dd9877c-3870-463d-bd81-1dd384c0e7ae\") " pod="openstack/nova-api-0" Oct 14 10:17:43 crc kubenswrapper[4698]: I1014 10:17:43.038054 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="aa1c19d3-dc91-4226-9267-8bebfeb5325c" 
path="/var/lib/kubelet/pods/aa1c19d3-dc91-4226-9267-8bebfeb5325c/volumes" Oct 14 10:17:43 crc kubenswrapper[4698]: I1014 10:17:43.038967 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="abbde2a0-bf38-441b-bac9-e9e3efe41cf2" path="/var/lib/kubelet/pods/abbde2a0-bf38-441b-bac9-e9e3efe41cf2/volumes" Oct 14 10:17:43 crc kubenswrapper[4698]: I1014 10:17:43.039573 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-cell1-novncproxy-0" Oct 14 10:17:43 crc kubenswrapper[4698]: I1014 10:17:43.057012 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-cell1-novncproxy-0" Oct 14 10:17:43 crc kubenswrapper[4698]: I1014 10:17:43.070712 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5dd9877c-3870-463d-bd81-1dd384c0e7ae-public-tls-certs\") pod \"nova-api-0\" (UID: \"5dd9877c-3870-463d-bd81-1dd384c0e7ae\") " pod="openstack/nova-api-0" Oct 14 10:17:43 crc kubenswrapper[4698]: I1014 10:17:43.070834 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5dd9877c-3870-463d-bd81-1dd384c0e7ae-logs\") pod \"nova-api-0\" (UID: \"5dd9877c-3870-463d-bd81-1dd384c0e7ae\") " pod="openstack/nova-api-0" Oct 14 10:17:43 crc kubenswrapper[4698]: I1014 10:17:43.070913 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5dd9877c-3870-463d-bd81-1dd384c0e7ae-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"5dd9877c-3870-463d-bd81-1dd384c0e7ae\") " pod="openstack/nova-api-0" Oct 14 10:17:43 crc kubenswrapper[4698]: I1014 10:17:43.070980 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qfdpk\" (UniqueName: \"kubernetes.io/projected/5dd9877c-3870-463d-bd81-1dd384c0e7ae-kube-api-access-qfdpk\") pod 
\"nova-api-0\" (UID: \"5dd9877c-3870-463d-bd81-1dd384c0e7ae\") " pod="openstack/nova-api-0" Oct 14 10:17:43 crc kubenswrapper[4698]: I1014 10:17:43.071053 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5dd9877c-3870-463d-bd81-1dd384c0e7ae-config-data\") pod \"nova-api-0\" (UID: \"5dd9877c-3870-463d-bd81-1dd384c0e7ae\") " pod="openstack/nova-api-0" Oct 14 10:17:43 crc kubenswrapper[4698]: I1014 10:17:43.071089 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5dd9877c-3870-463d-bd81-1dd384c0e7ae-internal-tls-certs\") pod \"nova-api-0\" (UID: \"5dd9877c-3870-463d-bd81-1dd384c0e7ae\") " pod="openstack/nova-api-0" Oct 14 10:17:43 crc kubenswrapper[4698]: I1014 10:17:43.071663 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5dd9877c-3870-463d-bd81-1dd384c0e7ae-logs\") pod \"nova-api-0\" (UID: \"5dd9877c-3870-463d-bd81-1dd384c0e7ae\") " pod="openstack/nova-api-0" Oct 14 10:17:43 crc kubenswrapper[4698]: I1014 10:17:43.075492 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5dd9877c-3870-463d-bd81-1dd384c0e7ae-public-tls-certs\") pod \"nova-api-0\" (UID: \"5dd9877c-3870-463d-bd81-1dd384c0e7ae\") " pod="openstack/nova-api-0" Oct 14 10:17:43 crc kubenswrapper[4698]: I1014 10:17:43.076337 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5dd9877c-3870-463d-bd81-1dd384c0e7ae-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"5dd9877c-3870-463d-bd81-1dd384c0e7ae\") " pod="openstack/nova-api-0" Oct 14 10:17:43 crc kubenswrapper[4698]: I1014 10:17:43.077477 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/5dd9877c-3870-463d-bd81-1dd384c0e7ae-internal-tls-certs\") pod \"nova-api-0\" (UID: \"5dd9877c-3870-463d-bd81-1dd384c0e7ae\") " pod="openstack/nova-api-0" Oct 14 10:17:43 crc kubenswrapper[4698]: I1014 10:17:43.078737 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5dd9877c-3870-463d-bd81-1dd384c0e7ae-config-data\") pod \"nova-api-0\" (UID: \"5dd9877c-3870-463d-bd81-1dd384c0e7ae\") " pod="openstack/nova-api-0" Oct 14 10:17:43 crc kubenswrapper[4698]: I1014 10:17:43.095615 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qfdpk\" (UniqueName: \"kubernetes.io/projected/5dd9877c-3870-463d-bd81-1dd384c0e7ae-kube-api-access-qfdpk\") pod \"nova-api-0\" (UID: \"5dd9877c-3870-463d-bd81-1dd384c0e7ae\") " pod="openstack/nova-api-0" Oct 14 10:17:43 crc kubenswrapper[4698]: I1014 10:17:43.165267 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Oct 14 10:17:43 crc kubenswrapper[4698]: I1014 10:17:43.664636 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Oct 14 10:17:43 crc kubenswrapper[4698]: W1014 10:17:43.666739 4698 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5dd9877c_3870_463d_bd81_1dd384c0e7ae.slice/crio-36bfdca57b93f64a7b39fc8e5ddf02d6efb234ae7dea12fb1a9b6804ef7cfc57 WatchSource:0}: Error finding container 36bfdca57b93f64a7b39fc8e5ddf02d6efb234ae7dea12fb1a9b6804ef7cfc57: Status 404 returned error can't find the container with id 36bfdca57b93f64a7b39fc8e5ddf02d6efb234ae7dea12fb1a9b6804ef7cfc57 Oct 14 10:17:43 crc kubenswrapper[4698]: I1014 10:17:43.770387 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" 
event={"ID":"5dd9877c-3870-463d-bd81-1dd384c0e7ae","Type":"ContainerStarted","Data":"36bfdca57b93f64a7b39fc8e5ddf02d6efb234ae7dea12fb1a9b6804ef7cfc57"} Oct 14 10:17:43 crc kubenswrapper[4698]: I1014 10:17:43.775140 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ea396a85-5a42-41f7-a75c-1aca7fc4dd37","Type":"ContainerStarted","Data":"f6dc35fd792ffdca4e9a75c96f2d4d4f58c84b6f13ca8b14771ccdb328f5ca1c"} Oct 14 10:17:43 crc kubenswrapper[4698]: I1014 10:17:43.775175 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ea396a85-5a42-41f7-a75c-1aca7fc4dd37","Type":"ContainerStarted","Data":"ff0cf04f05e952073feb047c8d493922ee8ecebe719157a5f47c89c0be43b83d"} Oct 14 10:17:43 crc kubenswrapper[4698]: I1014 10:17:43.795433 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-novncproxy-0" Oct 14 10:17:43 crc kubenswrapper[4698]: I1014 10:17:43.952397 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-cell-mapping-xhnxp"] Oct 14 10:17:43 crc kubenswrapper[4698]: I1014 10:17:43.953913 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-xhnxp" Oct 14 10:17:43 crc kubenswrapper[4698]: I1014 10:17:43.956137 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-config-data" Oct 14 10:17:43 crc kubenswrapper[4698]: I1014 10:17:43.957582 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-scripts" Oct 14 10:17:43 crc kubenswrapper[4698]: I1014 10:17:43.962401 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-xhnxp"] Oct 14 10:17:44 crc kubenswrapper[4698]: I1014 10:17:44.124603 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jwcbt\" (UniqueName: \"kubernetes.io/projected/cd9d031c-8581-44fa-b804-fbbc75403d88-kube-api-access-jwcbt\") pod \"nova-cell1-cell-mapping-xhnxp\" (UID: \"cd9d031c-8581-44fa-b804-fbbc75403d88\") " pod="openstack/nova-cell1-cell-mapping-xhnxp" Oct 14 10:17:44 crc kubenswrapper[4698]: I1014 10:17:44.125052 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cd9d031c-8581-44fa-b804-fbbc75403d88-config-data\") pod \"nova-cell1-cell-mapping-xhnxp\" (UID: \"cd9d031c-8581-44fa-b804-fbbc75403d88\") " pod="openstack/nova-cell1-cell-mapping-xhnxp" Oct 14 10:17:44 crc kubenswrapper[4698]: I1014 10:17:44.125883 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cd9d031c-8581-44fa-b804-fbbc75403d88-scripts\") pod \"nova-cell1-cell-mapping-xhnxp\" (UID: \"cd9d031c-8581-44fa-b804-fbbc75403d88\") " pod="openstack/nova-cell1-cell-mapping-xhnxp" Oct 14 10:17:44 crc kubenswrapper[4698]: I1014 10:17:44.125937 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/cd9d031c-8581-44fa-b804-fbbc75403d88-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-xhnxp\" (UID: \"cd9d031c-8581-44fa-b804-fbbc75403d88\") " pod="openstack/nova-cell1-cell-mapping-xhnxp" Oct 14 10:17:44 crc kubenswrapper[4698]: I1014 10:17:44.228627 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jwcbt\" (UniqueName: \"kubernetes.io/projected/cd9d031c-8581-44fa-b804-fbbc75403d88-kube-api-access-jwcbt\") pod \"nova-cell1-cell-mapping-xhnxp\" (UID: \"cd9d031c-8581-44fa-b804-fbbc75403d88\") " pod="openstack/nova-cell1-cell-mapping-xhnxp" Oct 14 10:17:44 crc kubenswrapper[4698]: I1014 10:17:44.228690 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cd9d031c-8581-44fa-b804-fbbc75403d88-config-data\") pod \"nova-cell1-cell-mapping-xhnxp\" (UID: \"cd9d031c-8581-44fa-b804-fbbc75403d88\") " pod="openstack/nova-cell1-cell-mapping-xhnxp" Oct 14 10:17:44 crc kubenswrapper[4698]: I1014 10:17:44.228757 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cd9d031c-8581-44fa-b804-fbbc75403d88-scripts\") pod \"nova-cell1-cell-mapping-xhnxp\" (UID: \"cd9d031c-8581-44fa-b804-fbbc75403d88\") " pod="openstack/nova-cell1-cell-mapping-xhnxp" Oct 14 10:17:44 crc kubenswrapper[4698]: I1014 10:17:44.228779 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cd9d031c-8581-44fa-b804-fbbc75403d88-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-xhnxp\" (UID: \"cd9d031c-8581-44fa-b804-fbbc75403d88\") " pod="openstack/nova-cell1-cell-mapping-xhnxp" Oct 14 10:17:44 crc kubenswrapper[4698]: I1014 10:17:44.233587 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/cd9d031c-8581-44fa-b804-fbbc75403d88-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-xhnxp\" (UID: \"cd9d031c-8581-44fa-b804-fbbc75403d88\") " pod="openstack/nova-cell1-cell-mapping-xhnxp" Oct 14 10:17:44 crc kubenswrapper[4698]: I1014 10:17:44.234341 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cd9d031c-8581-44fa-b804-fbbc75403d88-scripts\") pod \"nova-cell1-cell-mapping-xhnxp\" (UID: \"cd9d031c-8581-44fa-b804-fbbc75403d88\") " pod="openstack/nova-cell1-cell-mapping-xhnxp" Oct 14 10:17:44 crc kubenswrapper[4698]: I1014 10:17:44.238671 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cd9d031c-8581-44fa-b804-fbbc75403d88-config-data\") pod \"nova-cell1-cell-mapping-xhnxp\" (UID: \"cd9d031c-8581-44fa-b804-fbbc75403d88\") " pod="openstack/nova-cell1-cell-mapping-xhnxp" Oct 14 10:17:44 crc kubenswrapper[4698]: I1014 10:17:44.245136 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jwcbt\" (UniqueName: \"kubernetes.io/projected/cd9d031c-8581-44fa-b804-fbbc75403d88-kube-api-access-jwcbt\") pod \"nova-cell1-cell-mapping-xhnxp\" (UID: \"cd9d031c-8581-44fa-b804-fbbc75403d88\") " pod="openstack/nova-cell1-cell-mapping-xhnxp" Oct 14 10:17:44 crc kubenswrapper[4698]: I1014 10:17:44.288180 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-xhnxp" Oct 14 10:17:45 crc kubenswrapper[4698]: I1014 10:17:44.801905 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ea396a85-5a42-41f7-a75c-1aca7fc4dd37","Type":"ContainerStarted","Data":"ee1c595ad248ef18bc2e6e39e4618e5d47cb94c0d62f6470d28084e3c4a036ca"} Oct 14 10:17:45 crc kubenswrapper[4698]: I1014 10:17:44.804975 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"5dd9877c-3870-463d-bd81-1dd384c0e7ae","Type":"ContainerStarted","Data":"a183001cd25ecc34af3a8850597c2e49ecaafdbab7aa29e8389f1f15dc17bd3c"} Oct 14 10:17:45 crc kubenswrapper[4698]: I1014 10:17:44.805020 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"5dd9877c-3870-463d-bd81-1dd384c0e7ae","Type":"ContainerStarted","Data":"bf9b6a985ac5cf19b89d1c2eb6c93d6828fbb783b5e02d02019208e06e487365"} Oct 14 10:17:45 crc kubenswrapper[4698]: I1014 10:17:44.838401 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.838381744 podStartE2EDuration="2.838381744s" podCreationTimestamp="2025-10-14 10:17:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 10:17:44.830573812 +0000 UTC m=+1246.527873248" watchObservedRunningTime="2025-10-14 10:17:44.838381744 +0000 UTC m=+1246.535681150" Oct 14 10:17:45 crc kubenswrapper[4698]: I1014 10:17:45.526240 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-xhnxp"] Oct 14 10:17:45 crc kubenswrapper[4698]: W1014 10:17:45.537201 4698 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcd9d031c_8581_44fa_b804_fbbc75403d88.slice/crio-afd1520d5b4b98366119c974dfd7fc8538483c2ed60820039918a06d03137348 
WatchSource:0}: Error finding container afd1520d5b4b98366119c974dfd7fc8538483c2ed60820039918a06d03137348: Status 404 returned error can't find the container with id afd1520d5b4b98366119c974dfd7fc8538483c2ed60820039918a06d03137348 Oct 14 10:17:45 crc kubenswrapper[4698]: I1014 10:17:45.821202 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-xhnxp" event={"ID":"cd9d031c-8581-44fa-b804-fbbc75403d88","Type":"ContainerStarted","Data":"019be955801c29102debe6e56c82e001c62e15b86355620f3e70148397d1e2a4"} Oct 14 10:17:45 crc kubenswrapper[4698]: I1014 10:17:45.821499 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-xhnxp" event={"ID":"cd9d031c-8581-44fa-b804-fbbc75403d88","Type":"ContainerStarted","Data":"afd1520d5b4b98366119c974dfd7fc8538483c2ed60820039918a06d03137348"} Oct 14 10:17:45 crc kubenswrapper[4698]: I1014 10:17:45.827811 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ea396a85-5a42-41f7-a75c-1aca7fc4dd37","Type":"ContainerStarted","Data":"6e175c5c1502bfb4ac7cedb6b7ab973e6539bca55869509e03b14f46233ea05a"} Oct 14 10:17:45 crc kubenswrapper[4698]: I1014 10:17:45.846472 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-cell-mapping-xhnxp" podStartSLOduration=2.846446024 podStartE2EDuration="2.846446024s" podCreationTimestamp="2025-10-14 10:17:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 10:17:45.837414397 +0000 UTC m=+1247.534713823" watchObservedRunningTime="2025-10-14 10:17:45.846446024 +0000 UTC m=+1247.543745440" Oct 14 10:17:46 crc kubenswrapper[4698]: I1014 10:17:46.253119 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6559f4fbd7-rflwl" Oct 14 10:17:46 crc kubenswrapper[4698]: I1014 10:17:46.321372 4698 kubelet.go:2437] 
"SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7d5fbbb8c5-r5hn4"] Oct 14 10:17:46 crc kubenswrapper[4698]: I1014 10:17:46.321829 4698 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-7d5fbbb8c5-r5hn4" podUID="1091b708-e8d5-47b7-bc52-dfa7bf55e441" containerName="dnsmasq-dns" containerID="cri-o://a4dd8177274e715b4a3488d7c4166628c3a6b059a00114fa66bd797cbb6e97b5" gracePeriod=10 Oct 14 10:17:46 crc kubenswrapper[4698]: I1014 10:17:46.552052 4698 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-7d5fbbb8c5-r5hn4" podUID="1091b708-e8d5-47b7-bc52-dfa7bf55e441" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.205:5353: connect: connection refused" Oct 14 10:17:46 crc kubenswrapper[4698]: I1014 10:17:46.841447 4698 generic.go:334] "Generic (PLEG): container finished" podID="1091b708-e8d5-47b7-bc52-dfa7bf55e441" containerID="a4dd8177274e715b4a3488d7c4166628c3a6b059a00114fa66bd797cbb6e97b5" exitCode=0 Oct 14 10:17:46 crc kubenswrapper[4698]: I1014 10:17:46.842536 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7d5fbbb8c5-r5hn4" event={"ID":"1091b708-e8d5-47b7-bc52-dfa7bf55e441","Type":"ContainerDied","Data":"a4dd8177274e715b4a3488d7c4166628c3a6b059a00114fa66bd797cbb6e97b5"} Oct 14 10:17:46 crc kubenswrapper[4698]: I1014 10:17:46.842574 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7d5fbbb8c5-r5hn4" event={"ID":"1091b708-e8d5-47b7-bc52-dfa7bf55e441","Type":"ContainerDied","Data":"a9638fb0a356265e3e504e4fc2b9a3825bc76b1ccfa38496264f088c5097701a"} Oct 14 10:17:46 crc kubenswrapper[4698]: I1014 10:17:46.842586 4698 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a9638fb0a356265e3e504e4fc2b9a3825bc76b1ccfa38496264f088c5097701a" Oct 14 10:17:46 crc kubenswrapper[4698]: I1014 10:17:46.848261 4698 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7d5fbbb8c5-r5hn4" Oct 14 10:17:47 crc kubenswrapper[4698]: I1014 10:17:47.005420 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1091b708-e8d5-47b7-bc52-dfa7bf55e441-ovsdbserver-sb\") pod \"1091b708-e8d5-47b7-bc52-dfa7bf55e441\" (UID: \"1091b708-e8d5-47b7-bc52-dfa7bf55e441\") " Oct 14 10:17:47 crc kubenswrapper[4698]: I1014 10:17:47.005617 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bvqkr\" (UniqueName: \"kubernetes.io/projected/1091b708-e8d5-47b7-bc52-dfa7bf55e441-kube-api-access-bvqkr\") pod \"1091b708-e8d5-47b7-bc52-dfa7bf55e441\" (UID: \"1091b708-e8d5-47b7-bc52-dfa7bf55e441\") " Oct 14 10:17:47 crc kubenswrapper[4698]: I1014 10:17:47.005689 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1091b708-e8d5-47b7-bc52-dfa7bf55e441-config\") pod \"1091b708-e8d5-47b7-bc52-dfa7bf55e441\" (UID: \"1091b708-e8d5-47b7-bc52-dfa7bf55e441\") " Oct 14 10:17:47 crc kubenswrapper[4698]: I1014 10:17:47.005870 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1091b708-e8d5-47b7-bc52-dfa7bf55e441-dns-svc\") pod \"1091b708-e8d5-47b7-bc52-dfa7bf55e441\" (UID: \"1091b708-e8d5-47b7-bc52-dfa7bf55e441\") " Oct 14 10:17:47 crc kubenswrapper[4698]: I1014 10:17:47.005903 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1091b708-e8d5-47b7-bc52-dfa7bf55e441-dns-swift-storage-0\") pod \"1091b708-e8d5-47b7-bc52-dfa7bf55e441\" (UID: \"1091b708-e8d5-47b7-bc52-dfa7bf55e441\") " Oct 14 10:17:47 crc kubenswrapper[4698]: I1014 10:17:47.005920 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" 
(UniqueName: \"kubernetes.io/configmap/1091b708-e8d5-47b7-bc52-dfa7bf55e441-ovsdbserver-nb\") pod \"1091b708-e8d5-47b7-bc52-dfa7bf55e441\" (UID: \"1091b708-e8d5-47b7-bc52-dfa7bf55e441\") " Oct 14 10:17:47 crc kubenswrapper[4698]: I1014 10:17:47.010385 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1091b708-e8d5-47b7-bc52-dfa7bf55e441-kube-api-access-bvqkr" (OuterVolumeSpecName: "kube-api-access-bvqkr") pod "1091b708-e8d5-47b7-bc52-dfa7bf55e441" (UID: "1091b708-e8d5-47b7-bc52-dfa7bf55e441"). InnerVolumeSpecName "kube-api-access-bvqkr". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 10:17:47 crc kubenswrapper[4698]: I1014 10:17:47.071686 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1091b708-e8d5-47b7-bc52-dfa7bf55e441-config" (OuterVolumeSpecName: "config") pod "1091b708-e8d5-47b7-bc52-dfa7bf55e441" (UID: "1091b708-e8d5-47b7-bc52-dfa7bf55e441"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 10:17:47 crc kubenswrapper[4698]: I1014 10:17:47.074106 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1091b708-e8d5-47b7-bc52-dfa7bf55e441-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "1091b708-e8d5-47b7-bc52-dfa7bf55e441" (UID: "1091b708-e8d5-47b7-bc52-dfa7bf55e441"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 10:17:47 crc kubenswrapper[4698]: I1014 10:17:47.087141 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1091b708-e8d5-47b7-bc52-dfa7bf55e441-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "1091b708-e8d5-47b7-bc52-dfa7bf55e441" (UID: "1091b708-e8d5-47b7-bc52-dfa7bf55e441"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 10:17:47 crc kubenswrapper[4698]: I1014 10:17:47.088864 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1091b708-e8d5-47b7-bc52-dfa7bf55e441-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "1091b708-e8d5-47b7-bc52-dfa7bf55e441" (UID: "1091b708-e8d5-47b7-bc52-dfa7bf55e441"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 10:17:47 crc kubenswrapper[4698]: I1014 10:17:47.092458 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1091b708-e8d5-47b7-bc52-dfa7bf55e441-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "1091b708-e8d5-47b7-bc52-dfa7bf55e441" (UID: "1091b708-e8d5-47b7-bc52-dfa7bf55e441"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 10:17:47 crc kubenswrapper[4698]: I1014 10:17:47.109638 4698 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1091b708-e8d5-47b7-bc52-dfa7bf55e441-config\") on node \"crc\" DevicePath \"\"" Oct 14 10:17:47 crc kubenswrapper[4698]: I1014 10:17:47.109723 4698 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1091b708-e8d5-47b7-bc52-dfa7bf55e441-dns-svc\") on node \"crc\" DevicePath \"\"" Oct 14 10:17:47 crc kubenswrapper[4698]: I1014 10:17:47.109734 4698 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1091b708-e8d5-47b7-bc52-dfa7bf55e441-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Oct 14 10:17:47 crc kubenswrapper[4698]: I1014 10:17:47.109746 4698 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1091b708-e8d5-47b7-bc52-dfa7bf55e441-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Oct 14 10:17:47 crc 
kubenswrapper[4698]: I1014 10:17:47.109757 4698 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1091b708-e8d5-47b7-bc52-dfa7bf55e441-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Oct 14 10:17:47 crc kubenswrapper[4698]: I1014 10:17:47.109794 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bvqkr\" (UniqueName: \"kubernetes.io/projected/1091b708-e8d5-47b7-bc52-dfa7bf55e441-kube-api-access-bvqkr\") on node \"crc\" DevicePath \"\"" Oct 14 10:17:47 crc kubenswrapper[4698]: I1014 10:17:47.854053 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ea396a85-5a42-41f7-a75c-1aca7fc4dd37","Type":"ContainerStarted","Data":"747e059a6722887b1798b61b1cc8274480b3021e54d28196b9867d106bc71d3f"} Oct 14 10:17:47 crc kubenswrapper[4698]: I1014 10:17:47.854404 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Oct 14 10:17:47 crc kubenswrapper[4698]: I1014 10:17:47.854065 4698 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7d5fbbb8c5-r5hn4" Oct 14 10:17:47 crc kubenswrapper[4698]: I1014 10:17:47.890348 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.197718354 podStartE2EDuration="5.890326965s" podCreationTimestamp="2025-10-14 10:17:42 +0000 UTC" firstStartedPulling="2025-10-14 10:17:42.908041016 +0000 UTC m=+1244.605340432" lastFinishedPulling="2025-10-14 10:17:46.600649627 +0000 UTC m=+1248.297949043" observedRunningTime="2025-10-14 10:17:47.881127884 +0000 UTC m=+1249.578427300" watchObservedRunningTime="2025-10-14 10:17:47.890326965 +0000 UTC m=+1249.587626381" Oct 14 10:17:47 crc kubenswrapper[4698]: I1014 10:17:47.909598 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7d5fbbb8c5-r5hn4"] Oct 14 10:17:47 crc kubenswrapper[4698]: I1014 10:17:47.917826 4698 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7d5fbbb8c5-r5hn4"] Oct 14 10:17:49 crc kubenswrapper[4698]: I1014 10:17:49.030715 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1091b708-e8d5-47b7-bc52-dfa7bf55e441" path="/var/lib/kubelet/pods/1091b708-e8d5-47b7-bc52-dfa7bf55e441/volumes" Oct 14 10:17:50 crc kubenswrapper[4698]: I1014 10:17:50.887991 4698 generic.go:334] "Generic (PLEG): container finished" podID="cd9d031c-8581-44fa-b804-fbbc75403d88" containerID="019be955801c29102debe6e56c82e001c62e15b86355620f3e70148397d1e2a4" exitCode=0 Oct 14 10:17:50 crc kubenswrapper[4698]: I1014 10:17:50.888367 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-xhnxp" event={"ID":"cd9d031c-8581-44fa-b804-fbbc75403d88","Type":"ContainerDied","Data":"019be955801c29102debe6e56c82e001c62e15b86355620f3e70148397d1e2a4"} Oct 14 10:17:52 crc kubenswrapper[4698]: I1014 10:17:52.250307 4698 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-xhnxp" Oct 14 10:17:52 crc kubenswrapper[4698]: I1014 10:17:52.334636 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jwcbt\" (UniqueName: \"kubernetes.io/projected/cd9d031c-8581-44fa-b804-fbbc75403d88-kube-api-access-jwcbt\") pod \"cd9d031c-8581-44fa-b804-fbbc75403d88\" (UID: \"cd9d031c-8581-44fa-b804-fbbc75403d88\") " Oct 14 10:17:52 crc kubenswrapper[4698]: I1014 10:17:52.334799 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cd9d031c-8581-44fa-b804-fbbc75403d88-combined-ca-bundle\") pod \"cd9d031c-8581-44fa-b804-fbbc75403d88\" (UID: \"cd9d031c-8581-44fa-b804-fbbc75403d88\") " Oct 14 10:17:52 crc kubenswrapper[4698]: I1014 10:17:52.334957 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cd9d031c-8581-44fa-b804-fbbc75403d88-scripts\") pod \"cd9d031c-8581-44fa-b804-fbbc75403d88\" (UID: \"cd9d031c-8581-44fa-b804-fbbc75403d88\") " Oct 14 10:17:52 crc kubenswrapper[4698]: I1014 10:17:52.335027 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cd9d031c-8581-44fa-b804-fbbc75403d88-config-data\") pod \"cd9d031c-8581-44fa-b804-fbbc75403d88\" (UID: \"cd9d031c-8581-44fa-b804-fbbc75403d88\") " Oct 14 10:17:52 crc kubenswrapper[4698]: I1014 10:17:52.340029 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cd9d031c-8581-44fa-b804-fbbc75403d88-scripts" (OuterVolumeSpecName: "scripts") pod "cd9d031c-8581-44fa-b804-fbbc75403d88" (UID: "cd9d031c-8581-44fa-b804-fbbc75403d88"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 10:17:52 crc kubenswrapper[4698]: I1014 10:17:52.341058 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd9d031c-8581-44fa-b804-fbbc75403d88-kube-api-access-jwcbt" (OuterVolumeSpecName: "kube-api-access-jwcbt") pod "cd9d031c-8581-44fa-b804-fbbc75403d88" (UID: "cd9d031c-8581-44fa-b804-fbbc75403d88"). InnerVolumeSpecName "kube-api-access-jwcbt". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 10:17:52 crc kubenswrapper[4698]: I1014 10:17:52.365031 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cd9d031c-8581-44fa-b804-fbbc75403d88-config-data" (OuterVolumeSpecName: "config-data") pod "cd9d031c-8581-44fa-b804-fbbc75403d88" (UID: "cd9d031c-8581-44fa-b804-fbbc75403d88"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 10:17:52 crc kubenswrapper[4698]: I1014 10:17:52.374109 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cd9d031c-8581-44fa-b804-fbbc75403d88-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "cd9d031c-8581-44fa-b804-fbbc75403d88" (UID: "cd9d031c-8581-44fa-b804-fbbc75403d88"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 10:17:52 crc kubenswrapper[4698]: I1014 10:17:52.437829 4698 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cd9d031c-8581-44fa-b804-fbbc75403d88-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 14 10:17:52 crc kubenswrapper[4698]: I1014 10:17:52.437866 4698 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cd9d031c-8581-44fa-b804-fbbc75403d88-scripts\") on node \"crc\" DevicePath \"\"" Oct 14 10:17:52 crc kubenswrapper[4698]: I1014 10:17:52.437876 4698 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cd9d031c-8581-44fa-b804-fbbc75403d88-config-data\") on node \"crc\" DevicePath \"\"" Oct 14 10:17:52 crc kubenswrapper[4698]: I1014 10:17:52.437885 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jwcbt\" (UniqueName: \"kubernetes.io/projected/cd9d031c-8581-44fa-b804-fbbc75403d88-kube-api-access-jwcbt\") on node \"crc\" DevicePath \"\"" Oct 14 10:17:52 crc kubenswrapper[4698]: I1014 10:17:52.910030 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-xhnxp" event={"ID":"cd9d031c-8581-44fa-b804-fbbc75403d88","Type":"ContainerDied","Data":"afd1520d5b4b98366119c974dfd7fc8538483c2ed60820039918a06d03137348"} Oct 14 10:17:52 crc kubenswrapper[4698]: I1014 10:17:52.910358 4698 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="afd1520d5b4b98366119c974dfd7fc8538483c2ed60820039918a06d03137348" Oct 14 10:17:52 crc kubenswrapper[4698]: I1014 10:17:52.910109 4698 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-xhnxp" Oct 14 10:17:53 crc kubenswrapper[4698]: I1014 10:17:53.121180 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Oct 14 10:17:53 crc kubenswrapper[4698]: I1014 10:17:53.121442 4698 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="eca9c9b4-7969-459b-87e1-66966d94f354" containerName="nova-scheduler-scheduler" containerID="cri-o://d7fa515d6973d454755b6a4d4711cc0a7b41d911337ea9ff97a0f4a8f03c0b25" gracePeriod=30 Oct 14 10:17:53 crc kubenswrapper[4698]: I1014 10:17:53.136436 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Oct 14 10:17:53 crc kubenswrapper[4698]: I1014 10:17:53.136820 4698 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="5dd9877c-3870-463d-bd81-1dd384c0e7ae" containerName="nova-api-log" containerID="cri-o://bf9b6a985ac5cf19b89d1c2eb6c93d6828fbb783b5e02d02019208e06e487365" gracePeriod=30 Oct 14 10:17:53 crc kubenswrapper[4698]: I1014 10:17:53.136931 4698 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="5dd9877c-3870-463d-bd81-1dd384c0e7ae" containerName="nova-api-api" containerID="cri-o://a183001cd25ecc34af3a8850597c2e49ecaafdbab7aa29e8389f1f15dc17bd3c" gracePeriod=30 Oct 14 10:17:53 crc kubenswrapper[4698]: I1014 10:17:53.150663 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Oct 14 10:17:53 crc kubenswrapper[4698]: I1014 10:17:53.150926 4698 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="385a7b7c-0ec6-4d79-8c3c-91e78ab81c3e" containerName="nova-metadata-log" containerID="cri-o://3935a4647153bc0d9928a41636d2f10a30ede7283415e2943abdd752df3027ad" gracePeriod=30 Oct 14 10:17:53 crc kubenswrapper[4698]: I1014 10:17:53.151081 4698 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="385a7b7c-0ec6-4d79-8c3c-91e78ab81c3e" containerName="nova-metadata-metadata" containerID="cri-o://f7c060b244edf49eb0104cfa42c402d6652df14feb4c6e3661e9949aa6a9a7a2" gracePeriod=30 Oct 14 10:17:53 crc kubenswrapper[4698]: I1014 10:17:53.739318 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Oct 14 10:17:53 crc kubenswrapper[4698]: E1014 10:17:53.769386 4698 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="d7fa515d6973d454755b6a4d4711cc0a7b41d911337ea9ff97a0f4a8f03c0b25" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Oct 14 10:17:53 crc kubenswrapper[4698]: E1014 10:17:53.770886 4698 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="d7fa515d6973d454755b6a4d4711cc0a7b41d911337ea9ff97a0f4a8f03c0b25" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Oct 14 10:17:53 crc kubenswrapper[4698]: E1014 10:17:53.772103 4698 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="d7fa515d6973d454755b6a4d4711cc0a7b41d911337ea9ff97a0f4a8f03c0b25" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Oct 14 10:17:53 crc kubenswrapper[4698]: E1014 10:17:53.772133 4698 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="eca9c9b4-7969-459b-87e1-66966d94f354" 
containerName="nova-scheduler-scheduler" Oct 14 10:17:53 crc kubenswrapper[4698]: I1014 10:17:53.870839 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qfdpk\" (UniqueName: \"kubernetes.io/projected/5dd9877c-3870-463d-bd81-1dd384c0e7ae-kube-api-access-qfdpk\") pod \"5dd9877c-3870-463d-bd81-1dd384c0e7ae\" (UID: \"5dd9877c-3870-463d-bd81-1dd384c0e7ae\") " Oct 14 10:17:53 crc kubenswrapper[4698]: I1014 10:17:53.870956 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5dd9877c-3870-463d-bd81-1dd384c0e7ae-public-tls-certs\") pod \"5dd9877c-3870-463d-bd81-1dd384c0e7ae\" (UID: \"5dd9877c-3870-463d-bd81-1dd384c0e7ae\") " Oct 14 10:17:53 crc kubenswrapper[4698]: I1014 10:17:53.870985 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5dd9877c-3870-463d-bd81-1dd384c0e7ae-combined-ca-bundle\") pod \"5dd9877c-3870-463d-bd81-1dd384c0e7ae\" (UID: \"5dd9877c-3870-463d-bd81-1dd384c0e7ae\") " Oct 14 10:17:53 crc kubenswrapper[4698]: I1014 10:17:53.871121 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5dd9877c-3870-463d-bd81-1dd384c0e7ae-config-data\") pod \"5dd9877c-3870-463d-bd81-1dd384c0e7ae\" (UID: \"5dd9877c-3870-463d-bd81-1dd384c0e7ae\") " Oct 14 10:17:53 crc kubenswrapper[4698]: I1014 10:17:53.871226 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5dd9877c-3870-463d-bd81-1dd384c0e7ae-logs\") pod \"5dd9877c-3870-463d-bd81-1dd384c0e7ae\" (UID: \"5dd9877c-3870-463d-bd81-1dd384c0e7ae\") " Oct 14 10:17:53 crc kubenswrapper[4698]: I1014 10:17:53.871280 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/5dd9877c-3870-463d-bd81-1dd384c0e7ae-internal-tls-certs\") pod \"5dd9877c-3870-463d-bd81-1dd384c0e7ae\" (UID: \"5dd9877c-3870-463d-bd81-1dd384c0e7ae\") " Oct 14 10:17:53 crc kubenswrapper[4698]: I1014 10:17:53.872044 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5dd9877c-3870-463d-bd81-1dd384c0e7ae-logs" (OuterVolumeSpecName: "logs") pod "5dd9877c-3870-463d-bd81-1dd384c0e7ae" (UID: "5dd9877c-3870-463d-bd81-1dd384c0e7ae"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 10:17:53 crc kubenswrapper[4698]: I1014 10:17:53.880629 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5dd9877c-3870-463d-bd81-1dd384c0e7ae-kube-api-access-qfdpk" (OuterVolumeSpecName: "kube-api-access-qfdpk") pod "5dd9877c-3870-463d-bd81-1dd384c0e7ae" (UID: "5dd9877c-3870-463d-bd81-1dd384c0e7ae"). InnerVolumeSpecName "kube-api-access-qfdpk". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 10:17:53 crc kubenswrapper[4698]: I1014 10:17:53.902657 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5dd9877c-3870-463d-bd81-1dd384c0e7ae-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5dd9877c-3870-463d-bd81-1dd384c0e7ae" (UID: "5dd9877c-3870-463d-bd81-1dd384c0e7ae"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 10:17:53 crc kubenswrapper[4698]: I1014 10:17:53.905383 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5dd9877c-3870-463d-bd81-1dd384c0e7ae-config-data" (OuterVolumeSpecName: "config-data") pod "5dd9877c-3870-463d-bd81-1dd384c0e7ae" (UID: "5dd9877c-3870-463d-bd81-1dd384c0e7ae"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 10:17:53 crc kubenswrapper[4698]: I1014 10:17:53.908493 4698 patch_prober.go:28] interesting pod/machine-config-daemon-lp4sk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 14 10:17:53 crc kubenswrapper[4698]: I1014 10:17:53.908546 4698 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-lp4sk" podUID="c359a8fc-1e2f-49af-8da2-719d52bd969a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 14 10:17:53 crc kubenswrapper[4698]: I1014 10:17:53.908593 4698 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-lp4sk" Oct 14 10:17:53 crc kubenswrapper[4698]: I1014 10:17:53.909659 4698 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"a4afbcf56453a0a6e9f269b4b6668c5bb2f9345d8d8d81fe69dd3ad317e2716b"} pod="openshift-machine-config-operator/machine-config-daemon-lp4sk" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Oct 14 10:17:53 crc kubenswrapper[4698]: I1014 10:17:53.909737 4698 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-lp4sk" podUID="c359a8fc-1e2f-49af-8da2-719d52bd969a" containerName="machine-config-daemon" containerID="cri-o://a4afbcf56453a0a6e9f269b4b6668c5bb2f9345d8d8d81fe69dd3ad317e2716b" gracePeriod=600 Oct 14 10:17:53 crc kubenswrapper[4698]: I1014 10:17:53.922835 4698 generic.go:334] "Generic (PLEG): container finished" podID="385a7b7c-0ec6-4d79-8c3c-91e78ab81c3e" 
containerID="3935a4647153bc0d9928a41636d2f10a30ede7283415e2943abdd752df3027ad" exitCode=143 Oct 14 10:17:53 crc kubenswrapper[4698]: I1014 10:17:53.922906 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"385a7b7c-0ec6-4d79-8c3c-91e78ab81c3e","Type":"ContainerDied","Data":"3935a4647153bc0d9928a41636d2f10a30ede7283415e2943abdd752df3027ad"} Oct 14 10:17:53 crc kubenswrapper[4698]: I1014 10:17:53.925802 4698 generic.go:334] "Generic (PLEG): container finished" podID="5dd9877c-3870-463d-bd81-1dd384c0e7ae" containerID="a183001cd25ecc34af3a8850597c2e49ecaafdbab7aa29e8389f1f15dc17bd3c" exitCode=0 Oct 14 10:17:53 crc kubenswrapper[4698]: I1014 10:17:53.925816 4698 generic.go:334] "Generic (PLEG): container finished" podID="5dd9877c-3870-463d-bd81-1dd384c0e7ae" containerID="bf9b6a985ac5cf19b89d1c2eb6c93d6828fbb783b5e02d02019208e06e487365" exitCode=143 Oct 14 10:17:53 crc kubenswrapper[4698]: I1014 10:17:53.925831 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"5dd9877c-3870-463d-bd81-1dd384c0e7ae","Type":"ContainerDied","Data":"a183001cd25ecc34af3a8850597c2e49ecaafdbab7aa29e8389f1f15dc17bd3c"} Oct 14 10:17:53 crc kubenswrapper[4698]: I1014 10:17:53.925846 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"5dd9877c-3870-463d-bd81-1dd384c0e7ae","Type":"ContainerDied","Data":"bf9b6a985ac5cf19b89d1c2eb6c93d6828fbb783b5e02d02019208e06e487365"} Oct 14 10:17:53 crc kubenswrapper[4698]: I1014 10:17:53.925856 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"5dd9877c-3870-463d-bd81-1dd384c0e7ae","Type":"ContainerDied","Data":"36bfdca57b93f64a7b39fc8e5ddf02d6efb234ae7dea12fb1a9b6804ef7cfc57"} Oct 14 10:17:53 crc kubenswrapper[4698]: I1014 10:17:53.925872 4698 scope.go:117] "RemoveContainer" containerID="a183001cd25ecc34af3a8850597c2e49ecaafdbab7aa29e8389f1f15dc17bd3c" Oct 14 10:17:53 crc 
kubenswrapper[4698]: I1014 10:17:53.925968 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Oct 14 10:17:53 crc kubenswrapper[4698]: E1014 10:17:53.949152 4698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5dd9877c-3870-463d-bd81-1dd384c0e7ae-internal-tls-certs podName:5dd9877c-3870-463d-bd81-1dd384c0e7ae nodeName:}" failed. No retries permitted until 2025-10-14 10:17:54.449120882 +0000 UTC m=+1256.146420298 (durationBeforeRetry 500ms). Error: error cleaning subPath mounts for volume "internal-tls-certs" (UniqueName: "kubernetes.io/secret/5dd9877c-3870-463d-bd81-1dd384c0e7ae-internal-tls-certs") pod "5dd9877c-3870-463d-bd81-1dd384c0e7ae" (UID: "5dd9877c-3870-463d-bd81-1dd384c0e7ae") : error deleting /var/lib/kubelet/pods/5dd9877c-3870-463d-bd81-1dd384c0e7ae/volume-subpaths: remove /var/lib/kubelet/pods/5dd9877c-3870-463d-bd81-1dd384c0e7ae/volume-subpaths: no such file or directory Oct 14 10:17:53 crc kubenswrapper[4698]: I1014 10:17:53.960145 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5dd9877c-3870-463d-bd81-1dd384c0e7ae-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "5dd9877c-3870-463d-bd81-1dd384c0e7ae" (UID: "5dd9877c-3870-463d-bd81-1dd384c0e7ae"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 10:17:53 crc kubenswrapper[4698]: I1014 10:17:53.960415 4698 scope.go:117] "RemoveContainer" containerID="bf9b6a985ac5cf19b89d1c2eb6c93d6828fbb783b5e02d02019208e06e487365" Oct 14 10:17:53 crc kubenswrapper[4698]: I1014 10:17:53.974252 4698 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5dd9877c-3870-463d-bd81-1dd384c0e7ae-logs\") on node \"crc\" DevicePath \"\"" Oct 14 10:17:53 crc kubenswrapper[4698]: I1014 10:17:53.974281 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qfdpk\" (UniqueName: \"kubernetes.io/projected/5dd9877c-3870-463d-bd81-1dd384c0e7ae-kube-api-access-qfdpk\") on node \"crc\" DevicePath \"\"" Oct 14 10:17:53 crc kubenswrapper[4698]: I1014 10:17:53.974292 4698 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5dd9877c-3870-463d-bd81-1dd384c0e7ae-public-tls-certs\") on node \"crc\" DevicePath \"\"" Oct 14 10:17:53 crc kubenswrapper[4698]: I1014 10:17:53.974304 4698 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5dd9877c-3870-463d-bd81-1dd384c0e7ae-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 14 10:17:53 crc kubenswrapper[4698]: I1014 10:17:53.974313 4698 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5dd9877c-3870-463d-bd81-1dd384c0e7ae-config-data\") on node \"crc\" DevicePath \"\"" Oct 14 10:17:53 crc kubenswrapper[4698]: I1014 10:17:53.982057 4698 scope.go:117] "RemoveContainer" containerID="a183001cd25ecc34af3a8850597c2e49ecaafdbab7aa29e8389f1f15dc17bd3c" Oct 14 10:17:53 crc kubenswrapper[4698]: E1014 10:17:53.982523 4698 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"a183001cd25ecc34af3a8850597c2e49ecaafdbab7aa29e8389f1f15dc17bd3c\": container with ID starting with a183001cd25ecc34af3a8850597c2e49ecaafdbab7aa29e8389f1f15dc17bd3c not found: ID does not exist" containerID="a183001cd25ecc34af3a8850597c2e49ecaafdbab7aa29e8389f1f15dc17bd3c" Oct 14 10:17:53 crc kubenswrapper[4698]: I1014 10:17:53.982570 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a183001cd25ecc34af3a8850597c2e49ecaafdbab7aa29e8389f1f15dc17bd3c"} err="failed to get container status \"a183001cd25ecc34af3a8850597c2e49ecaafdbab7aa29e8389f1f15dc17bd3c\": rpc error: code = NotFound desc = could not find container \"a183001cd25ecc34af3a8850597c2e49ecaafdbab7aa29e8389f1f15dc17bd3c\": container with ID starting with a183001cd25ecc34af3a8850597c2e49ecaafdbab7aa29e8389f1f15dc17bd3c not found: ID does not exist" Oct 14 10:17:53 crc kubenswrapper[4698]: I1014 10:17:53.982597 4698 scope.go:117] "RemoveContainer" containerID="bf9b6a985ac5cf19b89d1c2eb6c93d6828fbb783b5e02d02019208e06e487365" Oct 14 10:17:53 crc kubenswrapper[4698]: E1014 10:17:53.983175 4698 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bf9b6a985ac5cf19b89d1c2eb6c93d6828fbb783b5e02d02019208e06e487365\": container with ID starting with bf9b6a985ac5cf19b89d1c2eb6c93d6828fbb783b5e02d02019208e06e487365 not found: ID does not exist" containerID="bf9b6a985ac5cf19b89d1c2eb6c93d6828fbb783b5e02d02019208e06e487365" Oct 14 10:17:53 crc kubenswrapper[4698]: I1014 10:17:53.983199 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bf9b6a985ac5cf19b89d1c2eb6c93d6828fbb783b5e02d02019208e06e487365"} err="failed to get container status \"bf9b6a985ac5cf19b89d1c2eb6c93d6828fbb783b5e02d02019208e06e487365\": rpc error: code = NotFound desc = could not find container \"bf9b6a985ac5cf19b89d1c2eb6c93d6828fbb783b5e02d02019208e06e487365\": container with ID 
starting with bf9b6a985ac5cf19b89d1c2eb6c93d6828fbb783b5e02d02019208e06e487365 not found: ID does not exist" Oct 14 10:17:53 crc kubenswrapper[4698]: I1014 10:17:53.983215 4698 scope.go:117] "RemoveContainer" containerID="a183001cd25ecc34af3a8850597c2e49ecaafdbab7aa29e8389f1f15dc17bd3c" Oct 14 10:17:53 crc kubenswrapper[4698]: I1014 10:17:53.984494 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a183001cd25ecc34af3a8850597c2e49ecaafdbab7aa29e8389f1f15dc17bd3c"} err="failed to get container status \"a183001cd25ecc34af3a8850597c2e49ecaafdbab7aa29e8389f1f15dc17bd3c\": rpc error: code = NotFound desc = could not find container \"a183001cd25ecc34af3a8850597c2e49ecaafdbab7aa29e8389f1f15dc17bd3c\": container with ID starting with a183001cd25ecc34af3a8850597c2e49ecaafdbab7aa29e8389f1f15dc17bd3c not found: ID does not exist" Oct 14 10:17:53 crc kubenswrapper[4698]: I1014 10:17:53.984535 4698 scope.go:117] "RemoveContainer" containerID="bf9b6a985ac5cf19b89d1c2eb6c93d6828fbb783b5e02d02019208e06e487365" Oct 14 10:17:53 crc kubenswrapper[4698]: I1014 10:17:53.984836 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bf9b6a985ac5cf19b89d1c2eb6c93d6828fbb783b5e02d02019208e06e487365"} err="failed to get container status \"bf9b6a985ac5cf19b89d1c2eb6c93d6828fbb783b5e02d02019208e06e487365\": rpc error: code = NotFound desc = could not find container \"bf9b6a985ac5cf19b89d1c2eb6c93d6828fbb783b5e02d02019208e06e487365\": container with ID starting with bf9b6a985ac5cf19b89d1c2eb6c93d6828fbb783b5e02d02019208e06e487365 not found: ID does not exist" Oct 14 10:17:54 crc kubenswrapper[4698]: I1014 10:17:54.485948 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5dd9877c-3870-463d-bd81-1dd384c0e7ae-internal-tls-certs\") pod \"5dd9877c-3870-463d-bd81-1dd384c0e7ae\" (UID: 
\"5dd9877c-3870-463d-bd81-1dd384c0e7ae\") " Oct 14 10:17:54 crc kubenswrapper[4698]: I1014 10:17:54.492158 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5dd9877c-3870-463d-bd81-1dd384c0e7ae-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "5dd9877c-3870-463d-bd81-1dd384c0e7ae" (UID: "5dd9877c-3870-463d-bd81-1dd384c0e7ae"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 10:17:54 crc kubenswrapper[4698]: I1014 10:17:54.581612 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Oct 14 10:17:54 crc kubenswrapper[4698]: I1014 10:17:54.589201 4698 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5dd9877c-3870-463d-bd81-1dd384c0e7ae-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Oct 14 10:17:54 crc kubenswrapper[4698]: I1014 10:17:54.595859 4698 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Oct 14 10:17:54 crc kubenswrapper[4698]: I1014 10:17:54.607529 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Oct 14 10:17:54 crc kubenswrapper[4698]: E1014 10:17:54.608225 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1091b708-e8d5-47b7-bc52-dfa7bf55e441" containerName="init" Oct 14 10:17:54 crc kubenswrapper[4698]: I1014 10:17:54.608257 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="1091b708-e8d5-47b7-bc52-dfa7bf55e441" containerName="init" Oct 14 10:17:54 crc kubenswrapper[4698]: E1014 10:17:54.608296 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5dd9877c-3870-463d-bd81-1dd384c0e7ae" containerName="nova-api-log" Oct 14 10:17:54 crc kubenswrapper[4698]: I1014 10:17:54.608306 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="5dd9877c-3870-463d-bd81-1dd384c0e7ae" containerName="nova-api-log" Oct 14 10:17:54 crc 
kubenswrapper[4698]: E1014 10:17:54.608322 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cd9d031c-8581-44fa-b804-fbbc75403d88" containerName="nova-manage" Oct 14 10:17:54 crc kubenswrapper[4698]: I1014 10:17:54.608330 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="cd9d031c-8581-44fa-b804-fbbc75403d88" containerName="nova-manage" Oct 14 10:17:54 crc kubenswrapper[4698]: E1014 10:17:54.608365 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1091b708-e8d5-47b7-bc52-dfa7bf55e441" containerName="dnsmasq-dns" Oct 14 10:17:54 crc kubenswrapper[4698]: I1014 10:17:54.608376 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="1091b708-e8d5-47b7-bc52-dfa7bf55e441" containerName="dnsmasq-dns" Oct 14 10:17:54 crc kubenswrapper[4698]: E1014 10:17:54.608403 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5dd9877c-3870-463d-bd81-1dd384c0e7ae" containerName="nova-api-api" Oct 14 10:17:54 crc kubenswrapper[4698]: I1014 10:17:54.608411 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="5dd9877c-3870-463d-bd81-1dd384c0e7ae" containerName="nova-api-api" Oct 14 10:17:54 crc kubenswrapper[4698]: I1014 10:17:54.608677 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="1091b708-e8d5-47b7-bc52-dfa7bf55e441" containerName="dnsmasq-dns" Oct 14 10:17:54 crc kubenswrapper[4698]: I1014 10:17:54.608710 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="5dd9877c-3870-463d-bd81-1dd384c0e7ae" containerName="nova-api-api" Oct 14 10:17:54 crc kubenswrapper[4698]: I1014 10:17:54.608723 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="5dd9877c-3870-463d-bd81-1dd384c0e7ae" containerName="nova-api-log" Oct 14 10:17:54 crc kubenswrapper[4698]: I1014 10:17:54.608738 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="cd9d031c-8581-44fa-b804-fbbc75403d88" containerName="nova-manage" Oct 14 10:17:54 crc kubenswrapper[4698]: I1014 
10:17:54.610303 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Oct 14 10:17:54 crc kubenswrapper[4698]: I1014 10:17:54.614216 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Oct 14 10:17:54 crc kubenswrapper[4698]: I1014 10:17:54.616554 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Oct 14 10:17:54 crc kubenswrapper[4698]: I1014 10:17:54.616848 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Oct 14 10:17:54 crc kubenswrapper[4698]: I1014 10:17:54.623159 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Oct 14 10:17:54 crc kubenswrapper[4698]: I1014 10:17:54.691614 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/066120b9-3158-4234-873d-178f6b65885c-logs\") pod \"nova-api-0\" (UID: \"066120b9-3158-4234-873d-178f6b65885c\") " pod="openstack/nova-api-0" Oct 14 10:17:54 crc kubenswrapper[4698]: I1014 10:17:54.691709 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/066120b9-3158-4234-873d-178f6b65885c-config-data\") pod \"nova-api-0\" (UID: \"066120b9-3158-4234-873d-178f6b65885c\") " pod="openstack/nova-api-0" Oct 14 10:17:54 crc kubenswrapper[4698]: I1014 10:17:54.691883 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/066120b9-3158-4234-873d-178f6b65885c-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"066120b9-3158-4234-873d-178f6b65885c\") " pod="openstack/nova-api-0" Oct 14 10:17:54 crc kubenswrapper[4698]: I1014 10:17:54.691920 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/066120b9-3158-4234-873d-178f6b65885c-internal-tls-certs\") pod \"nova-api-0\" (UID: \"066120b9-3158-4234-873d-178f6b65885c\") " pod="openstack/nova-api-0" Oct 14 10:17:54 crc kubenswrapper[4698]: I1014 10:17:54.692015 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/066120b9-3158-4234-873d-178f6b65885c-public-tls-certs\") pod \"nova-api-0\" (UID: \"066120b9-3158-4234-873d-178f6b65885c\") " pod="openstack/nova-api-0" Oct 14 10:17:54 crc kubenswrapper[4698]: I1014 10:17:54.692042 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6zp9s\" (UniqueName: \"kubernetes.io/projected/066120b9-3158-4234-873d-178f6b65885c-kube-api-access-6zp9s\") pod \"nova-api-0\" (UID: \"066120b9-3158-4234-873d-178f6b65885c\") " pod="openstack/nova-api-0" Oct 14 10:17:54 crc kubenswrapper[4698]: I1014 10:17:54.794285 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/066120b9-3158-4234-873d-178f6b65885c-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"066120b9-3158-4234-873d-178f6b65885c\") " pod="openstack/nova-api-0" Oct 14 10:17:54 crc kubenswrapper[4698]: I1014 10:17:54.795666 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/066120b9-3158-4234-873d-178f6b65885c-internal-tls-certs\") pod \"nova-api-0\" (UID: \"066120b9-3158-4234-873d-178f6b65885c\") " pod="openstack/nova-api-0" Oct 14 10:17:54 crc kubenswrapper[4698]: I1014 10:17:54.795868 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/066120b9-3158-4234-873d-178f6b65885c-public-tls-certs\") pod \"nova-api-0\" (UID: 
\"066120b9-3158-4234-873d-178f6b65885c\") " pod="openstack/nova-api-0" Oct 14 10:17:54 crc kubenswrapper[4698]: I1014 10:17:54.796068 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6zp9s\" (UniqueName: \"kubernetes.io/projected/066120b9-3158-4234-873d-178f6b65885c-kube-api-access-6zp9s\") pod \"nova-api-0\" (UID: \"066120b9-3158-4234-873d-178f6b65885c\") " pod="openstack/nova-api-0" Oct 14 10:17:54 crc kubenswrapper[4698]: I1014 10:17:54.796323 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/066120b9-3158-4234-873d-178f6b65885c-logs\") pod \"nova-api-0\" (UID: \"066120b9-3158-4234-873d-178f6b65885c\") " pod="openstack/nova-api-0" Oct 14 10:17:54 crc kubenswrapper[4698]: I1014 10:17:54.796564 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/066120b9-3158-4234-873d-178f6b65885c-config-data\") pod \"nova-api-0\" (UID: \"066120b9-3158-4234-873d-178f6b65885c\") " pod="openstack/nova-api-0" Oct 14 10:17:54 crc kubenswrapper[4698]: I1014 10:17:54.797824 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/066120b9-3158-4234-873d-178f6b65885c-logs\") pod \"nova-api-0\" (UID: \"066120b9-3158-4234-873d-178f6b65885c\") " pod="openstack/nova-api-0" Oct 14 10:17:54 crc kubenswrapper[4698]: I1014 10:17:54.799572 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/066120b9-3158-4234-873d-178f6b65885c-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"066120b9-3158-4234-873d-178f6b65885c\") " pod="openstack/nova-api-0" Oct 14 10:17:54 crc kubenswrapper[4698]: I1014 10:17:54.801991 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/066120b9-3158-4234-873d-178f6b65885c-config-data\") pod \"nova-api-0\" (UID: \"066120b9-3158-4234-873d-178f6b65885c\") " pod="openstack/nova-api-0" Oct 14 10:17:54 crc kubenswrapper[4698]: I1014 10:17:54.808396 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/066120b9-3158-4234-873d-178f6b65885c-public-tls-certs\") pod \"nova-api-0\" (UID: \"066120b9-3158-4234-873d-178f6b65885c\") " pod="openstack/nova-api-0" Oct 14 10:17:54 crc kubenswrapper[4698]: I1014 10:17:54.810594 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/066120b9-3158-4234-873d-178f6b65885c-internal-tls-certs\") pod \"nova-api-0\" (UID: \"066120b9-3158-4234-873d-178f6b65885c\") " pod="openstack/nova-api-0" Oct 14 10:17:54 crc kubenswrapper[4698]: I1014 10:17:54.822676 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6zp9s\" (UniqueName: \"kubernetes.io/projected/066120b9-3158-4234-873d-178f6b65885c-kube-api-access-6zp9s\") pod \"nova-api-0\" (UID: \"066120b9-3158-4234-873d-178f6b65885c\") " pod="openstack/nova-api-0" Oct 14 10:17:54 crc kubenswrapper[4698]: I1014 10:17:54.930938 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Oct 14 10:17:54 crc kubenswrapper[4698]: I1014 10:17:54.939738 4698 generic.go:334] "Generic (PLEG): container finished" podID="c359a8fc-1e2f-49af-8da2-719d52bd969a" containerID="a4afbcf56453a0a6e9f269b4b6668c5bb2f9345d8d8d81fe69dd3ad317e2716b" exitCode=0 Oct 14 10:17:54 crc kubenswrapper[4698]: I1014 10:17:54.939828 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-lp4sk" event={"ID":"c359a8fc-1e2f-49af-8da2-719d52bd969a","Type":"ContainerDied","Data":"a4afbcf56453a0a6e9f269b4b6668c5bb2f9345d8d8d81fe69dd3ad317e2716b"} Oct 14 10:17:54 crc kubenswrapper[4698]: I1014 10:17:54.939913 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-lp4sk" event={"ID":"c359a8fc-1e2f-49af-8da2-719d52bd969a","Type":"ContainerStarted","Data":"63010ab0cc5421cf695e29fbbb1f6887fbbb050b898692330d5d62f331b0158a"} Oct 14 10:17:54 crc kubenswrapper[4698]: I1014 10:17:54.939942 4698 scope.go:117] "RemoveContainer" containerID="7096d53cbfbfab54f87b9b6c9da1611d27bf89715408c9583f5d8cbefe8b54b2" Oct 14 10:17:55 crc kubenswrapper[4698]: I1014 10:17:55.034965 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5dd9877c-3870-463d-bd81-1dd384c0e7ae" path="/var/lib/kubelet/pods/5dd9877c-3870-463d-bd81-1dd384c0e7ae/volumes" Oct 14 10:17:55 crc kubenswrapper[4698]: I1014 10:17:55.429316 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Oct 14 10:17:55 crc kubenswrapper[4698]: I1014 10:17:55.960984 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"066120b9-3158-4234-873d-178f6b65885c","Type":"ContainerStarted","Data":"975a6b489fa8ce0c92e8e3a07b99fdbbe985f30abe46d53532fc1ace97a77096"} Oct 14 10:17:55 crc kubenswrapper[4698]: I1014 10:17:55.961534 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" 
event={"ID":"066120b9-3158-4234-873d-178f6b65885c","Type":"ContainerStarted","Data":"6a8b77c86788b66058a389d74cc8df4b0984e67d796f4e486889e220edcbf103"} Oct 14 10:17:55 crc kubenswrapper[4698]: I1014 10:17:55.961562 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"066120b9-3158-4234-873d-178f6b65885c","Type":"ContainerStarted","Data":"f65e0a0e5724a458a5d341e438cb4c088c7f150e8110dfcb5104084a97193eda"} Oct 14 10:17:55 crc kubenswrapper[4698]: I1014 10:17:55.998437 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=1.998418446 podStartE2EDuration="1.998418446s" podCreationTimestamp="2025-10-14 10:17:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 10:17:55.991284723 +0000 UTC m=+1257.688584149" watchObservedRunningTime="2025-10-14 10:17:55.998418446 +0000 UTC m=+1257.695717862" Oct 14 10:17:56 crc kubenswrapper[4698]: I1014 10:17:56.313733 4698 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="385a7b7c-0ec6-4d79-8c3c-91e78ab81c3e" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.212:8775/\": read tcp 10.217.0.2:34632->10.217.0.212:8775: read: connection reset by peer" Oct 14 10:17:56 crc kubenswrapper[4698]: I1014 10:17:56.313743 4698 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="385a7b7c-0ec6-4d79-8c3c-91e78ab81c3e" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.212:8775/\": read tcp 10.217.0.2:34648->10.217.0.212:8775: read: connection reset by peer" Oct 14 10:17:56 crc kubenswrapper[4698]: I1014 10:17:56.792517 4698 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Oct 14 10:17:56 crc kubenswrapper[4698]: I1014 10:17:56.850115 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/385a7b7c-0ec6-4d79-8c3c-91e78ab81c3e-nova-metadata-tls-certs\") pod \"385a7b7c-0ec6-4d79-8c3c-91e78ab81c3e\" (UID: \"385a7b7c-0ec6-4d79-8c3c-91e78ab81c3e\") " Oct 14 10:17:56 crc kubenswrapper[4698]: I1014 10:17:56.850208 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7xbzk\" (UniqueName: \"kubernetes.io/projected/385a7b7c-0ec6-4d79-8c3c-91e78ab81c3e-kube-api-access-7xbzk\") pod \"385a7b7c-0ec6-4d79-8c3c-91e78ab81c3e\" (UID: \"385a7b7c-0ec6-4d79-8c3c-91e78ab81c3e\") " Oct 14 10:17:56 crc kubenswrapper[4698]: I1014 10:17:56.850256 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/385a7b7c-0ec6-4d79-8c3c-91e78ab81c3e-config-data\") pod \"385a7b7c-0ec6-4d79-8c3c-91e78ab81c3e\" (UID: \"385a7b7c-0ec6-4d79-8c3c-91e78ab81c3e\") " Oct 14 10:17:56 crc kubenswrapper[4698]: I1014 10:17:56.850301 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/385a7b7c-0ec6-4d79-8c3c-91e78ab81c3e-combined-ca-bundle\") pod \"385a7b7c-0ec6-4d79-8c3c-91e78ab81c3e\" (UID: \"385a7b7c-0ec6-4d79-8c3c-91e78ab81c3e\") " Oct 14 10:17:56 crc kubenswrapper[4698]: I1014 10:17:56.850440 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/385a7b7c-0ec6-4d79-8c3c-91e78ab81c3e-logs\") pod \"385a7b7c-0ec6-4d79-8c3c-91e78ab81c3e\" (UID: \"385a7b7c-0ec6-4d79-8c3c-91e78ab81c3e\") " Oct 14 10:17:56 crc kubenswrapper[4698]: I1014 10:17:56.851710 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/385a7b7c-0ec6-4d79-8c3c-91e78ab81c3e-logs" (OuterVolumeSpecName: "logs") pod "385a7b7c-0ec6-4d79-8c3c-91e78ab81c3e" (UID: "385a7b7c-0ec6-4d79-8c3c-91e78ab81c3e"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 10:17:56 crc kubenswrapper[4698]: I1014 10:17:56.870333 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/385a7b7c-0ec6-4d79-8c3c-91e78ab81c3e-kube-api-access-7xbzk" (OuterVolumeSpecName: "kube-api-access-7xbzk") pod "385a7b7c-0ec6-4d79-8c3c-91e78ab81c3e" (UID: "385a7b7c-0ec6-4d79-8c3c-91e78ab81c3e"). InnerVolumeSpecName "kube-api-access-7xbzk". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 10:17:56 crc kubenswrapper[4698]: I1014 10:17:56.907801 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/385a7b7c-0ec6-4d79-8c3c-91e78ab81c3e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "385a7b7c-0ec6-4d79-8c3c-91e78ab81c3e" (UID: "385a7b7c-0ec6-4d79-8c3c-91e78ab81c3e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 10:17:56 crc kubenswrapper[4698]: I1014 10:17:56.927027 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/385a7b7c-0ec6-4d79-8c3c-91e78ab81c3e-config-data" (OuterVolumeSpecName: "config-data") pod "385a7b7c-0ec6-4d79-8c3c-91e78ab81c3e" (UID: "385a7b7c-0ec6-4d79-8c3c-91e78ab81c3e"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 10:17:56 crc kubenswrapper[4698]: I1014 10:17:56.953360 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7xbzk\" (UniqueName: \"kubernetes.io/projected/385a7b7c-0ec6-4d79-8c3c-91e78ab81c3e-kube-api-access-7xbzk\") on node \"crc\" DevicePath \"\"" Oct 14 10:17:56 crc kubenswrapper[4698]: I1014 10:17:56.953399 4698 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/385a7b7c-0ec6-4d79-8c3c-91e78ab81c3e-config-data\") on node \"crc\" DevicePath \"\"" Oct 14 10:17:56 crc kubenswrapper[4698]: I1014 10:17:56.953409 4698 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/385a7b7c-0ec6-4d79-8c3c-91e78ab81c3e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 14 10:17:56 crc kubenswrapper[4698]: I1014 10:17:56.953418 4698 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/385a7b7c-0ec6-4d79-8c3c-91e78ab81c3e-logs\") on node \"crc\" DevicePath \"\"" Oct 14 10:17:56 crc kubenswrapper[4698]: I1014 10:17:56.974100 4698 generic.go:334] "Generic (PLEG): container finished" podID="385a7b7c-0ec6-4d79-8c3c-91e78ab81c3e" containerID="f7c060b244edf49eb0104cfa42c402d6652df14feb4c6e3661e9949aa6a9a7a2" exitCode=0 Oct 14 10:17:56 crc kubenswrapper[4698]: I1014 10:17:56.975147 4698 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Oct 14 10:17:56 crc kubenswrapper[4698]: I1014 10:17:56.975271 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"385a7b7c-0ec6-4d79-8c3c-91e78ab81c3e","Type":"ContainerDied","Data":"f7c060b244edf49eb0104cfa42c402d6652df14feb4c6e3661e9949aa6a9a7a2"} Oct 14 10:17:56 crc kubenswrapper[4698]: I1014 10:17:56.975299 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"385a7b7c-0ec6-4d79-8c3c-91e78ab81c3e","Type":"ContainerDied","Data":"22bc988ebdd52b6ef66e951182f192ca425b58e7532093db48b8cbcd97bc7303"} Oct 14 10:17:56 crc kubenswrapper[4698]: I1014 10:17:56.975319 4698 scope.go:117] "RemoveContainer" containerID="f7c060b244edf49eb0104cfa42c402d6652df14feb4c6e3661e9949aa6a9a7a2" Oct 14 10:17:56 crc kubenswrapper[4698]: I1014 10:17:56.983250 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/385a7b7c-0ec6-4d79-8c3c-91e78ab81c3e-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "385a7b7c-0ec6-4d79-8c3c-91e78ab81c3e" (UID: "385a7b7c-0ec6-4d79-8c3c-91e78ab81c3e"). InnerVolumeSpecName "nova-metadata-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 10:17:57 crc kubenswrapper[4698]: I1014 10:17:57.004687 4698 scope.go:117] "RemoveContainer" containerID="3935a4647153bc0d9928a41636d2f10a30ede7283415e2943abdd752df3027ad" Oct 14 10:17:57 crc kubenswrapper[4698]: I1014 10:17:57.027164 4698 scope.go:117] "RemoveContainer" containerID="f7c060b244edf49eb0104cfa42c402d6652df14feb4c6e3661e9949aa6a9a7a2" Oct 14 10:17:57 crc kubenswrapper[4698]: E1014 10:17:57.030870 4698 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f7c060b244edf49eb0104cfa42c402d6652df14feb4c6e3661e9949aa6a9a7a2\": container with ID starting with f7c060b244edf49eb0104cfa42c402d6652df14feb4c6e3661e9949aa6a9a7a2 not found: ID does not exist" containerID="f7c060b244edf49eb0104cfa42c402d6652df14feb4c6e3661e9949aa6a9a7a2" Oct 14 10:17:57 crc kubenswrapper[4698]: I1014 10:17:57.030952 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f7c060b244edf49eb0104cfa42c402d6652df14feb4c6e3661e9949aa6a9a7a2"} err="failed to get container status \"f7c060b244edf49eb0104cfa42c402d6652df14feb4c6e3661e9949aa6a9a7a2\": rpc error: code = NotFound desc = could not find container \"f7c060b244edf49eb0104cfa42c402d6652df14feb4c6e3661e9949aa6a9a7a2\": container with ID starting with f7c060b244edf49eb0104cfa42c402d6652df14feb4c6e3661e9949aa6a9a7a2 not found: ID does not exist" Oct 14 10:17:57 crc kubenswrapper[4698]: I1014 10:17:57.030990 4698 scope.go:117] "RemoveContainer" containerID="3935a4647153bc0d9928a41636d2f10a30ede7283415e2943abdd752df3027ad" Oct 14 10:17:57 crc kubenswrapper[4698]: E1014 10:17:57.031217 4698 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3935a4647153bc0d9928a41636d2f10a30ede7283415e2943abdd752df3027ad\": container with ID starting with 
3935a4647153bc0d9928a41636d2f10a30ede7283415e2943abdd752df3027ad not found: ID does not exist" containerID="3935a4647153bc0d9928a41636d2f10a30ede7283415e2943abdd752df3027ad" Oct 14 10:17:57 crc kubenswrapper[4698]: I1014 10:17:57.031242 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3935a4647153bc0d9928a41636d2f10a30ede7283415e2943abdd752df3027ad"} err="failed to get container status \"3935a4647153bc0d9928a41636d2f10a30ede7283415e2943abdd752df3027ad\": rpc error: code = NotFound desc = could not find container \"3935a4647153bc0d9928a41636d2f10a30ede7283415e2943abdd752df3027ad\": container with ID starting with 3935a4647153bc0d9928a41636d2f10a30ede7283415e2943abdd752df3027ad not found: ID does not exist" Oct 14 10:17:57 crc kubenswrapper[4698]: I1014 10:17:57.055565 4698 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/385a7b7c-0ec6-4d79-8c3c-91e78ab81c3e-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Oct 14 10:17:57 crc kubenswrapper[4698]: I1014 10:17:57.343834 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Oct 14 10:17:57 crc kubenswrapper[4698]: I1014 10:17:57.352256 4698 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Oct 14 10:17:57 crc kubenswrapper[4698]: I1014 10:17:57.382231 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Oct 14 10:17:57 crc kubenswrapper[4698]: E1014 10:17:57.382735 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="385a7b7c-0ec6-4d79-8c3c-91e78ab81c3e" containerName="nova-metadata-log" Oct 14 10:17:57 crc kubenswrapper[4698]: I1014 10:17:57.382753 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="385a7b7c-0ec6-4d79-8c3c-91e78ab81c3e" containerName="nova-metadata-log" Oct 14 10:17:57 crc kubenswrapper[4698]: E1014 10:17:57.382790 4698 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="385a7b7c-0ec6-4d79-8c3c-91e78ab81c3e" containerName="nova-metadata-metadata" Oct 14 10:17:57 crc kubenswrapper[4698]: I1014 10:17:57.382797 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="385a7b7c-0ec6-4d79-8c3c-91e78ab81c3e" containerName="nova-metadata-metadata" Oct 14 10:17:57 crc kubenswrapper[4698]: I1014 10:17:57.382985 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="385a7b7c-0ec6-4d79-8c3c-91e78ab81c3e" containerName="nova-metadata-log" Oct 14 10:17:57 crc kubenswrapper[4698]: I1014 10:17:57.383018 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="385a7b7c-0ec6-4d79-8c3c-91e78ab81c3e" containerName="nova-metadata-metadata" Oct 14 10:17:57 crc kubenswrapper[4698]: I1014 10:17:57.384078 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Oct 14 10:17:57 crc kubenswrapper[4698]: I1014 10:17:57.390203 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Oct 14 10:17:57 crc kubenswrapper[4698]: I1014 10:17:57.391744 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Oct 14 10:17:57 crc kubenswrapper[4698]: I1014 10:17:57.400359 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Oct 14 10:17:57 crc kubenswrapper[4698]: I1014 10:17:57.466170 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/28c60a34-a183-4fd8-a84e-2963e6676914-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"28c60a34-a183-4fd8-a84e-2963e6676914\") " pod="openstack/nova-metadata-0" Oct 14 10:17:57 crc kubenswrapper[4698]: I1014 10:17:57.466266 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/28c60a34-a183-4fd8-a84e-2963e6676914-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"28c60a34-a183-4fd8-a84e-2963e6676914\") " pod="openstack/nova-metadata-0" Oct 14 10:17:57 crc kubenswrapper[4698]: I1014 10:17:57.466319 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/28c60a34-a183-4fd8-a84e-2963e6676914-logs\") pod \"nova-metadata-0\" (UID: \"28c60a34-a183-4fd8-a84e-2963e6676914\") " pod="openstack/nova-metadata-0" Oct 14 10:17:57 crc kubenswrapper[4698]: I1014 10:17:57.466375 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/28c60a34-a183-4fd8-a84e-2963e6676914-config-data\") pod \"nova-metadata-0\" (UID: \"28c60a34-a183-4fd8-a84e-2963e6676914\") " pod="openstack/nova-metadata-0" Oct 14 10:17:57 crc kubenswrapper[4698]: I1014 10:17:57.466417 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b6nkw\" (UniqueName: \"kubernetes.io/projected/28c60a34-a183-4fd8-a84e-2963e6676914-kube-api-access-b6nkw\") pod \"nova-metadata-0\" (UID: \"28c60a34-a183-4fd8-a84e-2963e6676914\") " pod="openstack/nova-metadata-0" Oct 14 10:17:57 crc kubenswrapper[4698]: I1014 10:17:57.568671 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/28c60a34-a183-4fd8-a84e-2963e6676914-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"28c60a34-a183-4fd8-a84e-2963e6676914\") " pod="openstack/nova-metadata-0" Oct 14 10:17:57 crc kubenswrapper[4698]: I1014 10:17:57.568945 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/28c60a34-a183-4fd8-a84e-2963e6676914-combined-ca-bundle\") pod \"nova-metadata-0\" 
(UID: \"28c60a34-a183-4fd8-a84e-2963e6676914\") " pod="openstack/nova-metadata-0" Oct 14 10:17:57 crc kubenswrapper[4698]: I1014 10:17:57.568973 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/28c60a34-a183-4fd8-a84e-2963e6676914-logs\") pod \"nova-metadata-0\" (UID: \"28c60a34-a183-4fd8-a84e-2963e6676914\") " pod="openstack/nova-metadata-0" Oct 14 10:17:57 crc kubenswrapper[4698]: I1014 10:17:57.568997 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/28c60a34-a183-4fd8-a84e-2963e6676914-config-data\") pod \"nova-metadata-0\" (UID: \"28c60a34-a183-4fd8-a84e-2963e6676914\") " pod="openstack/nova-metadata-0" Oct 14 10:17:57 crc kubenswrapper[4698]: I1014 10:17:57.569027 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b6nkw\" (UniqueName: \"kubernetes.io/projected/28c60a34-a183-4fd8-a84e-2963e6676914-kube-api-access-b6nkw\") pod \"nova-metadata-0\" (UID: \"28c60a34-a183-4fd8-a84e-2963e6676914\") " pod="openstack/nova-metadata-0" Oct 14 10:17:57 crc kubenswrapper[4698]: I1014 10:17:57.569707 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/28c60a34-a183-4fd8-a84e-2963e6676914-logs\") pod \"nova-metadata-0\" (UID: \"28c60a34-a183-4fd8-a84e-2963e6676914\") " pod="openstack/nova-metadata-0" Oct 14 10:17:57 crc kubenswrapper[4698]: I1014 10:17:57.579488 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/28c60a34-a183-4fd8-a84e-2963e6676914-config-data\") pod \"nova-metadata-0\" (UID: \"28c60a34-a183-4fd8-a84e-2963e6676914\") " pod="openstack/nova-metadata-0" Oct 14 10:17:57 crc kubenswrapper[4698]: I1014 10:17:57.579580 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/28c60a34-a183-4fd8-a84e-2963e6676914-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"28c60a34-a183-4fd8-a84e-2963e6676914\") " pod="openstack/nova-metadata-0" Oct 14 10:17:57 crc kubenswrapper[4698]: I1014 10:17:57.580291 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/28c60a34-a183-4fd8-a84e-2963e6676914-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"28c60a34-a183-4fd8-a84e-2963e6676914\") " pod="openstack/nova-metadata-0" Oct 14 10:17:57 crc kubenswrapper[4698]: I1014 10:17:57.587340 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b6nkw\" (UniqueName: \"kubernetes.io/projected/28c60a34-a183-4fd8-a84e-2963e6676914-kube-api-access-b6nkw\") pod \"nova-metadata-0\" (UID: \"28c60a34-a183-4fd8-a84e-2963e6676914\") " pod="openstack/nova-metadata-0" Oct 14 10:17:57 crc kubenswrapper[4698]: I1014 10:17:57.704171 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Oct 14 10:17:58 crc kubenswrapper[4698]: I1014 10:17:58.004990 4698 generic.go:334] "Generic (PLEG): container finished" podID="eca9c9b4-7969-459b-87e1-66966d94f354" containerID="d7fa515d6973d454755b6a4d4711cc0a7b41d911337ea9ff97a0f4a8f03c0b25" exitCode=0 Oct 14 10:17:58 crc kubenswrapper[4698]: I1014 10:17:58.005090 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"eca9c9b4-7969-459b-87e1-66966d94f354","Type":"ContainerDied","Data":"d7fa515d6973d454755b6a4d4711cc0a7b41d911337ea9ff97a0f4a8f03c0b25"} Oct 14 10:17:58 crc kubenswrapper[4698]: I1014 10:17:58.127806 4698 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Oct 14 10:17:58 crc kubenswrapper[4698]: I1014 10:17:58.181107 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eca9c9b4-7969-459b-87e1-66966d94f354-combined-ca-bundle\") pod \"eca9c9b4-7969-459b-87e1-66966d94f354\" (UID: \"eca9c9b4-7969-459b-87e1-66966d94f354\") " Oct 14 10:17:58 crc kubenswrapper[4698]: I1014 10:17:58.181184 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eca9c9b4-7969-459b-87e1-66966d94f354-config-data\") pod \"eca9c9b4-7969-459b-87e1-66966d94f354\" (UID: \"eca9c9b4-7969-459b-87e1-66966d94f354\") " Oct 14 10:17:58 crc kubenswrapper[4698]: I1014 10:17:58.181518 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-86phm\" (UniqueName: \"kubernetes.io/projected/eca9c9b4-7969-459b-87e1-66966d94f354-kube-api-access-86phm\") pod \"eca9c9b4-7969-459b-87e1-66966d94f354\" (UID: \"eca9c9b4-7969-459b-87e1-66966d94f354\") " Oct 14 10:17:58 crc kubenswrapper[4698]: I1014 10:17:58.188539 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eca9c9b4-7969-459b-87e1-66966d94f354-kube-api-access-86phm" (OuterVolumeSpecName: "kube-api-access-86phm") pod "eca9c9b4-7969-459b-87e1-66966d94f354" (UID: "eca9c9b4-7969-459b-87e1-66966d94f354"). InnerVolumeSpecName "kube-api-access-86phm". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 10:17:58 crc kubenswrapper[4698]: I1014 10:17:58.215570 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eca9c9b4-7969-459b-87e1-66966d94f354-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "eca9c9b4-7969-459b-87e1-66966d94f354" (UID: "eca9c9b4-7969-459b-87e1-66966d94f354"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 10:17:58 crc kubenswrapper[4698]: I1014 10:17:58.219341 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eca9c9b4-7969-459b-87e1-66966d94f354-config-data" (OuterVolumeSpecName: "config-data") pod "eca9c9b4-7969-459b-87e1-66966d94f354" (UID: "eca9c9b4-7969-459b-87e1-66966d94f354"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 10:17:58 crc kubenswrapper[4698]: I1014 10:17:58.284814 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-86phm\" (UniqueName: \"kubernetes.io/projected/eca9c9b4-7969-459b-87e1-66966d94f354-kube-api-access-86phm\") on node \"crc\" DevicePath \"\"" Oct 14 10:17:58 crc kubenswrapper[4698]: I1014 10:17:58.284865 4698 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eca9c9b4-7969-459b-87e1-66966d94f354-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 14 10:17:58 crc kubenswrapper[4698]: I1014 10:17:58.284877 4698 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eca9c9b4-7969-459b-87e1-66966d94f354-config-data\") on node \"crc\" DevicePath \"\"" Oct 14 10:17:58 crc kubenswrapper[4698]: I1014 10:17:58.288356 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Oct 14 10:17:59 crc kubenswrapper[4698]: I1014 10:17:59.024227 4698 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Oct 14 10:17:59 crc kubenswrapper[4698]: I1014 10:17:59.027614 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="385a7b7c-0ec6-4d79-8c3c-91e78ab81c3e" path="/var/lib/kubelet/pods/385a7b7c-0ec6-4d79-8c3c-91e78ab81c3e/volumes" Oct 14 10:17:59 crc kubenswrapper[4698]: I1014 10:17:59.030924 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"28c60a34-a183-4fd8-a84e-2963e6676914","Type":"ContainerStarted","Data":"800e6fc45ec237215f9b3d18002cfea5340a35c9bf278210b369f7f4183bb071"} Oct 14 10:17:59 crc kubenswrapper[4698]: I1014 10:17:59.030990 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"28c60a34-a183-4fd8-a84e-2963e6676914","Type":"ContainerStarted","Data":"961823f94c5b7eaa4a361f02d05be33d025dba04633fe6fa051eb235c7212153"} Oct 14 10:17:59 crc kubenswrapper[4698]: I1014 10:17:59.031009 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"28c60a34-a183-4fd8-a84e-2963e6676914","Type":"ContainerStarted","Data":"2af00e71afca5f18889ab977d0fe8d05cb75f4a5e7e2748fa46576509b8eda9f"} Oct 14 10:17:59 crc kubenswrapper[4698]: I1014 10:17:59.031025 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"eca9c9b4-7969-459b-87e1-66966d94f354","Type":"ContainerDied","Data":"0572dc402fd856f0e485dd60ea6a299672f188b5dcc721bc6a353c30c80dfb02"} Oct 14 10:17:59 crc kubenswrapper[4698]: I1014 10:17:59.031059 4698 scope.go:117] "RemoveContainer" containerID="d7fa515d6973d454755b6a4d4711cc0a7b41d911337ea9ff97a0f4a8f03c0b25" Oct 14 10:17:59 crc kubenswrapper[4698]: I1014 10:17:59.071849 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.071826568 podStartE2EDuration="2.071826568s" podCreationTimestamp="2025-10-14 10:17:57 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 10:17:59.065652422 +0000 UTC m=+1260.762951848" watchObservedRunningTime="2025-10-14 10:17:59.071826568 +0000 UTC m=+1260.769125984" Oct 14 10:17:59 crc kubenswrapper[4698]: I1014 10:17:59.104235 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Oct 14 10:17:59 crc kubenswrapper[4698]: I1014 10:17:59.113594 4698 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Oct 14 10:17:59 crc kubenswrapper[4698]: I1014 10:17:59.123705 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Oct 14 10:17:59 crc kubenswrapper[4698]: E1014 10:17:59.124300 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eca9c9b4-7969-459b-87e1-66966d94f354" containerName="nova-scheduler-scheduler" Oct 14 10:17:59 crc kubenswrapper[4698]: I1014 10:17:59.124325 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="eca9c9b4-7969-459b-87e1-66966d94f354" containerName="nova-scheduler-scheduler" Oct 14 10:17:59 crc kubenswrapper[4698]: I1014 10:17:59.124607 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="eca9c9b4-7969-459b-87e1-66966d94f354" containerName="nova-scheduler-scheduler" Oct 14 10:17:59 crc kubenswrapper[4698]: I1014 10:17:59.125397 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Oct 14 10:17:59 crc kubenswrapper[4698]: I1014 10:17:59.129122 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Oct 14 10:17:59 crc kubenswrapper[4698]: I1014 10:17:59.130645 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Oct 14 10:17:59 crc kubenswrapper[4698]: I1014 10:17:59.202881 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0ec9e017-c819-42ce-8a1f-73b89dfa0459-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"0ec9e017-c819-42ce-8a1f-73b89dfa0459\") " pod="openstack/nova-scheduler-0" Oct 14 10:17:59 crc kubenswrapper[4698]: I1014 10:17:59.203365 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0ec9e017-c819-42ce-8a1f-73b89dfa0459-config-data\") pod \"nova-scheduler-0\" (UID: \"0ec9e017-c819-42ce-8a1f-73b89dfa0459\") " pod="openstack/nova-scheduler-0" Oct 14 10:17:59 crc kubenswrapper[4698]: I1014 10:17:59.203489 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bjkc4\" (UniqueName: \"kubernetes.io/projected/0ec9e017-c819-42ce-8a1f-73b89dfa0459-kube-api-access-bjkc4\") pod \"nova-scheduler-0\" (UID: \"0ec9e017-c819-42ce-8a1f-73b89dfa0459\") " pod="openstack/nova-scheduler-0" Oct 14 10:17:59 crc kubenswrapper[4698]: I1014 10:17:59.305080 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0ec9e017-c819-42ce-8a1f-73b89dfa0459-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"0ec9e017-c819-42ce-8a1f-73b89dfa0459\") " pod="openstack/nova-scheduler-0" Oct 14 10:17:59 crc kubenswrapper[4698]: I1014 10:17:59.305195 4698 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0ec9e017-c819-42ce-8a1f-73b89dfa0459-config-data\") pod \"nova-scheduler-0\" (UID: \"0ec9e017-c819-42ce-8a1f-73b89dfa0459\") " pod="openstack/nova-scheduler-0" Oct 14 10:17:59 crc kubenswrapper[4698]: I1014 10:17:59.305270 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bjkc4\" (UniqueName: \"kubernetes.io/projected/0ec9e017-c819-42ce-8a1f-73b89dfa0459-kube-api-access-bjkc4\") pod \"nova-scheduler-0\" (UID: \"0ec9e017-c819-42ce-8a1f-73b89dfa0459\") " pod="openstack/nova-scheduler-0" Oct 14 10:17:59 crc kubenswrapper[4698]: I1014 10:17:59.310602 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0ec9e017-c819-42ce-8a1f-73b89dfa0459-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"0ec9e017-c819-42ce-8a1f-73b89dfa0459\") " pod="openstack/nova-scheduler-0" Oct 14 10:17:59 crc kubenswrapper[4698]: I1014 10:17:59.312963 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0ec9e017-c819-42ce-8a1f-73b89dfa0459-config-data\") pod \"nova-scheduler-0\" (UID: \"0ec9e017-c819-42ce-8a1f-73b89dfa0459\") " pod="openstack/nova-scheduler-0" Oct 14 10:17:59 crc kubenswrapper[4698]: I1014 10:17:59.320285 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bjkc4\" (UniqueName: \"kubernetes.io/projected/0ec9e017-c819-42ce-8a1f-73b89dfa0459-kube-api-access-bjkc4\") pod \"nova-scheduler-0\" (UID: \"0ec9e017-c819-42ce-8a1f-73b89dfa0459\") " pod="openstack/nova-scheduler-0" Oct 14 10:17:59 crc kubenswrapper[4698]: I1014 10:17:59.448448 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Oct 14 10:17:59 crc kubenswrapper[4698]: I1014 10:17:59.866880 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Oct 14 10:17:59 crc kubenswrapper[4698]: W1014 10:17:59.870831 4698 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0ec9e017_c819_42ce_8a1f_73b89dfa0459.slice/crio-0adf98042b961891aa5dee3e0d040db56d7fbf3e573264f8b5d29a2582ebadaa WatchSource:0}: Error finding container 0adf98042b961891aa5dee3e0d040db56d7fbf3e573264f8b5d29a2582ebadaa: Status 404 returned error can't find the container with id 0adf98042b961891aa5dee3e0d040db56d7fbf3e573264f8b5d29a2582ebadaa Oct 14 10:18:00 crc kubenswrapper[4698]: I1014 10:18:00.046226 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"0ec9e017-c819-42ce-8a1f-73b89dfa0459","Type":"ContainerStarted","Data":"ea1be01415370143b2cffc9bb9cb589e9df43c31040d8ac0c534cb868e1b2a2a"} Oct 14 10:18:00 crc kubenswrapper[4698]: I1014 10:18:00.046527 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"0ec9e017-c819-42ce-8a1f-73b89dfa0459","Type":"ContainerStarted","Data":"0adf98042b961891aa5dee3e0d040db56d7fbf3e573264f8b5d29a2582ebadaa"} Oct 14 10:18:00 crc kubenswrapper[4698]: I1014 10:18:00.075019 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=1.074993839 podStartE2EDuration="1.074993839s" podCreationTimestamp="2025-10-14 10:17:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 10:18:00.062065771 +0000 UTC m=+1261.759365197" watchObservedRunningTime="2025-10-14 10:18:00.074993839 +0000 UTC m=+1261.772293265" Oct 14 10:18:01 crc kubenswrapper[4698]: I1014 10:18:01.029217 4698 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="eca9c9b4-7969-459b-87e1-66966d94f354" path="/var/lib/kubelet/pods/eca9c9b4-7969-459b-87e1-66966d94f354/volumes" Oct 14 10:18:02 crc kubenswrapper[4698]: I1014 10:18:02.704885 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Oct 14 10:18:02 crc kubenswrapper[4698]: I1014 10:18:02.705246 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Oct 14 10:18:04 crc kubenswrapper[4698]: I1014 10:18:04.448962 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Oct 14 10:18:04 crc kubenswrapper[4698]: I1014 10:18:04.931288 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Oct 14 10:18:04 crc kubenswrapper[4698]: I1014 10:18:04.931339 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Oct 14 10:18:05 crc kubenswrapper[4698]: I1014 10:18:05.946007 4698 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="066120b9-3158-4234-873d-178f6b65885c" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.0.220:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Oct 14 10:18:05 crc kubenswrapper[4698]: I1014 10:18:05.946103 4698 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="066120b9-3158-4234-873d-178f6b65885c" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.0.220:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Oct 14 10:18:07 crc kubenswrapper[4698]: I1014 10:18:07.704902 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Oct 14 10:18:07 crc kubenswrapper[4698]: I1014 10:18:07.705266 4698 kubelet.go:2542] "SyncLoop 
(probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Oct 14 10:18:08 crc kubenswrapper[4698]: I1014 10:18:08.716926 4698 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="28c60a34-a183-4fd8-a84e-2963e6676914" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.221:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Oct 14 10:18:08 crc kubenswrapper[4698]: I1014 10:18:08.717005 4698 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="28c60a34-a183-4fd8-a84e-2963e6676914" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.221:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Oct 14 10:18:09 crc kubenswrapper[4698]: I1014 10:18:09.449059 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Oct 14 10:18:09 crc kubenswrapper[4698]: I1014 10:18:09.491436 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Oct 14 10:18:10 crc kubenswrapper[4698]: I1014 10:18:10.224335 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Oct 14 10:18:12 crc kubenswrapper[4698]: I1014 10:18:12.467940 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Oct 14 10:18:14 crc kubenswrapper[4698]: I1014 10:18:14.937672 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Oct 14 10:18:14 crc kubenswrapper[4698]: I1014 10:18:14.938371 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Oct 14 10:18:14 crc kubenswrapper[4698]: I1014 10:18:14.942313 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Oct 14 
10:18:14 crc kubenswrapper[4698]: I1014 10:18:14.946091 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Oct 14 10:18:15 crc kubenswrapper[4698]: I1014 10:18:15.228451 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Oct 14 10:18:15 crc kubenswrapper[4698]: I1014 10:18:15.235185 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Oct 14 10:18:17 crc kubenswrapper[4698]: I1014 10:18:17.713716 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Oct 14 10:18:17 crc kubenswrapper[4698]: I1014 10:18:17.714964 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Oct 14 10:18:17 crc kubenswrapper[4698]: I1014 10:18:17.720720 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Oct 14 10:18:17 crc kubenswrapper[4698]: I1014 10:18:17.728269 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Oct 14 10:18:25 crc kubenswrapper[4698]: I1014 10:18:25.691452 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Oct 14 10:18:26 crc kubenswrapper[4698]: I1014 10:18:26.639486 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Oct 14 10:18:30 crc kubenswrapper[4698]: I1014 10:18:30.477010 4698 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-server-0" podUID="4c8cdd03-2ef0-496f-8748-d1495be75e5f" containerName="rabbitmq" containerID="cri-o://994ea4a6472c8ce428376e2983af5f26ee62e523e01201f99d5846c3b32033ed" gracePeriod=604796 Oct 14 10:18:30 crc kubenswrapper[4698]: I1014 10:18:30.607293 4698 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-0" 
podUID="4c8cdd03-2ef0-496f-8748-d1495be75e5f" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.100:5671: connect: connection refused" Oct 14 10:18:30 crc kubenswrapper[4698]: I1014 10:18:30.904066 4698 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-cell1-server-0" podUID="a710709f-1c22-4fff-b329-6d446917af01" containerName="rabbitmq" containerID="cri-o://3c3f0afba5f4ef506546d4373fb1ab912750c618d3655c95fde1229b472d5d2b" gracePeriod=604796 Oct 14 10:18:37 crc kubenswrapper[4698]: I1014 10:18:37.202732 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Oct 14 10:18:37 crc kubenswrapper[4698]: I1014 10:18:37.281469 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/4c8cdd03-2ef0-496f-8748-d1495be75e5f-rabbitmq-tls\") pod \"4c8cdd03-2ef0-496f-8748-d1495be75e5f\" (UID: \"4c8cdd03-2ef0-496f-8748-d1495be75e5f\") " Oct 14 10:18:37 crc kubenswrapper[4698]: I1014 10:18:37.281536 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/4c8cdd03-2ef0-496f-8748-d1495be75e5f-rabbitmq-erlang-cookie\") pod \"4c8cdd03-2ef0-496f-8748-d1495be75e5f\" (UID: \"4c8cdd03-2ef0-496f-8748-d1495be75e5f\") " Oct 14 10:18:37 crc kubenswrapper[4698]: I1014 10:18:37.281575 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/4c8cdd03-2ef0-496f-8748-d1495be75e5f-erlang-cookie-secret\") pod \"4c8cdd03-2ef0-496f-8748-d1495be75e5f\" (UID: \"4c8cdd03-2ef0-496f-8748-d1495be75e5f\") " Oct 14 10:18:37 crc kubenswrapper[4698]: I1014 10:18:37.281673 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/configmap/4c8cdd03-2ef0-496f-8748-d1495be75e5f-config-data\") pod \"4c8cdd03-2ef0-496f-8748-d1495be75e5f\" (UID: \"4c8cdd03-2ef0-496f-8748-d1495be75e5f\") " Oct 14 10:18:37 crc kubenswrapper[4698]: I1014 10:18:37.281711 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/4c8cdd03-2ef0-496f-8748-d1495be75e5f-rabbitmq-confd\") pod \"4c8cdd03-2ef0-496f-8748-d1495be75e5f\" (UID: \"4c8cdd03-2ef0-496f-8748-d1495be75e5f\") " Oct 14 10:18:37 crc kubenswrapper[4698]: I1014 10:18:37.281738 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/4c8cdd03-2ef0-496f-8748-d1495be75e5f-server-conf\") pod \"4c8cdd03-2ef0-496f-8748-d1495be75e5f\" (UID: \"4c8cdd03-2ef0-496f-8748-d1495be75e5f\") " Oct 14 10:18:37 crc kubenswrapper[4698]: I1014 10:18:37.281776 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/4c8cdd03-2ef0-496f-8748-d1495be75e5f-pod-info\") pod \"4c8cdd03-2ef0-496f-8748-d1495be75e5f\" (UID: \"4c8cdd03-2ef0-496f-8748-d1495be75e5f\") " Oct 14 10:18:37 crc kubenswrapper[4698]: I1014 10:18:37.281817 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/4c8cdd03-2ef0-496f-8748-d1495be75e5f-plugins-conf\") pod \"4c8cdd03-2ef0-496f-8748-d1495be75e5f\" (UID: \"4c8cdd03-2ef0-496f-8748-d1495be75e5f\") " Oct 14 10:18:37 crc kubenswrapper[4698]: I1014 10:18:37.281837 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/4c8cdd03-2ef0-496f-8748-d1495be75e5f-rabbitmq-plugins\") pod \"4c8cdd03-2ef0-496f-8748-d1495be75e5f\" (UID: \"4c8cdd03-2ef0-496f-8748-d1495be75e5f\") " Oct 14 10:18:37 crc kubenswrapper[4698]: I1014 10:18:37.281938 4698 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"4c8cdd03-2ef0-496f-8748-d1495be75e5f\" (UID: \"4c8cdd03-2ef0-496f-8748-d1495be75e5f\") " Oct 14 10:18:37 crc kubenswrapper[4698]: I1014 10:18:37.281974 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5kjg2\" (UniqueName: \"kubernetes.io/projected/4c8cdd03-2ef0-496f-8748-d1495be75e5f-kube-api-access-5kjg2\") pod \"4c8cdd03-2ef0-496f-8748-d1495be75e5f\" (UID: \"4c8cdd03-2ef0-496f-8748-d1495be75e5f\") " Oct 14 10:18:37 crc kubenswrapper[4698]: I1014 10:18:37.288950 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4c8cdd03-2ef0-496f-8748-d1495be75e5f-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "4c8cdd03-2ef0-496f-8748-d1495be75e5f" (UID: "4c8cdd03-2ef0-496f-8748-d1495be75e5f"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 10:18:37 crc kubenswrapper[4698]: I1014 10:18:37.290346 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4c8cdd03-2ef0-496f-8748-d1495be75e5f-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "4c8cdd03-2ef0-496f-8748-d1495be75e5f" (UID: "4c8cdd03-2ef0-496f-8748-d1495be75e5f"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 10:18:37 crc kubenswrapper[4698]: I1014 10:18:37.290918 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4c8cdd03-2ef0-496f-8748-d1495be75e5f-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "4c8cdd03-2ef0-496f-8748-d1495be75e5f" (UID: "4c8cdd03-2ef0-496f-8748-d1495be75e5f"). InnerVolumeSpecName "rabbitmq-erlang-cookie". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 10:18:37 crc kubenswrapper[4698]: I1014 10:18:37.295110 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4c8cdd03-2ef0-496f-8748-d1495be75e5f-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "4c8cdd03-2ef0-496f-8748-d1495be75e5f" (UID: "4c8cdd03-2ef0-496f-8748-d1495be75e5f"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 10:18:37 crc kubenswrapper[4698]: I1014 10:18:37.297463 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4c8cdd03-2ef0-496f-8748-d1495be75e5f-kube-api-access-5kjg2" (OuterVolumeSpecName: "kube-api-access-5kjg2") pod "4c8cdd03-2ef0-496f-8748-d1495be75e5f" (UID: "4c8cdd03-2ef0-496f-8748-d1495be75e5f"). InnerVolumeSpecName "kube-api-access-5kjg2". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 10:18:37 crc kubenswrapper[4698]: I1014 10:18:37.301364 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/4c8cdd03-2ef0-496f-8748-d1495be75e5f-pod-info" (OuterVolumeSpecName: "pod-info") pod "4c8cdd03-2ef0-496f-8748-d1495be75e5f" (UID: "4c8cdd03-2ef0-496f-8748-d1495be75e5f"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Oct 14 10:18:37 crc kubenswrapper[4698]: I1014 10:18:37.317355 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage12-crc" (OuterVolumeSpecName: "persistence") pod "4c8cdd03-2ef0-496f-8748-d1495be75e5f" (UID: "4c8cdd03-2ef0-496f-8748-d1495be75e5f"). InnerVolumeSpecName "local-storage12-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Oct 14 10:18:37 crc kubenswrapper[4698]: I1014 10:18:37.332057 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4c8cdd03-2ef0-496f-8748-d1495be75e5f-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "4c8cdd03-2ef0-496f-8748-d1495be75e5f" (UID: "4c8cdd03-2ef0-496f-8748-d1495be75e5f"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 10:18:37 crc kubenswrapper[4698]: I1014 10:18:37.344701 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4c8cdd03-2ef0-496f-8748-d1495be75e5f-config-data" (OuterVolumeSpecName: "config-data") pod "4c8cdd03-2ef0-496f-8748-d1495be75e5f" (UID: "4c8cdd03-2ef0-496f-8748-d1495be75e5f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 10:18:37 crc kubenswrapper[4698]: I1014 10:18:37.388539 4698 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/4c8cdd03-2ef0-496f-8748-d1495be75e5f-config-data\") on node \"crc\" DevicePath \"\"" Oct 14 10:18:37 crc kubenswrapper[4698]: I1014 10:18:37.388586 4698 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/4c8cdd03-2ef0-496f-8748-d1495be75e5f-pod-info\") on node \"crc\" DevicePath \"\"" Oct 14 10:18:37 crc kubenswrapper[4698]: I1014 10:18:37.388599 4698 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/4c8cdd03-2ef0-496f-8748-d1495be75e5f-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Oct 14 10:18:37 crc kubenswrapper[4698]: I1014 10:18:37.388613 4698 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/4c8cdd03-2ef0-496f-8748-d1495be75e5f-plugins-conf\") on node \"crc\" DevicePath 
\"\"" Oct 14 10:18:37 crc kubenswrapper[4698]: I1014 10:18:37.388644 4698 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") on node \"crc\" " Oct 14 10:18:37 crc kubenswrapper[4698]: I1014 10:18:37.388660 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5kjg2\" (UniqueName: \"kubernetes.io/projected/4c8cdd03-2ef0-496f-8748-d1495be75e5f-kube-api-access-5kjg2\") on node \"crc\" DevicePath \"\"" Oct 14 10:18:37 crc kubenswrapper[4698]: I1014 10:18:37.388672 4698 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/4c8cdd03-2ef0-496f-8748-d1495be75e5f-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Oct 14 10:18:37 crc kubenswrapper[4698]: I1014 10:18:37.388685 4698 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/4c8cdd03-2ef0-496f-8748-d1495be75e5f-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Oct 14 10:18:37 crc kubenswrapper[4698]: I1014 10:18:37.388698 4698 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/4c8cdd03-2ef0-496f-8748-d1495be75e5f-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Oct 14 10:18:37 crc kubenswrapper[4698]: I1014 10:18:37.430314 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4c8cdd03-2ef0-496f-8748-d1495be75e5f-server-conf" (OuterVolumeSpecName: "server-conf") pod "4c8cdd03-2ef0-496f-8748-d1495be75e5f" (UID: "4c8cdd03-2ef0-496f-8748-d1495be75e5f"). InnerVolumeSpecName "server-conf". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 10:18:37 crc kubenswrapper[4698]: I1014 10:18:37.454667 4698 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage12-crc" (UniqueName: "kubernetes.io/local-volume/local-storage12-crc") on node "crc" Oct 14 10:18:37 crc kubenswrapper[4698]: I1014 10:18:37.469490 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4c8cdd03-2ef0-496f-8748-d1495be75e5f-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "4c8cdd03-2ef0-496f-8748-d1495be75e5f" (UID: "4c8cdd03-2ef0-496f-8748-d1495be75e5f"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 10:18:37 crc kubenswrapper[4698]: I1014 10:18:37.474558 4698 generic.go:334] "Generic (PLEG): container finished" podID="4c8cdd03-2ef0-496f-8748-d1495be75e5f" containerID="994ea4a6472c8ce428376e2983af5f26ee62e523e01201f99d5846c3b32033ed" exitCode=0 Oct 14 10:18:37 crc kubenswrapper[4698]: I1014 10:18:37.474686 4698 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Oct 14 10:18:37 crc kubenswrapper[4698]: I1014 10:18:37.474689 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"4c8cdd03-2ef0-496f-8748-d1495be75e5f","Type":"ContainerDied","Data":"994ea4a6472c8ce428376e2983af5f26ee62e523e01201f99d5846c3b32033ed"} Oct 14 10:18:37 crc kubenswrapper[4698]: I1014 10:18:37.474741 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"4c8cdd03-2ef0-496f-8748-d1495be75e5f","Type":"ContainerDied","Data":"4ec8400d779dd827fe40344717c9809c218e498a424def8f0c6350fb1d95fe72"} Oct 14 10:18:37 crc kubenswrapper[4698]: I1014 10:18:37.474770 4698 scope.go:117] "RemoveContainer" containerID="994ea4a6472c8ce428376e2983af5f26ee62e523e01201f99d5846c3b32033ed" Oct 14 10:18:37 crc kubenswrapper[4698]: I1014 10:18:37.483472 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Oct 14 10:18:37 crc kubenswrapper[4698]: I1014 10:18:37.484534 4698 generic.go:334] "Generic (PLEG): container finished" podID="a710709f-1c22-4fff-b329-6d446917af01" containerID="3c3f0afba5f4ef506546d4373fb1ab912750c618d3655c95fde1229b472d5d2b" exitCode=0 Oct 14 10:18:37 crc kubenswrapper[4698]: I1014 10:18:37.484607 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"a710709f-1c22-4fff-b329-6d446917af01","Type":"ContainerDied","Data":"3c3f0afba5f4ef506546d4373fb1ab912750c618d3655c95fde1229b472d5d2b"} Oct 14 10:18:37 crc kubenswrapper[4698]: I1014 10:18:37.491048 4698 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/4c8cdd03-2ef0-496f-8748-d1495be75e5f-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Oct 14 10:18:37 crc kubenswrapper[4698]: I1014 10:18:37.491087 4698 reconciler_common.go:293] "Volume detached for volume \"server-conf\" 
(UniqueName: \"kubernetes.io/configmap/4c8cdd03-2ef0-496f-8748-d1495be75e5f-server-conf\") on node \"crc\" DevicePath \"\"" Oct 14 10:18:37 crc kubenswrapper[4698]: I1014 10:18:37.491103 4698 reconciler_common.go:293] "Volume detached for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") on node \"crc\" DevicePath \"\"" Oct 14 10:18:37 crc kubenswrapper[4698]: I1014 10:18:37.520185 4698 scope.go:117] "RemoveContainer" containerID="f340819271df65223c49ce81de67564ac6f8f57281bb2c7494a270aff251bf81" Oct 14 10:18:37 crc kubenswrapper[4698]: I1014 10:18:37.563495 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Oct 14 10:18:37 crc kubenswrapper[4698]: I1014 10:18:37.575220 4698 scope.go:117] "RemoveContainer" containerID="994ea4a6472c8ce428376e2983af5f26ee62e523e01201f99d5846c3b32033ed" Oct 14 10:18:37 crc kubenswrapper[4698]: I1014 10:18:37.581318 4698 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-server-0"] Oct 14 10:18:37 crc kubenswrapper[4698]: E1014 10:18:37.583238 4698 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"994ea4a6472c8ce428376e2983af5f26ee62e523e01201f99d5846c3b32033ed\": container with ID starting with 994ea4a6472c8ce428376e2983af5f26ee62e523e01201f99d5846c3b32033ed not found: ID does not exist" containerID="994ea4a6472c8ce428376e2983af5f26ee62e523e01201f99d5846c3b32033ed" Oct 14 10:18:37 crc kubenswrapper[4698]: I1014 10:18:37.583282 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"994ea4a6472c8ce428376e2983af5f26ee62e523e01201f99d5846c3b32033ed"} err="failed to get container status \"994ea4a6472c8ce428376e2983af5f26ee62e523e01201f99d5846c3b32033ed\": rpc error: code = NotFound desc = could not find container \"994ea4a6472c8ce428376e2983af5f26ee62e523e01201f99d5846c3b32033ed\": container with ID starting with 
994ea4a6472c8ce428376e2983af5f26ee62e523e01201f99d5846c3b32033ed not found: ID does not exist" Oct 14 10:18:37 crc kubenswrapper[4698]: I1014 10:18:37.583310 4698 scope.go:117] "RemoveContainer" containerID="f340819271df65223c49ce81de67564ac6f8f57281bb2c7494a270aff251bf81" Oct 14 10:18:37 crc kubenswrapper[4698]: I1014 10:18:37.592938 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/a710709f-1c22-4fff-b329-6d446917af01-server-conf\") pod \"a710709f-1c22-4fff-b329-6d446917af01\" (UID: \"a710709f-1c22-4fff-b329-6d446917af01\") " Oct 14 10:18:37 crc kubenswrapper[4698]: I1014 10:18:37.592988 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"a710709f-1c22-4fff-b329-6d446917af01\" (UID: \"a710709f-1c22-4fff-b329-6d446917af01\") " Oct 14 10:18:37 crc kubenswrapper[4698]: I1014 10:18:37.593043 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/a710709f-1c22-4fff-b329-6d446917af01-rabbitmq-confd\") pod \"a710709f-1c22-4fff-b329-6d446917af01\" (UID: \"a710709f-1c22-4fff-b329-6d446917af01\") " Oct 14 10:18:37 crc kubenswrapper[4698]: I1014 10:18:37.593097 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a710709f-1c22-4fff-b329-6d446917af01-config-data\") pod \"a710709f-1c22-4fff-b329-6d446917af01\" (UID: \"a710709f-1c22-4fff-b329-6d446917af01\") " Oct 14 10:18:37 crc kubenswrapper[4698]: I1014 10:18:37.593116 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/a710709f-1c22-4fff-b329-6d446917af01-rabbitmq-erlang-cookie\") pod \"a710709f-1c22-4fff-b329-6d446917af01\" (UID: 
\"a710709f-1c22-4fff-b329-6d446917af01\") " Oct 14 10:18:37 crc kubenswrapper[4698]: I1014 10:18:37.593151 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/a710709f-1c22-4fff-b329-6d446917af01-pod-info\") pod \"a710709f-1c22-4fff-b329-6d446917af01\" (UID: \"a710709f-1c22-4fff-b329-6d446917af01\") " Oct 14 10:18:37 crc kubenswrapper[4698]: I1014 10:18:37.593185 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xgxbs\" (UniqueName: \"kubernetes.io/projected/a710709f-1c22-4fff-b329-6d446917af01-kube-api-access-xgxbs\") pod \"a710709f-1c22-4fff-b329-6d446917af01\" (UID: \"a710709f-1c22-4fff-b329-6d446917af01\") " Oct 14 10:18:37 crc kubenswrapper[4698]: I1014 10:18:37.593226 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/a710709f-1c22-4fff-b329-6d446917af01-rabbitmq-plugins\") pod \"a710709f-1c22-4fff-b329-6d446917af01\" (UID: \"a710709f-1c22-4fff-b329-6d446917af01\") " Oct 14 10:18:37 crc kubenswrapper[4698]: I1014 10:18:37.593259 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/a710709f-1c22-4fff-b329-6d446917af01-plugins-conf\") pod \"a710709f-1c22-4fff-b329-6d446917af01\" (UID: \"a710709f-1c22-4fff-b329-6d446917af01\") " Oct 14 10:18:37 crc kubenswrapper[4698]: I1014 10:18:37.593302 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/a710709f-1c22-4fff-b329-6d446917af01-rabbitmq-tls\") pod \"a710709f-1c22-4fff-b329-6d446917af01\" (UID: \"a710709f-1c22-4fff-b329-6d446917af01\") " Oct 14 10:18:37 crc kubenswrapper[4698]: I1014 10:18:37.593399 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: 
\"kubernetes.io/secret/a710709f-1c22-4fff-b329-6d446917af01-erlang-cookie-secret\") pod \"a710709f-1c22-4fff-b329-6d446917af01\" (UID: \"a710709f-1c22-4fff-b329-6d446917af01\") " Oct 14 10:18:37 crc kubenswrapper[4698]: E1014 10:18:37.594563 4698 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f340819271df65223c49ce81de67564ac6f8f57281bb2c7494a270aff251bf81\": container with ID starting with f340819271df65223c49ce81de67564ac6f8f57281bb2c7494a270aff251bf81 not found: ID does not exist" containerID="f340819271df65223c49ce81de67564ac6f8f57281bb2c7494a270aff251bf81" Oct 14 10:18:37 crc kubenswrapper[4698]: I1014 10:18:37.594607 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f340819271df65223c49ce81de67564ac6f8f57281bb2c7494a270aff251bf81"} err="failed to get container status \"f340819271df65223c49ce81de67564ac6f8f57281bb2c7494a270aff251bf81\": rpc error: code = NotFound desc = could not find container \"f340819271df65223c49ce81de67564ac6f8f57281bb2c7494a270aff251bf81\": container with ID starting with f340819271df65223c49ce81de67564ac6f8f57281bb2c7494a270aff251bf81 not found: ID does not exist" Oct 14 10:18:37 crc kubenswrapper[4698]: I1014 10:18:37.594637 4698 scope.go:117] "RemoveContainer" containerID="3c3f0afba5f4ef506546d4373fb1ab912750c618d3655c95fde1229b472d5d2b" Oct 14 10:18:37 crc kubenswrapper[4698]: I1014 10:18:37.595152 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a710709f-1c22-4fff-b329-6d446917af01-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "a710709f-1c22-4fff-b329-6d446917af01" (UID: "a710709f-1c22-4fff-b329-6d446917af01"). InnerVolumeSpecName "rabbitmq-erlang-cookie". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 10:18:37 crc kubenswrapper[4698]: I1014 10:18:37.597420 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Oct 14 10:18:37 crc kubenswrapper[4698]: E1014 10:18:37.597931 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4c8cdd03-2ef0-496f-8748-d1495be75e5f" containerName="rabbitmq" Oct 14 10:18:37 crc kubenswrapper[4698]: I1014 10:18:37.597956 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="4c8cdd03-2ef0-496f-8748-d1495be75e5f" containerName="rabbitmq" Oct 14 10:18:37 crc kubenswrapper[4698]: I1014 10:18:37.598307 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a710709f-1c22-4fff-b329-6d446917af01-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "a710709f-1c22-4fff-b329-6d446917af01" (UID: "a710709f-1c22-4fff-b329-6d446917af01"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 10:18:37 crc kubenswrapper[4698]: I1014 10:18:37.598510 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a710709f-1c22-4fff-b329-6d446917af01-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "a710709f-1c22-4fff-b329-6d446917af01" (UID: "a710709f-1c22-4fff-b329-6d446917af01"). InnerVolumeSpecName "plugins-conf". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 10:18:37 crc kubenswrapper[4698]: E1014 10:18:37.598811 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a710709f-1c22-4fff-b329-6d446917af01" containerName="setup-container" Oct 14 10:18:37 crc kubenswrapper[4698]: I1014 10:18:37.598831 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="a710709f-1c22-4fff-b329-6d446917af01" containerName="setup-container" Oct 14 10:18:37 crc kubenswrapper[4698]: E1014 10:18:37.598856 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a710709f-1c22-4fff-b329-6d446917af01" containerName="rabbitmq" Oct 14 10:18:37 crc kubenswrapper[4698]: I1014 10:18:37.598865 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="a710709f-1c22-4fff-b329-6d446917af01" containerName="rabbitmq" Oct 14 10:18:37 crc kubenswrapper[4698]: E1014 10:18:37.598882 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4c8cdd03-2ef0-496f-8748-d1495be75e5f" containerName="setup-container" Oct 14 10:18:37 crc kubenswrapper[4698]: I1014 10:18:37.598890 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="4c8cdd03-2ef0-496f-8748-d1495be75e5f" containerName="setup-container" Oct 14 10:18:37 crc kubenswrapper[4698]: I1014 10:18:37.623312 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="a710709f-1c22-4fff-b329-6d446917af01" containerName="rabbitmq" Oct 14 10:18:37 crc kubenswrapper[4698]: I1014 10:18:37.623350 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="4c8cdd03-2ef0-496f-8748-d1495be75e5f" containerName="rabbitmq" Oct 14 10:18:37 crc kubenswrapper[4698]: I1014 10:18:37.624475 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Oct 14 10:18:37 crc kubenswrapper[4698]: I1014 10:18:37.627575 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/a710709f-1c22-4fff-b329-6d446917af01-pod-info" (OuterVolumeSpecName: "pod-info") pod "a710709f-1c22-4fff-b329-6d446917af01" (UID: "a710709f-1c22-4fff-b329-6d446917af01"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Oct 14 10:18:37 crc kubenswrapper[4698]: I1014 10:18:37.628218 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a710709f-1c22-4fff-b329-6d446917af01-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "a710709f-1c22-4fff-b329-6d446917af01" (UID: "a710709f-1c22-4fff-b329-6d446917af01"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 10:18:37 crc kubenswrapper[4698]: I1014 10:18:37.629304 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a710709f-1c22-4fff-b329-6d446917af01-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "a710709f-1c22-4fff-b329-6d446917af01" (UID: "a710709f-1c22-4fff-b329-6d446917af01"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 10:18:37 crc kubenswrapper[4698]: I1014 10:18:37.631593 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Oct 14 10:18:37 crc kubenswrapper[4698]: I1014 10:18:37.644597 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage05-crc" (OuterVolumeSpecName: "persistence") pod "a710709f-1c22-4fff-b329-6d446917af01" (UID: "a710709f-1c22-4fff-b329-6d446917af01"). InnerVolumeSpecName "local-storage05-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Oct 14 10:18:37 crc kubenswrapper[4698]: I1014 10:18:37.644842 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a710709f-1c22-4fff-b329-6d446917af01-kube-api-access-xgxbs" (OuterVolumeSpecName: "kube-api-access-xgxbs") pod "a710709f-1c22-4fff-b329-6d446917af01" (UID: "a710709f-1c22-4fff-b329-6d446917af01"). InnerVolumeSpecName "kube-api-access-xgxbs". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 10:18:37 crc kubenswrapper[4698]: I1014 10:18:37.645298 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Oct 14 10:18:37 crc kubenswrapper[4698]: I1014 10:18:37.645356 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Oct 14 10:18:37 crc kubenswrapper[4698]: I1014 10:18:37.645426 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Oct 14 10:18:37 crc kubenswrapper[4698]: I1014 10:18:37.645481 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Oct 14 10:18:37 crc kubenswrapper[4698]: I1014 10:18:37.645570 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-vczn6" Oct 14 10:18:37 crc kubenswrapper[4698]: I1014 10:18:37.645960 4698 scope.go:117] "RemoveContainer" containerID="02ab09bd1ef174e5d51d3059758a373cd337d02d2d18f0123578e5d49f2c0d75" Oct 14 10:18:37 crc kubenswrapper[4698]: I1014 10:18:37.645603 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Oct 14 10:18:37 crc kubenswrapper[4698]: I1014 10:18:37.645686 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Oct 14 10:18:37 crc kubenswrapper[4698]: I1014 10:18:37.695138 4698 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/a14f78a2-c755-4288-bf05-45f4a540d301-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"a14f78a2-c755-4288-bf05-45f4a540d301\") " pod="openstack/rabbitmq-server-0" Oct 14 10:18:37 crc kubenswrapper[4698]: I1014 10:18:37.695475 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s5tgw\" (UniqueName: \"kubernetes.io/projected/a14f78a2-c755-4288-bf05-45f4a540d301-kube-api-access-s5tgw\") pod \"rabbitmq-server-0\" (UID: \"a14f78a2-c755-4288-bf05-45f4a540d301\") " pod="openstack/rabbitmq-server-0" Oct 14 10:18:37 crc kubenswrapper[4698]: I1014 10:18:37.695504 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a14f78a2-c755-4288-bf05-45f4a540d301-config-data\") pod \"rabbitmq-server-0\" (UID: \"a14f78a2-c755-4288-bf05-45f4a540d301\") " pod="openstack/rabbitmq-server-0" Oct 14 10:18:37 crc kubenswrapper[4698]: I1014 10:18:37.695536 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/a14f78a2-c755-4288-bf05-45f4a540d301-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"a14f78a2-c755-4288-bf05-45f4a540d301\") " pod="openstack/rabbitmq-server-0" Oct 14 10:18:37 crc kubenswrapper[4698]: I1014 10:18:37.695558 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"rabbitmq-server-0\" (UID: \"a14f78a2-c755-4288-bf05-45f4a540d301\") " pod="openstack/rabbitmq-server-0" Oct 14 10:18:37 crc kubenswrapper[4698]: I1014 10:18:37.695594 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" 
(UniqueName: \"kubernetes.io/downward-api/a14f78a2-c755-4288-bf05-45f4a540d301-pod-info\") pod \"rabbitmq-server-0\" (UID: \"a14f78a2-c755-4288-bf05-45f4a540d301\") " pod="openstack/rabbitmq-server-0" Oct 14 10:18:37 crc kubenswrapper[4698]: I1014 10:18:37.695614 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/a14f78a2-c755-4288-bf05-45f4a540d301-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"a14f78a2-c755-4288-bf05-45f4a540d301\") " pod="openstack/rabbitmq-server-0" Oct 14 10:18:37 crc kubenswrapper[4698]: I1014 10:18:37.695662 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/a14f78a2-c755-4288-bf05-45f4a540d301-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"a14f78a2-c755-4288-bf05-45f4a540d301\") " pod="openstack/rabbitmq-server-0" Oct 14 10:18:37 crc kubenswrapper[4698]: I1014 10:18:37.695698 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/a14f78a2-c755-4288-bf05-45f4a540d301-server-conf\") pod \"rabbitmq-server-0\" (UID: \"a14f78a2-c755-4288-bf05-45f4a540d301\") " pod="openstack/rabbitmq-server-0" Oct 14 10:18:37 crc kubenswrapper[4698]: I1014 10:18:37.695738 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/a14f78a2-c755-4288-bf05-45f4a540d301-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"a14f78a2-c755-4288-bf05-45f4a540d301\") " pod="openstack/rabbitmq-server-0" Oct 14 10:18:37 crc kubenswrapper[4698]: I1014 10:18:37.696215 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: 
\"kubernetes.io/projected/a14f78a2-c755-4288-bf05-45f4a540d301-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"a14f78a2-c755-4288-bf05-45f4a540d301\") " pod="openstack/rabbitmq-server-0" Oct 14 10:18:37 crc kubenswrapper[4698]: I1014 10:18:37.696305 4698 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") on node \"crc\" " Oct 14 10:18:37 crc kubenswrapper[4698]: I1014 10:18:37.696437 4698 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/a710709f-1c22-4fff-b329-6d446917af01-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Oct 14 10:18:37 crc kubenswrapper[4698]: I1014 10:18:37.696516 4698 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/a710709f-1c22-4fff-b329-6d446917af01-pod-info\") on node \"crc\" DevicePath \"\"" Oct 14 10:18:37 crc kubenswrapper[4698]: I1014 10:18:37.696572 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xgxbs\" (UniqueName: \"kubernetes.io/projected/a710709f-1c22-4fff-b329-6d446917af01-kube-api-access-xgxbs\") on node \"crc\" DevicePath \"\"" Oct 14 10:18:37 crc kubenswrapper[4698]: I1014 10:18:37.696623 4698 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/a710709f-1c22-4fff-b329-6d446917af01-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Oct 14 10:18:37 crc kubenswrapper[4698]: I1014 10:18:37.696681 4698 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/a710709f-1c22-4fff-b329-6d446917af01-plugins-conf\") on node \"crc\" DevicePath \"\"" Oct 14 10:18:37 crc kubenswrapper[4698]: I1014 10:18:37.696732 4698 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: 
\"kubernetes.io/projected/a710709f-1c22-4fff-b329-6d446917af01-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Oct 14 10:18:37 crc kubenswrapper[4698]: I1014 10:18:37.696799 4698 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/a710709f-1c22-4fff-b329-6d446917af01-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Oct 14 10:18:37 crc kubenswrapper[4698]: I1014 10:18:37.701291 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a710709f-1c22-4fff-b329-6d446917af01-config-data" (OuterVolumeSpecName: "config-data") pod "a710709f-1c22-4fff-b329-6d446917af01" (UID: "a710709f-1c22-4fff-b329-6d446917af01"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 10:18:37 crc kubenswrapper[4698]: I1014 10:18:37.750079 4698 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage05-crc" (UniqueName: "kubernetes.io/local-volume/local-storage05-crc") on node "crc" Oct 14 10:18:37 crc kubenswrapper[4698]: I1014 10:18:37.801913 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a710709f-1c22-4fff-b329-6d446917af01-server-conf" (OuterVolumeSpecName: "server-conf") pod "a710709f-1c22-4fff-b329-6d446917af01" (UID: "a710709f-1c22-4fff-b329-6d446917af01"). InnerVolumeSpecName "server-conf". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 10:18:37 crc kubenswrapper[4698]: I1014 10:18:37.830310 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/a14f78a2-c755-4288-bf05-45f4a540d301-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"a14f78a2-c755-4288-bf05-45f4a540d301\") " pod="openstack/rabbitmq-server-0" Oct 14 10:18:37 crc kubenswrapper[4698]: I1014 10:18:37.830400 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/a14f78a2-c755-4288-bf05-45f4a540d301-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"a14f78a2-c755-4288-bf05-45f4a540d301\") " pod="openstack/rabbitmq-server-0" Oct 14 10:18:37 crc kubenswrapper[4698]: I1014 10:18:37.830489 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s5tgw\" (UniqueName: \"kubernetes.io/projected/a14f78a2-c755-4288-bf05-45f4a540d301-kube-api-access-s5tgw\") pod \"rabbitmq-server-0\" (UID: \"a14f78a2-c755-4288-bf05-45f4a540d301\") " pod="openstack/rabbitmq-server-0" Oct 14 10:18:37 crc kubenswrapper[4698]: I1014 10:18:37.830530 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a14f78a2-c755-4288-bf05-45f4a540d301-config-data\") pod \"rabbitmq-server-0\" (UID: \"a14f78a2-c755-4288-bf05-45f4a540d301\") " pod="openstack/rabbitmq-server-0" Oct 14 10:18:37 crc kubenswrapper[4698]: I1014 10:18:37.830592 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/a14f78a2-c755-4288-bf05-45f4a540d301-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"a14f78a2-c755-4288-bf05-45f4a540d301\") " pod="openstack/rabbitmq-server-0" Oct 14 10:18:37 crc kubenswrapper[4698]: I1014 10:18:37.830613 4698 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"rabbitmq-server-0\" (UID: \"a14f78a2-c755-4288-bf05-45f4a540d301\") " pod="openstack/rabbitmq-server-0" Oct 14 10:18:37 crc kubenswrapper[4698]: I1014 10:18:37.830676 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/a14f78a2-c755-4288-bf05-45f4a540d301-pod-info\") pod \"rabbitmq-server-0\" (UID: \"a14f78a2-c755-4288-bf05-45f4a540d301\") " pod="openstack/rabbitmq-server-0" Oct 14 10:18:37 crc kubenswrapper[4698]: I1014 10:18:37.830707 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/a14f78a2-c755-4288-bf05-45f4a540d301-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"a14f78a2-c755-4288-bf05-45f4a540d301\") " pod="openstack/rabbitmq-server-0" Oct 14 10:18:37 crc kubenswrapper[4698]: I1014 10:18:37.830778 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/a14f78a2-c755-4288-bf05-45f4a540d301-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"a14f78a2-c755-4288-bf05-45f4a540d301\") " pod="openstack/rabbitmq-server-0" Oct 14 10:18:37 crc kubenswrapper[4698]: I1014 10:18:37.830823 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/a14f78a2-c755-4288-bf05-45f4a540d301-server-conf\") pod \"rabbitmq-server-0\" (UID: \"a14f78a2-c755-4288-bf05-45f4a540d301\") " pod="openstack/rabbitmq-server-0" Oct 14 10:18:37 crc kubenswrapper[4698]: I1014 10:18:37.830904 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/a14f78a2-c755-4288-bf05-45f4a540d301-plugins-conf\") pod 
\"rabbitmq-server-0\" (UID: \"a14f78a2-c755-4288-bf05-45f4a540d301\") " pod="openstack/rabbitmq-server-0" Oct 14 10:18:37 crc kubenswrapper[4698]: I1014 10:18:37.831014 4698 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/a710709f-1c22-4fff-b329-6d446917af01-server-conf\") on node \"crc\" DevicePath \"\"" Oct 14 10:18:37 crc kubenswrapper[4698]: I1014 10:18:37.831033 4698 reconciler_common.go:293] "Volume detached for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") on node \"crc\" DevicePath \"\"" Oct 14 10:18:37 crc kubenswrapper[4698]: I1014 10:18:37.831043 4698 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a710709f-1c22-4fff-b329-6d446917af01-config-data\") on node \"crc\" DevicePath \"\"" Oct 14 10:18:37 crc kubenswrapper[4698]: I1014 10:18:37.831903 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/a14f78a2-c755-4288-bf05-45f4a540d301-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"a14f78a2-c755-4288-bf05-45f4a540d301\") " pod="openstack/rabbitmq-server-0" Oct 14 10:18:37 crc kubenswrapper[4698]: I1014 10:18:37.833065 4698 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"rabbitmq-server-0\" (UID: \"a14f78a2-c755-4288-bf05-45f4a540d301\") device mount path \"/mnt/openstack/pv12\"" pod="openstack/rabbitmq-server-0" Oct 14 10:18:37 crc kubenswrapper[4698]: I1014 10:18:37.836136 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/a14f78a2-c755-4288-bf05-45f4a540d301-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"a14f78a2-c755-4288-bf05-45f4a540d301\") " pod="openstack/rabbitmq-server-0" Oct 14 
10:18:37 crc kubenswrapper[4698]: I1014 10:18:37.849449 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/a14f78a2-c755-4288-bf05-45f4a540d301-server-conf\") pod \"rabbitmq-server-0\" (UID: \"a14f78a2-c755-4288-bf05-45f4a540d301\") " pod="openstack/rabbitmq-server-0" Oct 14 10:18:37 crc kubenswrapper[4698]: I1014 10:18:37.849865 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/a14f78a2-c755-4288-bf05-45f4a540d301-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"a14f78a2-c755-4288-bf05-45f4a540d301\") " pod="openstack/rabbitmq-server-0" Oct 14 10:18:37 crc kubenswrapper[4698]: I1014 10:18:37.851698 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/a14f78a2-c755-4288-bf05-45f4a540d301-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"a14f78a2-c755-4288-bf05-45f4a540d301\") " pod="openstack/rabbitmq-server-0" Oct 14 10:18:37 crc kubenswrapper[4698]: I1014 10:18:37.853302 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a14f78a2-c755-4288-bf05-45f4a540d301-config-data\") pod \"rabbitmq-server-0\" (UID: \"a14f78a2-c755-4288-bf05-45f4a540d301\") " pod="openstack/rabbitmq-server-0" Oct 14 10:18:37 crc kubenswrapper[4698]: I1014 10:18:37.853726 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/a14f78a2-c755-4288-bf05-45f4a540d301-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"a14f78a2-c755-4288-bf05-45f4a540d301\") " pod="openstack/rabbitmq-server-0" Oct 14 10:18:37 crc kubenswrapper[4698]: I1014 10:18:37.875687 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: 
\"kubernetes.io/secret/a14f78a2-c755-4288-bf05-45f4a540d301-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"a14f78a2-c755-4288-bf05-45f4a540d301\") " pod="openstack/rabbitmq-server-0" Oct 14 10:18:37 crc kubenswrapper[4698]: I1014 10:18:37.899549 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/a14f78a2-c755-4288-bf05-45f4a540d301-pod-info\") pod \"rabbitmq-server-0\" (UID: \"a14f78a2-c755-4288-bf05-45f4a540d301\") " pod="openstack/rabbitmq-server-0" Oct 14 10:18:37 crc kubenswrapper[4698]: I1014 10:18:37.900177 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s5tgw\" (UniqueName: \"kubernetes.io/projected/a14f78a2-c755-4288-bf05-45f4a540d301-kube-api-access-s5tgw\") pod \"rabbitmq-server-0\" (UID: \"a14f78a2-c755-4288-bf05-45f4a540d301\") " pod="openstack/rabbitmq-server-0" Oct 14 10:18:37 crc kubenswrapper[4698]: I1014 10:18:37.957681 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a710709f-1c22-4fff-b329-6d446917af01-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "a710709f-1c22-4fff-b329-6d446917af01" (UID: "a710709f-1c22-4fff-b329-6d446917af01"). InnerVolumeSpecName "rabbitmq-confd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 10:18:37 crc kubenswrapper[4698]: I1014 10:18:37.972045 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"rabbitmq-server-0\" (UID: \"a14f78a2-c755-4288-bf05-45f4a540d301\") " pod="openstack/rabbitmq-server-0" Oct 14 10:18:38 crc kubenswrapper[4698]: I1014 10:18:38.058654 4698 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/a710709f-1c22-4fff-b329-6d446917af01-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Oct 14 10:18:38 crc kubenswrapper[4698]: I1014 10:18:38.268503 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Oct 14 10:18:38 crc kubenswrapper[4698]: I1014 10:18:38.500929 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"a710709f-1c22-4fff-b329-6d446917af01","Type":"ContainerDied","Data":"cacb6a3302f9917daab8ef528c2cbd9d6e2204ae718b2b2e946d0a545bf7a40d"} Oct 14 10:18:38 crc kubenswrapper[4698]: I1014 10:18:38.501566 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Oct 14 10:18:38 crc kubenswrapper[4698]: I1014 10:18:38.552636 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Oct 14 10:18:38 crc kubenswrapper[4698]: I1014 10:18:38.580543 4698 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Oct 14 10:18:38 crc kubenswrapper[4698]: I1014 10:18:38.592975 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Oct 14 10:18:38 crc kubenswrapper[4698]: I1014 10:18:38.596046 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Oct 14 10:18:38 crc kubenswrapper[4698]: I1014 10:18:38.601874 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Oct 14 10:18:38 crc kubenswrapper[4698]: I1014 10:18:38.602173 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Oct 14 10:18:38 crc kubenswrapper[4698]: I1014 10:18:38.602410 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Oct 14 10:18:38 crc kubenswrapper[4698]: I1014 10:18:38.602668 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Oct 14 10:18:38 crc kubenswrapper[4698]: I1014 10:18:38.602690 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Oct 14 10:18:38 crc kubenswrapper[4698]: I1014 10:18:38.602824 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Oct 14 10:18:38 crc kubenswrapper[4698]: I1014 10:18:38.602967 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-xpvck" Oct 14 10:18:38 crc kubenswrapper[4698]: I1014 10:18:38.609677 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Oct 14 10:18:38 crc kubenswrapper[4698]: I1014 10:18:38.777944 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Oct 14 10:18:38 crc kubenswrapper[4698]: I1014 10:18:38.780462 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/cebebf3c-b368-424c-a1bc-a3b9fc82ac3e-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"cebebf3c-b368-424c-a1bc-a3b9fc82ac3e\") " pod="openstack/rabbitmq-cell1-server-0" Oct 14 
10:18:38 crc kubenswrapper[4698]: I1014 10:18:38.780968 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"cebebf3c-b368-424c-a1bc-a3b9fc82ac3e\") " pod="openstack/rabbitmq-cell1-server-0" Oct 14 10:18:38 crc kubenswrapper[4698]: I1014 10:18:38.781010 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hcvvl\" (UniqueName: \"kubernetes.io/projected/cebebf3c-b368-424c-a1bc-a3b9fc82ac3e-kube-api-access-hcvvl\") pod \"rabbitmq-cell1-server-0\" (UID: \"cebebf3c-b368-424c-a1bc-a3b9fc82ac3e\") " pod="openstack/rabbitmq-cell1-server-0" Oct 14 10:18:38 crc kubenswrapper[4698]: I1014 10:18:38.781066 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/cebebf3c-b368-424c-a1bc-a3b9fc82ac3e-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"cebebf3c-b368-424c-a1bc-a3b9fc82ac3e\") " pod="openstack/rabbitmq-cell1-server-0" Oct 14 10:18:38 crc kubenswrapper[4698]: I1014 10:18:38.781106 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/cebebf3c-b368-424c-a1bc-a3b9fc82ac3e-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"cebebf3c-b368-424c-a1bc-a3b9fc82ac3e\") " pod="openstack/rabbitmq-cell1-server-0" Oct 14 10:18:38 crc kubenswrapper[4698]: I1014 10:18:38.781134 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/cebebf3c-b368-424c-a1bc-a3b9fc82ac3e-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"cebebf3c-b368-424c-a1bc-a3b9fc82ac3e\") " pod="openstack/rabbitmq-cell1-server-0" Oct 14 10:18:38 crc kubenswrapper[4698]: 
I1014 10:18:38.781166 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/cebebf3c-b368-424c-a1bc-a3b9fc82ac3e-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"cebebf3c-b368-424c-a1bc-a3b9fc82ac3e\") " pod="openstack/rabbitmq-cell1-server-0" Oct 14 10:18:38 crc kubenswrapper[4698]: I1014 10:18:38.781193 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/cebebf3c-b368-424c-a1bc-a3b9fc82ac3e-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"cebebf3c-b368-424c-a1bc-a3b9fc82ac3e\") " pod="openstack/rabbitmq-cell1-server-0" Oct 14 10:18:38 crc kubenswrapper[4698]: I1014 10:18:38.781291 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/cebebf3c-b368-424c-a1bc-a3b9fc82ac3e-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"cebebf3c-b368-424c-a1bc-a3b9fc82ac3e\") " pod="openstack/rabbitmq-cell1-server-0" Oct 14 10:18:38 crc kubenswrapper[4698]: I1014 10:18:38.781322 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/cebebf3c-b368-424c-a1bc-a3b9fc82ac3e-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"cebebf3c-b368-424c-a1bc-a3b9fc82ac3e\") " pod="openstack/rabbitmq-cell1-server-0" Oct 14 10:18:38 crc kubenswrapper[4698]: I1014 10:18:38.781345 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/cebebf3c-b368-424c-a1bc-a3b9fc82ac3e-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"cebebf3c-b368-424c-a1bc-a3b9fc82ac3e\") " pod="openstack/rabbitmq-cell1-server-0" Oct 14 10:18:38 crc 
kubenswrapper[4698]: I1014 10:18:38.883629 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/cebebf3c-b368-424c-a1bc-a3b9fc82ac3e-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"cebebf3c-b368-424c-a1bc-a3b9fc82ac3e\") " pod="openstack/rabbitmq-cell1-server-0" Oct 14 10:18:38 crc kubenswrapper[4698]: I1014 10:18:38.885463 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/cebebf3c-b368-424c-a1bc-a3b9fc82ac3e-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"cebebf3c-b368-424c-a1bc-a3b9fc82ac3e\") " pod="openstack/rabbitmq-cell1-server-0" Oct 14 10:18:38 crc kubenswrapper[4698]: I1014 10:18:38.885590 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/cebebf3c-b368-424c-a1bc-a3b9fc82ac3e-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"cebebf3c-b368-424c-a1bc-a3b9fc82ac3e\") " pod="openstack/rabbitmq-cell1-server-0" Oct 14 10:18:38 crc kubenswrapper[4698]: I1014 10:18:38.885708 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/cebebf3c-b368-424c-a1bc-a3b9fc82ac3e-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"cebebf3c-b368-424c-a1bc-a3b9fc82ac3e\") " pod="openstack/rabbitmq-cell1-server-0" Oct 14 10:18:38 crc kubenswrapper[4698]: I1014 10:18:38.885930 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/cebebf3c-b368-424c-a1bc-a3b9fc82ac3e-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"cebebf3c-b368-424c-a1bc-a3b9fc82ac3e\") " pod="openstack/rabbitmq-cell1-server-0" Oct 14 10:18:38 crc kubenswrapper[4698]: I1014 10:18:38.887275 4698 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"cebebf3c-b368-424c-a1bc-a3b9fc82ac3e\") " pod="openstack/rabbitmq-cell1-server-0" Oct 14 10:18:38 crc kubenswrapper[4698]: I1014 10:18:38.887366 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/cebebf3c-b368-424c-a1bc-a3b9fc82ac3e-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"cebebf3c-b368-424c-a1bc-a3b9fc82ac3e\") " pod="openstack/rabbitmq-cell1-server-0" Oct 14 10:18:38 crc kubenswrapper[4698]: I1014 10:18:38.886699 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/cebebf3c-b368-424c-a1bc-a3b9fc82ac3e-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"cebebf3c-b368-424c-a1bc-a3b9fc82ac3e\") " pod="openstack/rabbitmq-cell1-server-0" Oct 14 10:18:38 crc kubenswrapper[4698]: I1014 10:18:38.887557 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hcvvl\" (UniqueName: \"kubernetes.io/projected/cebebf3c-b368-424c-a1bc-a3b9fc82ac3e-kube-api-access-hcvvl\") pod \"rabbitmq-cell1-server-0\" (UID: \"cebebf3c-b368-424c-a1bc-a3b9fc82ac3e\") " pod="openstack/rabbitmq-cell1-server-0" Oct 14 10:18:38 crc kubenswrapper[4698]: I1014 10:18:38.887711 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/cebebf3c-b368-424c-a1bc-a3b9fc82ac3e-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"cebebf3c-b368-424c-a1bc-a3b9fc82ac3e\") " pod="openstack/rabbitmq-cell1-server-0" Oct 14 10:18:38 crc kubenswrapper[4698]: I1014 10:18:38.887847 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/cebebf3c-b368-424c-a1bc-a3b9fc82ac3e-server-conf\") pod 
\"rabbitmq-cell1-server-0\" (UID: \"cebebf3c-b368-424c-a1bc-a3b9fc82ac3e\") " pod="openstack/rabbitmq-cell1-server-0" Oct 14 10:18:38 crc kubenswrapper[4698]: I1014 10:18:38.888012 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/cebebf3c-b368-424c-a1bc-a3b9fc82ac3e-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"cebebf3c-b368-424c-a1bc-a3b9fc82ac3e\") " pod="openstack/rabbitmq-cell1-server-0" Oct 14 10:18:38 crc kubenswrapper[4698]: I1014 10:18:38.888418 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/cebebf3c-b368-424c-a1bc-a3b9fc82ac3e-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"cebebf3c-b368-424c-a1bc-a3b9fc82ac3e\") " pod="openstack/rabbitmq-cell1-server-0" Oct 14 10:18:38 crc kubenswrapper[4698]: I1014 10:18:38.888673 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/cebebf3c-b368-424c-a1bc-a3b9fc82ac3e-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"cebebf3c-b368-424c-a1bc-a3b9fc82ac3e\") " pod="openstack/rabbitmq-cell1-server-0" Oct 14 10:18:38 crc kubenswrapper[4698]: I1014 10:18:38.889034 4698 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"cebebf3c-b368-424c-a1bc-a3b9fc82ac3e\") device mount path \"/mnt/openstack/pv05\"" pod="openstack/rabbitmq-cell1-server-0" Oct 14 10:18:38 crc kubenswrapper[4698]: I1014 10:18:38.890807 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/cebebf3c-b368-424c-a1bc-a3b9fc82ac3e-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"cebebf3c-b368-424c-a1bc-a3b9fc82ac3e\") " 
pod="openstack/rabbitmq-cell1-server-0" Oct 14 10:18:38 crc kubenswrapper[4698]: I1014 10:18:38.891172 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/cebebf3c-b368-424c-a1bc-a3b9fc82ac3e-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"cebebf3c-b368-424c-a1bc-a3b9fc82ac3e\") " pod="openstack/rabbitmq-cell1-server-0" Oct 14 10:18:38 crc kubenswrapper[4698]: I1014 10:18:38.892638 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/cebebf3c-b368-424c-a1bc-a3b9fc82ac3e-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"cebebf3c-b368-424c-a1bc-a3b9fc82ac3e\") " pod="openstack/rabbitmq-cell1-server-0" Oct 14 10:18:38 crc kubenswrapper[4698]: I1014 10:18:38.892716 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/cebebf3c-b368-424c-a1bc-a3b9fc82ac3e-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"cebebf3c-b368-424c-a1bc-a3b9fc82ac3e\") " pod="openstack/rabbitmq-cell1-server-0" Oct 14 10:18:38 crc kubenswrapper[4698]: I1014 10:18:38.899474 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/cebebf3c-b368-424c-a1bc-a3b9fc82ac3e-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"cebebf3c-b368-424c-a1bc-a3b9fc82ac3e\") " pod="openstack/rabbitmq-cell1-server-0" Oct 14 10:18:38 crc kubenswrapper[4698]: I1014 10:18:38.907348 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/cebebf3c-b368-424c-a1bc-a3b9fc82ac3e-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"cebebf3c-b368-424c-a1bc-a3b9fc82ac3e\") " pod="openstack/rabbitmq-cell1-server-0" Oct 14 10:18:38 crc kubenswrapper[4698]: I1014 10:18:38.916970 4698 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-hcvvl\" (UniqueName: \"kubernetes.io/projected/cebebf3c-b368-424c-a1bc-a3b9fc82ac3e-kube-api-access-hcvvl\") pod \"rabbitmq-cell1-server-0\" (UID: \"cebebf3c-b368-424c-a1bc-a3b9fc82ac3e\") " pod="openstack/rabbitmq-cell1-server-0" Oct 14 10:18:38 crc kubenswrapper[4698]: I1014 10:18:38.921433 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"cebebf3c-b368-424c-a1bc-a3b9fc82ac3e\") " pod="openstack/rabbitmq-cell1-server-0" Oct 14 10:18:38 crc kubenswrapper[4698]: I1014 10:18:38.934076 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Oct 14 10:18:39 crc kubenswrapper[4698]: I1014 10:18:39.038344 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4c8cdd03-2ef0-496f-8748-d1495be75e5f" path="/var/lib/kubelet/pods/4c8cdd03-2ef0-496f-8748-d1495be75e5f/volumes" Oct 14 10:18:39 crc kubenswrapper[4698]: I1014 10:18:39.041696 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a710709f-1c22-4fff-b329-6d446917af01" path="/var/lib/kubelet/pods/a710709f-1c22-4fff-b329-6d446917af01/volumes" Oct 14 10:18:39 crc kubenswrapper[4698]: W1014 10:18:39.437488 4698 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcebebf3c_b368_424c_a1bc_a3b9fc82ac3e.slice/crio-7ac848c33cc6a109fa572d6e380921d5e001c0f9f8655ef90646f46f3cf6d920 WatchSource:0}: Error finding container 7ac848c33cc6a109fa572d6e380921d5e001c0f9f8655ef90646f46f3cf6d920: Status 404 returned error can't find the container with id 7ac848c33cc6a109fa572d6e380921d5e001c0f9f8655ef90646f46f3cf6d920 Oct 14 10:18:39 crc kubenswrapper[4698]: I1014 10:18:39.442844 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/rabbitmq-cell1-server-0"] Oct 14 10:18:39 crc kubenswrapper[4698]: I1014 10:18:39.522194 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"a14f78a2-c755-4288-bf05-45f4a540d301","Type":"ContainerStarted","Data":"ed66f0bf299fa9f67d1e718d360bd00514a6938297c900f024d6cf885e5afd6f"} Oct 14 10:18:39 crc kubenswrapper[4698]: I1014 10:18:39.523726 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"cebebf3c-b368-424c-a1bc-a3b9fc82ac3e","Type":"ContainerStarted","Data":"7ac848c33cc6a109fa572d6e380921d5e001c0f9f8655ef90646f46f3cf6d920"} Oct 14 10:18:40 crc kubenswrapper[4698]: I1014 10:18:40.537618 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"a14f78a2-c755-4288-bf05-45f4a540d301","Type":"ContainerStarted","Data":"7a3fd3a14a0d6d068586487640743aa2a3a7ed5ab476276795d6680b08a75429"} Oct 14 10:18:40 crc kubenswrapper[4698]: I1014 10:18:40.659074 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-759799d765-qx7nq"] Oct 14 10:18:40 crc kubenswrapper[4698]: I1014 10:18:40.661460 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-759799d765-qx7nq"
Oct 14 10:18:40 crc kubenswrapper[4698]: I1014 10:18:40.665643 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-edpm-ipam"
Oct 14 10:18:40 crc kubenswrapper[4698]: I1014 10:18:40.674922 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-759799d765-qx7nq"]
Oct 14 10:18:40 crc kubenswrapper[4698]: I1014 10:18:40.831724 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3f92483a-8595-4ddd-9f88-71696aac6a86-dns-svc\") pod \"dnsmasq-dns-759799d765-qx7nq\" (UID: \"3f92483a-8595-4ddd-9f88-71696aac6a86\") " pod="openstack/dnsmasq-dns-759799d765-qx7nq"
Oct 14 10:18:40 crc kubenswrapper[4698]: I1014 10:18:40.831819 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/3f92483a-8595-4ddd-9f88-71696aac6a86-openstack-edpm-ipam\") pod \"dnsmasq-dns-759799d765-qx7nq\" (UID: \"3f92483a-8595-4ddd-9f88-71696aac6a86\") " pod="openstack/dnsmasq-dns-759799d765-qx7nq"
Oct 14 10:18:40 crc kubenswrapper[4698]: I1014 10:18:40.832144 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3f92483a-8595-4ddd-9f88-71696aac6a86-ovsdbserver-sb\") pod \"dnsmasq-dns-759799d765-qx7nq\" (UID: \"3f92483a-8595-4ddd-9f88-71696aac6a86\") " pod="openstack/dnsmasq-dns-759799d765-qx7nq"
Oct 14 10:18:40 crc kubenswrapper[4698]: I1014 10:18:40.832194 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3f92483a-8595-4ddd-9f88-71696aac6a86-ovsdbserver-nb\") pod \"dnsmasq-dns-759799d765-qx7nq\" (UID: \"3f92483a-8595-4ddd-9f88-71696aac6a86\") " pod="openstack/dnsmasq-dns-759799d765-qx7nq"
Oct 14 10:18:40 crc kubenswrapper[4698]: I1014 10:18:40.832338 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3f92483a-8595-4ddd-9f88-71696aac6a86-config\") pod \"dnsmasq-dns-759799d765-qx7nq\" (UID: \"3f92483a-8595-4ddd-9f88-71696aac6a86\") " pod="openstack/dnsmasq-dns-759799d765-qx7nq"
Oct 14 10:18:40 crc kubenswrapper[4698]: I1014 10:18:40.832517 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kf92n\" (UniqueName: \"kubernetes.io/projected/3f92483a-8595-4ddd-9f88-71696aac6a86-kube-api-access-kf92n\") pod \"dnsmasq-dns-759799d765-qx7nq\" (UID: \"3f92483a-8595-4ddd-9f88-71696aac6a86\") " pod="openstack/dnsmasq-dns-759799d765-qx7nq"
Oct 14 10:18:40 crc kubenswrapper[4698]: I1014 10:18:40.832693 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/3f92483a-8595-4ddd-9f88-71696aac6a86-dns-swift-storage-0\") pod \"dnsmasq-dns-759799d765-qx7nq\" (UID: \"3f92483a-8595-4ddd-9f88-71696aac6a86\") " pod="openstack/dnsmasq-dns-759799d765-qx7nq"
Oct 14 10:18:40 crc kubenswrapper[4698]: I1014 10:18:40.934405 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/3f92483a-8595-4ddd-9f88-71696aac6a86-dns-swift-storage-0\") pod \"dnsmasq-dns-759799d765-qx7nq\" (UID: \"3f92483a-8595-4ddd-9f88-71696aac6a86\") " pod="openstack/dnsmasq-dns-759799d765-qx7nq"
Oct 14 10:18:40 crc kubenswrapper[4698]: I1014 10:18:40.934489 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3f92483a-8595-4ddd-9f88-71696aac6a86-dns-svc\") pod \"dnsmasq-dns-759799d765-qx7nq\" (UID: \"3f92483a-8595-4ddd-9f88-71696aac6a86\") " pod="openstack/dnsmasq-dns-759799d765-qx7nq"
Oct 14 10:18:40 crc kubenswrapper[4698]: I1014 10:18:40.934541 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/3f92483a-8595-4ddd-9f88-71696aac6a86-openstack-edpm-ipam\") pod \"dnsmasq-dns-759799d765-qx7nq\" (UID: \"3f92483a-8595-4ddd-9f88-71696aac6a86\") " pod="openstack/dnsmasq-dns-759799d765-qx7nq"
Oct 14 10:18:40 crc kubenswrapper[4698]: I1014 10:18:40.934592 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3f92483a-8595-4ddd-9f88-71696aac6a86-ovsdbserver-sb\") pod \"dnsmasq-dns-759799d765-qx7nq\" (UID: \"3f92483a-8595-4ddd-9f88-71696aac6a86\") " pod="openstack/dnsmasq-dns-759799d765-qx7nq"
Oct 14 10:18:40 crc kubenswrapper[4698]: I1014 10:18:40.934611 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3f92483a-8595-4ddd-9f88-71696aac6a86-ovsdbserver-nb\") pod \"dnsmasq-dns-759799d765-qx7nq\" (UID: \"3f92483a-8595-4ddd-9f88-71696aac6a86\") " pod="openstack/dnsmasq-dns-759799d765-qx7nq"
Oct 14 10:18:40 crc kubenswrapper[4698]: I1014 10:18:40.934628 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3f92483a-8595-4ddd-9f88-71696aac6a86-config\") pod \"dnsmasq-dns-759799d765-qx7nq\" (UID: \"3f92483a-8595-4ddd-9f88-71696aac6a86\") " pod="openstack/dnsmasq-dns-759799d765-qx7nq"
Oct 14 10:18:40 crc kubenswrapper[4698]: I1014 10:18:40.934662 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kf92n\" (UniqueName: \"kubernetes.io/projected/3f92483a-8595-4ddd-9f88-71696aac6a86-kube-api-access-kf92n\") pod \"dnsmasq-dns-759799d765-qx7nq\" (UID: \"3f92483a-8595-4ddd-9f88-71696aac6a86\") " pod="openstack/dnsmasq-dns-759799d765-qx7nq"
Oct 14 10:18:40 crc kubenswrapper[4698]: I1014 10:18:40.935660 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3f92483a-8595-4ddd-9f88-71696aac6a86-ovsdbserver-sb\") pod \"dnsmasq-dns-759799d765-qx7nq\" (UID: \"3f92483a-8595-4ddd-9f88-71696aac6a86\") " pod="openstack/dnsmasq-dns-759799d765-qx7nq"
Oct 14 10:18:40 crc kubenswrapper[4698]: I1014 10:18:40.935723 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3f92483a-8595-4ddd-9f88-71696aac6a86-dns-svc\") pod \"dnsmasq-dns-759799d765-qx7nq\" (UID: \"3f92483a-8595-4ddd-9f88-71696aac6a86\") " pod="openstack/dnsmasq-dns-759799d765-qx7nq"
Oct 14 10:18:40 crc kubenswrapper[4698]: I1014 10:18:40.935994 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3f92483a-8595-4ddd-9f88-71696aac6a86-config\") pod \"dnsmasq-dns-759799d765-qx7nq\" (UID: \"3f92483a-8595-4ddd-9f88-71696aac6a86\") " pod="openstack/dnsmasq-dns-759799d765-qx7nq"
Oct 14 10:18:40 crc kubenswrapper[4698]: I1014 10:18:40.936078 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3f92483a-8595-4ddd-9f88-71696aac6a86-ovsdbserver-nb\") pod \"dnsmasq-dns-759799d765-qx7nq\" (UID: \"3f92483a-8595-4ddd-9f88-71696aac6a86\") " pod="openstack/dnsmasq-dns-759799d765-qx7nq"
Oct 14 10:18:40 crc kubenswrapper[4698]: I1014 10:18:40.936302 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/3f92483a-8595-4ddd-9f88-71696aac6a86-dns-swift-storage-0\") pod \"dnsmasq-dns-759799d765-qx7nq\" (UID: \"3f92483a-8595-4ddd-9f88-71696aac6a86\") " pod="openstack/dnsmasq-dns-759799d765-qx7nq"
Oct 14 10:18:40 crc kubenswrapper[4698]: I1014 10:18:40.936644 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/3f92483a-8595-4ddd-9f88-71696aac6a86-openstack-edpm-ipam\") pod \"dnsmasq-dns-759799d765-qx7nq\" (UID: \"3f92483a-8595-4ddd-9f88-71696aac6a86\") " pod="openstack/dnsmasq-dns-759799d765-qx7nq"
Oct 14 10:18:40 crc kubenswrapper[4698]: I1014 10:18:40.953466 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kf92n\" (UniqueName: \"kubernetes.io/projected/3f92483a-8595-4ddd-9f88-71696aac6a86-kube-api-access-kf92n\") pod \"dnsmasq-dns-759799d765-qx7nq\" (UID: \"3f92483a-8595-4ddd-9f88-71696aac6a86\") " pod="openstack/dnsmasq-dns-759799d765-qx7nq"
Oct 14 10:18:40 crc kubenswrapper[4698]: I1014 10:18:40.992589 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-759799d765-qx7nq"
Oct 14 10:18:41 crc kubenswrapper[4698]: I1014 10:18:41.552075 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"cebebf3c-b368-424c-a1bc-a3b9fc82ac3e","Type":"ContainerStarted","Data":"74063ebb305ee6b427cbe960fa2c29fc60edfc07d3ce930115e0ea06b355c5ee"}
Oct 14 10:18:41 crc kubenswrapper[4698]: W1014 10:18:41.609842 4698 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3f92483a_8595_4ddd_9f88_71696aac6a86.slice/crio-11a332bdd84c61b8300523c16d5b48e668881fc0d792e9c91533655377d8e434 WatchSource:0}: Error finding container 11a332bdd84c61b8300523c16d5b48e668881fc0d792e9c91533655377d8e434: Status 404 returned error can't find the container with id 11a332bdd84c61b8300523c16d5b48e668881fc0d792e9c91533655377d8e434
Oct 14 10:18:41 crc kubenswrapper[4698]: I1014 10:18:41.610121 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-759799d765-qx7nq"]
Oct 14 10:18:42 crc kubenswrapper[4698]: I1014 10:18:42.563716 4698 generic.go:334] "Generic (PLEG): container finished" podID="3f92483a-8595-4ddd-9f88-71696aac6a86" containerID="e4378bb5c343fa885a0e753d3957166f6f04018da10dc8242e99f84be7ea1066" exitCode=0
Oct 14 10:18:42 crc kubenswrapper[4698]: I1014 10:18:42.563887 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-759799d765-qx7nq" event={"ID":"3f92483a-8595-4ddd-9f88-71696aac6a86","Type":"ContainerDied","Data":"e4378bb5c343fa885a0e753d3957166f6f04018da10dc8242e99f84be7ea1066"}
Oct 14 10:18:42 crc kubenswrapper[4698]: I1014 10:18:42.565149 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-759799d765-qx7nq" event={"ID":"3f92483a-8595-4ddd-9f88-71696aac6a86","Type":"ContainerStarted","Data":"11a332bdd84c61b8300523c16d5b48e668881fc0d792e9c91533655377d8e434"}
Oct 14 10:18:43 crc kubenswrapper[4698]: I1014 10:18:43.604874 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-759799d765-qx7nq" event={"ID":"3f92483a-8595-4ddd-9f88-71696aac6a86","Type":"ContainerStarted","Data":"c551cc81ac5deba21378db1bb5fd0488c3108af9e5e42b7264e7614d8b0e0e8e"}
Oct 14 10:18:43 crc kubenswrapper[4698]: I1014 10:18:43.607257 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-759799d765-qx7nq"
Oct 14 10:18:43 crc kubenswrapper[4698]: I1014 10:18:43.649823 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-759799d765-qx7nq" podStartSLOduration=3.64979463 podStartE2EDuration="3.64979463s" podCreationTimestamp="2025-10-14 10:18:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 10:18:43.63880506 +0000 UTC m=+1305.336104516" watchObservedRunningTime="2025-10-14 10:18:43.64979463 +0000 UTC m=+1305.347094046"
Oct 14 10:18:50 crc kubenswrapper[4698]: I1014 10:18:50.995004 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-759799d765-qx7nq"
Oct 14 10:18:51 crc kubenswrapper[4698]: I1014 10:18:51.118146 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6559f4fbd7-rflwl"]
Oct 14 10:18:51 crc kubenswrapper[4698]: I1014 10:18:51.118669 4698 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6559f4fbd7-rflwl" podUID="6b55aeb5-b167-40ac-8e38-f4acf42352ef" containerName="dnsmasq-dns" containerID="cri-o://bcafaad872e6facf73811549a6bbdba0fcea9ba47e32d3cb85668b451ee112de" gracePeriod=10
Oct 14 10:18:51 crc kubenswrapper[4698]: I1014 10:18:51.251665 4698 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-6559f4fbd7-rflwl" podUID="6b55aeb5-b167-40ac-8e38-f4acf42352ef" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.216:5353: connect: connection refused"
Oct 14 10:18:51 crc kubenswrapper[4698]: I1014 10:18:51.373692 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5bb847fbb7-jzkll"]
Oct 14 10:18:51 crc kubenswrapper[4698]: I1014 10:18:51.376212 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5bb847fbb7-jzkll"
Oct 14 10:18:51 crc kubenswrapper[4698]: I1014 10:18:51.383756 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5bb847fbb7-jzkll"]
Oct 14 10:18:51 crc kubenswrapper[4698]: I1014 10:18:51.494319 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d9761eef-5d4d-4aa8-90a8-c94412431e3c-config\") pod \"dnsmasq-dns-5bb847fbb7-jzkll\" (UID: \"d9761eef-5d4d-4aa8-90a8-c94412431e3c\") " pod="openstack/dnsmasq-dns-5bb847fbb7-jzkll"
Oct 14 10:18:51 crc kubenswrapper[4698]: I1014 10:18:51.494381 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d9761eef-5d4d-4aa8-90a8-c94412431e3c-dns-swift-storage-0\") pod \"dnsmasq-dns-5bb847fbb7-jzkll\" (UID: \"d9761eef-5d4d-4aa8-90a8-c94412431e3c\") " pod="openstack/dnsmasq-dns-5bb847fbb7-jzkll"
Oct 14 10:18:51 crc kubenswrapper[4698]: I1014 10:18:51.494558 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/d9761eef-5d4d-4aa8-90a8-c94412431e3c-openstack-edpm-ipam\") pod \"dnsmasq-dns-5bb847fbb7-jzkll\" (UID: \"d9761eef-5d4d-4aa8-90a8-c94412431e3c\") " pod="openstack/dnsmasq-dns-5bb847fbb7-jzkll"
Oct 14 10:18:51 crc kubenswrapper[4698]: I1014 10:18:51.494844 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6vggc\" (UniqueName: \"kubernetes.io/projected/d9761eef-5d4d-4aa8-90a8-c94412431e3c-kube-api-access-6vggc\") pod \"dnsmasq-dns-5bb847fbb7-jzkll\" (UID: \"d9761eef-5d4d-4aa8-90a8-c94412431e3c\") " pod="openstack/dnsmasq-dns-5bb847fbb7-jzkll"
Oct 14 10:18:51 crc kubenswrapper[4698]: I1014 10:18:51.494904 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d9761eef-5d4d-4aa8-90a8-c94412431e3c-dns-svc\") pod \"dnsmasq-dns-5bb847fbb7-jzkll\" (UID: \"d9761eef-5d4d-4aa8-90a8-c94412431e3c\") " pod="openstack/dnsmasq-dns-5bb847fbb7-jzkll"
Oct 14 10:18:51 crc kubenswrapper[4698]: I1014 10:18:51.494949 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d9761eef-5d4d-4aa8-90a8-c94412431e3c-ovsdbserver-nb\") pod \"dnsmasq-dns-5bb847fbb7-jzkll\" (UID: \"d9761eef-5d4d-4aa8-90a8-c94412431e3c\") " pod="openstack/dnsmasq-dns-5bb847fbb7-jzkll"
Oct 14 10:18:51 crc kubenswrapper[4698]: I1014 10:18:51.495153 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d9761eef-5d4d-4aa8-90a8-c94412431e3c-ovsdbserver-sb\") pod \"dnsmasq-dns-5bb847fbb7-jzkll\" (UID: \"d9761eef-5d4d-4aa8-90a8-c94412431e3c\") " pod="openstack/dnsmasq-dns-5bb847fbb7-jzkll"
Oct 14 10:18:51 crc kubenswrapper[4698]: I1014 10:18:51.597394 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/d9761eef-5d4d-4aa8-90a8-c94412431e3c-openstack-edpm-ipam\") pod \"dnsmasq-dns-5bb847fbb7-jzkll\" (UID: \"d9761eef-5d4d-4aa8-90a8-c94412431e3c\") " pod="openstack/dnsmasq-dns-5bb847fbb7-jzkll"
Oct 14 10:18:51 crc kubenswrapper[4698]: I1014 10:18:51.597463 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6vggc\" (UniqueName: \"kubernetes.io/projected/d9761eef-5d4d-4aa8-90a8-c94412431e3c-kube-api-access-6vggc\") pod \"dnsmasq-dns-5bb847fbb7-jzkll\" (UID: \"d9761eef-5d4d-4aa8-90a8-c94412431e3c\") " pod="openstack/dnsmasq-dns-5bb847fbb7-jzkll"
Oct 14 10:18:51 crc kubenswrapper[4698]: I1014 10:18:51.597483 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d9761eef-5d4d-4aa8-90a8-c94412431e3c-dns-svc\") pod \"dnsmasq-dns-5bb847fbb7-jzkll\" (UID: \"d9761eef-5d4d-4aa8-90a8-c94412431e3c\") " pod="openstack/dnsmasq-dns-5bb847fbb7-jzkll"
Oct 14 10:18:51 crc kubenswrapper[4698]: I1014 10:18:51.597506 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d9761eef-5d4d-4aa8-90a8-c94412431e3c-ovsdbserver-nb\") pod \"dnsmasq-dns-5bb847fbb7-jzkll\" (UID: \"d9761eef-5d4d-4aa8-90a8-c94412431e3c\") " pod="openstack/dnsmasq-dns-5bb847fbb7-jzkll"
Oct 14 10:18:51 crc kubenswrapper[4698]: I1014 10:18:51.597556 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d9761eef-5d4d-4aa8-90a8-c94412431e3c-ovsdbserver-sb\") pod \"dnsmasq-dns-5bb847fbb7-jzkll\" (UID: \"d9761eef-5d4d-4aa8-90a8-c94412431e3c\") " pod="openstack/dnsmasq-dns-5bb847fbb7-jzkll"
Oct 14 10:18:51 crc kubenswrapper[4698]: I1014 10:18:51.597643 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d9761eef-5d4d-4aa8-90a8-c94412431e3c-config\") pod \"dnsmasq-dns-5bb847fbb7-jzkll\" (UID: \"d9761eef-5d4d-4aa8-90a8-c94412431e3c\") " pod="openstack/dnsmasq-dns-5bb847fbb7-jzkll"
Oct 14 10:18:51 crc kubenswrapper[4698]: I1014 10:18:51.597668 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d9761eef-5d4d-4aa8-90a8-c94412431e3c-dns-swift-storage-0\") pod \"dnsmasq-dns-5bb847fbb7-jzkll\" (UID: \"d9761eef-5d4d-4aa8-90a8-c94412431e3c\") " pod="openstack/dnsmasq-dns-5bb847fbb7-jzkll"
Oct 14 10:18:51 crc kubenswrapper[4698]: I1014 10:18:51.598488 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/d9761eef-5d4d-4aa8-90a8-c94412431e3c-openstack-edpm-ipam\") pod \"dnsmasq-dns-5bb847fbb7-jzkll\" (UID: \"d9761eef-5d4d-4aa8-90a8-c94412431e3c\") " pod="openstack/dnsmasq-dns-5bb847fbb7-jzkll"
Oct 14 10:18:51 crc kubenswrapper[4698]: I1014 10:18:51.598569 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d9761eef-5d4d-4aa8-90a8-c94412431e3c-ovsdbserver-nb\") pod \"dnsmasq-dns-5bb847fbb7-jzkll\" (UID: \"d9761eef-5d4d-4aa8-90a8-c94412431e3c\") " pod="openstack/dnsmasq-dns-5bb847fbb7-jzkll"
Oct 14 10:18:51 crc kubenswrapper[4698]: I1014 10:18:51.598598 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d9761eef-5d4d-4aa8-90a8-c94412431e3c-ovsdbserver-sb\") pod \"dnsmasq-dns-5bb847fbb7-jzkll\" (UID: \"d9761eef-5d4d-4aa8-90a8-c94412431e3c\") " pod="openstack/dnsmasq-dns-5bb847fbb7-jzkll"
Oct 14 10:18:51 crc kubenswrapper[4698]: I1014 10:18:51.599004 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d9761eef-5d4d-4aa8-90a8-c94412431e3c-dns-svc\") pod \"dnsmasq-dns-5bb847fbb7-jzkll\" (UID: \"d9761eef-5d4d-4aa8-90a8-c94412431e3c\") " pod="openstack/dnsmasq-dns-5bb847fbb7-jzkll"
Oct 14 10:18:51 crc kubenswrapper[4698]: I1014 10:18:51.599126 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d9761eef-5d4d-4aa8-90a8-c94412431e3c-config\") pod \"dnsmasq-dns-5bb847fbb7-jzkll\" (UID: \"d9761eef-5d4d-4aa8-90a8-c94412431e3c\") " pod="openstack/dnsmasq-dns-5bb847fbb7-jzkll"
Oct 14 10:18:51 crc kubenswrapper[4698]: I1014 10:18:51.599419 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d9761eef-5d4d-4aa8-90a8-c94412431e3c-dns-swift-storage-0\") pod \"dnsmasq-dns-5bb847fbb7-jzkll\" (UID: \"d9761eef-5d4d-4aa8-90a8-c94412431e3c\") " pod="openstack/dnsmasq-dns-5bb847fbb7-jzkll"
Oct 14 10:18:51 crc kubenswrapper[4698]: I1014 10:18:51.620426 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6vggc\" (UniqueName: \"kubernetes.io/projected/d9761eef-5d4d-4aa8-90a8-c94412431e3c-kube-api-access-6vggc\") pod \"dnsmasq-dns-5bb847fbb7-jzkll\" (UID: \"d9761eef-5d4d-4aa8-90a8-c94412431e3c\") " pod="openstack/dnsmasq-dns-5bb847fbb7-jzkll"
Oct 14 10:18:51 crc kubenswrapper[4698]: I1014 10:18:51.671885 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6559f4fbd7-rflwl"
Oct 14 10:18:51 crc kubenswrapper[4698]: I1014 10:18:51.697437 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5bb847fbb7-jzkll"
Oct 14 10:18:51 crc kubenswrapper[4698]: I1014 10:18:51.697458 4698 generic.go:334] "Generic (PLEG): container finished" podID="6b55aeb5-b167-40ac-8e38-f4acf42352ef" containerID="bcafaad872e6facf73811549a6bbdba0fcea9ba47e32d3cb85668b451ee112de" exitCode=0
Oct 14 10:18:51 crc kubenswrapper[4698]: I1014 10:18:51.697523 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6559f4fbd7-rflwl" event={"ID":"6b55aeb5-b167-40ac-8e38-f4acf42352ef","Type":"ContainerDied","Data":"bcafaad872e6facf73811549a6bbdba0fcea9ba47e32d3cb85668b451ee112de"}
Oct 14 10:18:51 crc kubenswrapper[4698]: I1014 10:18:51.697553 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6559f4fbd7-rflwl" event={"ID":"6b55aeb5-b167-40ac-8e38-f4acf42352ef","Type":"ContainerDied","Data":"a0299987b07aab6248bc9c9906e19eea7c3db0b3014abe1b7768f880db3a15a9"}
Oct 14 10:18:51 crc kubenswrapper[4698]: I1014 10:18:51.697574 4698 scope.go:117] "RemoveContainer" containerID="bcafaad872e6facf73811549a6bbdba0fcea9ba47e32d3cb85668b451ee112de"
Oct 14 10:18:51 crc kubenswrapper[4698]: I1014 10:18:51.697618 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6559f4fbd7-rflwl"
Oct 14 10:18:51 crc kubenswrapper[4698]: I1014 10:18:51.816528 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6b55aeb5-b167-40ac-8e38-f4acf42352ef-dns-svc\") pod \"6b55aeb5-b167-40ac-8e38-f4acf42352ef\" (UID: \"6b55aeb5-b167-40ac-8e38-f4acf42352ef\") "
Oct 14 10:18:51 crc kubenswrapper[4698]: I1014 10:18:51.816809 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6b55aeb5-b167-40ac-8e38-f4acf42352ef-ovsdbserver-sb\") pod \"6b55aeb5-b167-40ac-8e38-f4acf42352ef\" (UID: \"6b55aeb5-b167-40ac-8e38-f4acf42352ef\") "
Oct 14 10:18:51 crc kubenswrapper[4698]: I1014 10:18:51.816939 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6b55aeb5-b167-40ac-8e38-f4acf42352ef-ovsdbserver-nb\") pod \"6b55aeb5-b167-40ac-8e38-f4acf42352ef\" (UID: \"6b55aeb5-b167-40ac-8e38-f4acf42352ef\") "
Oct 14 10:18:51 crc kubenswrapper[4698]: I1014 10:18:51.816997 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/6b55aeb5-b167-40ac-8e38-f4acf42352ef-dns-swift-storage-0\") pod \"6b55aeb5-b167-40ac-8e38-f4acf42352ef\" (UID: \"6b55aeb5-b167-40ac-8e38-f4acf42352ef\") "
Oct 14 10:18:51 crc kubenswrapper[4698]: I1014 10:18:51.827747 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6b55aeb5-b167-40ac-8e38-f4acf42352ef-config\") pod \"6b55aeb5-b167-40ac-8e38-f4acf42352ef\" (UID: \"6b55aeb5-b167-40ac-8e38-f4acf42352ef\") "
Oct 14 10:18:51 crc kubenswrapper[4698]: I1014 10:18:51.827808 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dqrk9\" (UniqueName: \"kubernetes.io/projected/6b55aeb5-b167-40ac-8e38-f4acf42352ef-kube-api-access-dqrk9\") pod \"6b55aeb5-b167-40ac-8e38-f4acf42352ef\" (UID: \"6b55aeb5-b167-40ac-8e38-f4acf42352ef\") "
Oct 14 10:18:51 crc kubenswrapper[4698]: I1014 10:18:51.835264 4698 scope.go:117] "RemoveContainer" containerID="6647f935a6ecb36f4dd4cf5ef71be2163ff7d76eda5e648b92756dc97bef0595"
Oct 14 10:18:51 crc kubenswrapper[4698]: I1014 10:18:51.841787 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6b55aeb5-b167-40ac-8e38-f4acf42352ef-kube-api-access-dqrk9" (OuterVolumeSpecName: "kube-api-access-dqrk9") pod "6b55aeb5-b167-40ac-8e38-f4acf42352ef" (UID: "6b55aeb5-b167-40ac-8e38-f4acf42352ef"). InnerVolumeSpecName "kube-api-access-dqrk9". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 14 10:18:51 crc kubenswrapper[4698]: I1014 10:18:51.861394 4698 scope.go:117] "RemoveContainer" containerID="bcafaad872e6facf73811549a6bbdba0fcea9ba47e32d3cb85668b451ee112de"
Oct 14 10:18:51 crc kubenswrapper[4698]: E1014 10:18:51.861874 4698 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bcafaad872e6facf73811549a6bbdba0fcea9ba47e32d3cb85668b451ee112de\": container with ID starting with bcafaad872e6facf73811549a6bbdba0fcea9ba47e32d3cb85668b451ee112de not found: ID does not exist" containerID="bcafaad872e6facf73811549a6bbdba0fcea9ba47e32d3cb85668b451ee112de"
Oct 14 10:18:51 crc kubenswrapper[4698]: I1014 10:18:51.861926 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bcafaad872e6facf73811549a6bbdba0fcea9ba47e32d3cb85668b451ee112de"} err="failed to get container status \"bcafaad872e6facf73811549a6bbdba0fcea9ba47e32d3cb85668b451ee112de\": rpc error: code = NotFound desc = could not find container \"bcafaad872e6facf73811549a6bbdba0fcea9ba47e32d3cb85668b451ee112de\": container with ID starting with bcafaad872e6facf73811549a6bbdba0fcea9ba47e32d3cb85668b451ee112de not found: ID does not exist"
Oct 14 10:18:51 crc kubenswrapper[4698]: I1014 10:18:51.861956 4698 scope.go:117] "RemoveContainer" containerID="6647f935a6ecb36f4dd4cf5ef71be2163ff7d76eda5e648b92756dc97bef0595"
Oct 14 10:18:51 crc kubenswrapper[4698]: E1014 10:18:51.862437 4698 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6647f935a6ecb36f4dd4cf5ef71be2163ff7d76eda5e648b92756dc97bef0595\": container with ID starting with 6647f935a6ecb36f4dd4cf5ef71be2163ff7d76eda5e648b92756dc97bef0595 not found: ID does not exist" containerID="6647f935a6ecb36f4dd4cf5ef71be2163ff7d76eda5e648b92756dc97bef0595"
Oct 14 10:18:51 crc kubenswrapper[4698]: I1014 10:18:51.862655 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6647f935a6ecb36f4dd4cf5ef71be2163ff7d76eda5e648b92756dc97bef0595"} err="failed to get container status \"6647f935a6ecb36f4dd4cf5ef71be2163ff7d76eda5e648b92756dc97bef0595\": rpc error: code = NotFound desc = could not find container \"6647f935a6ecb36f4dd4cf5ef71be2163ff7d76eda5e648b92756dc97bef0595\": container with ID starting with 6647f935a6ecb36f4dd4cf5ef71be2163ff7d76eda5e648b92756dc97bef0595 not found: ID does not exist"
Oct 14 10:18:51 crc kubenswrapper[4698]: I1014 10:18:51.905164 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6b55aeb5-b167-40ac-8e38-f4acf42352ef-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "6b55aeb5-b167-40ac-8e38-f4acf42352ef" (UID: "6b55aeb5-b167-40ac-8e38-f4acf42352ef"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 14 10:18:51 crc kubenswrapper[4698]: I1014 10:18:51.907628 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6b55aeb5-b167-40ac-8e38-f4acf42352ef-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "6b55aeb5-b167-40ac-8e38-f4acf42352ef" (UID: "6b55aeb5-b167-40ac-8e38-f4acf42352ef"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 14 10:18:51 crc kubenswrapper[4698]: I1014 10:18:51.912380 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6b55aeb5-b167-40ac-8e38-f4acf42352ef-config" (OuterVolumeSpecName: "config") pod "6b55aeb5-b167-40ac-8e38-f4acf42352ef" (UID: "6b55aeb5-b167-40ac-8e38-f4acf42352ef"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 14 10:18:51 crc kubenswrapper[4698]: I1014 10:18:51.919500 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6b55aeb5-b167-40ac-8e38-f4acf42352ef-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "6b55aeb5-b167-40ac-8e38-f4acf42352ef" (UID: "6b55aeb5-b167-40ac-8e38-f4acf42352ef"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 14 10:18:51 crc kubenswrapper[4698]: I1014 10:18:51.925241 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6b55aeb5-b167-40ac-8e38-f4acf42352ef-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "6b55aeb5-b167-40ac-8e38-f4acf42352ef" (UID: "6b55aeb5-b167-40ac-8e38-f4acf42352ef"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 14 10:18:51 crc kubenswrapper[4698]: I1014 10:18:51.930653 4698 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/6b55aeb5-b167-40ac-8e38-f4acf42352ef-dns-swift-storage-0\") on node \"crc\" DevicePath \"\""
Oct 14 10:18:51 crc kubenswrapper[4698]: I1014 10:18:51.930685 4698 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6b55aeb5-b167-40ac-8e38-f4acf42352ef-config\") on node \"crc\" DevicePath \"\""
Oct 14 10:18:51 crc kubenswrapper[4698]: I1014 10:18:51.930700 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dqrk9\" (UniqueName: \"kubernetes.io/projected/6b55aeb5-b167-40ac-8e38-f4acf42352ef-kube-api-access-dqrk9\") on node \"crc\" DevicePath \"\""
Oct 14 10:18:51 crc kubenswrapper[4698]: I1014 10:18:51.930715 4698 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6b55aeb5-b167-40ac-8e38-f4acf42352ef-dns-svc\") on node \"crc\" DevicePath \"\""
Oct 14 10:18:51 crc kubenswrapper[4698]: I1014 10:18:51.930727 4698 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6b55aeb5-b167-40ac-8e38-f4acf42352ef-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Oct 14 10:18:51 crc kubenswrapper[4698]: I1014 10:18:51.930737 4698 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6b55aeb5-b167-40ac-8e38-f4acf42352ef-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Oct 14 10:18:52 crc kubenswrapper[4698]: I1014 10:18:52.033894 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6559f4fbd7-rflwl"]
Oct 14 10:18:52 crc kubenswrapper[4698]: I1014 10:18:52.045967 4698 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6559f4fbd7-rflwl"]
Oct 14 10:18:52 crc kubenswrapper[4698]: I1014 10:18:52.273089 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5bb847fbb7-jzkll"]
Oct 14 10:18:52 crc kubenswrapper[4698]: W1014 10:18:52.275859 4698 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd9761eef_5d4d_4aa8_90a8_c94412431e3c.slice/crio-eb68ccc7f170623f99d1ee89097d2b8ed4ebede2639aaf13ef72aaf27a97529b WatchSource:0}: Error finding container eb68ccc7f170623f99d1ee89097d2b8ed4ebede2639aaf13ef72aaf27a97529b: Status 404 returned error can't find the container with id eb68ccc7f170623f99d1ee89097d2b8ed4ebede2639aaf13ef72aaf27a97529b
Oct 14 10:18:52 crc kubenswrapper[4698]: I1014 10:18:52.711649 4698 generic.go:334] "Generic (PLEG): container finished" podID="d9761eef-5d4d-4aa8-90a8-c94412431e3c" containerID="7ccc14d9805170544652c6087b182788bb764e8748d0a6babb7bf5ab5f5eb021" exitCode=0
Oct 14 10:18:52 crc kubenswrapper[4698]: I1014 10:18:52.711825 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5bb847fbb7-jzkll" event={"ID":"d9761eef-5d4d-4aa8-90a8-c94412431e3c","Type":"ContainerDied","Data":"7ccc14d9805170544652c6087b182788bb764e8748d0a6babb7bf5ab5f5eb021"}
Oct 14 10:18:52 crc kubenswrapper[4698]: I1014 10:18:52.712369 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5bb847fbb7-jzkll" event={"ID":"d9761eef-5d4d-4aa8-90a8-c94412431e3c","Type":"ContainerStarted","Data":"eb68ccc7f170623f99d1ee89097d2b8ed4ebede2639aaf13ef72aaf27a97529b"}
Oct 14 10:18:53 crc kubenswrapper[4698]: I1014 10:18:53.030299 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6b55aeb5-b167-40ac-8e38-f4acf42352ef" path="/var/lib/kubelet/pods/6b55aeb5-b167-40ac-8e38-f4acf42352ef/volumes"
Oct 14 10:18:53 crc kubenswrapper[4698]: I1014 10:18:53.728903 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5bb847fbb7-jzkll" event={"ID":"d9761eef-5d4d-4aa8-90a8-c94412431e3c","Type":"ContainerStarted","Data":"169d8a6c7a8fce3be00cd2d3f14f56ca99393602c10bfe55712571e23f1306b1"}
Oct 14 10:18:53 crc kubenswrapper[4698]: I1014 10:18:53.729210 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5bb847fbb7-jzkll"
Oct 14 10:18:53 crc kubenswrapper[4698]: I1014 10:18:53.763272 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5bb847fbb7-jzkll" podStartSLOduration=2.7632515829999997 podStartE2EDuration="2.763251583s" podCreationTimestamp="2025-10-14 10:18:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 10:18:53.755413553 +0000 UTC m=+1315.452713039" watchObservedRunningTime="2025-10-14 10:18:53.763251583 +0000 UTC m=+1315.460551009"
Oct 14 10:19:01 crc kubenswrapper[4698]: I1014 10:19:01.699689 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5bb847fbb7-jzkll"
Oct 14 10:19:01 crc kubenswrapper[4698]: I1014 10:19:01.760109 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-759799d765-qx7nq"]
Oct 14 10:19:01 crc kubenswrapper[4698]: I1014 10:19:01.760682 4698 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-759799d765-qx7nq" podUID="3f92483a-8595-4ddd-9f88-71696aac6a86" containerName="dnsmasq-dns" containerID="cri-o://c551cc81ac5deba21378db1bb5fd0488c3108af9e5e42b7264e7614d8b0e0e8e" gracePeriod=10
Oct 14 10:19:02 crc kubenswrapper[4698]: I1014 10:19:02.225593 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-759799d765-qx7nq"
Oct 14 10:19:02 crc kubenswrapper[4698]: I1014 10:19:02.307544 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kf92n\" (UniqueName: \"kubernetes.io/projected/3f92483a-8595-4ddd-9f88-71696aac6a86-kube-api-access-kf92n\") pod \"3f92483a-8595-4ddd-9f88-71696aac6a86\" (UID: \"3f92483a-8595-4ddd-9f88-71696aac6a86\") "
Oct 14 10:19:02 crc kubenswrapper[4698]: I1014 10:19:02.307788 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/3f92483a-8595-4ddd-9f88-71696aac6a86-dns-swift-storage-0\") pod \"3f92483a-8595-4ddd-9f88-71696aac6a86\" (UID: \"3f92483a-8595-4ddd-9f88-71696aac6a86\") "
Oct 14 10:19:02 crc kubenswrapper[4698]: I1014 10:19:02.307847 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3f92483a-8595-4ddd-9f88-71696aac6a86-ovsdbserver-nb\") pod \"3f92483a-8595-4ddd-9f88-71696aac6a86\" (UID: \"3f92483a-8595-4ddd-9f88-71696aac6a86\") "
Oct 14 10:19:02 crc kubenswrapper[4698]: I1014 10:19:02.307903 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3f92483a-8595-4ddd-9f88-71696aac6a86-ovsdbserver-sb\") pod \"3f92483a-8595-4ddd-9f88-71696aac6a86\" (UID: \"3f92483a-8595-4ddd-9f88-71696aac6a86\") "
Oct 14 10:19:02 crc kubenswrapper[4698]: I1014 10:19:02.307944 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3f92483a-8595-4ddd-9f88-71696aac6a86-config\") pod \"3f92483a-8595-4ddd-9f88-71696aac6a86\" (UID: \"3f92483a-8595-4ddd-9f88-71696aac6a86\") "
Oct 14 10:19:02 crc kubenswrapper[4698]: I1014 10:19:02.308063 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3f92483a-8595-4ddd-9f88-71696aac6a86-dns-svc\") pod \"3f92483a-8595-4ddd-9f88-71696aac6a86\" (UID: \"3f92483a-8595-4ddd-9f88-71696aac6a86\") "
Oct 14 10:19:02 crc kubenswrapper[4698]: I1014 10:19:02.308123 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/3f92483a-8595-4ddd-9f88-71696aac6a86-openstack-edpm-ipam\") pod \"3f92483a-8595-4ddd-9f88-71696aac6a86\" (UID: \"3f92483a-8595-4ddd-9f88-71696aac6a86\") "
Oct 14 10:19:02 crc kubenswrapper[4698]: I1014 10:19:02.315048 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3f92483a-8595-4ddd-9f88-71696aac6a86-kube-api-access-kf92n" (OuterVolumeSpecName: "kube-api-access-kf92n") pod "3f92483a-8595-4ddd-9f88-71696aac6a86" (UID: "3f92483a-8595-4ddd-9f88-71696aac6a86"). InnerVolumeSpecName "kube-api-access-kf92n". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 14 10:19:02 crc kubenswrapper[4698]: I1014 10:19:02.363324 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3f92483a-8595-4ddd-9f88-71696aac6a86-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "3f92483a-8595-4ddd-9f88-71696aac6a86" (UID: "3f92483a-8595-4ddd-9f88-71696aac6a86"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 14 10:19:02 crc kubenswrapper[4698]: I1014 10:19:02.365017 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3f92483a-8595-4ddd-9f88-71696aac6a86-openstack-edpm-ipam" (OuterVolumeSpecName: "openstack-edpm-ipam") pod "3f92483a-8595-4ddd-9f88-71696aac6a86" (UID: "3f92483a-8595-4ddd-9f88-71696aac6a86"). InnerVolumeSpecName "openstack-edpm-ipam".
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 10:19:02 crc kubenswrapper[4698]: I1014 10:19:02.373645 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3f92483a-8595-4ddd-9f88-71696aac6a86-config" (OuterVolumeSpecName: "config") pod "3f92483a-8595-4ddd-9f88-71696aac6a86" (UID: "3f92483a-8595-4ddd-9f88-71696aac6a86"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 10:19:02 crc kubenswrapper[4698]: I1014 10:19:02.382532 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3f92483a-8595-4ddd-9f88-71696aac6a86-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "3f92483a-8595-4ddd-9f88-71696aac6a86" (UID: "3f92483a-8595-4ddd-9f88-71696aac6a86"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 10:19:02 crc kubenswrapper[4698]: I1014 10:19:02.386399 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3f92483a-8595-4ddd-9f88-71696aac6a86-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "3f92483a-8595-4ddd-9f88-71696aac6a86" (UID: "3f92483a-8595-4ddd-9f88-71696aac6a86"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 10:19:02 crc kubenswrapper[4698]: I1014 10:19:02.387344 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3f92483a-8595-4ddd-9f88-71696aac6a86-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "3f92483a-8595-4ddd-9f88-71696aac6a86" (UID: "3f92483a-8595-4ddd-9f88-71696aac6a86"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 10:19:02 crc kubenswrapper[4698]: I1014 10:19:02.410284 4698 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3f92483a-8595-4ddd-9f88-71696aac6a86-dns-svc\") on node \"crc\" DevicePath \"\"" Oct 14 10:19:02 crc kubenswrapper[4698]: I1014 10:19:02.410473 4698 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/3f92483a-8595-4ddd-9f88-71696aac6a86-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Oct 14 10:19:02 crc kubenswrapper[4698]: I1014 10:19:02.410535 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kf92n\" (UniqueName: \"kubernetes.io/projected/3f92483a-8595-4ddd-9f88-71696aac6a86-kube-api-access-kf92n\") on node \"crc\" DevicePath \"\"" Oct 14 10:19:02 crc kubenswrapper[4698]: I1014 10:19:02.410618 4698 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/3f92483a-8595-4ddd-9f88-71696aac6a86-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Oct 14 10:19:02 crc kubenswrapper[4698]: I1014 10:19:02.410685 4698 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3f92483a-8595-4ddd-9f88-71696aac6a86-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Oct 14 10:19:02 crc kubenswrapper[4698]: I1014 10:19:02.410802 4698 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3f92483a-8595-4ddd-9f88-71696aac6a86-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Oct 14 10:19:02 crc kubenswrapper[4698]: I1014 10:19:02.410857 4698 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3f92483a-8595-4ddd-9f88-71696aac6a86-config\") on node \"crc\" DevicePath \"\"" Oct 14 10:19:02 crc kubenswrapper[4698]: I1014 10:19:02.866204 
4698 generic.go:334] "Generic (PLEG): container finished" podID="3f92483a-8595-4ddd-9f88-71696aac6a86" containerID="c551cc81ac5deba21378db1bb5fd0488c3108af9e5e42b7264e7614d8b0e0e8e" exitCode=0 Oct 14 10:19:02 crc kubenswrapper[4698]: I1014 10:19:02.866257 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-759799d765-qx7nq" event={"ID":"3f92483a-8595-4ddd-9f88-71696aac6a86","Type":"ContainerDied","Data":"c551cc81ac5deba21378db1bb5fd0488c3108af9e5e42b7264e7614d8b0e0e8e"} Oct 14 10:19:02 crc kubenswrapper[4698]: I1014 10:19:02.866279 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-759799d765-qx7nq" Oct 14 10:19:02 crc kubenswrapper[4698]: I1014 10:19:02.866307 4698 scope.go:117] "RemoveContainer" containerID="c551cc81ac5deba21378db1bb5fd0488c3108af9e5e42b7264e7614d8b0e0e8e" Oct 14 10:19:02 crc kubenswrapper[4698]: I1014 10:19:02.866292 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-759799d765-qx7nq" event={"ID":"3f92483a-8595-4ddd-9f88-71696aac6a86","Type":"ContainerDied","Data":"11a332bdd84c61b8300523c16d5b48e668881fc0d792e9c91533655377d8e434"} Oct 14 10:19:02 crc kubenswrapper[4698]: I1014 10:19:02.964639 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-759799d765-qx7nq"] Oct 14 10:19:02 crc kubenswrapper[4698]: I1014 10:19:02.964946 4698 scope.go:117] "RemoveContainer" containerID="e4378bb5c343fa885a0e753d3957166f6f04018da10dc8242e99f84be7ea1066" Oct 14 10:19:02 crc kubenswrapper[4698]: I1014 10:19:02.984985 4698 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-759799d765-qx7nq"] Oct 14 10:19:03 crc kubenswrapper[4698]: I1014 10:19:03.010022 4698 scope.go:117] "RemoveContainer" containerID="c551cc81ac5deba21378db1bb5fd0488c3108af9e5e42b7264e7614d8b0e0e8e" Oct 14 10:19:03 crc kubenswrapper[4698]: E1014 10:19:03.014295 4698 log.go:32] "ContainerStatus from runtime service failed" 
err="rpc error: code = NotFound desc = could not find container \"c551cc81ac5deba21378db1bb5fd0488c3108af9e5e42b7264e7614d8b0e0e8e\": container with ID starting with c551cc81ac5deba21378db1bb5fd0488c3108af9e5e42b7264e7614d8b0e0e8e not found: ID does not exist" containerID="c551cc81ac5deba21378db1bb5fd0488c3108af9e5e42b7264e7614d8b0e0e8e" Oct 14 10:19:03 crc kubenswrapper[4698]: I1014 10:19:03.014357 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c551cc81ac5deba21378db1bb5fd0488c3108af9e5e42b7264e7614d8b0e0e8e"} err="failed to get container status \"c551cc81ac5deba21378db1bb5fd0488c3108af9e5e42b7264e7614d8b0e0e8e\": rpc error: code = NotFound desc = could not find container \"c551cc81ac5deba21378db1bb5fd0488c3108af9e5e42b7264e7614d8b0e0e8e\": container with ID starting with c551cc81ac5deba21378db1bb5fd0488c3108af9e5e42b7264e7614d8b0e0e8e not found: ID does not exist" Oct 14 10:19:03 crc kubenswrapper[4698]: I1014 10:19:03.014394 4698 scope.go:117] "RemoveContainer" containerID="e4378bb5c343fa885a0e753d3957166f6f04018da10dc8242e99f84be7ea1066" Oct 14 10:19:03 crc kubenswrapper[4698]: E1014 10:19:03.014818 4698 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e4378bb5c343fa885a0e753d3957166f6f04018da10dc8242e99f84be7ea1066\": container with ID starting with e4378bb5c343fa885a0e753d3957166f6f04018da10dc8242e99f84be7ea1066 not found: ID does not exist" containerID="e4378bb5c343fa885a0e753d3957166f6f04018da10dc8242e99f84be7ea1066" Oct 14 10:19:03 crc kubenswrapper[4698]: I1014 10:19:03.014881 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e4378bb5c343fa885a0e753d3957166f6f04018da10dc8242e99f84be7ea1066"} err="failed to get container status \"e4378bb5c343fa885a0e753d3957166f6f04018da10dc8242e99f84be7ea1066\": rpc error: code = NotFound desc = could not find container 
\"e4378bb5c343fa885a0e753d3957166f6f04018da10dc8242e99f84be7ea1066\": container with ID starting with e4378bb5c343fa885a0e753d3957166f6f04018da10dc8242e99f84be7ea1066 not found: ID does not exist" Oct 14 10:19:03 crc kubenswrapper[4698]: I1014 10:19:03.028168 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3f92483a-8595-4ddd-9f88-71696aac6a86" path="/var/lib/kubelet/pods/3f92483a-8595-4ddd-9f88-71696aac6a86/volumes" Oct 14 10:19:12 crc kubenswrapper[4698]: I1014 10:19:12.977797 4698 generic.go:334] "Generic (PLEG): container finished" podID="a14f78a2-c755-4288-bf05-45f4a540d301" containerID="7a3fd3a14a0d6d068586487640743aa2a3a7ed5ab476276795d6680b08a75429" exitCode=0 Oct 14 10:19:12 crc kubenswrapper[4698]: I1014 10:19:12.977897 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"a14f78a2-c755-4288-bf05-45f4a540d301","Type":"ContainerDied","Data":"7a3fd3a14a0d6d068586487640743aa2a3a7ed5ab476276795d6680b08a75429"} Oct 14 10:19:13 crc kubenswrapper[4698]: I1014 10:19:13.989679 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"a14f78a2-c755-4288-bf05-45f4a540d301","Type":"ContainerStarted","Data":"5d24cf189015c69368502dddaba8003cf51a7132e7470395fab64e9e5f6c58ad"} Oct 14 10:19:13 crc kubenswrapper[4698]: I1014 10:19:13.989934 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Oct 14 10:19:13 crc kubenswrapper[4698]: I1014 10:19:13.992290 4698 generic.go:334] "Generic (PLEG): container finished" podID="cebebf3c-b368-424c-a1bc-a3b9fc82ac3e" containerID="74063ebb305ee6b427cbe960fa2c29fc60edfc07d3ce930115e0ea06b355c5ee" exitCode=0 Oct 14 10:19:13 crc kubenswrapper[4698]: I1014 10:19:13.992411 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" 
event={"ID":"cebebf3c-b368-424c-a1bc-a3b9fc82ac3e","Type":"ContainerDied","Data":"74063ebb305ee6b427cbe960fa2c29fc60edfc07d3ce930115e0ea06b355c5ee"} Oct 14 10:19:14 crc kubenswrapper[4698]: I1014 10:19:14.031989 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=37.031968045 podStartE2EDuration="37.031968045s" podCreationTimestamp="2025-10-14 10:18:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 10:19:14.026458934 +0000 UTC m=+1335.723758350" watchObservedRunningTime="2025-10-14 10:19:14.031968045 +0000 UTC m=+1335.729267461" Oct 14 10:19:15 crc kubenswrapper[4698]: I1014 10:19:15.003840 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"cebebf3c-b368-424c-a1bc-a3b9fc82ac3e","Type":"ContainerStarted","Data":"621bb3a0d7663d5818a7b035cf3cb3eb133ce008636a5d8d024b0ca692ab9628"} Oct 14 10:19:15 crc kubenswrapper[4698]: I1014 10:19:15.004518 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Oct 14 10:19:15 crc kubenswrapper[4698]: I1014 10:19:15.053390 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=37.053362943 podStartE2EDuration="37.053362943s" podCreationTimestamp="2025-10-14 10:18:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 10:19:15.045180025 +0000 UTC m=+1336.742479451" watchObservedRunningTime="2025-10-14 10:19:15.053362943 +0000 UTC m=+1336.750662359" Oct 14 10:19:15 crc kubenswrapper[4698]: I1014 10:19:15.151665 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-8xbmw"] Oct 14 10:19:15 crc kubenswrapper[4698]: E1014 
10:19:15.152198 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6b55aeb5-b167-40ac-8e38-f4acf42352ef" containerName="init" Oct 14 10:19:15 crc kubenswrapper[4698]: I1014 10:19:15.152216 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="6b55aeb5-b167-40ac-8e38-f4acf42352ef" containerName="init" Oct 14 10:19:15 crc kubenswrapper[4698]: E1014 10:19:15.152232 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3f92483a-8595-4ddd-9f88-71696aac6a86" containerName="dnsmasq-dns" Oct 14 10:19:15 crc kubenswrapper[4698]: I1014 10:19:15.152238 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="3f92483a-8595-4ddd-9f88-71696aac6a86" containerName="dnsmasq-dns" Oct 14 10:19:15 crc kubenswrapper[4698]: E1014 10:19:15.152265 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3f92483a-8595-4ddd-9f88-71696aac6a86" containerName="init" Oct 14 10:19:15 crc kubenswrapper[4698]: I1014 10:19:15.152271 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="3f92483a-8595-4ddd-9f88-71696aac6a86" containerName="init" Oct 14 10:19:15 crc kubenswrapper[4698]: E1014 10:19:15.152301 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6b55aeb5-b167-40ac-8e38-f4acf42352ef" containerName="dnsmasq-dns" Oct 14 10:19:15 crc kubenswrapper[4698]: I1014 10:19:15.152307 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="6b55aeb5-b167-40ac-8e38-f4acf42352ef" containerName="dnsmasq-dns" Oct 14 10:19:15 crc kubenswrapper[4698]: I1014 10:19:15.152516 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="3f92483a-8595-4ddd-9f88-71696aac6a86" containerName="dnsmasq-dns" Oct 14 10:19:15 crc kubenswrapper[4698]: I1014 10:19:15.152542 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="6b55aeb5-b167-40ac-8e38-f4acf42352ef" containerName="dnsmasq-dns" Oct 14 10:19:15 crc kubenswrapper[4698]: I1014 10:19:15.153361 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-8xbmw" Oct 14 10:19:15 crc kubenswrapper[4698]: I1014 10:19:15.158162 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Oct 14 10:19:15 crc kubenswrapper[4698]: I1014 10:19:15.158199 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Oct 14 10:19:15 crc kubenswrapper[4698]: I1014 10:19:15.158269 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Oct 14 10:19:15 crc kubenswrapper[4698]: I1014 10:19:15.158384 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-q5blv" Oct 14 10:19:15 crc kubenswrapper[4698]: I1014 10:19:15.165122 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-8xbmw"] Oct 14 10:19:15 crc kubenswrapper[4698]: I1014 10:19:15.215412 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2hx27\" (UniqueName: \"kubernetes.io/projected/3e24ecfd-2fed-4c41-be7f-89fe09f13724-kube-api-access-2hx27\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-8xbmw\" (UID: \"3e24ecfd-2fed-4c41-be7f-89fe09f13724\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-8xbmw" Oct 14 10:19:15 crc kubenswrapper[4698]: I1014 10:19:15.215454 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3e24ecfd-2fed-4c41-be7f-89fe09f13724-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-8xbmw\" (UID: \"3e24ecfd-2fed-4c41-be7f-89fe09f13724\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-8xbmw" Oct 14 10:19:15 crc kubenswrapper[4698]: 
I1014 10:19:15.215543 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3e24ecfd-2fed-4c41-be7f-89fe09f13724-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-8xbmw\" (UID: \"3e24ecfd-2fed-4c41-be7f-89fe09f13724\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-8xbmw" Oct 14 10:19:15 crc kubenswrapper[4698]: I1014 10:19:15.215598 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/3e24ecfd-2fed-4c41-be7f-89fe09f13724-ssh-key\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-8xbmw\" (UID: \"3e24ecfd-2fed-4c41-be7f-89fe09f13724\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-8xbmw" Oct 14 10:19:15 crc kubenswrapper[4698]: I1014 10:19:15.316736 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3e24ecfd-2fed-4c41-be7f-89fe09f13724-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-8xbmw\" (UID: \"3e24ecfd-2fed-4c41-be7f-89fe09f13724\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-8xbmw" Oct 14 10:19:15 crc kubenswrapper[4698]: I1014 10:19:15.317126 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/3e24ecfd-2fed-4c41-be7f-89fe09f13724-ssh-key\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-8xbmw\" (UID: \"3e24ecfd-2fed-4c41-be7f-89fe09f13724\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-8xbmw" Oct 14 10:19:15 crc kubenswrapper[4698]: I1014 10:19:15.317181 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2hx27\" (UniqueName: \"kubernetes.io/projected/3e24ecfd-2fed-4c41-be7f-89fe09f13724-kube-api-access-2hx27\") pod 
\"repo-setup-edpm-deployment-openstack-edpm-ipam-8xbmw\" (UID: \"3e24ecfd-2fed-4c41-be7f-89fe09f13724\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-8xbmw" Oct 14 10:19:15 crc kubenswrapper[4698]: I1014 10:19:15.317205 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3e24ecfd-2fed-4c41-be7f-89fe09f13724-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-8xbmw\" (UID: \"3e24ecfd-2fed-4c41-be7f-89fe09f13724\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-8xbmw" Oct 14 10:19:15 crc kubenswrapper[4698]: I1014 10:19:15.323136 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3e24ecfd-2fed-4c41-be7f-89fe09f13724-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-8xbmw\" (UID: \"3e24ecfd-2fed-4c41-be7f-89fe09f13724\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-8xbmw" Oct 14 10:19:15 crc kubenswrapper[4698]: I1014 10:19:15.323349 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3e24ecfd-2fed-4c41-be7f-89fe09f13724-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-8xbmw\" (UID: \"3e24ecfd-2fed-4c41-be7f-89fe09f13724\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-8xbmw" Oct 14 10:19:15 crc kubenswrapper[4698]: I1014 10:19:15.330234 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/3e24ecfd-2fed-4c41-be7f-89fe09f13724-ssh-key\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-8xbmw\" (UID: \"3e24ecfd-2fed-4c41-be7f-89fe09f13724\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-8xbmw" Oct 14 10:19:15 crc kubenswrapper[4698]: I1014 10:19:15.337539 4698 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2hx27\" (UniqueName: \"kubernetes.io/projected/3e24ecfd-2fed-4c41-be7f-89fe09f13724-kube-api-access-2hx27\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-8xbmw\" (UID: \"3e24ecfd-2fed-4c41-be7f-89fe09f13724\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-8xbmw" Oct 14 10:19:15 crc kubenswrapper[4698]: I1014 10:19:15.489415 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-8xbmw" Oct 14 10:19:16 crc kubenswrapper[4698]: I1014 10:19:16.091803 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-8xbmw"] Oct 14 10:19:17 crc kubenswrapper[4698]: I1014 10:19:17.039903 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-8xbmw" event={"ID":"3e24ecfd-2fed-4c41-be7f-89fe09f13724","Type":"ContainerStarted","Data":"04f3d1691baa86a55734edbd40770e3c3e8c9fad875775855b7a281c2f2502a6"} Oct 14 10:19:27 crc kubenswrapper[4698]: I1014 10:19:27.146610 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-8xbmw" event={"ID":"3e24ecfd-2fed-4c41-be7f-89fe09f13724","Type":"ContainerStarted","Data":"0a8001c81ef0a277ff8144d5ed4f11569be3f37513f83d144ac43ca51bf152c4"} Oct 14 10:19:27 crc kubenswrapper[4698]: I1014 10:19:27.168694 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-8xbmw" podStartSLOduration=1.646596927 podStartE2EDuration="12.168666502s" podCreationTimestamp="2025-10-14 10:19:15 +0000 UTC" firstStartedPulling="2025-10-14 10:19:16.096037353 +0000 UTC m=+1337.793336769" lastFinishedPulling="2025-10-14 10:19:26.618106928 +0000 UTC m=+1348.315406344" observedRunningTime="2025-10-14 10:19:27.164742562 +0000 UTC 
m=+1348.862042038" watchObservedRunningTime="2025-10-14 10:19:27.168666502 +0000 UTC m=+1348.865965958" Oct 14 10:19:28 crc kubenswrapper[4698]: I1014 10:19:28.272137 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Oct 14 10:19:28 crc kubenswrapper[4698]: I1014 10:19:28.939040 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Oct 14 10:19:38 crc kubenswrapper[4698]: I1014 10:19:38.260724 4698 generic.go:334] "Generic (PLEG): container finished" podID="3e24ecfd-2fed-4c41-be7f-89fe09f13724" containerID="0a8001c81ef0a277ff8144d5ed4f11569be3f37513f83d144ac43ca51bf152c4" exitCode=0 Oct 14 10:19:38 crc kubenswrapper[4698]: I1014 10:19:38.260847 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-8xbmw" event={"ID":"3e24ecfd-2fed-4c41-be7f-89fe09f13724","Type":"ContainerDied","Data":"0a8001c81ef0a277ff8144d5ed4f11569be3f37513f83d144ac43ca51bf152c4"} Oct 14 10:19:39 crc kubenswrapper[4698]: I1014 10:19:39.757035 4698 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-8xbmw" Oct 14 10:19:39 crc kubenswrapper[4698]: I1014 10:19:39.896620 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2hx27\" (UniqueName: \"kubernetes.io/projected/3e24ecfd-2fed-4c41-be7f-89fe09f13724-kube-api-access-2hx27\") pod \"3e24ecfd-2fed-4c41-be7f-89fe09f13724\" (UID: \"3e24ecfd-2fed-4c41-be7f-89fe09f13724\") " Oct 14 10:19:39 crc kubenswrapper[4698]: I1014 10:19:39.896834 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3e24ecfd-2fed-4c41-be7f-89fe09f13724-repo-setup-combined-ca-bundle\") pod \"3e24ecfd-2fed-4c41-be7f-89fe09f13724\" (UID: \"3e24ecfd-2fed-4c41-be7f-89fe09f13724\") " Oct 14 10:19:39 crc kubenswrapper[4698]: I1014 10:19:39.896968 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/3e24ecfd-2fed-4c41-be7f-89fe09f13724-ssh-key\") pod \"3e24ecfd-2fed-4c41-be7f-89fe09f13724\" (UID: \"3e24ecfd-2fed-4c41-be7f-89fe09f13724\") " Oct 14 10:19:39 crc kubenswrapper[4698]: I1014 10:19:39.897023 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3e24ecfd-2fed-4c41-be7f-89fe09f13724-inventory\") pod \"3e24ecfd-2fed-4c41-be7f-89fe09f13724\" (UID: \"3e24ecfd-2fed-4c41-be7f-89fe09f13724\") " Oct 14 10:19:39 crc kubenswrapper[4698]: I1014 10:19:39.904029 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3e24ecfd-2fed-4c41-be7f-89fe09f13724-kube-api-access-2hx27" (OuterVolumeSpecName: "kube-api-access-2hx27") pod "3e24ecfd-2fed-4c41-be7f-89fe09f13724" (UID: "3e24ecfd-2fed-4c41-be7f-89fe09f13724"). InnerVolumeSpecName "kube-api-access-2hx27". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 10:19:39 crc kubenswrapper[4698]: I1014 10:19:39.905174 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3e24ecfd-2fed-4c41-be7f-89fe09f13724-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "3e24ecfd-2fed-4c41-be7f-89fe09f13724" (UID: "3e24ecfd-2fed-4c41-be7f-89fe09f13724"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 10:19:39 crc kubenswrapper[4698]: I1014 10:19:39.934406 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3e24ecfd-2fed-4c41-be7f-89fe09f13724-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "3e24ecfd-2fed-4c41-be7f-89fe09f13724" (UID: "3e24ecfd-2fed-4c41-be7f-89fe09f13724"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 10:19:39 crc kubenswrapper[4698]: I1014 10:19:39.938129 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3e24ecfd-2fed-4c41-be7f-89fe09f13724-inventory" (OuterVolumeSpecName: "inventory") pod "3e24ecfd-2fed-4c41-be7f-89fe09f13724" (UID: "3e24ecfd-2fed-4c41-be7f-89fe09f13724"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 10:19:40 crc kubenswrapper[4698]: I1014 10:19:40.000868 4698 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/3e24ecfd-2fed-4c41-be7f-89fe09f13724-ssh-key\") on node \"crc\" DevicePath \"\"" Oct 14 10:19:40 crc kubenswrapper[4698]: I1014 10:19:40.000958 4698 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3e24ecfd-2fed-4c41-be7f-89fe09f13724-inventory\") on node \"crc\" DevicePath \"\"" Oct 14 10:19:40 crc kubenswrapper[4698]: I1014 10:19:40.001203 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2hx27\" (UniqueName: \"kubernetes.io/projected/3e24ecfd-2fed-4c41-be7f-89fe09f13724-kube-api-access-2hx27\") on node \"crc\" DevicePath \"\"" Oct 14 10:19:40 crc kubenswrapper[4698]: I1014 10:19:40.001263 4698 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3e24ecfd-2fed-4c41-be7f-89fe09f13724-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 14 10:19:40 crc kubenswrapper[4698]: I1014 10:19:40.325931 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-8xbmw" event={"ID":"3e24ecfd-2fed-4c41-be7f-89fe09f13724","Type":"ContainerDied","Data":"04f3d1691baa86a55734edbd40770e3c3e8c9fad875775855b7a281c2f2502a6"} Oct 14 10:19:40 crc kubenswrapper[4698]: I1014 10:19:40.325995 4698 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="04f3d1691baa86a55734edbd40770e3c3e8c9fad875775855b7a281c2f2502a6" Oct 14 10:19:40 crc kubenswrapper[4698]: I1014 10:19:40.326111 4698 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-8xbmw" Oct 14 10:19:40 crc kubenswrapper[4698]: I1014 10:19:40.366664 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-jdls2"] Oct 14 10:19:40 crc kubenswrapper[4698]: E1014 10:19:40.367227 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3e24ecfd-2fed-4c41-be7f-89fe09f13724" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Oct 14 10:19:40 crc kubenswrapper[4698]: I1014 10:19:40.367250 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="3e24ecfd-2fed-4c41-be7f-89fe09f13724" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Oct 14 10:19:40 crc kubenswrapper[4698]: I1014 10:19:40.367504 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="3e24ecfd-2fed-4c41-be7f-89fe09f13724" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Oct 14 10:19:40 crc kubenswrapper[4698]: I1014 10:19:40.368280 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-jdls2" Oct 14 10:19:40 crc kubenswrapper[4698]: I1014 10:19:40.370386 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-q5blv" Oct 14 10:19:40 crc kubenswrapper[4698]: I1014 10:19:40.374122 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Oct 14 10:19:40 crc kubenswrapper[4698]: I1014 10:19:40.374466 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Oct 14 10:19:40 crc kubenswrapper[4698]: I1014 10:19:40.374823 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Oct 14 10:19:40 crc kubenswrapper[4698]: I1014 10:19:40.378517 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-jdls2"] Oct 14 10:19:40 crc kubenswrapper[4698]: I1014 10:19:40.512386 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/06b2a1a6-bc42-4191-9ab7-62c064090d6b-ssh-key\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-jdls2\" (UID: \"06b2a1a6-bc42-4191-9ab7-62c064090d6b\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-jdls2" Oct 14 10:19:40 crc kubenswrapper[4698]: I1014 10:19:40.512624 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lk2lx\" (UniqueName: \"kubernetes.io/projected/06b2a1a6-bc42-4191-9ab7-62c064090d6b-kube-api-access-lk2lx\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-jdls2\" (UID: \"06b2a1a6-bc42-4191-9ab7-62c064090d6b\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-jdls2" Oct 14 10:19:40 crc kubenswrapper[4698]: I1014 10:19:40.512825 4698 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/06b2a1a6-bc42-4191-9ab7-62c064090d6b-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-jdls2\" (UID: \"06b2a1a6-bc42-4191-9ab7-62c064090d6b\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-jdls2" Oct 14 10:19:40 crc kubenswrapper[4698]: I1014 10:19:40.614664 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/06b2a1a6-bc42-4191-9ab7-62c064090d6b-ssh-key\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-jdls2\" (UID: \"06b2a1a6-bc42-4191-9ab7-62c064090d6b\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-jdls2" Oct 14 10:19:40 crc kubenswrapper[4698]: I1014 10:19:40.614735 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lk2lx\" (UniqueName: \"kubernetes.io/projected/06b2a1a6-bc42-4191-9ab7-62c064090d6b-kube-api-access-lk2lx\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-jdls2\" (UID: \"06b2a1a6-bc42-4191-9ab7-62c064090d6b\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-jdls2" Oct 14 10:19:40 crc kubenswrapper[4698]: I1014 10:19:40.614806 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/06b2a1a6-bc42-4191-9ab7-62c064090d6b-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-jdls2\" (UID: \"06b2a1a6-bc42-4191-9ab7-62c064090d6b\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-jdls2" Oct 14 10:19:40 crc kubenswrapper[4698]: I1014 10:19:40.618731 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/06b2a1a6-bc42-4191-9ab7-62c064090d6b-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-jdls2\" (UID: \"06b2a1a6-bc42-4191-9ab7-62c064090d6b\") " 
pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-jdls2" Oct 14 10:19:40 crc kubenswrapper[4698]: I1014 10:19:40.620453 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/06b2a1a6-bc42-4191-9ab7-62c064090d6b-ssh-key\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-jdls2\" (UID: \"06b2a1a6-bc42-4191-9ab7-62c064090d6b\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-jdls2" Oct 14 10:19:40 crc kubenswrapper[4698]: I1014 10:19:40.633327 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lk2lx\" (UniqueName: \"kubernetes.io/projected/06b2a1a6-bc42-4191-9ab7-62c064090d6b-kube-api-access-lk2lx\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-jdls2\" (UID: \"06b2a1a6-bc42-4191-9ab7-62c064090d6b\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-jdls2" Oct 14 10:19:40 crc kubenswrapper[4698]: I1014 10:19:40.699414 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-jdls2" Oct 14 10:19:41 crc kubenswrapper[4698]: I1014 10:19:41.358474 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-jdls2"] Oct 14 10:19:42 crc kubenswrapper[4698]: I1014 10:19:42.350708 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-jdls2" event={"ID":"06b2a1a6-bc42-4191-9ab7-62c064090d6b","Type":"ContainerStarted","Data":"1b9e3ab40ed9d8b0799f3c253131aef7e5c314b88967fca4266d4b6791f60160"} Oct 14 10:19:43 crc kubenswrapper[4698]: I1014 10:19:43.367667 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-jdls2" event={"ID":"06b2a1a6-bc42-4191-9ab7-62c064090d6b","Type":"ContainerStarted","Data":"26b3ec835848493ae014041ce75592d47eceeb8d0bb3d6928822921214c1f0dd"} Oct 14 10:19:43 crc kubenswrapper[4698]: I1014 10:19:43.399496 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-jdls2" podStartSLOduration=2.618852124 podStartE2EDuration="3.399463883s" podCreationTimestamp="2025-10-14 10:19:40 +0000 UTC" firstStartedPulling="2025-10-14 10:19:41.359815517 +0000 UTC m=+1363.057114933" lastFinishedPulling="2025-10-14 10:19:42.140427276 +0000 UTC m=+1363.837726692" observedRunningTime="2025-10-14 10:19:43.390738261 +0000 UTC m=+1365.088037747" watchObservedRunningTime="2025-10-14 10:19:43.399463883 +0000 UTC m=+1365.096763339" Oct 14 10:19:45 crc kubenswrapper[4698]: I1014 10:19:45.389820 4698 generic.go:334] "Generic (PLEG): container finished" podID="06b2a1a6-bc42-4191-9ab7-62c064090d6b" containerID="26b3ec835848493ae014041ce75592d47eceeb8d0bb3d6928822921214c1f0dd" exitCode=0 Oct 14 10:19:45 crc kubenswrapper[4698]: I1014 10:19:45.389980 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-jdls2" event={"ID":"06b2a1a6-bc42-4191-9ab7-62c064090d6b","Type":"ContainerDied","Data":"26b3ec835848493ae014041ce75592d47eceeb8d0bb3d6928822921214c1f0dd"} Oct 14 10:19:46 crc kubenswrapper[4698]: I1014 10:19:46.902744 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-jdls2" Oct 14 10:19:47 crc kubenswrapper[4698]: I1014 10:19:47.064700 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lk2lx\" (UniqueName: \"kubernetes.io/projected/06b2a1a6-bc42-4191-9ab7-62c064090d6b-kube-api-access-lk2lx\") pod \"06b2a1a6-bc42-4191-9ab7-62c064090d6b\" (UID: \"06b2a1a6-bc42-4191-9ab7-62c064090d6b\") " Oct 14 10:19:47 crc kubenswrapper[4698]: I1014 10:19:47.064939 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/06b2a1a6-bc42-4191-9ab7-62c064090d6b-inventory\") pod \"06b2a1a6-bc42-4191-9ab7-62c064090d6b\" (UID: \"06b2a1a6-bc42-4191-9ab7-62c064090d6b\") " Oct 14 10:19:47 crc kubenswrapper[4698]: I1014 10:19:47.065070 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/06b2a1a6-bc42-4191-9ab7-62c064090d6b-ssh-key\") pod \"06b2a1a6-bc42-4191-9ab7-62c064090d6b\" (UID: \"06b2a1a6-bc42-4191-9ab7-62c064090d6b\") " Oct 14 10:19:47 crc kubenswrapper[4698]: I1014 10:19:47.072917 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/06b2a1a6-bc42-4191-9ab7-62c064090d6b-kube-api-access-lk2lx" (OuterVolumeSpecName: "kube-api-access-lk2lx") pod "06b2a1a6-bc42-4191-9ab7-62c064090d6b" (UID: "06b2a1a6-bc42-4191-9ab7-62c064090d6b"). InnerVolumeSpecName "kube-api-access-lk2lx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 10:19:47 crc kubenswrapper[4698]: I1014 10:19:47.103535 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/06b2a1a6-bc42-4191-9ab7-62c064090d6b-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "06b2a1a6-bc42-4191-9ab7-62c064090d6b" (UID: "06b2a1a6-bc42-4191-9ab7-62c064090d6b"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 10:19:47 crc kubenswrapper[4698]: I1014 10:19:47.107014 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/06b2a1a6-bc42-4191-9ab7-62c064090d6b-inventory" (OuterVolumeSpecName: "inventory") pod "06b2a1a6-bc42-4191-9ab7-62c064090d6b" (UID: "06b2a1a6-bc42-4191-9ab7-62c064090d6b"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 10:19:47 crc kubenswrapper[4698]: I1014 10:19:47.169941 4698 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/06b2a1a6-bc42-4191-9ab7-62c064090d6b-ssh-key\") on node \"crc\" DevicePath \"\"" Oct 14 10:19:47 crc kubenswrapper[4698]: I1014 10:19:47.170343 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lk2lx\" (UniqueName: \"kubernetes.io/projected/06b2a1a6-bc42-4191-9ab7-62c064090d6b-kube-api-access-lk2lx\") on node \"crc\" DevicePath \"\"" Oct 14 10:19:47 crc kubenswrapper[4698]: I1014 10:19:47.170359 4698 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/06b2a1a6-bc42-4191-9ab7-62c064090d6b-inventory\") on node \"crc\" DevicePath \"\"" Oct 14 10:19:47 crc kubenswrapper[4698]: I1014 10:19:47.428156 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-jdls2" 
event={"ID":"06b2a1a6-bc42-4191-9ab7-62c064090d6b","Type":"ContainerDied","Data":"1b9e3ab40ed9d8b0799f3c253131aef7e5c314b88967fca4266d4b6791f60160"} Oct 14 10:19:47 crc kubenswrapper[4698]: I1014 10:19:47.428672 4698 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1b9e3ab40ed9d8b0799f3c253131aef7e5c314b88967fca4266d4b6791f60160" Oct 14 10:19:47 crc kubenswrapper[4698]: I1014 10:19:47.428208 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-jdls2" Oct 14 10:19:47 crc kubenswrapper[4698]: I1014 10:19:47.515170 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-lq4cr"] Oct 14 10:19:47 crc kubenswrapper[4698]: E1014 10:19:47.517750 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="06b2a1a6-bc42-4191-9ab7-62c064090d6b" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Oct 14 10:19:47 crc kubenswrapper[4698]: I1014 10:19:47.517806 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="06b2a1a6-bc42-4191-9ab7-62c064090d6b" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Oct 14 10:19:47 crc kubenswrapper[4698]: I1014 10:19:47.519817 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="06b2a1a6-bc42-4191-9ab7-62c064090d6b" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Oct 14 10:19:47 crc kubenswrapper[4698]: I1014 10:19:47.539137 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-lq4cr" Oct 14 10:19:47 crc kubenswrapper[4698]: I1014 10:19:47.544293 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Oct 14 10:19:47 crc kubenswrapper[4698]: I1014 10:19:47.544488 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Oct 14 10:19:47 crc kubenswrapper[4698]: I1014 10:19:47.544711 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-q5blv" Oct 14 10:19:47 crc kubenswrapper[4698]: I1014 10:19:47.551141 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Oct 14 10:19:47 crc kubenswrapper[4698]: I1014 10:19:47.573531 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-lq4cr"] Oct 14 10:19:47 crc kubenswrapper[4698]: I1014 10:19:47.680900 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d4559dff-03d5-4c1b-a8df-f8fc0ae935de-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-lq4cr\" (UID: \"d4559dff-03d5-4c1b-a8df-f8fc0ae935de\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-lq4cr" Oct 14 10:19:47 crc kubenswrapper[4698]: I1014 10:19:47.681008 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/d4559dff-03d5-4c1b-a8df-f8fc0ae935de-ssh-key\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-lq4cr\" (UID: \"d4559dff-03d5-4c1b-a8df-f8fc0ae935de\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-lq4cr" Oct 14 10:19:47 crc kubenswrapper[4698]: I1014 10:19:47.681122 4698 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d4559dff-03d5-4c1b-a8df-f8fc0ae935de-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-lq4cr\" (UID: \"d4559dff-03d5-4c1b-a8df-f8fc0ae935de\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-lq4cr" Oct 14 10:19:47 crc kubenswrapper[4698]: I1014 10:19:47.681161 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-npv2t\" (UniqueName: \"kubernetes.io/projected/d4559dff-03d5-4c1b-a8df-f8fc0ae935de-kube-api-access-npv2t\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-lq4cr\" (UID: \"d4559dff-03d5-4c1b-a8df-f8fc0ae935de\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-lq4cr" Oct 14 10:19:47 crc kubenswrapper[4698]: I1014 10:19:47.783736 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d4559dff-03d5-4c1b-a8df-f8fc0ae935de-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-lq4cr\" (UID: \"d4559dff-03d5-4c1b-a8df-f8fc0ae935de\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-lq4cr" Oct 14 10:19:47 crc kubenswrapper[4698]: I1014 10:19:47.783822 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-npv2t\" (UniqueName: \"kubernetes.io/projected/d4559dff-03d5-4c1b-a8df-f8fc0ae935de-kube-api-access-npv2t\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-lq4cr\" (UID: \"d4559dff-03d5-4c1b-a8df-f8fc0ae935de\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-lq4cr" Oct 14 10:19:47 crc kubenswrapper[4698]: I1014 10:19:47.783927 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: 
\"kubernetes.io/secret/d4559dff-03d5-4c1b-a8df-f8fc0ae935de-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-lq4cr\" (UID: \"d4559dff-03d5-4c1b-a8df-f8fc0ae935de\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-lq4cr" Oct 14 10:19:47 crc kubenswrapper[4698]: I1014 10:19:47.783966 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/d4559dff-03d5-4c1b-a8df-f8fc0ae935de-ssh-key\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-lq4cr\" (UID: \"d4559dff-03d5-4c1b-a8df-f8fc0ae935de\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-lq4cr" Oct 14 10:19:47 crc kubenswrapper[4698]: I1014 10:19:47.789550 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/d4559dff-03d5-4c1b-a8df-f8fc0ae935de-ssh-key\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-lq4cr\" (UID: \"d4559dff-03d5-4c1b-a8df-f8fc0ae935de\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-lq4cr" Oct 14 10:19:47 crc kubenswrapper[4698]: I1014 10:19:47.789566 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d4559dff-03d5-4c1b-a8df-f8fc0ae935de-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-lq4cr\" (UID: \"d4559dff-03d5-4c1b-a8df-f8fc0ae935de\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-lq4cr" Oct 14 10:19:47 crc kubenswrapper[4698]: I1014 10:19:47.800623 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d4559dff-03d5-4c1b-a8df-f8fc0ae935de-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-lq4cr\" (UID: \"d4559dff-03d5-4c1b-a8df-f8fc0ae935de\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-lq4cr" Oct 14 10:19:47 crc kubenswrapper[4698]: I1014 
10:19:47.803618 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-npv2t\" (UniqueName: \"kubernetes.io/projected/d4559dff-03d5-4c1b-a8df-f8fc0ae935de-kube-api-access-npv2t\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-lq4cr\" (UID: \"d4559dff-03d5-4c1b-a8df-f8fc0ae935de\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-lq4cr" Oct 14 10:19:47 crc kubenswrapper[4698]: I1014 10:19:47.869037 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-lq4cr" Oct 14 10:19:48 crc kubenswrapper[4698]: I1014 10:19:48.401187 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-lq4cr"] Oct 14 10:19:48 crc kubenswrapper[4698]: I1014 10:19:48.438539 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-lq4cr" event={"ID":"d4559dff-03d5-4c1b-a8df-f8fc0ae935de","Type":"ContainerStarted","Data":"1e23e18cd500d926b3e5ec2e9405ae53e3048bd945d2530d33f5ba656d86174f"} Oct 14 10:19:49 crc kubenswrapper[4698]: I1014 10:19:49.452414 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-lq4cr" event={"ID":"d4559dff-03d5-4c1b-a8df-f8fc0ae935de","Type":"ContainerStarted","Data":"360a90457e0db54c8ec77453be38d36900bdd03dbb92411f0858b317ec613e26"} Oct 14 10:19:49 crc kubenswrapper[4698]: I1014 10:19:49.484498 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-lq4cr" podStartSLOduration=1.894128143 podStartE2EDuration="2.48446972s" podCreationTimestamp="2025-10-14 10:19:47 +0000 UTC" firstStartedPulling="2025-10-14 10:19:48.412633907 +0000 UTC m=+1370.109933323" lastFinishedPulling="2025-10-14 10:19:49.002975484 +0000 UTC m=+1370.700274900" observedRunningTime="2025-10-14 10:19:49.474044824 
+0000 UTC m=+1371.171344260" watchObservedRunningTime="2025-10-14 10:19:49.48446972 +0000 UTC m=+1371.181769146" Oct 14 10:20:07 crc kubenswrapper[4698]: I1014 10:20:07.772545 4698 scope.go:117] "RemoveContainer" containerID="f5f875344e0fdb8640ba85a1742236ef4f7c4e163ae348a014acc6c9f06b3ae2" Oct 14 10:20:23 crc kubenswrapper[4698]: I1014 10:20:23.908388 4698 patch_prober.go:28] interesting pod/machine-config-daemon-lp4sk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 14 10:20:23 crc kubenswrapper[4698]: I1014 10:20:23.909131 4698 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-lp4sk" podUID="c359a8fc-1e2f-49af-8da2-719d52bd969a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 14 10:20:53 crc kubenswrapper[4698]: I1014 10:20:53.907969 4698 patch_prober.go:28] interesting pod/machine-config-daemon-lp4sk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 14 10:20:53 crc kubenswrapper[4698]: I1014 10:20:53.908740 4698 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-lp4sk" podUID="c359a8fc-1e2f-49af-8da2-719d52bd969a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 14 10:21:05 crc kubenswrapper[4698]: I1014 10:21:05.721169 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-7gn5n"] Oct 14 10:21:05 crc kubenswrapper[4698]: 
I1014 10:21:05.729069 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-7gn5n" Oct 14 10:21:05 crc kubenswrapper[4698]: I1014 10:21:05.741568 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-7gn5n"] Oct 14 10:21:05 crc kubenswrapper[4698]: I1014 10:21:05.757802 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eef8bfa0-71b2-4fcb-837b-191b1ee5e113-catalog-content\") pod \"redhat-marketplace-7gn5n\" (UID: \"eef8bfa0-71b2-4fcb-837b-191b1ee5e113\") " pod="openshift-marketplace/redhat-marketplace-7gn5n" Oct 14 10:21:05 crc kubenswrapper[4698]: I1014 10:21:05.758024 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eef8bfa0-71b2-4fcb-837b-191b1ee5e113-utilities\") pod \"redhat-marketplace-7gn5n\" (UID: \"eef8bfa0-71b2-4fcb-837b-191b1ee5e113\") " pod="openshift-marketplace/redhat-marketplace-7gn5n" Oct 14 10:21:05 crc kubenswrapper[4698]: I1014 10:21:05.758176 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dfnxg\" (UniqueName: \"kubernetes.io/projected/eef8bfa0-71b2-4fcb-837b-191b1ee5e113-kube-api-access-dfnxg\") pod \"redhat-marketplace-7gn5n\" (UID: \"eef8bfa0-71b2-4fcb-837b-191b1ee5e113\") " pod="openshift-marketplace/redhat-marketplace-7gn5n" Oct 14 10:21:05 crc kubenswrapper[4698]: I1014 10:21:05.859855 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eef8bfa0-71b2-4fcb-837b-191b1ee5e113-utilities\") pod \"redhat-marketplace-7gn5n\" (UID: \"eef8bfa0-71b2-4fcb-837b-191b1ee5e113\") " pod="openshift-marketplace/redhat-marketplace-7gn5n" Oct 14 10:21:05 crc kubenswrapper[4698]: I1014 
10:21:05.860011 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dfnxg\" (UniqueName: \"kubernetes.io/projected/eef8bfa0-71b2-4fcb-837b-191b1ee5e113-kube-api-access-dfnxg\") pod \"redhat-marketplace-7gn5n\" (UID: \"eef8bfa0-71b2-4fcb-837b-191b1ee5e113\") " pod="openshift-marketplace/redhat-marketplace-7gn5n" Oct 14 10:21:05 crc kubenswrapper[4698]: I1014 10:21:05.860049 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eef8bfa0-71b2-4fcb-837b-191b1ee5e113-catalog-content\") pod \"redhat-marketplace-7gn5n\" (UID: \"eef8bfa0-71b2-4fcb-837b-191b1ee5e113\") " pod="openshift-marketplace/redhat-marketplace-7gn5n" Oct 14 10:21:05 crc kubenswrapper[4698]: I1014 10:21:05.860437 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eef8bfa0-71b2-4fcb-837b-191b1ee5e113-utilities\") pod \"redhat-marketplace-7gn5n\" (UID: \"eef8bfa0-71b2-4fcb-837b-191b1ee5e113\") " pod="openshift-marketplace/redhat-marketplace-7gn5n" Oct 14 10:21:05 crc kubenswrapper[4698]: I1014 10:21:05.860446 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eef8bfa0-71b2-4fcb-837b-191b1ee5e113-catalog-content\") pod \"redhat-marketplace-7gn5n\" (UID: \"eef8bfa0-71b2-4fcb-837b-191b1ee5e113\") " pod="openshift-marketplace/redhat-marketplace-7gn5n" Oct 14 10:21:05 crc kubenswrapper[4698]: I1014 10:21:05.880707 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dfnxg\" (UniqueName: \"kubernetes.io/projected/eef8bfa0-71b2-4fcb-837b-191b1ee5e113-kube-api-access-dfnxg\") pod \"redhat-marketplace-7gn5n\" (UID: \"eef8bfa0-71b2-4fcb-837b-191b1ee5e113\") " pod="openshift-marketplace/redhat-marketplace-7gn5n" Oct 14 10:21:06 crc kubenswrapper[4698]: I1014 10:21:06.076569 4698 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-7gn5n" Oct 14 10:21:06 crc kubenswrapper[4698]: I1014 10:21:06.546046 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-7gn5n"] Oct 14 10:21:06 crc kubenswrapper[4698]: E1014 10:21:06.917408 4698 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podeef8bfa0_71b2_4fcb_837b_191b1ee5e113.slice/crio-conmon-999bbe64115bd397446c3ea56a8d473c55d8c9248a530e16117056788066d637.scope\": RecentStats: unable to find data in memory cache]" Oct 14 10:21:07 crc kubenswrapper[4698]: I1014 10:21:07.426917 4698 generic.go:334] "Generic (PLEG): container finished" podID="eef8bfa0-71b2-4fcb-837b-191b1ee5e113" containerID="999bbe64115bd397446c3ea56a8d473c55d8c9248a530e16117056788066d637" exitCode=0 Oct 14 10:21:07 crc kubenswrapper[4698]: I1014 10:21:07.426976 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7gn5n" event={"ID":"eef8bfa0-71b2-4fcb-837b-191b1ee5e113","Type":"ContainerDied","Data":"999bbe64115bd397446c3ea56a8d473c55d8c9248a530e16117056788066d637"} Oct 14 10:21:07 crc kubenswrapper[4698]: I1014 10:21:07.427822 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7gn5n" event={"ID":"eef8bfa0-71b2-4fcb-837b-191b1ee5e113","Type":"ContainerStarted","Data":"88462940abd11b80c35ee038c0da5a2f7d571dd1fd93c3f0eb99ff86ecbcecc5"} Oct 14 10:21:07 crc kubenswrapper[4698]: I1014 10:21:07.861254 4698 scope.go:117] "RemoveContainer" containerID="393471ee803b2f6bdb94dbb502c32fa759670f44814d5f995e9836fa400b1b05" Oct 14 10:21:07 crc kubenswrapper[4698]: I1014 10:21:07.895631 4698 scope.go:117] "RemoveContainer" containerID="fe1ae4f98d73812377657f7d3d462daa5749aeb44eca817da5e339d837c16a39" Oct 14 10:21:07 crc kubenswrapper[4698]: 
I1014 10:21:07.928985 4698 scope.go:117] "RemoveContainer" containerID="883639fa3508d2924a6bfc2cb0fdbdd2dee926e8b7c81c063de63f0ac38f5194" Oct 14 10:21:07 crc kubenswrapper[4698]: I1014 10:21:07.990159 4698 scope.go:117] "RemoveContainer" containerID="386102c65388fad9deb6f3de090829e956a0f05bec6707acaabf79a9eb363e43" Oct 14 10:21:08 crc kubenswrapper[4698]: I1014 10:21:08.042054 4698 scope.go:117] "RemoveContainer" containerID="e594988c974ff543f9c2d584ca251398c27ef372098425e5b40ffa91a25c8a5c" Oct 14 10:21:08 crc kubenswrapper[4698]: I1014 10:21:08.258395 4698 scope.go:117] "RemoveContainer" containerID="a3e7f024917d736a019536c08327e3cfcb12b76ce64cd1734049107a960bd061" Oct 14 10:21:09 crc kubenswrapper[4698]: I1014 10:21:09.455790 4698 generic.go:334] "Generic (PLEG): container finished" podID="eef8bfa0-71b2-4fcb-837b-191b1ee5e113" containerID="86d2365b317fadfca62415f064d43bf4c6b0b380981a6daeac3002d01751e3be" exitCode=0 Oct 14 10:21:09 crc kubenswrapper[4698]: I1014 10:21:09.456803 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7gn5n" event={"ID":"eef8bfa0-71b2-4fcb-837b-191b1ee5e113","Type":"ContainerDied","Data":"86d2365b317fadfca62415f064d43bf4c6b0b380981a6daeac3002d01751e3be"} Oct 14 10:21:10 crc kubenswrapper[4698]: I1014 10:21:10.472805 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7gn5n" event={"ID":"eef8bfa0-71b2-4fcb-837b-191b1ee5e113","Type":"ContainerStarted","Data":"1a7b2f6bfadb107f44c0cd52fdf1a71340ad7980ee40602f2de34cca1bcda1c3"} Oct 14 10:21:10 crc kubenswrapper[4698]: I1014 10:21:10.499247 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-7gn5n" podStartSLOduration=2.747420756 podStartE2EDuration="5.499217024s" podCreationTimestamp="2025-10-14 10:21:05 +0000 UTC" firstStartedPulling="2025-10-14 10:21:07.429506986 +0000 UTC m=+1449.126806402" lastFinishedPulling="2025-10-14 
10:21:10.181303224 +0000 UTC m=+1451.878602670" observedRunningTime="2025-10-14 10:21:10.493180502 +0000 UTC m=+1452.190479958" watchObservedRunningTime="2025-10-14 10:21:10.499217024 +0000 UTC m=+1452.196516470" Oct 14 10:21:12 crc kubenswrapper[4698]: I1014 10:21:12.343885 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-f9shb"] Oct 14 10:21:12 crc kubenswrapper[4698]: I1014 10:21:12.347051 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-f9shb" Oct 14 10:21:12 crc kubenswrapper[4698]: I1014 10:21:12.361497 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-f9shb"] Oct 14 10:21:12 crc kubenswrapper[4698]: I1014 10:21:12.520644 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4687a5dd-eacd-4576-bea3-fd589e91ae56-catalog-content\") pod \"community-operators-f9shb\" (UID: \"4687a5dd-eacd-4576-bea3-fd589e91ae56\") " pod="openshift-marketplace/community-operators-f9shb" Oct 14 10:21:12 crc kubenswrapper[4698]: I1014 10:21:12.520740 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2hz9b\" (UniqueName: \"kubernetes.io/projected/4687a5dd-eacd-4576-bea3-fd589e91ae56-kube-api-access-2hz9b\") pod \"community-operators-f9shb\" (UID: \"4687a5dd-eacd-4576-bea3-fd589e91ae56\") " pod="openshift-marketplace/community-operators-f9shb" Oct 14 10:21:12 crc kubenswrapper[4698]: I1014 10:21:12.520852 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4687a5dd-eacd-4576-bea3-fd589e91ae56-utilities\") pod \"community-operators-f9shb\" (UID: \"4687a5dd-eacd-4576-bea3-fd589e91ae56\") " pod="openshift-marketplace/community-operators-f9shb" 
Oct 14 10:21:12 crc kubenswrapper[4698]: I1014 10:21:12.622994 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4687a5dd-eacd-4576-bea3-fd589e91ae56-catalog-content\") pod \"community-operators-f9shb\" (UID: \"4687a5dd-eacd-4576-bea3-fd589e91ae56\") " pod="openshift-marketplace/community-operators-f9shb" Oct 14 10:21:12 crc kubenswrapper[4698]: I1014 10:21:12.623106 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2hz9b\" (UniqueName: \"kubernetes.io/projected/4687a5dd-eacd-4576-bea3-fd589e91ae56-kube-api-access-2hz9b\") pod \"community-operators-f9shb\" (UID: \"4687a5dd-eacd-4576-bea3-fd589e91ae56\") " pod="openshift-marketplace/community-operators-f9shb" Oct 14 10:21:12 crc kubenswrapper[4698]: I1014 10:21:12.623181 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4687a5dd-eacd-4576-bea3-fd589e91ae56-utilities\") pod \"community-operators-f9shb\" (UID: \"4687a5dd-eacd-4576-bea3-fd589e91ae56\") " pod="openshift-marketplace/community-operators-f9shb" Oct 14 10:21:12 crc kubenswrapper[4698]: I1014 10:21:12.623600 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4687a5dd-eacd-4576-bea3-fd589e91ae56-catalog-content\") pod \"community-operators-f9shb\" (UID: \"4687a5dd-eacd-4576-bea3-fd589e91ae56\") " pod="openshift-marketplace/community-operators-f9shb" Oct 14 10:21:12 crc kubenswrapper[4698]: I1014 10:21:12.623800 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4687a5dd-eacd-4576-bea3-fd589e91ae56-utilities\") pod \"community-operators-f9shb\" (UID: \"4687a5dd-eacd-4576-bea3-fd589e91ae56\") " pod="openshift-marketplace/community-operators-f9shb" Oct 14 10:21:12 crc kubenswrapper[4698]: 
I1014 10:21:12.654934 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2hz9b\" (UniqueName: \"kubernetes.io/projected/4687a5dd-eacd-4576-bea3-fd589e91ae56-kube-api-access-2hz9b\") pod \"community-operators-f9shb\" (UID: \"4687a5dd-eacd-4576-bea3-fd589e91ae56\") " pod="openshift-marketplace/community-operators-f9shb" Oct 14 10:21:12 crc kubenswrapper[4698]: I1014 10:21:12.712865 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-f9shb" Oct 14 10:21:13 crc kubenswrapper[4698]: W1014 10:21:13.258213 4698 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4687a5dd_eacd_4576_bea3_fd589e91ae56.slice/crio-efec5b52c7a7775986418195819294d2ea796889f8ee2c36c6215d2a8f997c51 WatchSource:0}: Error finding container efec5b52c7a7775986418195819294d2ea796889f8ee2c36c6215d2a8f997c51: Status 404 returned error can't find the container with id efec5b52c7a7775986418195819294d2ea796889f8ee2c36c6215d2a8f997c51 Oct 14 10:21:13 crc kubenswrapper[4698]: I1014 10:21:13.262188 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-f9shb"] Oct 14 10:21:13 crc kubenswrapper[4698]: I1014 10:21:13.505616 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-f9shb" event={"ID":"4687a5dd-eacd-4576-bea3-fd589e91ae56","Type":"ContainerStarted","Data":"9c2787a21ca35d492da0d008f71c8deec619196e8c15e705dac44191af76b732"} Oct 14 10:21:13 crc kubenswrapper[4698]: I1014 10:21:13.505659 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-f9shb" event={"ID":"4687a5dd-eacd-4576-bea3-fd589e91ae56","Type":"ContainerStarted","Data":"efec5b52c7a7775986418195819294d2ea796889f8ee2c36c6215d2a8f997c51"} Oct 14 10:21:14 crc kubenswrapper[4698]: I1014 10:21:14.532673 4698 generic.go:334] 
"Generic (PLEG): container finished" podID="4687a5dd-eacd-4576-bea3-fd589e91ae56" containerID="9c2787a21ca35d492da0d008f71c8deec619196e8c15e705dac44191af76b732" exitCode=0 Oct 14 10:21:14 crc kubenswrapper[4698]: I1014 10:21:14.532755 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-f9shb" event={"ID":"4687a5dd-eacd-4576-bea3-fd589e91ae56","Type":"ContainerDied","Data":"9c2787a21ca35d492da0d008f71c8deec619196e8c15e705dac44191af76b732"} Oct 14 10:21:15 crc kubenswrapper[4698]: I1014 10:21:15.543748 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-f9shb" event={"ID":"4687a5dd-eacd-4576-bea3-fd589e91ae56","Type":"ContainerStarted","Data":"bd445f90fbef3f9991c75a7d744a8eff9c579821b14bcc66d28fa30dff04fb69"} Oct 14 10:21:16 crc kubenswrapper[4698]: I1014 10:21:16.077433 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-7gn5n" Oct 14 10:21:16 crc kubenswrapper[4698]: I1014 10:21:16.077909 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-7gn5n" Oct 14 10:21:16 crc kubenswrapper[4698]: I1014 10:21:16.129789 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-7gn5n" Oct 14 10:21:16 crc kubenswrapper[4698]: I1014 10:21:16.556494 4698 generic.go:334] "Generic (PLEG): container finished" podID="4687a5dd-eacd-4576-bea3-fd589e91ae56" containerID="bd445f90fbef3f9991c75a7d744a8eff9c579821b14bcc66d28fa30dff04fb69" exitCode=0 Oct 14 10:21:16 crc kubenswrapper[4698]: I1014 10:21:16.556559 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-f9shb" event={"ID":"4687a5dd-eacd-4576-bea3-fd589e91ae56","Type":"ContainerDied","Data":"bd445f90fbef3f9991c75a7d744a8eff9c579821b14bcc66d28fa30dff04fb69"} Oct 14 10:21:16 crc 
kubenswrapper[4698]: I1014 10:21:16.638429 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-7gn5n" Oct 14 10:21:17 crc kubenswrapper[4698]: I1014 10:21:17.571894 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-f9shb" event={"ID":"4687a5dd-eacd-4576-bea3-fd589e91ae56","Type":"ContainerStarted","Data":"dc9583de4876b0014507ca8c23ffbcf1cc5a58288a243922a4c1872fd6eb71bf"} Oct 14 10:21:17 crc kubenswrapper[4698]: I1014 10:21:17.604427 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-f9shb" podStartSLOduration=3.05775524 podStartE2EDuration="5.604396662s" podCreationTimestamp="2025-10-14 10:21:12 +0000 UTC" firstStartedPulling="2025-10-14 10:21:14.53808123 +0000 UTC m=+1456.235380686" lastFinishedPulling="2025-10-14 10:21:17.084722692 +0000 UTC m=+1458.782022108" observedRunningTime="2025-10-14 10:21:17.599483132 +0000 UTC m=+1459.296782588" watchObservedRunningTime="2025-10-14 10:21:17.604396662 +0000 UTC m=+1459.301696108" Oct 14 10:21:18 crc kubenswrapper[4698]: I1014 10:21:18.474383 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-7gn5n"] Oct 14 10:21:18 crc kubenswrapper[4698]: I1014 10:21:18.581840 4698 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-7gn5n" podUID="eef8bfa0-71b2-4fcb-837b-191b1ee5e113" containerName="registry-server" containerID="cri-o://1a7b2f6bfadb107f44c0cd52fdf1a71340ad7980ee40602f2de34cca1bcda1c3" gracePeriod=2 Oct 14 10:21:19 crc kubenswrapper[4698]: I1014 10:21:19.098291 4698 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-7gn5n" Oct 14 10:21:19 crc kubenswrapper[4698]: I1014 10:21:19.290051 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eef8bfa0-71b2-4fcb-837b-191b1ee5e113-utilities\") pod \"eef8bfa0-71b2-4fcb-837b-191b1ee5e113\" (UID: \"eef8bfa0-71b2-4fcb-837b-191b1ee5e113\") " Oct 14 10:21:19 crc kubenswrapper[4698]: I1014 10:21:19.290166 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dfnxg\" (UniqueName: \"kubernetes.io/projected/eef8bfa0-71b2-4fcb-837b-191b1ee5e113-kube-api-access-dfnxg\") pod \"eef8bfa0-71b2-4fcb-837b-191b1ee5e113\" (UID: \"eef8bfa0-71b2-4fcb-837b-191b1ee5e113\") " Oct 14 10:21:19 crc kubenswrapper[4698]: I1014 10:21:19.290210 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eef8bfa0-71b2-4fcb-837b-191b1ee5e113-catalog-content\") pod \"eef8bfa0-71b2-4fcb-837b-191b1ee5e113\" (UID: \"eef8bfa0-71b2-4fcb-837b-191b1ee5e113\") " Oct 14 10:21:19 crc kubenswrapper[4698]: I1014 10:21:19.290665 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/eef8bfa0-71b2-4fcb-837b-191b1ee5e113-utilities" (OuterVolumeSpecName: "utilities") pod "eef8bfa0-71b2-4fcb-837b-191b1ee5e113" (UID: "eef8bfa0-71b2-4fcb-837b-191b1ee5e113"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 10:21:19 crc kubenswrapper[4698]: I1014 10:21:19.291246 4698 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eef8bfa0-71b2-4fcb-837b-191b1ee5e113-utilities\") on node \"crc\" DevicePath \"\"" Oct 14 10:21:19 crc kubenswrapper[4698]: I1014 10:21:19.298084 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eef8bfa0-71b2-4fcb-837b-191b1ee5e113-kube-api-access-dfnxg" (OuterVolumeSpecName: "kube-api-access-dfnxg") pod "eef8bfa0-71b2-4fcb-837b-191b1ee5e113" (UID: "eef8bfa0-71b2-4fcb-837b-191b1ee5e113"). InnerVolumeSpecName "kube-api-access-dfnxg". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 10:21:19 crc kubenswrapper[4698]: I1014 10:21:19.316734 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/eef8bfa0-71b2-4fcb-837b-191b1ee5e113-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "eef8bfa0-71b2-4fcb-837b-191b1ee5e113" (UID: "eef8bfa0-71b2-4fcb-837b-191b1ee5e113"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 10:21:19 crc kubenswrapper[4698]: I1014 10:21:19.392794 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dfnxg\" (UniqueName: \"kubernetes.io/projected/eef8bfa0-71b2-4fcb-837b-191b1ee5e113-kube-api-access-dfnxg\") on node \"crc\" DevicePath \"\"" Oct 14 10:21:19 crc kubenswrapper[4698]: I1014 10:21:19.393123 4698 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eef8bfa0-71b2-4fcb-837b-191b1ee5e113-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 14 10:21:19 crc kubenswrapper[4698]: I1014 10:21:19.593622 4698 generic.go:334] "Generic (PLEG): container finished" podID="eef8bfa0-71b2-4fcb-837b-191b1ee5e113" containerID="1a7b2f6bfadb107f44c0cd52fdf1a71340ad7980ee40602f2de34cca1bcda1c3" exitCode=0 Oct 14 10:21:19 crc kubenswrapper[4698]: I1014 10:21:19.593672 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7gn5n" event={"ID":"eef8bfa0-71b2-4fcb-837b-191b1ee5e113","Type":"ContainerDied","Data":"1a7b2f6bfadb107f44c0cd52fdf1a71340ad7980ee40602f2de34cca1bcda1c3"} Oct 14 10:21:19 crc kubenswrapper[4698]: I1014 10:21:19.593715 4698 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-7gn5n" Oct 14 10:21:19 crc kubenswrapper[4698]: I1014 10:21:19.593738 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7gn5n" event={"ID":"eef8bfa0-71b2-4fcb-837b-191b1ee5e113","Type":"ContainerDied","Data":"88462940abd11b80c35ee038c0da5a2f7d571dd1fd93c3f0eb99ff86ecbcecc5"} Oct 14 10:21:19 crc kubenswrapper[4698]: I1014 10:21:19.593775 4698 scope.go:117] "RemoveContainer" containerID="1a7b2f6bfadb107f44c0cd52fdf1a71340ad7980ee40602f2de34cca1bcda1c3" Oct 14 10:21:19 crc kubenswrapper[4698]: I1014 10:21:19.654211 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-7gn5n"] Oct 14 10:21:19 crc kubenswrapper[4698]: I1014 10:21:19.657010 4698 scope.go:117] "RemoveContainer" containerID="86d2365b317fadfca62415f064d43bf4c6b0b380981a6daeac3002d01751e3be" Oct 14 10:21:19 crc kubenswrapper[4698]: I1014 10:21:19.665730 4698 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-7gn5n"] Oct 14 10:21:19 crc kubenswrapper[4698]: I1014 10:21:19.683533 4698 scope.go:117] "RemoveContainer" containerID="999bbe64115bd397446c3ea56a8d473c55d8c9248a530e16117056788066d637" Oct 14 10:21:19 crc kubenswrapper[4698]: I1014 10:21:19.738556 4698 scope.go:117] "RemoveContainer" containerID="1a7b2f6bfadb107f44c0cd52fdf1a71340ad7980ee40602f2de34cca1bcda1c3" Oct 14 10:21:19 crc kubenswrapper[4698]: E1014 10:21:19.739224 4698 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1a7b2f6bfadb107f44c0cd52fdf1a71340ad7980ee40602f2de34cca1bcda1c3\": container with ID starting with 1a7b2f6bfadb107f44c0cd52fdf1a71340ad7980ee40602f2de34cca1bcda1c3 not found: ID does not exist" containerID="1a7b2f6bfadb107f44c0cd52fdf1a71340ad7980ee40602f2de34cca1bcda1c3" Oct 14 10:21:19 crc kubenswrapper[4698]: I1014 10:21:19.739262 4698 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1a7b2f6bfadb107f44c0cd52fdf1a71340ad7980ee40602f2de34cca1bcda1c3"} err="failed to get container status \"1a7b2f6bfadb107f44c0cd52fdf1a71340ad7980ee40602f2de34cca1bcda1c3\": rpc error: code = NotFound desc = could not find container \"1a7b2f6bfadb107f44c0cd52fdf1a71340ad7980ee40602f2de34cca1bcda1c3\": container with ID starting with 1a7b2f6bfadb107f44c0cd52fdf1a71340ad7980ee40602f2de34cca1bcda1c3 not found: ID does not exist" Oct 14 10:21:19 crc kubenswrapper[4698]: I1014 10:21:19.739287 4698 scope.go:117] "RemoveContainer" containerID="86d2365b317fadfca62415f064d43bf4c6b0b380981a6daeac3002d01751e3be" Oct 14 10:21:19 crc kubenswrapper[4698]: E1014 10:21:19.739717 4698 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"86d2365b317fadfca62415f064d43bf4c6b0b380981a6daeac3002d01751e3be\": container with ID starting with 86d2365b317fadfca62415f064d43bf4c6b0b380981a6daeac3002d01751e3be not found: ID does not exist" containerID="86d2365b317fadfca62415f064d43bf4c6b0b380981a6daeac3002d01751e3be" Oct 14 10:21:19 crc kubenswrapper[4698]: I1014 10:21:19.739788 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"86d2365b317fadfca62415f064d43bf4c6b0b380981a6daeac3002d01751e3be"} err="failed to get container status \"86d2365b317fadfca62415f064d43bf4c6b0b380981a6daeac3002d01751e3be\": rpc error: code = NotFound desc = could not find container \"86d2365b317fadfca62415f064d43bf4c6b0b380981a6daeac3002d01751e3be\": container with ID starting with 86d2365b317fadfca62415f064d43bf4c6b0b380981a6daeac3002d01751e3be not found: ID does not exist" Oct 14 10:21:19 crc kubenswrapper[4698]: I1014 10:21:19.739808 4698 scope.go:117] "RemoveContainer" containerID="999bbe64115bd397446c3ea56a8d473c55d8c9248a530e16117056788066d637" Oct 14 10:21:19 crc kubenswrapper[4698]: E1014 
10:21:19.740298 4698 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"999bbe64115bd397446c3ea56a8d473c55d8c9248a530e16117056788066d637\": container with ID starting with 999bbe64115bd397446c3ea56a8d473c55d8c9248a530e16117056788066d637 not found: ID does not exist" containerID="999bbe64115bd397446c3ea56a8d473c55d8c9248a530e16117056788066d637" Oct 14 10:21:19 crc kubenswrapper[4698]: I1014 10:21:19.740326 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"999bbe64115bd397446c3ea56a8d473c55d8c9248a530e16117056788066d637"} err="failed to get container status \"999bbe64115bd397446c3ea56a8d473c55d8c9248a530e16117056788066d637\": rpc error: code = NotFound desc = could not find container \"999bbe64115bd397446c3ea56a8d473c55d8c9248a530e16117056788066d637\": container with ID starting with 999bbe64115bd397446c3ea56a8d473c55d8c9248a530e16117056788066d637 not found: ID does not exist" Oct 14 10:21:21 crc kubenswrapper[4698]: I1014 10:21:21.030160 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="eef8bfa0-71b2-4fcb-837b-191b1ee5e113" path="/var/lib/kubelet/pods/eef8bfa0-71b2-4fcb-837b-191b1ee5e113/volumes" Oct 14 10:21:22 crc kubenswrapper[4698]: I1014 10:21:22.713422 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-f9shb" Oct 14 10:21:22 crc kubenswrapper[4698]: I1014 10:21:22.713853 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-f9shb" Oct 14 10:21:22 crc kubenswrapper[4698]: I1014 10:21:22.786231 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-f9shb" Oct 14 10:21:23 crc kubenswrapper[4698]: I1014 10:21:23.710530 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-marketplace/community-operators-f9shb" Oct 14 10:21:23 crc kubenswrapper[4698]: I1014 10:21:23.762957 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-f9shb"] Oct 14 10:21:23 crc kubenswrapper[4698]: I1014 10:21:23.908719 4698 patch_prober.go:28] interesting pod/machine-config-daemon-lp4sk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 14 10:21:23 crc kubenswrapper[4698]: I1014 10:21:23.908853 4698 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-lp4sk" podUID="c359a8fc-1e2f-49af-8da2-719d52bd969a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 14 10:21:23 crc kubenswrapper[4698]: I1014 10:21:23.908936 4698 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-lp4sk" Oct 14 10:21:23 crc kubenswrapper[4698]: I1014 10:21:23.910328 4698 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"63010ab0cc5421cf695e29fbbb1f6887fbbb050b898692330d5d62f331b0158a"} pod="openshift-machine-config-operator/machine-config-daemon-lp4sk" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Oct 14 10:21:23 crc kubenswrapper[4698]: I1014 10:21:23.910432 4698 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-lp4sk" podUID="c359a8fc-1e2f-49af-8da2-719d52bd969a" containerName="machine-config-daemon" containerID="cri-o://63010ab0cc5421cf695e29fbbb1f6887fbbb050b898692330d5d62f331b0158a" 
gracePeriod=600 Oct 14 10:21:24 crc kubenswrapper[4698]: I1014 10:21:24.660571 4698 generic.go:334] "Generic (PLEG): container finished" podID="c359a8fc-1e2f-49af-8da2-719d52bd969a" containerID="63010ab0cc5421cf695e29fbbb1f6887fbbb050b898692330d5d62f331b0158a" exitCode=0 Oct 14 10:21:24 crc kubenswrapper[4698]: I1014 10:21:24.660687 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-lp4sk" event={"ID":"c359a8fc-1e2f-49af-8da2-719d52bd969a","Type":"ContainerDied","Data":"63010ab0cc5421cf695e29fbbb1f6887fbbb050b898692330d5d62f331b0158a"} Oct 14 10:21:24 crc kubenswrapper[4698]: I1014 10:21:24.661327 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-lp4sk" event={"ID":"c359a8fc-1e2f-49af-8da2-719d52bd969a","Type":"ContainerStarted","Data":"87bba9ea9ee8dd67f565bc4ea643f9394d7612f02668f96d9a0e0f0371644851"} Oct 14 10:21:24 crc kubenswrapper[4698]: I1014 10:21:24.661363 4698 scope.go:117] "RemoveContainer" containerID="a4afbcf56453a0a6e9f269b4b6668c5bb2f9345d8d8d81fe69dd3ad317e2716b" Oct 14 10:21:25 crc kubenswrapper[4698]: I1014 10:21:25.689470 4698 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-f9shb" podUID="4687a5dd-eacd-4576-bea3-fd589e91ae56" containerName="registry-server" containerID="cri-o://dc9583de4876b0014507ca8c23ffbcf1cc5a58288a243922a4c1872fd6eb71bf" gracePeriod=2 Oct 14 10:21:26 crc kubenswrapper[4698]: I1014 10:21:26.225425 4698 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-f9shb" Oct 14 10:21:26 crc kubenswrapper[4698]: I1014 10:21:26.356816 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2hz9b\" (UniqueName: \"kubernetes.io/projected/4687a5dd-eacd-4576-bea3-fd589e91ae56-kube-api-access-2hz9b\") pod \"4687a5dd-eacd-4576-bea3-fd589e91ae56\" (UID: \"4687a5dd-eacd-4576-bea3-fd589e91ae56\") " Oct 14 10:21:26 crc kubenswrapper[4698]: I1014 10:21:26.356932 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4687a5dd-eacd-4576-bea3-fd589e91ae56-catalog-content\") pod \"4687a5dd-eacd-4576-bea3-fd589e91ae56\" (UID: \"4687a5dd-eacd-4576-bea3-fd589e91ae56\") " Oct 14 10:21:26 crc kubenswrapper[4698]: I1014 10:21:26.356978 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4687a5dd-eacd-4576-bea3-fd589e91ae56-utilities\") pod \"4687a5dd-eacd-4576-bea3-fd589e91ae56\" (UID: \"4687a5dd-eacd-4576-bea3-fd589e91ae56\") " Oct 14 10:21:26 crc kubenswrapper[4698]: I1014 10:21:26.358549 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4687a5dd-eacd-4576-bea3-fd589e91ae56-utilities" (OuterVolumeSpecName: "utilities") pod "4687a5dd-eacd-4576-bea3-fd589e91ae56" (UID: "4687a5dd-eacd-4576-bea3-fd589e91ae56"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 10:21:26 crc kubenswrapper[4698]: I1014 10:21:26.363889 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4687a5dd-eacd-4576-bea3-fd589e91ae56-kube-api-access-2hz9b" (OuterVolumeSpecName: "kube-api-access-2hz9b") pod "4687a5dd-eacd-4576-bea3-fd589e91ae56" (UID: "4687a5dd-eacd-4576-bea3-fd589e91ae56"). InnerVolumeSpecName "kube-api-access-2hz9b". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 10:21:26 crc kubenswrapper[4698]: I1014 10:21:26.437219 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4687a5dd-eacd-4576-bea3-fd589e91ae56-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "4687a5dd-eacd-4576-bea3-fd589e91ae56" (UID: "4687a5dd-eacd-4576-bea3-fd589e91ae56"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 10:21:26 crc kubenswrapper[4698]: I1014 10:21:26.460037 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2hz9b\" (UniqueName: \"kubernetes.io/projected/4687a5dd-eacd-4576-bea3-fd589e91ae56-kube-api-access-2hz9b\") on node \"crc\" DevicePath \"\"" Oct 14 10:21:26 crc kubenswrapper[4698]: I1014 10:21:26.460077 4698 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4687a5dd-eacd-4576-bea3-fd589e91ae56-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 14 10:21:26 crc kubenswrapper[4698]: I1014 10:21:26.460090 4698 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4687a5dd-eacd-4576-bea3-fd589e91ae56-utilities\") on node \"crc\" DevicePath \"\"" Oct 14 10:21:26 crc kubenswrapper[4698]: I1014 10:21:26.705647 4698 generic.go:334] "Generic (PLEG): container finished" podID="4687a5dd-eacd-4576-bea3-fd589e91ae56" containerID="dc9583de4876b0014507ca8c23ffbcf1cc5a58288a243922a4c1872fd6eb71bf" exitCode=0 Oct 14 10:21:26 crc kubenswrapper[4698]: I1014 10:21:26.705711 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-f9shb" event={"ID":"4687a5dd-eacd-4576-bea3-fd589e91ae56","Type":"ContainerDied","Data":"dc9583de4876b0014507ca8c23ffbcf1cc5a58288a243922a4c1872fd6eb71bf"} Oct 14 10:21:26 crc kubenswrapper[4698]: I1014 10:21:26.706060 4698 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-marketplace/community-operators-f9shb" event={"ID":"4687a5dd-eacd-4576-bea3-fd589e91ae56","Type":"ContainerDied","Data":"efec5b52c7a7775986418195819294d2ea796889f8ee2c36c6215d2a8f997c51"} Oct 14 10:21:26 crc kubenswrapper[4698]: I1014 10:21:26.705847 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-f9shb" Oct 14 10:21:26 crc kubenswrapper[4698]: I1014 10:21:26.706083 4698 scope.go:117] "RemoveContainer" containerID="dc9583de4876b0014507ca8c23ffbcf1cc5a58288a243922a4c1872fd6eb71bf" Oct 14 10:21:26 crc kubenswrapper[4698]: I1014 10:21:26.730703 4698 scope.go:117] "RemoveContainer" containerID="bd445f90fbef3f9991c75a7d744a8eff9c579821b14bcc66d28fa30dff04fb69" Oct 14 10:21:26 crc kubenswrapper[4698]: I1014 10:21:26.757176 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-f9shb"] Oct 14 10:21:26 crc kubenswrapper[4698]: I1014 10:21:26.772112 4698 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-f9shb"] Oct 14 10:21:26 crc kubenswrapper[4698]: I1014 10:21:26.850925 4698 scope.go:117] "RemoveContainer" containerID="9c2787a21ca35d492da0d008f71c8deec619196e8c15e705dac44191af76b732" Oct 14 10:21:26 crc kubenswrapper[4698]: I1014 10:21:26.873053 4698 scope.go:117] "RemoveContainer" containerID="dc9583de4876b0014507ca8c23ffbcf1cc5a58288a243922a4c1872fd6eb71bf" Oct 14 10:21:26 crc kubenswrapper[4698]: E1014 10:21:26.873502 4698 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dc9583de4876b0014507ca8c23ffbcf1cc5a58288a243922a4c1872fd6eb71bf\": container with ID starting with dc9583de4876b0014507ca8c23ffbcf1cc5a58288a243922a4c1872fd6eb71bf not found: ID does not exist" containerID="dc9583de4876b0014507ca8c23ffbcf1cc5a58288a243922a4c1872fd6eb71bf" Oct 14 10:21:26 crc kubenswrapper[4698]: I1014 
10:21:26.873540 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dc9583de4876b0014507ca8c23ffbcf1cc5a58288a243922a4c1872fd6eb71bf"} err="failed to get container status \"dc9583de4876b0014507ca8c23ffbcf1cc5a58288a243922a4c1872fd6eb71bf\": rpc error: code = NotFound desc = could not find container \"dc9583de4876b0014507ca8c23ffbcf1cc5a58288a243922a4c1872fd6eb71bf\": container with ID starting with dc9583de4876b0014507ca8c23ffbcf1cc5a58288a243922a4c1872fd6eb71bf not found: ID does not exist" Oct 14 10:21:26 crc kubenswrapper[4698]: I1014 10:21:26.873568 4698 scope.go:117] "RemoveContainer" containerID="bd445f90fbef3f9991c75a7d744a8eff9c579821b14bcc66d28fa30dff04fb69" Oct 14 10:21:26 crc kubenswrapper[4698]: E1014 10:21:26.873939 4698 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bd445f90fbef3f9991c75a7d744a8eff9c579821b14bcc66d28fa30dff04fb69\": container with ID starting with bd445f90fbef3f9991c75a7d744a8eff9c579821b14bcc66d28fa30dff04fb69 not found: ID does not exist" containerID="bd445f90fbef3f9991c75a7d744a8eff9c579821b14bcc66d28fa30dff04fb69" Oct 14 10:21:26 crc kubenswrapper[4698]: I1014 10:21:26.874008 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bd445f90fbef3f9991c75a7d744a8eff9c579821b14bcc66d28fa30dff04fb69"} err="failed to get container status \"bd445f90fbef3f9991c75a7d744a8eff9c579821b14bcc66d28fa30dff04fb69\": rpc error: code = NotFound desc = could not find container \"bd445f90fbef3f9991c75a7d744a8eff9c579821b14bcc66d28fa30dff04fb69\": container with ID starting with bd445f90fbef3f9991c75a7d744a8eff9c579821b14bcc66d28fa30dff04fb69 not found: ID does not exist" Oct 14 10:21:26 crc kubenswrapper[4698]: I1014 10:21:26.874049 4698 scope.go:117] "RemoveContainer" containerID="9c2787a21ca35d492da0d008f71c8deec619196e8c15e705dac44191af76b732" Oct 14 10:21:26 crc 
kubenswrapper[4698]: E1014 10:21:26.874540 4698 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9c2787a21ca35d492da0d008f71c8deec619196e8c15e705dac44191af76b732\": container with ID starting with 9c2787a21ca35d492da0d008f71c8deec619196e8c15e705dac44191af76b732 not found: ID does not exist" containerID="9c2787a21ca35d492da0d008f71c8deec619196e8c15e705dac44191af76b732" Oct 14 10:21:26 crc kubenswrapper[4698]: I1014 10:21:26.874593 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9c2787a21ca35d492da0d008f71c8deec619196e8c15e705dac44191af76b732"} err="failed to get container status \"9c2787a21ca35d492da0d008f71c8deec619196e8c15e705dac44191af76b732\": rpc error: code = NotFound desc = could not find container \"9c2787a21ca35d492da0d008f71c8deec619196e8c15e705dac44191af76b732\": container with ID starting with 9c2787a21ca35d492da0d008f71c8deec619196e8c15e705dac44191af76b732 not found: ID does not exist" Oct 14 10:21:27 crc kubenswrapper[4698]: I1014 10:21:27.036161 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4687a5dd-eacd-4576-bea3-fd589e91ae56" path="/var/lib/kubelet/pods/4687a5dd-eacd-4576-bea3-fd589e91ae56/volumes" Oct 14 10:21:45 crc kubenswrapper[4698]: I1014 10:21:45.348149 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-kj7cb"] Oct 14 10:21:45 crc kubenswrapper[4698]: E1014 10:21:45.349530 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eef8bfa0-71b2-4fcb-837b-191b1ee5e113" containerName="extract-content" Oct 14 10:21:45 crc kubenswrapper[4698]: I1014 10:21:45.349549 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="eef8bfa0-71b2-4fcb-837b-191b1ee5e113" containerName="extract-content" Oct 14 10:21:45 crc kubenswrapper[4698]: E1014 10:21:45.349563 4698 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="eef8bfa0-71b2-4fcb-837b-191b1ee5e113" containerName="registry-server" Oct 14 10:21:45 crc kubenswrapper[4698]: I1014 10:21:45.349571 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="eef8bfa0-71b2-4fcb-837b-191b1ee5e113" containerName="registry-server" Oct 14 10:21:45 crc kubenswrapper[4698]: E1014 10:21:45.349586 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4687a5dd-eacd-4576-bea3-fd589e91ae56" containerName="extract-content" Oct 14 10:21:45 crc kubenswrapper[4698]: I1014 10:21:45.349594 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="4687a5dd-eacd-4576-bea3-fd589e91ae56" containerName="extract-content" Oct 14 10:21:45 crc kubenswrapper[4698]: E1014 10:21:45.349621 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4687a5dd-eacd-4576-bea3-fd589e91ae56" containerName="extract-utilities" Oct 14 10:21:45 crc kubenswrapper[4698]: I1014 10:21:45.349628 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="4687a5dd-eacd-4576-bea3-fd589e91ae56" containerName="extract-utilities" Oct 14 10:21:45 crc kubenswrapper[4698]: E1014 10:21:45.349641 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4687a5dd-eacd-4576-bea3-fd589e91ae56" containerName="registry-server" Oct 14 10:21:45 crc kubenswrapper[4698]: I1014 10:21:45.349649 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="4687a5dd-eacd-4576-bea3-fd589e91ae56" containerName="registry-server" Oct 14 10:21:45 crc kubenswrapper[4698]: E1014 10:21:45.349675 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eef8bfa0-71b2-4fcb-837b-191b1ee5e113" containerName="extract-utilities" Oct 14 10:21:45 crc kubenswrapper[4698]: I1014 10:21:45.349684 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="eef8bfa0-71b2-4fcb-837b-191b1ee5e113" containerName="extract-utilities" Oct 14 10:21:45 crc kubenswrapper[4698]: I1014 10:21:45.349960 4698 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="4687a5dd-eacd-4576-bea3-fd589e91ae56" containerName="registry-server" Oct 14 10:21:45 crc kubenswrapper[4698]: I1014 10:21:45.349980 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="eef8bfa0-71b2-4fcb-837b-191b1ee5e113" containerName="registry-server" Oct 14 10:21:45 crc kubenswrapper[4698]: I1014 10:21:45.352044 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-kj7cb" Oct 14 10:21:45 crc kubenswrapper[4698]: I1014 10:21:45.360282 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-kj7cb"] Oct 14 10:21:45 crc kubenswrapper[4698]: I1014 10:21:45.530185 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e75f3484-78a7-4ee4-85b5-fde82b50a6d7-catalog-content\") pod \"certified-operators-kj7cb\" (UID: \"e75f3484-78a7-4ee4-85b5-fde82b50a6d7\") " pod="openshift-marketplace/certified-operators-kj7cb" Oct 14 10:21:45 crc kubenswrapper[4698]: I1014 10:21:45.530257 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g98rk\" (UniqueName: \"kubernetes.io/projected/e75f3484-78a7-4ee4-85b5-fde82b50a6d7-kube-api-access-g98rk\") pod \"certified-operators-kj7cb\" (UID: \"e75f3484-78a7-4ee4-85b5-fde82b50a6d7\") " pod="openshift-marketplace/certified-operators-kj7cb" Oct 14 10:21:45 crc kubenswrapper[4698]: I1014 10:21:45.530527 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e75f3484-78a7-4ee4-85b5-fde82b50a6d7-utilities\") pod \"certified-operators-kj7cb\" (UID: \"e75f3484-78a7-4ee4-85b5-fde82b50a6d7\") " pod="openshift-marketplace/certified-operators-kj7cb" Oct 14 10:21:45 crc kubenswrapper[4698]: I1014 10:21:45.633368 4698 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e75f3484-78a7-4ee4-85b5-fde82b50a6d7-catalog-content\") pod \"certified-operators-kj7cb\" (UID: \"e75f3484-78a7-4ee4-85b5-fde82b50a6d7\") " pod="openshift-marketplace/certified-operators-kj7cb" Oct 14 10:21:45 crc kubenswrapper[4698]: I1014 10:21:45.633448 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g98rk\" (UniqueName: \"kubernetes.io/projected/e75f3484-78a7-4ee4-85b5-fde82b50a6d7-kube-api-access-g98rk\") pod \"certified-operators-kj7cb\" (UID: \"e75f3484-78a7-4ee4-85b5-fde82b50a6d7\") " pod="openshift-marketplace/certified-operators-kj7cb" Oct 14 10:21:45 crc kubenswrapper[4698]: I1014 10:21:45.633548 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e75f3484-78a7-4ee4-85b5-fde82b50a6d7-utilities\") pod \"certified-operators-kj7cb\" (UID: \"e75f3484-78a7-4ee4-85b5-fde82b50a6d7\") " pod="openshift-marketplace/certified-operators-kj7cb" Oct 14 10:21:45 crc kubenswrapper[4698]: I1014 10:21:45.634270 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e75f3484-78a7-4ee4-85b5-fde82b50a6d7-utilities\") pod \"certified-operators-kj7cb\" (UID: \"e75f3484-78a7-4ee4-85b5-fde82b50a6d7\") " pod="openshift-marketplace/certified-operators-kj7cb" Oct 14 10:21:45 crc kubenswrapper[4698]: I1014 10:21:45.634279 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e75f3484-78a7-4ee4-85b5-fde82b50a6d7-catalog-content\") pod \"certified-operators-kj7cb\" (UID: \"e75f3484-78a7-4ee4-85b5-fde82b50a6d7\") " pod="openshift-marketplace/certified-operators-kj7cb" Oct 14 10:21:45 crc kubenswrapper[4698]: I1014 10:21:45.661003 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-g98rk\" (UniqueName: \"kubernetes.io/projected/e75f3484-78a7-4ee4-85b5-fde82b50a6d7-kube-api-access-g98rk\") pod \"certified-operators-kj7cb\" (UID: \"e75f3484-78a7-4ee4-85b5-fde82b50a6d7\") " pod="openshift-marketplace/certified-operators-kj7cb" Oct 14 10:21:45 crc kubenswrapper[4698]: I1014 10:21:45.708732 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-kj7cb" Oct 14 10:21:46 crc kubenswrapper[4698]: I1014 10:21:46.191611 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-kj7cb"] Oct 14 10:21:46 crc kubenswrapper[4698]: I1014 10:21:46.975404 4698 generic.go:334] "Generic (PLEG): container finished" podID="e75f3484-78a7-4ee4-85b5-fde82b50a6d7" containerID="862933a105f925ba675fc763594435938db605ed7469b640fee84e8b3a04988e" exitCode=0 Oct 14 10:21:46 crc kubenswrapper[4698]: I1014 10:21:46.975494 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kj7cb" event={"ID":"e75f3484-78a7-4ee4-85b5-fde82b50a6d7","Type":"ContainerDied","Data":"862933a105f925ba675fc763594435938db605ed7469b640fee84e8b3a04988e"} Oct 14 10:21:46 crc kubenswrapper[4698]: I1014 10:21:46.975722 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kj7cb" event={"ID":"e75f3484-78a7-4ee4-85b5-fde82b50a6d7","Type":"ContainerStarted","Data":"24fe886a976293572c02e26d4641adc9be39ea83460293b14cd4a004f9e13c1b"} Oct 14 10:21:46 crc kubenswrapper[4698]: I1014 10:21:46.977269 4698 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Oct 14 10:21:49 crc kubenswrapper[4698]: I1014 10:21:48.999712 4698 generic.go:334] "Generic (PLEG): container finished" podID="e75f3484-78a7-4ee4-85b5-fde82b50a6d7" containerID="3026fee610609332aed055a6cc1593dbb536f3821c0ef0b7a9da3addebcd7afa" exitCode=0 Oct 14 10:21:49 crc kubenswrapper[4698]: I1014 
10:21:49.000185 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kj7cb" event={"ID":"e75f3484-78a7-4ee4-85b5-fde82b50a6d7","Type":"ContainerDied","Data":"3026fee610609332aed055a6cc1593dbb536f3821c0ef0b7a9da3addebcd7afa"} Oct 14 10:21:50 crc kubenswrapper[4698]: I1014 10:21:50.012346 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kj7cb" event={"ID":"e75f3484-78a7-4ee4-85b5-fde82b50a6d7","Type":"ContainerStarted","Data":"662d6cacc11ea10b95cc10d6f2543edf2c14b2e4940d4a41b25a23d44a918d22"} Oct 14 10:21:50 crc kubenswrapper[4698]: I1014 10:21:50.036182 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-kj7cb" podStartSLOduration=2.416985908 podStartE2EDuration="5.036160637s" podCreationTimestamp="2025-10-14 10:21:45 +0000 UTC" firstStartedPulling="2025-10-14 10:21:46.977038911 +0000 UTC m=+1488.674338327" lastFinishedPulling="2025-10-14 10:21:49.59621364 +0000 UTC m=+1491.293513056" observedRunningTime="2025-10-14 10:21:50.029394254 +0000 UTC m=+1491.726693680" watchObservedRunningTime="2025-10-14 10:21:50.036160637 +0000 UTC m=+1491.733460053" Oct 14 10:21:55 crc kubenswrapper[4698]: I1014 10:21:55.709168 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-kj7cb" Oct 14 10:21:55 crc kubenswrapper[4698]: I1014 10:21:55.709969 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-kj7cb" Oct 14 10:21:55 crc kubenswrapper[4698]: I1014 10:21:55.783157 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-kj7cb" Oct 14 10:21:56 crc kubenswrapper[4698]: I1014 10:21:56.138276 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-kj7cb" Oct 14 
10:21:56 crc kubenswrapper[4698]: I1014 10:21:56.190274 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-kj7cb"] Oct 14 10:21:58 crc kubenswrapper[4698]: I1014 10:21:58.111384 4698 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-kj7cb" podUID="e75f3484-78a7-4ee4-85b5-fde82b50a6d7" containerName="registry-server" containerID="cri-o://662d6cacc11ea10b95cc10d6f2543edf2c14b2e4940d4a41b25a23d44a918d22" gracePeriod=2 Oct 14 10:21:58 crc kubenswrapper[4698]: I1014 10:21:58.625408 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-kj7cb" Oct 14 10:21:58 crc kubenswrapper[4698]: I1014 10:21:58.634117 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e75f3484-78a7-4ee4-85b5-fde82b50a6d7-utilities\") pod \"e75f3484-78a7-4ee4-85b5-fde82b50a6d7\" (UID: \"e75f3484-78a7-4ee4-85b5-fde82b50a6d7\") " Oct 14 10:21:58 crc kubenswrapper[4698]: I1014 10:21:58.634481 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e75f3484-78a7-4ee4-85b5-fde82b50a6d7-catalog-content\") pod \"e75f3484-78a7-4ee4-85b5-fde82b50a6d7\" (UID: \"e75f3484-78a7-4ee4-85b5-fde82b50a6d7\") " Oct 14 10:21:58 crc kubenswrapper[4698]: I1014 10:21:58.634529 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g98rk\" (UniqueName: \"kubernetes.io/projected/e75f3484-78a7-4ee4-85b5-fde82b50a6d7-kube-api-access-g98rk\") pod \"e75f3484-78a7-4ee4-85b5-fde82b50a6d7\" (UID: \"e75f3484-78a7-4ee4-85b5-fde82b50a6d7\") " Oct 14 10:21:58 crc kubenswrapper[4698]: I1014 10:21:58.635546 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/e75f3484-78a7-4ee4-85b5-fde82b50a6d7-utilities" (OuterVolumeSpecName: "utilities") pod "e75f3484-78a7-4ee4-85b5-fde82b50a6d7" (UID: "e75f3484-78a7-4ee4-85b5-fde82b50a6d7"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 10:21:58 crc kubenswrapper[4698]: I1014 10:21:58.635754 4698 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e75f3484-78a7-4ee4-85b5-fde82b50a6d7-utilities\") on node \"crc\" DevicePath \"\"" Oct 14 10:21:58 crc kubenswrapper[4698]: I1014 10:21:58.642018 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e75f3484-78a7-4ee4-85b5-fde82b50a6d7-kube-api-access-g98rk" (OuterVolumeSpecName: "kube-api-access-g98rk") pod "e75f3484-78a7-4ee4-85b5-fde82b50a6d7" (UID: "e75f3484-78a7-4ee4-85b5-fde82b50a6d7"). InnerVolumeSpecName "kube-api-access-g98rk". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 10:21:58 crc kubenswrapper[4698]: I1014 10:21:58.739294 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g98rk\" (UniqueName: \"kubernetes.io/projected/e75f3484-78a7-4ee4-85b5-fde82b50a6d7-kube-api-access-g98rk\") on node \"crc\" DevicePath \"\"" Oct 14 10:21:58 crc kubenswrapper[4698]: I1014 10:21:58.822202 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e75f3484-78a7-4ee4-85b5-fde82b50a6d7-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e75f3484-78a7-4ee4-85b5-fde82b50a6d7" (UID: "e75f3484-78a7-4ee4-85b5-fde82b50a6d7"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 10:21:58 crc kubenswrapper[4698]: I1014 10:21:58.841070 4698 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e75f3484-78a7-4ee4-85b5-fde82b50a6d7-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 14 10:21:59 crc kubenswrapper[4698]: I1014 10:21:59.131432 4698 generic.go:334] "Generic (PLEG): container finished" podID="e75f3484-78a7-4ee4-85b5-fde82b50a6d7" containerID="662d6cacc11ea10b95cc10d6f2543edf2c14b2e4940d4a41b25a23d44a918d22" exitCode=0 Oct 14 10:21:59 crc kubenswrapper[4698]: I1014 10:21:59.133382 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kj7cb" event={"ID":"e75f3484-78a7-4ee4-85b5-fde82b50a6d7","Type":"ContainerDied","Data":"662d6cacc11ea10b95cc10d6f2543edf2c14b2e4940d4a41b25a23d44a918d22"} Oct 14 10:21:59 crc kubenswrapper[4698]: I1014 10:21:59.133538 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kj7cb" event={"ID":"e75f3484-78a7-4ee4-85b5-fde82b50a6d7","Type":"ContainerDied","Data":"24fe886a976293572c02e26d4641adc9be39ea83460293b14cd4a004f9e13c1b"} Oct 14 10:21:59 crc kubenswrapper[4698]: I1014 10:21:59.133629 4698 scope.go:117] "RemoveContainer" containerID="662d6cacc11ea10b95cc10d6f2543edf2c14b2e4940d4a41b25a23d44a918d22" Oct 14 10:21:59 crc kubenswrapper[4698]: I1014 10:21:59.134037 4698 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-kj7cb" Oct 14 10:21:59 crc kubenswrapper[4698]: I1014 10:21:59.178988 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-kj7cb"] Oct 14 10:21:59 crc kubenswrapper[4698]: I1014 10:21:59.179952 4698 scope.go:117] "RemoveContainer" containerID="3026fee610609332aed055a6cc1593dbb536f3821c0ef0b7a9da3addebcd7afa" Oct 14 10:21:59 crc kubenswrapper[4698]: I1014 10:21:59.192011 4698 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-kj7cb"] Oct 14 10:21:59 crc kubenswrapper[4698]: I1014 10:21:59.202909 4698 scope.go:117] "RemoveContainer" containerID="862933a105f925ba675fc763594435938db605ed7469b640fee84e8b3a04988e" Oct 14 10:21:59 crc kubenswrapper[4698]: I1014 10:21:59.247900 4698 scope.go:117] "RemoveContainer" containerID="662d6cacc11ea10b95cc10d6f2543edf2c14b2e4940d4a41b25a23d44a918d22" Oct 14 10:21:59 crc kubenswrapper[4698]: E1014 10:21:59.248886 4698 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"662d6cacc11ea10b95cc10d6f2543edf2c14b2e4940d4a41b25a23d44a918d22\": container with ID starting with 662d6cacc11ea10b95cc10d6f2543edf2c14b2e4940d4a41b25a23d44a918d22 not found: ID does not exist" containerID="662d6cacc11ea10b95cc10d6f2543edf2c14b2e4940d4a41b25a23d44a918d22" Oct 14 10:21:59 crc kubenswrapper[4698]: I1014 10:21:59.248933 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"662d6cacc11ea10b95cc10d6f2543edf2c14b2e4940d4a41b25a23d44a918d22"} err="failed to get container status \"662d6cacc11ea10b95cc10d6f2543edf2c14b2e4940d4a41b25a23d44a918d22\": rpc error: code = NotFound desc = could not find container \"662d6cacc11ea10b95cc10d6f2543edf2c14b2e4940d4a41b25a23d44a918d22\": container with ID starting with 662d6cacc11ea10b95cc10d6f2543edf2c14b2e4940d4a41b25a23d44a918d22 not 
found: ID does not exist" Oct 14 10:21:59 crc kubenswrapper[4698]: I1014 10:21:59.248968 4698 scope.go:117] "RemoveContainer" containerID="3026fee610609332aed055a6cc1593dbb536f3821c0ef0b7a9da3addebcd7afa" Oct 14 10:21:59 crc kubenswrapper[4698]: E1014 10:21:59.249292 4698 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3026fee610609332aed055a6cc1593dbb536f3821c0ef0b7a9da3addebcd7afa\": container with ID starting with 3026fee610609332aed055a6cc1593dbb536f3821c0ef0b7a9da3addebcd7afa not found: ID does not exist" containerID="3026fee610609332aed055a6cc1593dbb536f3821c0ef0b7a9da3addebcd7afa" Oct 14 10:21:59 crc kubenswrapper[4698]: I1014 10:21:59.249330 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3026fee610609332aed055a6cc1593dbb536f3821c0ef0b7a9da3addebcd7afa"} err="failed to get container status \"3026fee610609332aed055a6cc1593dbb536f3821c0ef0b7a9da3addebcd7afa\": rpc error: code = NotFound desc = could not find container \"3026fee610609332aed055a6cc1593dbb536f3821c0ef0b7a9da3addebcd7afa\": container with ID starting with 3026fee610609332aed055a6cc1593dbb536f3821c0ef0b7a9da3addebcd7afa not found: ID does not exist" Oct 14 10:21:59 crc kubenswrapper[4698]: I1014 10:21:59.249357 4698 scope.go:117] "RemoveContainer" containerID="862933a105f925ba675fc763594435938db605ed7469b640fee84e8b3a04988e" Oct 14 10:21:59 crc kubenswrapper[4698]: E1014 10:21:59.249778 4698 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"862933a105f925ba675fc763594435938db605ed7469b640fee84e8b3a04988e\": container with ID starting with 862933a105f925ba675fc763594435938db605ed7469b640fee84e8b3a04988e not found: ID does not exist" containerID="862933a105f925ba675fc763594435938db605ed7469b640fee84e8b3a04988e" Oct 14 10:21:59 crc kubenswrapper[4698]: I1014 10:21:59.249808 4698 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"862933a105f925ba675fc763594435938db605ed7469b640fee84e8b3a04988e"} err="failed to get container status \"862933a105f925ba675fc763594435938db605ed7469b640fee84e8b3a04988e\": rpc error: code = NotFound desc = could not find container \"862933a105f925ba675fc763594435938db605ed7469b640fee84e8b3a04988e\": container with ID starting with 862933a105f925ba675fc763594435938db605ed7469b640fee84e8b3a04988e not found: ID does not exist" Oct 14 10:22:01 crc kubenswrapper[4698]: I1014 10:22:01.030081 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e75f3484-78a7-4ee4-85b5-fde82b50a6d7" path="/var/lib/kubelet/pods/e75f3484-78a7-4ee4-85b5-fde82b50a6d7/volumes" Oct 14 10:22:08 crc kubenswrapper[4698]: I1014 10:22:08.404649 4698 scope.go:117] "RemoveContainer" containerID="5c9805414f9497fd945631174a09b08928cb189e2af428a18a49d6e8cc4cc84c" Oct 14 10:22:08 crc kubenswrapper[4698]: I1014 10:22:08.450310 4698 scope.go:117] "RemoveContainer" containerID="50816bb6c791d0061cb027708547cabdce9ae96816de3a6d22de87d758cdf8fd" Oct 14 10:22:08 crc kubenswrapper[4698]: I1014 10:22:08.490857 4698 scope.go:117] "RemoveContainer" containerID="ee066c20559aa379186cf2d3fd6cdba8d8419672125be4906e870235e29e4982" Oct 14 10:22:08 crc kubenswrapper[4698]: I1014 10:22:08.515521 4698 scope.go:117] "RemoveContainer" containerID="82e93df7e65a6a821e7024f0f7e4837cd3241a9c9ccad476a07e5d3158b42f9c" Oct 14 10:22:08 crc kubenswrapper[4698]: I1014 10:22:08.540157 4698 scope.go:117] "RemoveContainer" containerID="9133ec75b5805ccf389499bf4f0284068d8b3b503fa1bb02ea45803836fcbcc7" Oct 14 10:22:08 crc kubenswrapper[4698]: I1014 10:22:08.581384 4698 scope.go:117] "RemoveContainer" containerID="8f50482a824b1752d07f7fd427a302b4bce24b0029bc074ff51d5881f64fd85b" Oct 14 10:22:49 crc kubenswrapper[4698]: I1014 10:22:49.667743 4698 generic.go:334] "Generic (PLEG): container finished" 
podID="d4559dff-03d5-4c1b-a8df-f8fc0ae935de" containerID="360a90457e0db54c8ec77453be38d36900bdd03dbb92411f0858b317ec613e26" exitCode=0 Oct 14 10:22:49 crc kubenswrapper[4698]: I1014 10:22:49.667790 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-lq4cr" event={"ID":"d4559dff-03d5-4c1b-a8df-f8fc0ae935de","Type":"ContainerDied","Data":"360a90457e0db54c8ec77453be38d36900bdd03dbb92411f0858b317ec613e26"} Oct 14 10:22:49 crc kubenswrapper[4698]: E1014 10:22:49.817096 4698 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd4559dff_03d5_4c1b_a8df_f8fc0ae935de.slice/crio-conmon-360a90457e0db54c8ec77453be38d36900bdd03dbb92411f0858b317ec613e26.scope\": RecentStats: unable to find data in memory cache]" Oct 14 10:22:51 crc kubenswrapper[4698]: I1014 10:22:51.096958 4698 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-lq4cr" Oct 14 10:22:51 crc kubenswrapper[4698]: I1014 10:22:51.184271 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-npv2t\" (UniqueName: \"kubernetes.io/projected/d4559dff-03d5-4c1b-a8df-f8fc0ae935de-kube-api-access-npv2t\") pod \"d4559dff-03d5-4c1b-a8df-f8fc0ae935de\" (UID: \"d4559dff-03d5-4c1b-a8df-f8fc0ae935de\") " Oct 14 10:22:51 crc kubenswrapper[4698]: I1014 10:22:51.184369 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d4559dff-03d5-4c1b-a8df-f8fc0ae935de-inventory\") pod \"d4559dff-03d5-4c1b-a8df-f8fc0ae935de\" (UID: \"d4559dff-03d5-4c1b-a8df-f8fc0ae935de\") " Oct 14 10:22:51 crc kubenswrapper[4698]: I1014 10:22:51.185036 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d4559dff-03d5-4c1b-a8df-f8fc0ae935de-bootstrap-combined-ca-bundle\") pod \"d4559dff-03d5-4c1b-a8df-f8fc0ae935de\" (UID: \"d4559dff-03d5-4c1b-a8df-f8fc0ae935de\") " Oct 14 10:22:51 crc kubenswrapper[4698]: I1014 10:22:51.185099 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/d4559dff-03d5-4c1b-a8df-f8fc0ae935de-ssh-key\") pod \"d4559dff-03d5-4c1b-a8df-f8fc0ae935de\" (UID: \"d4559dff-03d5-4c1b-a8df-f8fc0ae935de\") " Oct 14 10:22:51 crc kubenswrapper[4698]: I1014 10:22:51.191090 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d4559dff-03d5-4c1b-a8df-f8fc0ae935de-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "d4559dff-03d5-4c1b-a8df-f8fc0ae935de" (UID: "d4559dff-03d5-4c1b-a8df-f8fc0ae935de"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 10:22:51 crc kubenswrapper[4698]: I1014 10:22:51.192142 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d4559dff-03d5-4c1b-a8df-f8fc0ae935de-kube-api-access-npv2t" (OuterVolumeSpecName: "kube-api-access-npv2t") pod "d4559dff-03d5-4c1b-a8df-f8fc0ae935de" (UID: "d4559dff-03d5-4c1b-a8df-f8fc0ae935de"). InnerVolumeSpecName "kube-api-access-npv2t". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 10:22:51 crc kubenswrapper[4698]: I1014 10:22:51.216365 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d4559dff-03d5-4c1b-a8df-f8fc0ae935de-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "d4559dff-03d5-4c1b-a8df-f8fc0ae935de" (UID: "d4559dff-03d5-4c1b-a8df-f8fc0ae935de"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 10:22:51 crc kubenswrapper[4698]: I1014 10:22:51.224831 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d4559dff-03d5-4c1b-a8df-f8fc0ae935de-inventory" (OuterVolumeSpecName: "inventory") pod "d4559dff-03d5-4c1b-a8df-f8fc0ae935de" (UID: "d4559dff-03d5-4c1b-a8df-f8fc0ae935de"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 10:22:51 crc kubenswrapper[4698]: I1014 10:22:51.287538 4698 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d4559dff-03d5-4c1b-a8df-f8fc0ae935de-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 14 10:22:51 crc kubenswrapper[4698]: I1014 10:22:51.287603 4698 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/d4559dff-03d5-4c1b-a8df-f8fc0ae935de-ssh-key\") on node \"crc\" DevicePath \"\"" Oct 14 10:22:51 crc kubenswrapper[4698]: I1014 10:22:51.287617 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-npv2t\" (UniqueName: \"kubernetes.io/projected/d4559dff-03d5-4c1b-a8df-f8fc0ae935de-kube-api-access-npv2t\") on node \"crc\" DevicePath \"\"" Oct 14 10:22:51 crc kubenswrapper[4698]: I1014 10:22:51.287631 4698 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d4559dff-03d5-4c1b-a8df-f8fc0ae935de-inventory\") on node \"crc\" DevicePath \"\"" Oct 14 10:22:51 crc kubenswrapper[4698]: I1014 10:22:51.690292 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-lq4cr" event={"ID":"d4559dff-03d5-4c1b-a8df-f8fc0ae935de","Type":"ContainerDied","Data":"1e23e18cd500d926b3e5ec2e9405ae53e3048bd945d2530d33f5ba656d86174f"} Oct 14 10:22:51 crc kubenswrapper[4698]: I1014 10:22:51.690354 4698 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1e23e18cd500d926b3e5ec2e9405ae53e3048bd945d2530d33f5ba656d86174f" Oct 14 10:22:51 crc kubenswrapper[4698]: I1014 10:22:51.690427 4698 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-lq4cr" Oct 14 10:22:51 crc kubenswrapper[4698]: I1014 10:22:51.799373 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-zln6c"] Oct 14 10:22:51 crc kubenswrapper[4698]: E1014 10:22:51.800175 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e75f3484-78a7-4ee4-85b5-fde82b50a6d7" containerName="registry-server" Oct 14 10:22:51 crc kubenswrapper[4698]: I1014 10:22:51.800247 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="e75f3484-78a7-4ee4-85b5-fde82b50a6d7" containerName="registry-server" Oct 14 10:22:51 crc kubenswrapper[4698]: E1014 10:22:51.800306 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d4559dff-03d5-4c1b-a8df-f8fc0ae935de" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Oct 14 10:22:51 crc kubenswrapper[4698]: I1014 10:22:51.800356 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="d4559dff-03d5-4c1b-a8df-f8fc0ae935de" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Oct 14 10:22:51 crc kubenswrapper[4698]: E1014 10:22:51.800446 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e75f3484-78a7-4ee4-85b5-fde82b50a6d7" containerName="extract-content" Oct 14 10:22:51 crc kubenswrapper[4698]: I1014 10:22:51.800496 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="e75f3484-78a7-4ee4-85b5-fde82b50a6d7" containerName="extract-content" Oct 14 10:22:51 crc kubenswrapper[4698]: E1014 10:22:51.800561 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e75f3484-78a7-4ee4-85b5-fde82b50a6d7" containerName="extract-utilities" Oct 14 10:22:51 crc kubenswrapper[4698]: I1014 10:22:51.800610 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="e75f3484-78a7-4ee4-85b5-fde82b50a6d7" containerName="extract-utilities" Oct 14 10:22:51 crc kubenswrapper[4698]: I1014 10:22:51.800888 
4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="d4559dff-03d5-4c1b-a8df-f8fc0ae935de" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Oct 14 10:22:51 crc kubenswrapper[4698]: I1014 10:22:51.800971 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="e75f3484-78a7-4ee4-85b5-fde82b50a6d7" containerName="registry-server" Oct 14 10:22:51 crc kubenswrapper[4698]: I1014 10:22:51.801738 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-zln6c" Oct 14 10:22:51 crc kubenswrapper[4698]: I1014 10:22:51.803929 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Oct 14 10:22:51 crc kubenswrapper[4698]: I1014 10:22:51.804351 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Oct 14 10:22:51 crc kubenswrapper[4698]: I1014 10:22:51.804534 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-q5blv" Oct 14 10:22:51 crc kubenswrapper[4698]: I1014 10:22:51.804911 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Oct 14 10:22:51 crc kubenswrapper[4698]: I1014 10:22:51.812218 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-zln6c"] Oct 14 10:22:51 crc kubenswrapper[4698]: I1014 10:22:51.900424 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f8qfk\" (UniqueName: \"kubernetes.io/projected/0e135199-5913-440f-a291-4252ae734b96-kube-api-access-f8qfk\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-zln6c\" (UID: \"0e135199-5913-440f-a291-4252ae734b96\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-zln6c" Oct 14 10:22:51 crc 
kubenswrapper[4698]: I1014 10:22:51.900788 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/0e135199-5913-440f-a291-4252ae734b96-ssh-key\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-zln6c\" (UID: \"0e135199-5913-440f-a291-4252ae734b96\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-zln6c" Oct 14 10:22:51 crc kubenswrapper[4698]: I1014 10:22:51.901104 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0e135199-5913-440f-a291-4252ae734b96-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-zln6c\" (UID: \"0e135199-5913-440f-a291-4252ae734b96\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-zln6c" Oct 14 10:22:52 crc kubenswrapper[4698]: I1014 10:22:52.002663 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0e135199-5913-440f-a291-4252ae734b96-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-zln6c\" (UID: \"0e135199-5913-440f-a291-4252ae734b96\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-zln6c" Oct 14 10:22:52 crc kubenswrapper[4698]: I1014 10:22:52.002739 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f8qfk\" (UniqueName: \"kubernetes.io/projected/0e135199-5913-440f-a291-4252ae734b96-kube-api-access-f8qfk\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-zln6c\" (UID: \"0e135199-5913-440f-a291-4252ae734b96\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-zln6c" Oct 14 10:22:52 crc kubenswrapper[4698]: I1014 10:22:52.002797 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: 
\"kubernetes.io/secret/0e135199-5913-440f-a291-4252ae734b96-ssh-key\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-zln6c\" (UID: \"0e135199-5913-440f-a291-4252ae734b96\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-zln6c" Oct 14 10:22:52 crc kubenswrapper[4698]: I1014 10:22:52.008317 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0e135199-5913-440f-a291-4252ae734b96-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-zln6c\" (UID: \"0e135199-5913-440f-a291-4252ae734b96\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-zln6c" Oct 14 10:22:52 crc kubenswrapper[4698]: I1014 10:22:52.008343 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/0e135199-5913-440f-a291-4252ae734b96-ssh-key\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-zln6c\" (UID: \"0e135199-5913-440f-a291-4252ae734b96\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-zln6c" Oct 14 10:22:52 crc kubenswrapper[4698]: I1014 10:22:52.027512 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f8qfk\" (UniqueName: \"kubernetes.io/projected/0e135199-5913-440f-a291-4252ae734b96-kube-api-access-f8qfk\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-zln6c\" (UID: \"0e135199-5913-440f-a291-4252ae734b96\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-zln6c" Oct 14 10:22:52 crc kubenswrapper[4698]: I1014 10:22:52.146822 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-zln6c" Oct 14 10:22:52 crc kubenswrapper[4698]: I1014 10:22:52.653119 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-zln6c"] Oct 14 10:22:52 crc kubenswrapper[4698]: I1014 10:22:52.700440 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-zln6c" event={"ID":"0e135199-5913-440f-a291-4252ae734b96","Type":"ContainerStarted","Data":"960978195b984a786554c5a4d422f1ff0d080cc28f9c6a5b9787020ffbce5dea"} Oct 14 10:22:53 crc kubenswrapper[4698]: I1014 10:22:53.714742 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-zln6c" event={"ID":"0e135199-5913-440f-a291-4252ae734b96","Type":"ContainerStarted","Data":"76f7053e2596144592cfea3f8e26d15d13e3e6374012dbf998e69635ff2e702c"} Oct 14 10:22:53 crc kubenswrapper[4698]: I1014 10:22:53.734258 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-zln6c" podStartSLOduration=2.201792912 podStartE2EDuration="2.734236265s" podCreationTimestamp="2025-10-14 10:22:51 +0000 UTC" firstStartedPulling="2025-10-14 10:22:52.659549509 +0000 UTC m=+1554.356848945" lastFinishedPulling="2025-10-14 10:22:53.191992862 +0000 UTC m=+1554.889292298" observedRunningTime="2025-10-14 10:22:53.730314523 +0000 UTC m=+1555.427613959" watchObservedRunningTime="2025-10-14 10:22:53.734236265 +0000 UTC m=+1555.431535681" Oct 14 10:23:08 crc kubenswrapper[4698]: I1014 10:23:08.767376 4698 scope.go:117] "RemoveContainer" containerID="a4dd8177274e715b4a3488d7c4166628c3a6b059a00114fa66bd797cbb6e97b5" Oct 14 10:23:08 crc kubenswrapper[4698]: I1014 10:23:08.798925 4698 scope.go:117] "RemoveContainer" containerID="aa4b7f514ddcca9a15b741fa483e1986c9fe9b7b01aa11654ac6b544fbaf8d97" Oct 14 
10:23:30 crc kubenswrapper[4698]: I1014 10:23:30.054021 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-create-g7bp5"] Oct 14 10:23:30 crc kubenswrapper[4698]: I1014 10:23:30.062292 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-create-mbhc5"] Oct 14 10:23:30 crc kubenswrapper[4698]: I1014 10:23:30.071294 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-create-7sv8p"] Oct 14 10:23:30 crc kubenswrapper[4698]: I1014 10:23:30.081500 4698 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-create-g7bp5"] Oct 14 10:23:30 crc kubenswrapper[4698]: I1014 10:23:30.089979 4698 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-create-mbhc5"] Oct 14 10:23:30 crc kubenswrapper[4698]: I1014 10:23:30.098161 4698 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-create-7sv8p"] Oct 14 10:23:31 crc kubenswrapper[4698]: I1014 10:23:31.033349 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25ae6cb7-176c-4e9d-8c3c-b3bdf54ce71e" path="/var/lib/kubelet/pods/25ae6cb7-176c-4e9d-8c3c-b3bdf54ce71e/volumes" Oct 14 10:23:31 crc kubenswrapper[4698]: I1014 10:23:31.034411 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2f8480a1-4b59-4b93-8a6f-1c4d71a0b389" path="/var/lib/kubelet/pods/2f8480a1-4b59-4b93-8a6f-1c4d71a0b389/volumes" Oct 14 10:23:31 crc kubenswrapper[4698]: I1014 10:23:31.035462 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57ba3855-be67-4359-b8c2-f62f45279695" path="/var/lib/kubelet/pods/57ba3855-be67-4359-b8c2-f62f45279695/volumes" Oct 14 10:23:48 crc kubenswrapper[4698]: I1014 10:23:48.080502 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-cefd-account-create-wz2jw"] Oct 14 10:23:48 crc kubenswrapper[4698]: I1014 10:23:48.092233 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/keystone-ea38-account-create-8crf4"] Oct 14 10:23:48 crc kubenswrapper[4698]: I1014 10:23:48.102426 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-b2d0-account-create-7xxwn"] Oct 14 10:23:48 crc kubenswrapper[4698]: I1014 10:23:48.134389 4698 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-cefd-account-create-wz2jw"] Oct 14 10:23:48 crc kubenswrapper[4698]: I1014 10:23:48.143544 4698 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-ea38-account-create-8crf4"] Oct 14 10:23:48 crc kubenswrapper[4698]: I1014 10:23:48.152820 4698 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-b2d0-account-create-7xxwn"] Oct 14 10:23:49 crc kubenswrapper[4698]: I1014 10:23:49.038521 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0c2d4210-9870-40ae-b7e1-1569f4b92f37" path="/var/lib/kubelet/pods/0c2d4210-9870-40ae-b7e1-1569f4b92f37/volumes" Oct 14 10:23:49 crc kubenswrapper[4698]: I1014 10:23:49.039743 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3c6fe535-99c6-42e7-80b0-f28a65ab1778" path="/var/lib/kubelet/pods/3c6fe535-99c6-42e7-80b0-f28a65ab1778/volumes" Oct 14 10:23:49 crc kubenswrapper[4698]: I1014 10:23:49.040971 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="99bc85e6-dfaa-48d5-9ec9-7ea615ba95e1" path="/var/lib/kubelet/pods/99bc85e6-dfaa-48d5-9ec9-7ea615ba95e1/volumes" Oct 14 10:23:53 crc kubenswrapper[4698]: I1014 10:23:53.908759 4698 patch_prober.go:28] interesting pod/machine-config-daemon-lp4sk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 14 10:23:53 crc kubenswrapper[4698]: I1014 10:23:53.909323 4698 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-machine-config-operator/machine-config-daemon-lp4sk" podUID="c359a8fc-1e2f-49af-8da2-719d52bd969a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 14 10:24:08 crc kubenswrapper[4698]: I1014 10:24:08.860815 4698 scope.go:117] "RemoveContainer" containerID="02efc2af2b8935ae233da8f34f1583c5629a61462dd3b683a0f47b8f97a12155" Oct 14 10:24:08 crc kubenswrapper[4698]: I1014 10:24:08.892498 4698 scope.go:117] "RemoveContainer" containerID="addd3471a62a7d28439c58402096e7808720bf2d9b76c32706b4ad29d62a77d5" Oct 14 10:24:08 crc kubenswrapper[4698]: I1014 10:24:08.963808 4698 scope.go:117] "RemoveContainer" containerID="3d1530f026b17d36b6a0791719bfc9badeddb63aa7d4e272c2aa81d9647a7c4e" Oct 14 10:24:09 crc kubenswrapper[4698]: I1014 10:24:09.025327 4698 scope.go:117] "RemoveContainer" containerID="aacbe0f99302dc3cd8c8c7403b69f08179a37a7d41a1b3fa01b7ce2e5fa95f69" Oct 14 10:24:09 crc kubenswrapper[4698]: I1014 10:24:09.060045 4698 scope.go:117] "RemoveContainer" containerID="3fe718146bae16c4b54c210f21529865e685a6bcfdba7079c8c8699c0645dfe8" Oct 14 10:24:09 crc kubenswrapper[4698]: I1014 10:24:09.101146 4698 scope.go:117] "RemoveContainer" containerID="364ebefb477fcdc353e748d44e22cab6cf13d97dbd07d8e235687087f87531ab" Oct 14 10:24:13 crc kubenswrapper[4698]: I1014 10:24:13.049073 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/manila-db-create-hh2hz"] Oct 14 10:24:13 crc kubenswrapper[4698]: I1014 10:24:13.068484 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-create-8ccwm"] Oct 14 10:24:13 crc kubenswrapper[4698]: I1014 10:24:13.080697 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-create-8ccjb"] Oct 14 10:24:13 crc kubenswrapper[4698]: I1014 10:24:13.091452 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-create-pgwrf"] Oct 14 10:24:13 
crc kubenswrapper[4698]: I1014 10:24:13.099822 4698 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-create-8ccjb"] Oct 14 10:24:13 crc kubenswrapper[4698]: I1014 10:24:13.106705 4698 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/manila-db-create-hh2hz"] Oct 14 10:24:13 crc kubenswrapper[4698]: I1014 10:24:13.117887 4698 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-create-pgwrf"] Oct 14 10:24:13 crc kubenswrapper[4698]: I1014 10:24:13.140529 4698 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-create-8ccwm"] Oct 14 10:24:15 crc kubenswrapper[4698]: I1014 10:24:15.028224 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7130fceb-fafc-446f-be3a-01d71381b75f" path="/var/lib/kubelet/pods/7130fceb-fafc-446f-be3a-01d71381b75f/volumes" Oct 14 10:24:15 crc kubenswrapper[4698]: I1014 10:24:15.029103 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8b518c46-ed97-433e-81ea-457a3e6a19fd" path="/var/lib/kubelet/pods/8b518c46-ed97-433e-81ea-457a3e6a19fd/volumes" Oct 14 10:24:15 crc kubenswrapper[4698]: I1014 10:24:15.029644 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a734e759-8d40-4fd6-a208-93382019256b" path="/var/lib/kubelet/pods/a734e759-8d40-4fd6-a208-93382019256b/volumes" Oct 14 10:24:15 crc kubenswrapper[4698]: I1014 10:24:15.030659 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e4b63986-3e9d-4741-b06f-43c4932b286b" path="/var/lib/kubelet/pods/e4b63986-3e9d-4741-b06f-43c4932b286b/volumes" Oct 14 10:24:16 crc kubenswrapper[4698]: I1014 10:24:16.050834 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-sync-rwbwr"] Oct 14 10:24:16 crc kubenswrapper[4698]: I1014 10:24:16.061877 4698 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-sync-rwbwr"] Oct 14 10:24:17 crc kubenswrapper[4698]: I1014 10:24:17.036936 4698 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="53a71c98-4d2e-4aad-908d-0414cc8db1d7" path="/var/lib/kubelet/pods/53a71c98-4d2e-4aad-908d-0414cc8db1d7/volumes" Oct 14 10:24:18 crc kubenswrapper[4698]: I1014 10:24:18.056180 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-sync-mnqnk"] Oct 14 10:24:18 crc kubenswrapper[4698]: I1014 10:24:18.080444 4698 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-sync-mnqnk"] Oct 14 10:24:19 crc kubenswrapper[4698]: I1014 10:24:19.027278 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="575ff60e-e52b-40bf-8429-ac5c464ed1ce" path="/var/lib/kubelet/pods/575ff60e-e52b-40bf-8429-ac5c464ed1ce/volumes" Oct 14 10:24:23 crc kubenswrapper[4698]: I1014 10:24:23.908440 4698 patch_prober.go:28] interesting pod/machine-config-daemon-lp4sk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 14 10:24:23 crc kubenswrapper[4698]: I1014 10:24:23.909028 4698 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-lp4sk" podUID="c359a8fc-1e2f-49af-8da2-719d52bd969a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 14 10:24:26 crc kubenswrapper[4698]: I1014 10:24:26.042319 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-18d1-account-create-bjrtb"] Oct 14 10:24:26 crc kubenswrapper[4698]: I1014 10:24:26.055187 4698 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-18d1-account-create-bjrtb"] Oct 14 10:24:27 crc kubenswrapper[4698]: I1014 10:24:27.028975 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="ae8ed93f-2876-4954-89f6-e169a445631d" path="/var/lib/kubelet/pods/ae8ed93f-2876-4954-89f6-e169a445631d/volumes" Oct 14 10:24:27 crc kubenswrapper[4698]: I1014 10:24:27.746787 4698 generic.go:334] "Generic (PLEG): container finished" podID="0e135199-5913-440f-a291-4252ae734b96" containerID="76f7053e2596144592cfea3f8e26d15d13e3e6374012dbf998e69635ff2e702c" exitCode=0 Oct 14 10:24:27 crc kubenswrapper[4698]: I1014 10:24:27.746801 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-zln6c" event={"ID":"0e135199-5913-440f-a291-4252ae734b96","Type":"ContainerDied","Data":"76f7053e2596144592cfea3f8e26d15d13e3e6374012dbf998e69635ff2e702c"} Oct 14 10:24:29 crc kubenswrapper[4698]: I1014 10:24:29.307077 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-zln6c" Oct 14 10:24:29 crc kubenswrapper[4698]: I1014 10:24:29.425151 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0e135199-5913-440f-a291-4252ae734b96-inventory\") pod \"0e135199-5913-440f-a291-4252ae734b96\" (UID: \"0e135199-5913-440f-a291-4252ae734b96\") " Oct 14 10:24:29 crc kubenswrapper[4698]: I1014 10:24:29.425847 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/0e135199-5913-440f-a291-4252ae734b96-ssh-key\") pod \"0e135199-5913-440f-a291-4252ae734b96\" (UID: \"0e135199-5913-440f-a291-4252ae734b96\") " Oct 14 10:24:29 crc kubenswrapper[4698]: I1014 10:24:29.425939 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f8qfk\" (UniqueName: \"kubernetes.io/projected/0e135199-5913-440f-a291-4252ae734b96-kube-api-access-f8qfk\") pod \"0e135199-5913-440f-a291-4252ae734b96\" (UID: \"0e135199-5913-440f-a291-4252ae734b96\") " Oct 14 10:24:29 crc 
kubenswrapper[4698]: I1014 10:24:29.431244 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0e135199-5913-440f-a291-4252ae734b96-kube-api-access-f8qfk" (OuterVolumeSpecName: "kube-api-access-f8qfk") pod "0e135199-5913-440f-a291-4252ae734b96" (UID: "0e135199-5913-440f-a291-4252ae734b96"). InnerVolumeSpecName "kube-api-access-f8qfk". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 10:24:29 crc kubenswrapper[4698]: I1014 10:24:29.471246 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0e135199-5913-440f-a291-4252ae734b96-inventory" (OuterVolumeSpecName: "inventory") pod "0e135199-5913-440f-a291-4252ae734b96" (UID: "0e135199-5913-440f-a291-4252ae734b96"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 10:24:29 crc kubenswrapper[4698]: I1014 10:24:29.477172 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0e135199-5913-440f-a291-4252ae734b96-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "0e135199-5913-440f-a291-4252ae734b96" (UID: "0e135199-5913-440f-a291-4252ae734b96"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 10:24:29 crc kubenswrapper[4698]: I1014 10:24:29.529358 4698 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/0e135199-5913-440f-a291-4252ae734b96-ssh-key\") on node \"crc\" DevicePath \"\"" Oct 14 10:24:29 crc kubenswrapper[4698]: I1014 10:24:29.529393 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f8qfk\" (UniqueName: \"kubernetes.io/projected/0e135199-5913-440f-a291-4252ae734b96-kube-api-access-f8qfk\") on node \"crc\" DevicePath \"\"" Oct 14 10:24:29 crc kubenswrapper[4698]: I1014 10:24:29.529405 4698 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0e135199-5913-440f-a291-4252ae734b96-inventory\") on node \"crc\" DevicePath \"\"" Oct 14 10:24:29 crc kubenswrapper[4698]: I1014 10:24:29.768973 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-zln6c" event={"ID":"0e135199-5913-440f-a291-4252ae734b96","Type":"ContainerDied","Data":"960978195b984a786554c5a4d422f1ff0d080cc28f9c6a5b9787020ffbce5dea"} Oct 14 10:24:29 crc kubenswrapper[4698]: I1014 10:24:29.769024 4698 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="960978195b984a786554c5a4d422f1ff0d080cc28f9c6a5b9787020ffbce5dea" Oct 14 10:24:29 crc kubenswrapper[4698]: I1014 10:24:29.769107 4698 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-zln6c" Oct 14 10:24:29 crc kubenswrapper[4698]: I1014 10:24:29.858273 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-sjtbb"] Oct 14 10:24:29 crc kubenswrapper[4698]: E1014 10:24:29.858708 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0e135199-5913-440f-a291-4252ae734b96" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Oct 14 10:24:29 crc kubenswrapper[4698]: I1014 10:24:29.858727 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="0e135199-5913-440f-a291-4252ae734b96" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Oct 14 10:24:29 crc kubenswrapper[4698]: I1014 10:24:29.858979 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="0e135199-5913-440f-a291-4252ae734b96" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Oct 14 10:24:29 crc kubenswrapper[4698]: I1014 10:24:29.859658 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-sjtbb" Oct 14 10:24:29 crc kubenswrapper[4698]: I1014 10:24:29.863342 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Oct 14 10:24:29 crc kubenswrapper[4698]: I1014 10:24:29.863659 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Oct 14 10:24:29 crc kubenswrapper[4698]: I1014 10:24:29.863856 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-q5blv" Oct 14 10:24:29 crc kubenswrapper[4698]: I1014 10:24:29.863857 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Oct 14 10:24:29 crc kubenswrapper[4698]: I1014 10:24:29.870059 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-sjtbb"] Oct 14 10:24:29 crc kubenswrapper[4698]: I1014 10:24:29.937103 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/43529126-1bd9-4a80-bf14-99b218ef939c-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-sjtbb\" (UID: \"43529126-1bd9-4a80-bf14-99b218ef939c\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-sjtbb" Oct 14 10:24:29 crc kubenswrapper[4698]: I1014 10:24:29.937154 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/43529126-1bd9-4a80-bf14-99b218ef939c-ssh-key\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-sjtbb\" (UID: \"43529126-1bd9-4a80-bf14-99b218ef939c\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-sjtbb" Oct 14 10:24:29 crc kubenswrapper[4698]: I1014 10:24:29.937274 4698 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xz8hb\" (UniqueName: \"kubernetes.io/projected/43529126-1bd9-4a80-bf14-99b218ef939c-kube-api-access-xz8hb\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-sjtbb\" (UID: \"43529126-1bd9-4a80-bf14-99b218ef939c\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-sjtbb" Oct 14 10:24:30 crc kubenswrapper[4698]: I1014 10:24:30.039731 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/43529126-1bd9-4a80-bf14-99b218ef939c-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-sjtbb\" (UID: \"43529126-1bd9-4a80-bf14-99b218ef939c\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-sjtbb" Oct 14 10:24:30 crc kubenswrapper[4698]: I1014 10:24:30.039790 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/43529126-1bd9-4a80-bf14-99b218ef939c-ssh-key\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-sjtbb\" (UID: \"43529126-1bd9-4a80-bf14-99b218ef939c\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-sjtbb" Oct 14 10:24:30 crc kubenswrapper[4698]: I1014 10:24:30.039886 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xz8hb\" (UniqueName: \"kubernetes.io/projected/43529126-1bd9-4a80-bf14-99b218ef939c-kube-api-access-xz8hb\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-sjtbb\" (UID: \"43529126-1bd9-4a80-bf14-99b218ef939c\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-sjtbb" Oct 14 10:24:30 crc kubenswrapper[4698]: I1014 10:24:30.044332 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/43529126-1bd9-4a80-bf14-99b218ef939c-ssh-key\") pod 
\"configure-network-edpm-deployment-openstack-edpm-ipam-sjtbb\" (UID: \"43529126-1bd9-4a80-bf14-99b218ef939c\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-sjtbb" Oct 14 10:24:30 crc kubenswrapper[4698]: I1014 10:24:30.046166 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/43529126-1bd9-4a80-bf14-99b218ef939c-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-sjtbb\" (UID: \"43529126-1bd9-4a80-bf14-99b218ef939c\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-sjtbb" Oct 14 10:24:30 crc kubenswrapper[4698]: I1014 10:24:30.055608 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xz8hb\" (UniqueName: \"kubernetes.io/projected/43529126-1bd9-4a80-bf14-99b218ef939c-kube-api-access-xz8hb\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-sjtbb\" (UID: \"43529126-1bd9-4a80-bf14-99b218ef939c\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-sjtbb" Oct 14 10:24:30 crc kubenswrapper[4698]: I1014 10:24:30.228093 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-sjtbb" Oct 14 10:24:30 crc kubenswrapper[4698]: I1014 10:24:30.807217 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-sjtbb"] Oct 14 10:24:31 crc kubenswrapper[4698]: I1014 10:24:31.797637 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-sjtbb" event={"ID":"43529126-1bd9-4a80-bf14-99b218ef939c","Type":"ContainerStarted","Data":"887c4dd3ceb302f597417b40ad3ca05599acec0cbefe41c7e5d22574d87d68e8"} Oct 14 10:24:31 crc kubenswrapper[4698]: I1014 10:24:31.798395 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-sjtbb" event={"ID":"43529126-1bd9-4a80-bf14-99b218ef939c","Type":"ContainerStarted","Data":"e35cfd6b6b538829740ef71b8c0f1924eca512206a5103b169cf9ead81d42bd7"} Oct 14 10:24:31 crc kubenswrapper[4698]: I1014 10:24:31.820625 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-sjtbb" podStartSLOduration=2.262744317 podStartE2EDuration="2.820604434s" podCreationTimestamp="2025-10-14 10:24:29 +0000 UTC" firstStartedPulling="2025-10-14 10:24:30.817354584 +0000 UTC m=+1652.514654000" lastFinishedPulling="2025-10-14 10:24:31.375214661 +0000 UTC m=+1653.072514117" observedRunningTime="2025-10-14 10:24:31.817976399 +0000 UTC m=+1653.515275915" watchObservedRunningTime="2025-10-14 10:24:31.820604434 +0000 UTC m=+1653.517903850" Oct 14 10:24:32 crc kubenswrapper[4698]: I1014 10:24:32.040575 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/manila-5da3-account-create-z7lfc"] Oct 14 10:24:32 crc kubenswrapper[4698]: I1014 10:24:32.052034 4698 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/manila-5da3-account-create-z7lfc"] Oct 14 10:24:32 
crc kubenswrapper[4698]: I1014 10:24:32.062319 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-83ed-account-create-jdm2f"] Oct 14 10:24:32 crc kubenswrapper[4698]: I1014 10:24:32.070397 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-c711-account-create-sh8sx"] Oct 14 10:24:32 crc kubenswrapper[4698]: I1014 10:24:32.078289 4698 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-83ed-account-create-jdm2f"] Oct 14 10:24:32 crc kubenswrapper[4698]: I1014 10:24:32.086054 4698 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-c711-account-create-sh8sx"] Oct 14 10:24:33 crc kubenswrapper[4698]: I1014 10:24:33.034317 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0ce16763-8bd4-4dfc-a5cb-975622a0bb5e" path="/var/lib/kubelet/pods/0ce16763-8bd4-4dfc-a5cb-975622a0bb5e/volumes" Oct 14 10:24:33 crc kubenswrapper[4698]: I1014 10:24:33.036027 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="674b9a2a-192d-4f43-b2c8-bfb55a2775fe" path="/var/lib/kubelet/pods/674b9a2a-192d-4f43-b2c8-bfb55a2775fe/volumes" Oct 14 10:24:33 crc kubenswrapper[4698]: I1014 10:24:33.036536 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b1dfd964-476d-40ea-942f-0f2ef2a6314f" path="/var/lib/kubelet/pods/b1dfd964-476d-40ea-942f-0f2ef2a6314f/volumes" Oct 14 10:24:34 crc kubenswrapper[4698]: I1014 10:24:34.828500 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-4r85p"] Oct 14 10:24:34 crc kubenswrapper[4698]: I1014 10:24:34.831983 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-4r85p" Oct 14 10:24:34 crc kubenswrapper[4698]: I1014 10:24:34.860301 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-4r85p"] Oct 14 10:24:34 crc kubenswrapper[4698]: I1014 10:24:34.970191 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/48bfadf8-08ed-4688-917d-818b9f91abcf-utilities\") pod \"redhat-operators-4r85p\" (UID: \"48bfadf8-08ed-4688-917d-818b9f91abcf\") " pod="openshift-marketplace/redhat-operators-4r85p" Oct 14 10:24:34 crc kubenswrapper[4698]: I1014 10:24:34.970295 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tgkps\" (UniqueName: \"kubernetes.io/projected/48bfadf8-08ed-4688-917d-818b9f91abcf-kube-api-access-tgkps\") pod \"redhat-operators-4r85p\" (UID: \"48bfadf8-08ed-4688-917d-818b9f91abcf\") " pod="openshift-marketplace/redhat-operators-4r85p" Oct 14 10:24:34 crc kubenswrapper[4698]: I1014 10:24:34.970393 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/48bfadf8-08ed-4688-917d-818b9f91abcf-catalog-content\") pod \"redhat-operators-4r85p\" (UID: \"48bfadf8-08ed-4688-917d-818b9f91abcf\") " pod="openshift-marketplace/redhat-operators-4r85p" Oct 14 10:24:35 crc kubenswrapper[4698]: I1014 10:24:35.072937 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/48bfadf8-08ed-4688-917d-818b9f91abcf-utilities\") pod \"redhat-operators-4r85p\" (UID: \"48bfadf8-08ed-4688-917d-818b9f91abcf\") " pod="openshift-marketplace/redhat-operators-4r85p" Oct 14 10:24:35 crc kubenswrapper[4698]: I1014 10:24:35.073129 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-tgkps\" (UniqueName: \"kubernetes.io/projected/48bfadf8-08ed-4688-917d-818b9f91abcf-kube-api-access-tgkps\") pod \"redhat-operators-4r85p\" (UID: \"48bfadf8-08ed-4688-917d-818b9f91abcf\") " pod="openshift-marketplace/redhat-operators-4r85p" Oct 14 10:24:35 crc kubenswrapper[4698]: I1014 10:24:35.073537 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/48bfadf8-08ed-4688-917d-818b9f91abcf-catalog-content\") pod \"redhat-operators-4r85p\" (UID: \"48bfadf8-08ed-4688-917d-818b9f91abcf\") " pod="openshift-marketplace/redhat-operators-4r85p" Oct 14 10:24:35 crc kubenswrapper[4698]: I1014 10:24:35.073699 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/48bfadf8-08ed-4688-917d-818b9f91abcf-utilities\") pod \"redhat-operators-4r85p\" (UID: \"48bfadf8-08ed-4688-917d-818b9f91abcf\") " pod="openshift-marketplace/redhat-operators-4r85p" Oct 14 10:24:35 crc kubenswrapper[4698]: I1014 10:24:35.073953 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/48bfadf8-08ed-4688-917d-818b9f91abcf-catalog-content\") pod \"redhat-operators-4r85p\" (UID: \"48bfadf8-08ed-4688-917d-818b9f91abcf\") " pod="openshift-marketplace/redhat-operators-4r85p" Oct 14 10:24:35 crc kubenswrapper[4698]: I1014 10:24:35.105118 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tgkps\" (UniqueName: \"kubernetes.io/projected/48bfadf8-08ed-4688-917d-818b9f91abcf-kube-api-access-tgkps\") pod \"redhat-operators-4r85p\" (UID: \"48bfadf8-08ed-4688-917d-818b9f91abcf\") " pod="openshift-marketplace/redhat-operators-4r85p" Oct 14 10:24:35 crc kubenswrapper[4698]: I1014 10:24:35.165299 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-4r85p" Oct 14 10:24:35 crc kubenswrapper[4698]: I1014 10:24:35.633796 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-4r85p"] Oct 14 10:24:35 crc kubenswrapper[4698]: I1014 10:24:35.840886 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4r85p" event={"ID":"48bfadf8-08ed-4688-917d-818b9f91abcf","Type":"ContainerStarted","Data":"93c2037a5dc9991742ef89cd3851b237b23c182e52d816ca963fb3c02e5f68f0"} Oct 14 10:24:36 crc kubenswrapper[4698]: I1014 10:24:36.855551 4698 generic.go:334] "Generic (PLEG): container finished" podID="48bfadf8-08ed-4688-917d-818b9f91abcf" containerID="b8cc5575e773cfb75bacb514e8ac342b110946976e6d3615d6b90c2b07fe2f7c" exitCode=0 Oct 14 10:24:36 crc kubenswrapper[4698]: I1014 10:24:36.855663 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4r85p" event={"ID":"48bfadf8-08ed-4688-917d-818b9f91abcf","Type":"ContainerDied","Data":"b8cc5575e773cfb75bacb514e8ac342b110946976e6d3615d6b90c2b07fe2f7c"} Oct 14 10:24:43 crc kubenswrapper[4698]: I1014 10:24:43.047956 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-sync-2qdqz"] Oct 14 10:24:43 crc kubenswrapper[4698]: I1014 10:24:43.059622 4698 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-sync-2qdqz"] Oct 14 10:24:45 crc kubenswrapper[4698]: I1014 10:24:45.031394 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="19d4fb38-f09b-4383-adfc-12bb06107bfb" path="/var/lib/kubelet/pods/19d4fb38-f09b-4383-adfc-12bb06107bfb/volumes" Oct 14 10:24:47 crc kubenswrapper[4698]: I1014 10:24:47.987345 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4r85p" 
event={"ID":"48bfadf8-08ed-4688-917d-818b9f91abcf","Type":"ContainerStarted","Data":"4dba65c9fa0dc985f0f12941a7509f80666fd22363532db3a2ffebcfa24615a1"} Oct 14 10:24:50 crc kubenswrapper[4698]: I1014 10:24:50.038072 4698 generic.go:334] "Generic (PLEG): container finished" podID="48bfadf8-08ed-4688-917d-818b9f91abcf" containerID="4dba65c9fa0dc985f0f12941a7509f80666fd22363532db3a2ffebcfa24615a1" exitCode=0 Oct 14 10:24:50 crc kubenswrapper[4698]: I1014 10:24:50.038165 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4r85p" event={"ID":"48bfadf8-08ed-4688-917d-818b9f91abcf","Type":"ContainerDied","Data":"4dba65c9fa0dc985f0f12941a7509f80666fd22363532db3a2ffebcfa24615a1"} Oct 14 10:24:51 crc kubenswrapper[4698]: I1014 10:24:51.050523 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4r85p" event={"ID":"48bfadf8-08ed-4688-917d-818b9f91abcf","Type":"ContainerStarted","Data":"90b3d413469c9b9cd4eba26dea55e57fa2113cbb525f25f1c099147c62728bde"} Oct 14 10:24:51 crc kubenswrapper[4698]: I1014 10:24:51.079805 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-4r85p" podStartSLOduration=3.363552862 podStartE2EDuration="17.079781586s" podCreationTimestamp="2025-10-14 10:24:34 +0000 UTC" firstStartedPulling="2025-10-14 10:24:36.85888101 +0000 UTC m=+1658.556180426" lastFinishedPulling="2025-10-14 10:24:50.575109734 +0000 UTC m=+1672.272409150" observedRunningTime="2025-10-14 10:24:51.069841432 +0000 UTC m=+1672.767140858" watchObservedRunningTime="2025-10-14 10:24:51.079781586 +0000 UTC m=+1672.777081022" Oct 14 10:24:52 crc kubenswrapper[4698]: I1014 10:24:52.057725 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-xjff6"] Oct 14 10:24:52 crc kubenswrapper[4698]: I1014 10:24:52.067899 4698 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-xjff6"] 
Oct 14 10:24:53 crc kubenswrapper[4698]: I1014 10:24:53.037605 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="30712ba4-9217-4276-b576-798bfd319b45" path="/var/lib/kubelet/pods/30712ba4-9217-4276-b576-798bfd319b45/volumes" Oct 14 10:24:53 crc kubenswrapper[4698]: I1014 10:24:53.908234 4698 patch_prober.go:28] interesting pod/machine-config-daemon-lp4sk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 14 10:24:53 crc kubenswrapper[4698]: I1014 10:24:53.908329 4698 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-lp4sk" podUID="c359a8fc-1e2f-49af-8da2-719d52bd969a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 14 10:24:53 crc kubenswrapper[4698]: I1014 10:24:53.908386 4698 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-lp4sk" Oct 14 10:24:53 crc kubenswrapper[4698]: I1014 10:24:53.908984 4698 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"87bba9ea9ee8dd67f565bc4ea643f9394d7612f02668f96d9a0e0f0371644851"} pod="openshift-machine-config-operator/machine-config-daemon-lp4sk" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Oct 14 10:24:53 crc kubenswrapper[4698]: I1014 10:24:53.909040 4698 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-lp4sk" podUID="c359a8fc-1e2f-49af-8da2-719d52bd969a" containerName="machine-config-daemon" 
containerID="cri-o://87bba9ea9ee8dd67f565bc4ea643f9394d7612f02668f96d9a0e0f0371644851" gracePeriod=600 Oct 14 10:24:54 crc kubenswrapper[4698]: E1014 10:24:54.037197 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lp4sk_openshift-machine-config-operator(c359a8fc-1e2f-49af-8da2-719d52bd969a)\"" pod="openshift-machine-config-operator/machine-config-daemon-lp4sk" podUID="c359a8fc-1e2f-49af-8da2-719d52bd969a" Oct 14 10:24:54 crc kubenswrapper[4698]: I1014 10:24:54.098418 4698 generic.go:334] "Generic (PLEG): container finished" podID="c359a8fc-1e2f-49af-8da2-719d52bd969a" containerID="87bba9ea9ee8dd67f565bc4ea643f9394d7612f02668f96d9a0e0f0371644851" exitCode=0 Oct 14 10:24:54 crc kubenswrapper[4698]: I1014 10:24:54.098470 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-lp4sk" event={"ID":"c359a8fc-1e2f-49af-8da2-719d52bd969a","Type":"ContainerDied","Data":"87bba9ea9ee8dd67f565bc4ea643f9394d7612f02668f96d9a0e0f0371644851"} Oct 14 10:24:54 crc kubenswrapper[4698]: I1014 10:24:54.098518 4698 scope.go:117] "RemoveContainer" containerID="63010ab0cc5421cf695e29fbbb1f6887fbbb050b898692330d5d62f331b0158a" Oct 14 10:24:54 crc kubenswrapper[4698]: I1014 10:24:54.099410 4698 scope.go:117] "RemoveContainer" containerID="87bba9ea9ee8dd67f565bc4ea643f9394d7612f02668f96d9a0e0f0371644851" Oct 14 10:24:54 crc kubenswrapper[4698]: E1014 10:24:54.099883 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lp4sk_openshift-machine-config-operator(c359a8fc-1e2f-49af-8da2-719d52bd969a)\"" pod="openshift-machine-config-operator/machine-config-daemon-lp4sk" 
podUID="c359a8fc-1e2f-49af-8da2-719d52bd969a" Oct 14 10:24:55 crc kubenswrapper[4698]: I1014 10:24:55.165591 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-4r85p" Oct 14 10:24:55 crc kubenswrapper[4698]: I1014 10:24:55.165958 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-4r85p" Oct 14 10:24:56 crc kubenswrapper[4698]: I1014 10:24:56.216620 4698 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-4r85p" podUID="48bfadf8-08ed-4688-917d-818b9f91abcf" containerName="registry-server" probeResult="failure" output=< Oct 14 10:24:56 crc kubenswrapper[4698]: timeout: failed to connect service ":50051" within 1s Oct 14 10:24:56 crc kubenswrapper[4698]: > Oct 14 10:25:05 crc kubenswrapper[4698]: I1014 10:25:05.037098 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-sync-9c4js"] Oct 14 10:25:05 crc kubenswrapper[4698]: I1014 10:25:05.046540 4698 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-sync-9c4js"] Oct 14 10:25:05 crc kubenswrapper[4698]: I1014 10:25:05.229663 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-4r85p" Oct 14 10:25:05 crc kubenswrapper[4698]: I1014 10:25:05.300184 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-4r85p" Oct 14 10:25:05 crc kubenswrapper[4698]: I1014 10:25:05.873387 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-4r85p"] Oct 14 10:25:06 crc kubenswrapper[4698]: I1014 10:25:06.014961 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-2q44t"] Oct 14 10:25:06 crc kubenswrapper[4698]: I1014 10:25:06.015222 4698 kuberuntime_container.go:808] "Killing container with a grace 
period" pod="openshift-marketplace/redhat-operators-2q44t" podUID="b2f2ecea-7bd2-4f73-84c5-16b5e65a0d01" containerName="registry-server" containerID="cri-o://5ebfb750dc4abd2501a3b51c10acb8adb9119acc736bd840f2cb016c64f9bb45" gracePeriod=2 Oct 14 10:25:06 crc kubenswrapper[4698]: I1014 10:25:06.058640 4698 scope.go:117] "RemoveContainer" containerID="87bba9ea9ee8dd67f565bc4ea643f9394d7612f02668f96d9a0e0f0371644851" Oct 14 10:25:06 crc kubenswrapper[4698]: E1014 10:25:06.059000 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lp4sk_openshift-machine-config-operator(c359a8fc-1e2f-49af-8da2-719d52bd969a)\"" pod="openshift-machine-config-operator/machine-config-daemon-lp4sk" podUID="c359a8fc-1e2f-49af-8da2-719d52bd969a" Oct 14 10:25:06 crc kubenswrapper[4698]: I1014 10:25:06.224692 4698 generic.go:334] "Generic (PLEG): container finished" podID="b2f2ecea-7bd2-4f73-84c5-16b5e65a0d01" containerID="5ebfb750dc4abd2501a3b51c10acb8adb9119acc736bd840f2cb016c64f9bb45" exitCode=0 Oct 14 10:25:06 crc kubenswrapper[4698]: I1014 10:25:06.224794 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2q44t" event={"ID":"b2f2ecea-7bd2-4f73-84c5-16b5e65a0d01","Type":"ContainerDied","Data":"5ebfb750dc4abd2501a3b51c10acb8adb9119acc736bd840f2cb016c64f9bb45"} Oct 14 10:25:06 crc kubenswrapper[4698]: I1014 10:25:06.466126 4698 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-2q44t" Oct 14 10:25:06 crc kubenswrapper[4698]: I1014 10:25:06.592700 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b2f2ecea-7bd2-4f73-84c5-16b5e65a0d01-utilities\") pod \"b2f2ecea-7bd2-4f73-84c5-16b5e65a0d01\" (UID: \"b2f2ecea-7bd2-4f73-84c5-16b5e65a0d01\") " Oct 14 10:25:06 crc kubenswrapper[4698]: I1014 10:25:06.592936 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b2f2ecea-7bd2-4f73-84c5-16b5e65a0d01-catalog-content\") pod \"b2f2ecea-7bd2-4f73-84c5-16b5e65a0d01\" (UID: \"b2f2ecea-7bd2-4f73-84c5-16b5e65a0d01\") " Oct 14 10:25:06 crc kubenswrapper[4698]: I1014 10:25:06.593010 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qfjvx\" (UniqueName: \"kubernetes.io/projected/b2f2ecea-7bd2-4f73-84c5-16b5e65a0d01-kube-api-access-qfjvx\") pod \"b2f2ecea-7bd2-4f73-84c5-16b5e65a0d01\" (UID: \"b2f2ecea-7bd2-4f73-84c5-16b5e65a0d01\") " Oct 14 10:25:06 crc kubenswrapper[4698]: I1014 10:25:06.593293 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b2f2ecea-7bd2-4f73-84c5-16b5e65a0d01-utilities" (OuterVolumeSpecName: "utilities") pod "b2f2ecea-7bd2-4f73-84c5-16b5e65a0d01" (UID: "b2f2ecea-7bd2-4f73-84c5-16b5e65a0d01"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 10:25:06 crc kubenswrapper[4698]: I1014 10:25:06.593749 4698 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b2f2ecea-7bd2-4f73-84c5-16b5e65a0d01-utilities\") on node \"crc\" DevicePath \"\"" Oct 14 10:25:06 crc kubenswrapper[4698]: I1014 10:25:06.598702 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b2f2ecea-7bd2-4f73-84c5-16b5e65a0d01-kube-api-access-qfjvx" (OuterVolumeSpecName: "kube-api-access-qfjvx") pod "b2f2ecea-7bd2-4f73-84c5-16b5e65a0d01" (UID: "b2f2ecea-7bd2-4f73-84c5-16b5e65a0d01"). InnerVolumeSpecName "kube-api-access-qfjvx". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 10:25:06 crc kubenswrapper[4698]: I1014 10:25:06.695530 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b2f2ecea-7bd2-4f73-84c5-16b5e65a0d01-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b2f2ecea-7bd2-4f73-84c5-16b5e65a0d01" (UID: "b2f2ecea-7bd2-4f73-84c5-16b5e65a0d01"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 10:25:06 crc kubenswrapper[4698]: I1014 10:25:06.696603 4698 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b2f2ecea-7bd2-4f73-84c5-16b5e65a0d01-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 14 10:25:06 crc kubenswrapper[4698]: I1014 10:25:06.696638 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qfjvx\" (UniqueName: \"kubernetes.io/projected/b2f2ecea-7bd2-4f73-84c5-16b5e65a0d01-kube-api-access-qfjvx\") on node \"crc\" DevicePath \"\"" Oct 14 10:25:07 crc kubenswrapper[4698]: I1014 10:25:07.032872 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2abd1f71-b2d4-4c95-898c-bcfe99b2acf5" path="/var/lib/kubelet/pods/2abd1f71-b2d4-4c95-898c-bcfe99b2acf5/volumes" Oct 14 10:25:07 crc kubenswrapper[4698]: I1014 10:25:07.235457 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2q44t" event={"ID":"b2f2ecea-7bd2-4f73-84c5-16b5e65a0d01","Type":"ContainerDied","Data":"598e35ee9aae805a4d54ced6efd192448709628431a5de7e2654d7272168228b"} Oct 14 10:25:07 crc kubenswrapper[4698]: I1014 10:25:07.235545 4698 scope.go:117] "RemoveContainer" containerID="5ebfb750dc4abd2501a3b51c10acb8adb9119acc736bd840f2cb016c64f9bb45" Oct 14 10:25:07 crc kubenswrapper[4698]: I1014 10:25:07.235484 4698 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-2q44t" Oct 14 10:25:07 crc kubenswrapper[4698]: I1014 10:25:07.271117 4698 scope.go:117] "RemoveContainer" containerID="3f97fb12cfa6569761381253ac201b1e5bbd4e554caa438a8a6e12d5b06f701b" Oct 14 10:25:07 crc kubenswrapper[4698]: I1014 10:25:07.280823 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-2q44t"] Oct 14 10:25:07 crc kubenswrapper[4698]: I1014 10:25:07.295403 4698 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-2q44t"] Oct 14 10:25:07 crc kubenswrapper[4698]: I1014 10:25:07.340967 4698 scope.go:117] "RemoveContainer" containerID="aecdd8963aa9ee4ad54beaacd9c9e0243702fcac5ee0f323fa517fc3768bab24" Oct 14 10:25:09 crc kubenswrapper[4698]: I1014 10:25:09.030235 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b2f2ecea-7bd2-4f73-84c5-16b5e65a0d01" path="/var/lib/kubelet/pods/b2f2ecea-7bd2-4f73-84c5-16b5e65a0d01/volumes" Oct 14 10:25:09 crc kubenswrapper[4698]: I1014 10:25:09.290513 4698 scope.go:117] "RemoveContainer" containerID="e3ba6222cb8d218f88eef3bb6abd80a45ecdec3b016ecb1a8dec4be7fb47dd4b" Oct 14 10:25:09 crc kubenswrapper[4698]: I1014 10:25:09.349811 4698 scope.go:117] "RemoveContainer" containerID="3318db8fca5ac380f68901deacccf2d9292c6d18ee6bdda94c5972bcc822501e" Oct 14 10:25:09 crc kubenswrapper[4698]: I1014 10:25:09.374558 4698 scope.go:117] "RemoveContainer" containerID="b6109c6401d02c7b196afcba8f862f1ca5650abd7cccb405945e2a4e1737d4d0" Oct 14 10:25:09 crc kubenswrapper[4698]: I1014 10:25:09.460000 4698 scope.go:117] "RemoveContainer" containerID="8119cd8db487dac30f61f07f4f6a727ccc21078985996940a5f3e33beb45c984" Oct 14 10:25:09 crc kubenswrapper[4698]: I1014 10:25:09.496239 4698 scope.go:117] "RemoveContainer" containerID="7c0ce52fbf295c5fcd33ffb01f39e076b2c095c359acf1596be11ed8efe5e173" Oct 14 10:25:09 crc kubenswrapper[4698]: I1014 10:25:09.541058 4698 
scope.go:117] "RemoveContainer" containerID="fac04a59e9c34c55331742431bff7be1a62d1bb8f98506c78592735b4822928a" Oct 14 10:25:09 crc kubenswrapper[4698]: I1014 10:25:09.594200 4698 scope.go:117] "RemoveContainer" containerID="ab648946264351d69768f3d5b12848c87f0d49bd8222dbc16c71007d44fc23f0" Oct 14 10:25:09 crc kubenswrapper[4698]: I1014 10:25:09.629020 4698 scope.go:117] "RemoveContainer" containerID="ebab573d6e1e42ee0c9f9bc8801604653277e7071af495aed2bf2fdcb93bcc82" Oct 14 10:25:09 crc kubenswrapper[4698]: I1014 10:25:09.665125 4698 scope.go:117] "RemoveContainer" containerID="bf3474ee1154961d298a74c907031a1ad35c67f5be7bf603ba590b32a6b34985" Oct 14 10:25:09 crc kubenswrapper[4698]: I1014 10:25:09.685246 4698 scope.go:117] "RemoveContainer" containerID="51f6b710c2bc8b09c94662c8c7962dc21b14702f93fbe6d5919e832065135ca3" Oct 14 10:25:09 crc kubenswrapper[4698]: I1014 10:25:09.727818 4698 scope.go:117] "RemoveContainer" containerID="42bf3c73ba3327efa826fffd6a5a8544f7e45d4ea8dd0cddfc97faf1402c257d" Oct 14 10:25:09 crc kubenswrapper[4698]: I1014 10:25:09.765299 4698 scope.go:117] "RemoveContainer" containerID="cd53c12dd7f654493c82f3fdcfb6921fa68fb99468cd6449474b7eefb0b3cd72" Oct 14 10:25:09 crc kubenswrapper[4698]: I1014 10:25:09.785890 4698 scope.go:117] "RemoveContainer" containerID="c1d7114f97e2cc88c1ad000f67176aff33c47e2aebbed4a23c4ab10c754c6022" Oct 14 10:25:19 crc kubenswrapper[4698]: I1014 10:25:19.027156 4698 scope.go:117] "RemoveContainer" containerID="87bba9ea9ee8dd67f565bc4ea643f9394d7612f02668f96d9a0e0f0371644851" Oct 14 10:25:19 crc kubenswrapper[4698]: E1014 10:25:19.028137 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lp4sk_openshift-machine-config-operator(c359a8fc-1e2f-49af-8da2-719d52bd969a)\"" pod="openshift-machine-config-operator/machine-config-daemon-lp4sk" 
podUID="c359a8fc-1e2f-49af-8da2-719d52bd969a" Oct 14 10:25:22 crc kubenswrapper[4698]: I1014 10:25:22.042779 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/manila-db-sync-qfgt5"] Oct 14 10:25:22 crc kubenswrapper[4698]: I1014 10:25:22.050878 4698 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/manila-db-sync-qfgt5"] Oct 14 10:25:23 crc kubenswrapper[4698]: I1014 10:25:23.034064 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b034a777-04ce-4fe1-baf0-7dd68c64b31f" path="/var/lib/kubelet/pods/b034a777-04ce-4fe1-baf0-7dd68c64b31f/volumes" Oct 14 10:25:23 crc kubenswrapper[4698]: I1014 10:25:23.052702 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-sync-nbmlr"] Oct 14 10:25:23 crc kubenswrapper[4698]: I1014 10:25:23.064516 4698 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-sync-nbmlr"] Oct 14 10:25:25 crc kubenswrapper[4698]: I1014 10:25:25.031825 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="94ea6a4b-4e73-4a81-bb25-8ae62bf7daa9" path="/var/lib/kubelet/pods/94ea6a4b-4e73-4a81-bb25-8ae62bf7daa9/volumes" Oct 14 10:25:25 crc kubenswrapper[4698]: I1014 10:25:25.033917 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-sync-thrh8"] Oct 14 10:25:25 crc kubenswrapper[4698]: I1014 10:25:25.041465 4698 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-sync-thrh8"] Oct 14 10:25:27 crc kubenswrapper[4698]: I1014 10:25:27.036924 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d90a3be7-6827-427d-9ed1-3aef79542b6d" path="/var/lib/kubelet/pods/d90a3be7-6827-427d-9ed1-3aef79542b6d/volumes" Oct 14 10:25:32 crc kubenswrapper[4698]: I1014 10:25:32.018945 4698 scope.go:117] "RemoveContainer" containerID="87bba9ea9ee8dd67f565bc4ea643f9394d7612f02668f96d9a0e0f0371644851" Oct 14 10:25:32 crc kubenswrapper[4698]: E1014 10:25:32.022041 4698 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lp4sk_openshift-machine-config-operator(c359a8fc-1e2f-49af-8da2-719d52bd969a)\"" pod="openshift-machine-config-operator/machine-config-daemon-lp4sk" podUID="c359a8fc-1e2f-49af-8da2-719d52bd969a" Oct 14 10:25:44 crc kubenswrapper[4698]: I1014 10:25:44.714837 4698 generic.go:334] "Generic (PLEG): container finished" podID="43529126-1bd9-4a80-bf14-99b218ef939c" containerID="887c4dd3ceb302f597417b40ad3ca05599acec0cbefe41c7e5d22574d87d68e8" exitCode=0 Oct 14 10:25:44 crc kubenswrapper[4698]: I1014 10:25:44.715298 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-sjtbb" event={"ID":"43529126-1bd9-4a80-bf14-99b218ef939c","Type":"ContainerDied","Data":"887c4dd3ceb302f597417b40ad3ca05599acec0cbefe41c7e5d22574d87d68e8"} Oct 14 10:25:46 crc kubenswrapper[4698]: I1014 10:25:46.017880 4698 scope.go:117] "RemoveContainer" containerID="87bba9ea9ee8dd67f565bc4ea643f9394d7612f02668f96d9a0e0f0371644851" Oct 14 10:25:46 crc kubenswrapper[4698]: E1014 10:25:46.019160 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lp4sk_openshift-machine-config-operator(c359a8fc-1e2f-49af-8da2-719d52bd969a)\"" pod="openshift-machine-config-operator/machine-config-daemon-lp4sk" podUID="c359a8fc-1e2f-49af-8da2-719d52bd969a" Oct 14 10:25:46 crc kubenswrapper[4698]: I1014 10:25:46.224554 4698 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-sjtbb" Oct 14 10:25:46 crc kubenswrapper[4698]: I1014 10:25:46.406495 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xz8hb\" (UniqueName: \"kubernetes.io/projected/43529126-1bd9-4a80-bf14-99b218ef939c-kube-api-access-xz8hb\") pod \"43529126-1bd9-4a80-bf14-99b218ef939c\" (UID: \"43529126-1bd9-4a80-bf14-99b218ef939c\") " Oct 14 10:25:46 crc kubenswrapper[4698]: I1014 10:25:46.407011 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/43529126-1bd9-4a80-bf14-99b218ef939c-inventory\") pod \"43529126-1bd9-4a80-bf14-99b218ef939c\" (UID: \"43529126-1bd9-4a80-bf14-99b218ef939c\") " Oct 14 10:25:46 crc kubenswrapper[4698]: I1014 10:25:46.407362 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/43529126-1bd9-4a80-bf14-99b218ef939c-ssh-key\") pod \"43529126-1bd9-4a80-bf14-99b218ef939c\" (UID: \"43529126-1bd9-4a80-bf14-99b218ef939c\") " Oct 14 10:25:46 crc kubenswrapper[4698]: I1014 10:25:46.415576 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43529126-1bd9-4a80-bf14-99b218ef939c-kube-api-access-xz8hb" (OuterVolumeSpecName: "kube-api-access-xz8hb") pod "43529126-1bd9-4a80-bf14-99b218ef939c" (UID: "43529126-1bd9-4a80-bf14-99b218ef939c"). InnerVolumeSpecName "kube-api-access-xz8hb". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 10:25:46 crc kubenswrapper[4698]: I1014 10:25:46.445744 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43529126-1bd9-4a80-bf14-99b218ef939c-inventory" (OuterVolumeSpecName: "inventory") pod "43529126-1bd9-4a80-bf14-99b218ef939c" (UID: "43529126-1bd9-4a80-bf14-99b218ef939c"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 10:25:46 crc kubenswrapper[4698]: I1014 10:25:46.453643 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43529126-1bd9-4a80-bf14-99b218ef939c-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "43529126-1bd9-4a80-bf14-99b218ef939c" (UID: "43529126-1bd9-4a80-bf14-99b218ef939c"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 10:25:46 crc kubenswrapper[4698]: I1014 10:25:46.513008 4698 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/43529126-1bd9-4a80-bf14-99b218ef939c-ssh-key\") on node \"crc\" DevicePath \"\"" Oct 14 10:25:46 crc kubenswrapper[4698]: I1014 10:25:46.513796 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xz8hb\" (UniqueName: \"kubernetes.io/projected/43529126-1bd9-4a80-bf14-99b218ef939c-kube-api-access-xz8hb\") on node \"crc\" DevicePath \"\"" Oct 14 10:25:46 crc kubenswrapper[4698]: I1014 10:25:46.513827 4698 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/43529126-1bd9-4a80-bf14-99b218ef939c-inventory\") on node \"crc\" DevicePath \"\"" Oct 14 10:25:46 crc kubenswrapper[4698]: I1014 10:25:46.736174 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-sjtbb" event={"ID":"43529126-1bd9-4a80-bf14-99b218ef939c","Type":"ContainerDied","Data":"e35cfd6b6b538829740ef71b8c0f1924eca512206a5103b169cf9ead81d42bd7"} Oct 14 10:25:46 crc kubenswrapper[4698]: I1014 10:25:46.736223 4698 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e35cfd6b6b538829740ef71b8c0f1924eca512206a5103b169cf9ead81d42bd7" Oct 14 10:25:46 crc kubenswrapper[4698]: I1014 10:25:46.736324 4698 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-sjtbb" Oct 14 10:25:46 crc kubenswrapper[4698]: I1014 10:25:46.858727 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-7ngns"] Oct 14 10:25:46 crc kubenswrapper[4698]: E1014 10:25:46.859145 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b2f2ecea-7bd2-4f73-84c5-16b5e65a0d01" containerName="extract-content" Oct 14 10:25:46 crc kubenswrapper[4698]: I1014 10:25:46.859161 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="b2f2ecea-7bd2-4f73-84c5-16b5e65a0d01" containerName="extract-content" Oct 14 10:25:46 crc kubenswrapper[4698]: E1014 10:25:46.859182 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b2f2ecea-7bd2-4f73-84c5-16b5e65a0d01" containerName="extract-utilities" Oct 14 10:25:46 crc kubenswrapper[4698]: I1014 10:25:46.859190 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="b2f2ecea-7bd2-4f73-84c5-16b5e65a0d01" containerName="extract-utilities" Oct 14 10:25:46 crc kubenswrapper[4698]: E1014 10:25:46.859210 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b2f2ecea-7bd2-4f73-84c5-16b5e65a0d01" containerName="registry-server" Oct 14 10:25:46 crc kubenswrapper[4698]: I1014 10:25:46.859217 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="b2f2ecea-7bd2-4f73-84c5-16b5e65a0d01" containerName="registry-server" Oct 14 10:25:46 crc kubenswrapper[4698]: E1014 10:25:46.859246 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="43529126-1bd9-4a80-bf14-99b218ef939c" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Oct 14 10:25:46 crc kubenswrapper[4698]: I1014 10:25:46.859255 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="43529126-1bd9-4a80-bf14-99b218ef939c" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Oct 14 10:25:46 crc 
kubenswrapper[4698]: I1014 10:25:46.859440 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="b2f2ecea-7bd2-4f73-84c5-16b5e65a0d01" containerName="registry-server" Oct 14 10:25:46 crc kubenswrapper[4698]: I1014 10:25:46.859463 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="43529126-1bd9-4a80-bf14-99b218ef939c" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Oct 14 10:25:46 crc kubenswrapper[4698]: I1014 10:25:46.860098 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-7ngns" Oct 14 10:25:46 crc kubenswrapper[4698]: I1014 10:25:46.862796 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-q5blv" Oct 14 10:25:46 crc kubenswrapper[4698]: I1014 10:25:46.866572 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Oct 14 10:25:46 crc kubenswrapper[4698]: I1014 10:25:46.867074 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Oct 14 10:25:46 crc kubenswrapper[4698]: I1014 10:25:46.867178 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Oct 14 10:25:46 crc kubenswrapper[4698]: I1014 10:25:46.875302 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-7ngns"] Oct 14 10:25:46 crc kubenswrapper[4698]: I1014 10:25:46.921315 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/fc38db3e-e819-4f43-a14a-c83162ceb5fa-ssh-key\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-7ngns\" (UID: \"fc38db3e-e819-4f43-a14a-c83162ceb5fa\") " 
pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-7ngns" Oct 14 10:25:46 crc kubenswrapper[4698]: I1014 10:25:46.921446 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/fc38db3e-e819-4f43-a14a-c83162ceb5fa-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-7ngns\" (UID: \"fc38db3e-e819-4f43-a14a-c83162ceb5fa\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-7ngns" Oct 14 10:25:46 crc kubenswrapper[4698]: I1014 10:25:46.921491 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ccrjp\" (UniqueName: \"kubernetes.io/projected/fc38db3e-e819-4f43-a14a-c83162ceb5fa-kube-api-access-ccrjp\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-7ngns\" (UID: \"fc38db3e-e819-4f43-a14a-c83162ceb5fa\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-7ngns" Oct 14 10:25:47 crc kubenswrapper[4698]: I1014 10:25:47.022529 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/fc38db3e-e819-4f43-a14a-c83162ceb5fa-ssh-key\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-7ngns\" (UID: \"fc38db3e-e819-4f43-a14a-c83162ceb5fa\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-7ngns" Oct 14 10:25:47 crc kubenswrapper[4698]: I1014 10:25:47.022655 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/fc38db3e-e819-4f43-a14a-c83162ceb5fa-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-7ngns\" (UID: \"fc38db3e-e819-4f43-a14a-c83162ceb5fa\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-7ngns" Oct 14 10:25:47 crc kubenswrapper[4698]: I1014 10:25:47.022678 4698 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-ccrjp\" (UniqueName: \"kubernetes.io/projected/fc38db3e-e819-4f43-a14a-c83162ceb5fa-kube-api-access-ccrjp\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-7ngns\" (UID: \"fc38db3e-e819-4f43-a14a-c83162ceb5fa\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-7ngns" Oct 14 10:25:47 crc kubenswrapper[4698]: I1014 10:25:47.030895 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/fc38db3e-e819-4f43-a14a-c83162ceb5fa-ssh-key\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-7ngns\" (UID: \"fc38db3e-e819-4f43-a14a-c83162ceb5fa\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-7ngns" Oct 14 10:25:47 crc kubenswrapper[4698]: I1014 10:25:47.038568 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/fc38db3e-e819-4f43-a14a-c83162ceb5fa-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-7ngns\" (UID: \"fc38db3e-e819-4f43-a14a-c83162ceb5fa\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-7ngns" Oct 14 10:25:47 crc kubenswrapper[4698]: I1014 10:25:47.041618 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ccrjp\" (UniqueName: \"kubernetes.io/projected/fc38db3e-e819-4f43-a14a-c83162ceb5fa-kube-api-access-ccrjp\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-7ngns\" (UID: \"fc38db3e-e819-4f43-a14a-c83162ceb5fa\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-7ngns" Oct 14 10:25:47 crc kubenswrapper[4698]: I1014 10:25:47.202456 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-7ngns" Oct 14 10:25:47 crc kubenswrapper[4698]: I1014 10:25:47.802167 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-7ngns"] Oct 14 10:25:48 crc kubenswrapper[4698]: I1014 10:25:48.758600 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-7ngns" event={"ID":"fc38db3e-e819-4f43-a14a-c83162ceb5fa","Type":"ContainerStarted","Data":"d5e75374b1f46c2c1b7991614608a09cae50cc626c88bbe01c623c01248516d1"} Oct 14 10:25:48 crc kubenswrapper[4698]: I1014 10:25:48.759016 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-7ngns" event={"ID":"fc38db3e-e819-4f43-a14a-c83162ceb5fa","Type":"ContainerStarted","Data":"679b5255d29bc3d18180cf8e9328b86f6b67f6abb3e38d27353a86d39f4c67b3"} Oct 14 10:25:48 crc kubenswrapper[4698]: I1014 10:25:48.787185 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-7ngns" podStartSLOduration=2.303442768 podStartE2EDuration="2.787160842s" podCreationTimestamp="2025-10-14 10:25:46 +0000 UTC" firstStartedPulling="2025-10-14 10:25:47.815480804 +0000 UTC m=+1729.512780220" lastFinishedPulling="2025-10-14 10:25:48.299198848 +0000 UTC m=+1729.996498294" observedRunningTime="2025-10-14 10:25:48.781574024 +0000 UTC m=+1730.478873450" watchObservedRunningTime="2025-10-14 10:25:48.787160842 +0000 UTC m=+1730.484460268" Oct 14 10:25:53 crc kubenswrapper[4698]: I1014 10:25:53.824543 4698 generic.go:334] "Generic (PLEG): container finished" podID="fc38db3e-e819-4f43-a14a-c83162ceb5fa" containerID="d5e75374b1f46c2c1b7991614608a09cae50cc626c88bbe01c623c01248516d1" exitCode=0 Oct 14 10:25:53 crc kubenswrapper[4698]: I1014 10:25:53.824617 4698 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-7ngns" event={"ID":"fc38db3e-e819-4f43-a14a-c83162ceb5fa","Type":"ContainerDied","Data":"d5e75374b1f46c2c1b7991614608a09cae50cc626c88bbe01c623c01248516d1"} Oct 14 10:25:55 crc kubenswrapper[4698]: I1014 10:25:55.278072 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-7ngns" Oct 14 10:25:55 crc kubenswrapper[4698]: I1014 10:25:55.413222 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ccrjp\" (UniqueName: \"kubernetes.io/projected/fc38db3e-e819-4f43-a14a-c83162ceb5fa-kube-api-access-ccrjp\") pod \"fc38db3e-e819-4f43-a14a-c83162ceb5fa\" (UID: \"fc38db3e-e819-4f43-a14a-c83162ceb5fa\") " Oct 14 10:25:55 crc kubenswrapper[4698]: I1014 10:25:55.413291 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/fc38db3e-e819-4f43-a14a-c83162ceb5fa-ssh-key\") pod \"fc38db3e-e819-4f43-a14a-c83162ceb5fa\" (UID: \"fc38db3e-e819-4f43-a14a-c83162ceb5fa\") " Oct 14 10:25:55 crc kubenswrapper[4698]: I1014 10:25:55.413379 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/fc38db3e-e819-4f43-a14a-c83162ceb5fa-inventory\") pod \"fc38db3e-e819-4f43-a14a-c83162ceb5fa\" (UID: \"fc38db3e-e819-4f43-a14a-c83162ceb5fa\") " Oct 14 10:25:55 crc kubenswrapper[4698]: I1014 10:25:55.422046 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fc38db3e-e819-4f43-a14a-c83162ceb5fa-kube-api-access-ccrjp" (OuterVolumeSpecName: "kube-api-access-ccrjp") pod "fc38db3e-e819-4f43-a14a-c83162ceb5fa" (UID: "fc38db3e-e819-4f43-a14a-c83162ceb5fa"). InnerVolumeSpecName "kube-api-access-ccrjp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 10:25:55 crc kubenswrapper[4698]: I1014 10:25:55.457982 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fc38db3e-e819-4f43-a14a-c83162ceb5fa-inventory" (OuterVolumeSpecName: "inventory") pod "fc38db3e-e819-4f43-a14a-c83162ceb5fa" (UID: "fc38db3e-e819-4f43-a14a-c83162ceb5fa"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 10:25:55 crc kubenswrapper[4698]: I1014 10:25:55.482568 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fc38db3e-e819-4f43-a14a-c83162ceb5fa-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "fc38db3e-e819-4f43-a14a-c83162ceb5fa" (UID: "fc38db3e-e819-4f43-a14a-c83162ceb5fa"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 10:25:55 crc kubenswrapper[4698]: I1014 10:25:55.515836 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ccrjp\" (UniqueName: \"kubernetes.io/projected/fc38db3e-e819-4f43-a14a-c83162ceb5fa-kube-api-access-ccrjp\") on node \"crc\" DevicePath \"\"" Oct 14 10:25:55 crc kubenswrapper[4698]: I1014 10:25:55.515880 4698 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/fc38db3e-e819-4f43-a14a-c83162ceb5fa-ssh-key\") on node \"crc\" DevicePath \"\"" Oct 14 10:25:55 crc kubenswrapper[4698]: I1014 10:25:55.515891 4698 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/fc38db3e-e819-4f43-a14a-c83162ceb5fa-inventory\") on node \"crc\" DevicePath \"\"" Oct 14 10:25:55 crc kubenswrapper[4698]: I1014 10:25:55.849676 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-7ngns" 
event={"ID":"fc38db3e-e819-4f43-a14a-c83162ceb5fa","Type":"ContainerDied","Data":"679b5255d29bc3d18180cf8e9328b86f6b67f6abb3e38d27353a86d39f4c67b3"} Oct 14 10:25:55 crc kubenswrapper[4698]: I1014 10:25:55.849724 4698 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="679b5255d29bc3d18180cf8e9328b86f6b67f6abb3e38d27353a86d39f4c67b3" Oct 14 10:25:55 crc kubenswrapper[4698]: I1014 10:25:55.849752 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-7ngns" Oct 14 10:25:55 crc kubenswrapper[4698]: I1014 10:25:55.935518 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-6hw2q"] Oct 14 10:25:55 crc kubenswrapper[4698]: E1014 10:25:55.935970 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fc38db3e-e819-4f43-a14a-c83162ceb5fa" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Oct 14 10:25:55 crc kubenswrapper[4698]: I1014 10:25:55.935989 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="fc38db3e-e819-4f43-a14a-c83162ceb5fa" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Oct 14 10:25:55 crc kubenswrapper[4698]: I1014 10:25:55.936200 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="fc38db3e-e819-4f43-a14a-c83162ceb5fa" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Oct 14 10:25:55 crc kubenswrapper[4698]: I1014 10:25:55.947400 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-6hw2q" Oct 14 10:25:55 crc kubenswrapper[4698]: I1014 10:25:55.952007 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Oct 14 10:25:55 crc kubenswrapper[4698]: I1014 10:25:55.952289 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-q5blv" Oct 14 10:25:55 crc kubenswrapper[4698]: I1014 10:25:55.952306 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Oct 14 10:25:55 crc kubenswrapper[4698]: I1014 10:25:55.952944 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Oct 14 10:25:55 crc kubenswrapper[4698]: I1014 10:25:55.991868 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-6hw2q"] Oct 14 10:25:56 crc kubenswrapper[4698]: I1014 10:25:56.129471 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w7dzr\" (UniqueName: \"kubernetes.io/projected/06e79464-f4ba-47d3-a98d-d75709932309-kube-api-access-w7dzr\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-6hw2q\" (UID: \"06e79464-f4ba-47d3-a98d-d75709932309\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-6hw2q" Oct 14 10:25:56 crc kubenswrapper[4698]: I1014 10:25:56.129554 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/06e79464-f4ba-47d3-a98d-d75709932309-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-6hw2q\" (UID: \"06e79464-f4ba-47d3-a98d-d75709932309\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-6hw2q" Oct 14 10:25:56 crc kubenswrapper[4698]: I1014 10:25:56.129655 4698 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/06e79464-f4ba-47d3-a98d-d75709932309-ssh-key\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-6hw2q\" (UID: \"06e79464-f4ba-47d3-a98d-d75709932309\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-6hw2q" Oct 14 10:25:56 crc kubenswrapper[4698]: I1014 10:25:56.231649 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w7dzr\" (UniqueName: \"kubernetes.io/projected/06e79464-f4ba-47d3-a98d-d75709932309-kube-api-access-w7dzr\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-6hw2q\" (UID: \"06e79464-f4ba-47d3-a98d-d75709932309\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-6hw2q" Oct 14 10:25:56 crc kubenswrapper[4698]: I1014 10:25:56.231702 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/06e79464-f4ba-47d3-a98d-d75709932309-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-6hw2q\" (UID: \"06e79464-f4ba-47d3-a98d-d75709932309\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-6hw2q" Oct 14 10:25:56 crc kubenswrapper[4698]: I1014 10:25:56.231746 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/06e79464-f4ba-47d3-a98d-d75709932309-ssh-key\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-6hw2q\" (UID: \"06e79464-f4ba-47d3-a98d-d75709932309\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-6hw2q" Oct 14 10:25:56 crc kubenswrapper[4698]: I1014 10:25:56.252407 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/06e79464-f4ba-47d3-a98d-d75709932309-ssh-key\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-6hw2q\" (UID: 
\"06e79464-f4ba-47d3-a98d-d75709932309\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-6hw2q" Oct 14 10:25:56 crc kubenswrapper[4698]: I1014 10:25:56.252407 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/06e79464-f4ba-47d3-a98d-d75709932309-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-6hw2q\" (UID: \"06e79464-f4ba-47d3-a98d-d75709932309\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-6hw2q" Oct 14 10:25:56 crc kubenswrapper[4698]: I1014 10:25:56.267505 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w7dzr\" (UniqueName: \"kubernetes.io/projected/06e79464-f4ba-47d3-a98d-d75709932309-kube-api-access-w7dzr\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-6hw2q\" (UID: \"06e79464-f4ba-47d3-a98d-d75709932309\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-6hw2q" Oct 14 10:25:56 crc kubenswrapper[4698]: I1014 10:25:56.273962 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-6hw2q" Oct 14 10:25:56 crc kubenswrapper[4698]: I1014 10:25:56.794673 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-6hw2q"] Oct 14 10:25:56 crc kubenswrapper[4698]: I1014 10:25:56.861023 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-6hw2q" event={"ID":"06e79464-f4ba-47d3-a98d-d75709932309","Type":"ContainerStarted","Data":"579b5ad1bcf8aabf6f9b53c414810751fc582fd90eb1ec0357e13418a00c6914"} Oct 14 10:25:57 crc kubenswrapper[4698]: I1014 10:25:57.017036 4698 scope.go:117] "RemoveContainer" containerID="87bba9ea9ee8dd67f565bc4ea643f9394d7612f02668f96d9a0e0f0371644851" Oct 14 10:25:57 crc kubenswrapper[4698]: E1014 10:25:57.017316 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lp4sk_openshift-machine-config-operator(c359a8fc-1e2f-49af-8da2-719d52bd969a)\"" pod="openshift-machine-config-operator/machine-config-daemon-lp4sk" podUID="c359a8fc-1e2f-49af-8da2-719d52bd969a" Oct 14 10:25:57 crc kubenswrapper[4698]: I1014 10:25:57.873569 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-6hw2q" event={"ID":"06e79464-f4ba-47d3-a98d-d75709932309","Type":"ContainerStarted","Data":"613e05814879ee7842d5a16499adceefb70ca7ee89866a8984cf5dd204c202ab"} Oct 14 10:25:57 crc kubenswrapper[4698]: I1014 10:25:57.902264 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-6hw2q" podStartSLOduration=2.259190336 podStartE2EDuration="2.902242334s" podCreationTimestamp="2025-10-14 10:25:55 +0000 UTC" firstStartedPulling="2025-10-14 
10:25:56.806990935 +0000 UTC m=+1738.504290361" lastFinishedPulling="2025-10-14 10:25:57.450042943 +0000 UTC m=+1739.147342359" observedRunningTime="2025-10-14 10:25:57.895456912 +0000 UTC m=+1739.592756338" watchObservedRunningTime="2025-10-14 10:25:57.902242334 +0000 UTC m=+1739.599541750" Oct 14 10:26:09 crc kubenswrapper[4698]: I1014 10:26:09.031137 4698 scope.go:117] "RemoveContainer" containerID="87bba9ea9ee8dd67f565bc4ea643f9394d7612f02668f96d9a0e0f0371644851" Oct 14 10:26:09 crc kubenswrapper[4698]: E1014 10:26:09.031924 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lp4sk_openshift-machine-config-operator(c359a8fc-1e2f-49af-8da2-719d52bd969a)\"" pod="openshift-machine-config-operator/machine-config-daemon-lp4sk" podUID="c359a8fc-1e2f-49af-8da2-719d52bd969a" Oct 14 10:26:10 crc kubenswrapper[4698]: I1014 10:26:10.091542 4698 scope.go:117] "RemoveContainer" containerID="0b854e12c3d3df6ea816a435c6f8734727bf7005c4f78c7015d0ffd8ea25cf1a" Oct 14 10:26:10 crc kubenswrapper[4698]: I1014 10:26:10.136461 4698 scope.go:117] "RemoveContainer" containerID="6fd413d50ddc394ba6745ff4d2dadf33650b7992176640e1fd078ba7836add91" Oct 14 10:26:10 crc kubenswrapper[4698]: I1014 10:26:10.214507 4698 scope.go:117] "RemoveContainer" containerID="516748dc080d0d3ca18d60b1377deb6295eabfe3258bff566e3488259b4398e8" Oct 14 10:26:17 crc kubenswrapper[4698]: I1014 10:26:17.072952 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-db-create-crcb9"] Oct 14 10:26:17 crc kubenswrapper[4698]: I1014 10:26:17.087630 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-db-create-kfnck"] Oct 14 10:26:17 crc kubenswrapper[4698]: I1014 10:26:17.111015 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-db-create-7v7zn"] Oct 14 
10:26:17 crc kubenswrapper[4698]: I1014 10:26:17.132909 4698 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-db-create-crcb9"] Oct 14 10:26:17 crc kubenswrapper[4698]: I1014 10:26:17.145463 4698 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-db-create-7v7zn"] Oct 14 10:26:17 crc kubenswrapper[4698]: I1014 10:26:17.153390 4698 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-db-create-kfnck"] Oct 14 10:26:19 crc kubenswrapper[4698]: I1014 10:26:19.032321 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="67152ffa-66bb-42a2-b1f9-1e350372431b" path="/var/lib/kubelet/pods/67152ffa-66bb-42a2-b1f9-1e350372431b/volumes" Oct 14 10:26:19 crc kubenswrapper[4698]: I1014 10:26:19.033241 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="874aab62-ca3a-45a9-9e34-5527a0c2ee80" path="/var/lib/kubelet/pods/874aab62-ca3a-45a9-9e34-5527a0c2ee80/volumes" Oct 14 10:26:19 crc kubenswrapper[4698]: I1014 10:26:19.034151 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="92d5ff6a-b4d9-4d75-ab8b-1ab6808bb45e" path="/var/lib/kubelet/pods/92d5ff6a-b4d9-4d75-ab8b-1ab6808bb45e/volumes" Oct 14 10:26:23 crc kubenswrapper[4698]: I1014 10:26:23.017976 4698 scope.go:117] "RemoveContainer" containerID="87bba9ea9ee8dd67f565bc4ea643f9394d7612f02668f96d9a0e0f0371644851" Oct 14 10:26:23 crc kubenswrapper[4698]: E1014 10:26:23.020344 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lp4sk_openshift-machine-config-operator(c359a8fc-1e2f-49af-8da2-719d52bd969a)\"" pod="openshift-machine-config-operator/machine-config-daemon-lp4sk" podUID="c359a8fc-1e2f-49af-8da2-719d52bd969a" Oct 14 10:26:27 crc kubenswrapper[4698]: I1014 10:26:27.037426 4698 kubelet.go:2437] "SyncLoop 
DELETE" source="api" pods=["openstack/nova-cell0-4c9f-account-create-7bztt"] Oct 14 10:26:27 crc kubenswrapper[4698]: I1014 10:26:27.047069 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-11fc-account-create-fk5wc"] Oct 14 10:26:27 crc kubenswrapper[4698]: I1014 10:26:27.055513 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-4705-account-create-6pcpq"] Oct 14 10:26:27 crc kubenswrapper[4698]: I1014 10:26:27.063813 4698 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-11fc-account-create-fk5wc"] Oct 14 10:26:27 crc kubenswrapper[4698]: I1014 10:26:27.070457 4698 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-4c9f-account-create-7bztt"] Oct 14 10:26:27 crc kubenswrapper[4698]: I1014 10:26:27.077693 4698 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-4705-account-create-6pcpq"] Oct 14 10:26:29 crc kubenswrapper[4698]: I1014 10:26:29.041054 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8246a862-8262-4666-a47d-02815d416c74" path="/var/lib/kubelet/pods/8246a862-8262-4666-a47d-02815d416c74/volumes" Oct 14 10:26:29 crc kubenswrapper[4698]: I1014 10:26:29.042651 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9adb550b-8d23-4ab3-b120-7640a29e36a5" path="/var/lib/kubelet/pods/9adb550b-8d23-4ab3-b120-7640a29e36a5/volumes" Oct 14 10:26:29 crc kubenswrapper[4698]: I1014 10:26:29.044041 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d60a1dd8-3277-4ce0-8c70-706171496794" path="/var/lib/kubelet/pods/d60a1dd8-3277-4ce0-8c70-706171496794/volumes" Oct 14 10:26:37 crc kubenswrapper[4698]: I1014 10:26:37.017749 4698 scope.go:117] "RemoveContainer" containerID="87bba9ea9ee8dd67f565bc4ea643f9394d7612f02668f96d9a0e0f0371644851" Oct 14 10:26:37 crc kubenswrapper[4698]: E1014 10:26:37.019176 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lp4sk_openshift-machine-config-operator(c359a8fc-1e2f-49af-8da2-719d52bd969a)\"" pod="openshift-machine-config-operator/machine-config-daemon-lp4sk" podUID="c359a8fc-1e2f-49af-8da2-719d52bd969a" Oct 14 10:26:38 crc kubenswrapper[4698]: I1014 10:26:38.351096 4698 generic.go:334] "Generic (PLEG): container finished" podID="06e79464-f4ba-47d3-a98d-d75709932309" containerID="613e05814879ee7842d5a16499adceefb70ca7ee89866a8984cf5dd204c202ab" exitCode=0 Oct 14 10:26:38 crc kubenswrapper[4698]: I1014 10:26:38.351182 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-6hw2q" event={"ID":"06e79464-f4ba-47d3-a98d-d75709932309","Type":"ContainerDied","Data":"613e05814879ee7842d5a16499adceefb70ca7ee89866a8984cf5dd204c202ab"} Oct 14 10:26:39 crc kubenswrapper[4698]: I1014 10:26:39.761202 4698 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-6hw2q" Oct 14 10:26:39 crc kubenswrapper[4698]: I1014 10:26:39.879510 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/06e79464-f4ba-47d3-a98d-d75709932309-inventory\") pod \"06e79464-f4ba-47d3-a98d-d75709932309\" (UID: \"06e79464-f4ba-47d3-a98d-d75709932309\") " Oct 14 10:26:39 crc kubenswrapper[4698]: I1014 10:26:39.879717 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w7dzr\" (UniqueName: \"kubernetes.io/projected/06e79464-f4ba-47d3-a98d-d75709932309-kube-api-access-w7dzr\") pod \"06e79464-f4ba-47d3-a98d-d75709932309\" (UID: \"06e79464-f4ba-47d3-a98d-d75709932309\") " Oct 14 10:26:39 crc kubenswrapper[4698]: I1014 10:26:39.879823 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/06e79464-f4ba-47d3-a98d-d75709932309-ssh-key\") pod \"06e79464-f4ba-47d3-a98d-d75709932309\" (UID: \"06e79464-f4ba-47d3-a98d-d75709932309\") " Oct 14 10:26:39 crc kubenswrapper[4698]: I1014 10:26:39.886385 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/06e79464-f4ba-47d3-a98d-d75709932309-kube-api-access-w7dzr" (OuterVolumeSpecName: "kube-api-access-w7dzr") pod "06e79464-f4ba-47d3-a98d-d75709932309" (UID: "06e79464-f4ba-47d3-a98d-d75709932309"). InnerVolumeSpecName "kube-api-access-w7dzr". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 10:26:39 crc kubenswrapper[4698]: I1014 10:26:39.919086 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/06e79464-f4ba-47d3-a98d-d75709932309-inventory" (OuterVolumeSpecName: "inventory") pod "06e79464-f4ba-47d3-a98d-d75709932309" (UID: "06e79464-f4ba-47d3-a98d-d75709932309"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 10:26:39 crc kubenswrapper[4698]: I1014 10:26:39.937362 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/06e79464-f4ba-47d3-a98d-d75709932309-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "06e79464-f4ba-47d3-a98d-d75709932309" (UID: "06e79464-f4ba-47d3-a98d-d75709932309"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 10:26:39 crc kubenswrapper[4698]: I1014 10:26:39.982417 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w7dzr\" (UniqueName: \"kubernetes.io/projected/06e79464-f4ba-47d3-a98d-d75709932309-kube-api-access-w7dzr\") on node \"crc\" DevicePath \"\"" Oct 14 10:26:39 crc kubenswrapper[4698]: I1014 10:26:39.982760 4698 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/06e79464-f4ba-47d3-a98d-d75709932309-ssh-key\") on node \"crc\" DevicePath \"\"" Oct 14 10:26:39 crc kubenswrapper[4698]: I1014 10:26:39.982900 4698 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/06e79464-f4ba-47d3-a98d-d75709932309-inventory\") on node \"crc\" DevicePath \"\"" Oct 14 10:26:40 crc kubenswrapper[4698]: I1014 10:26:40.374711 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-6hw2q" event={"ID":"06e79464-f4ba-47d3-a98d-d75709932309","Type":"ContainerDied","Data":"579b5ad1bcf8aabf6f9b53c414810751fc582fd90eb1ec0357e13418a00c6914"} Oct 14 10:26:40 crc kubenswrapper[4698]: I1014 10:26:40.374776 4698 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="579b5ad1bcf8aabf6f9b53c414810751fc582fd90eb1ec0357e13418a00c6914" Oct 14 10:26:40 crc kubenswrapper[4698]: I1014 10:26:40.374817 4698 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-6hw2q" Oct 14 10:26:40 crc kubenswrapper[4698]: I1014 10:26:40.487817 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-xhpjx"] Oct 14 10:26:40 crc kubenswrapper[4698]: E1014 10:26:40.488362 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="06e79464-f4ba-47d3-a98d-d75709932309" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Oct 14 10:26:40 crc kubenswrapper[4698]: I1014 10:26:40.488392 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="06e79464-f4ba-47d3-a98d-d75709932309" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Oct 14 10:26:40 crc kubenswrapper[4698]: I1014 10:26:40.488653 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="06e79464-f4ba-47d3-a98d-d75709932309" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Oct 14 10:26:40 crc kubenswrapper[4698]: I1014 10:26:40.489571 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-xhpjx" Oct 14 10:26:40 crc kubenswrapper[4698]: I1014 10:26:40.492297 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Oct 14 10:26:40 crc kubenswrapper[4698]: I1014 10:26:40.492702 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Oct 14 10:26:40 crc kubenswrapper[4698]: I1014 10:26:40.493232 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-q5blv" Oct 14 10:26:40 crc kubenswrapper[4698]: I1014 10:26:40.497019 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Oct 14 10:26:40 crc kubenswrapper[4698]: I1014 10:26:40.505476 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-xhpjx"] Oct 14 10:26:40 crc kubenswrapper[4698]: I1014 10:26:40.597050 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p8lks\" (UniqueName: \"kubernetes.io/projected/57bb4dc3-77b1-43e2-9360-c2f0d7354f4f-kube-api-access-p8lks\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-xhpjx\" (UID: \"57bb4dc3-77b1-43e2-9360-c2f0d7354f4f\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-xhpjx" Oct 14 10:26:40 crc kubenswrapper[4698]: I1014 10:26:40.597136 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/57bb4dc3-77b1-43e2-9360-c2f0d7354f4f-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-xhpjx\" (UID: \"57bb4dc3-77b1-43e2-9360-c2f0d7354f4f\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-xhpjx" Oct 14 10:26:40 crc kubenswrapper[4698]: I1014 10:26:40.597459 4698 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/57bb4dc3-77b1-43e2-9360-c2f0d7354f4f-ssh-key\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-xhpjx\" (UID: \"57bb4dc3-77b1-43e2-9360-c2f0d7354f4f\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-xhpjx" Oct 14 10:26:40 crc kubenswrapper[4698]: I1014 10:26:40.699644 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p8lks\" (UniqueName: \"kubernetes.io/projected/57bb4dc3-77b1-43e2-9360-c2f0d7354f4f-kube-api-access-p8lks\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-xhpjx\" (UID: \"57bb4dc3-77b1-43e2-9360-c2f0d7354f4f\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-xhpjx" Oct 14 10:26:40 crc kubenswrapper[4698]: I1014 10:26:40.699946 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/57bb4dc3-77b1-43e2-9360-c2f0d7354f4f-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-xhpjx\" (UID: \"57bb4dc3-77b1-43e2-9360-c2f0d7354f4f\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-xhpjx" Oct 14 10:26:40 crc kubenswrapper[4698]: I1014 10:26:40.700073 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/57bb4dc3-77b1-43e2-9360-c2f0d7354f4f-ssh-key\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-xhpjx\" (UID: \"57bb4dc3-77b1-43e2-9360-c2f0d7354f4f\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-xhpjx" Oct 14 10:26:40 crc kubenswrapper[4698]: I1014 10:26:40.705556 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/57bb4dc3-77b1-43e2-9360-c2f0d7354f4f-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-xhpjx\" (UID: 
\"57bb4dc3-77b1-43e2-9360-c2f0d7354f4f\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-xhpjx" Oct 14 10:26:40 crc kubenswrapper[4698]: I1014 10:26:40.712868 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/57bb4dc3-77b1-43e2-9360-c2f0d7354f4f-ssh-key\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-xhpjx\" (UID: \"57bb4dc3-77b1-43e2-9360-c2f0d7354f4f\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-xhpjx" Oct 14 10:26:40 crc kubenswrapper[4698]: I1014 10:26:40.729054 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p8lks\" (UniqueName: \"kubernetes.io/projected/57bb4dc3-77b1-43e2-9360-c2f0d7354f4f-kube-api-access-p8lks\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-xhpjx\" (UID: \"57bb4dc3-77b1-43e2-9360-c2f0d7354f4f\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-xhpjx" Oct 14 10:26:40 crc kubenswrapper[4698]: I1014 10:26:40.810870 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-xhpjx" Oct 14 10:26:41 crc kubenswrapper[4698]: I1014 10:26:41.372408 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-xhpjx"] Oct 14 10:26:41 crc kubenswrapper[4698]: I1014 10:26:41.387836 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-xhpjx" event={"ID":"57bb4dc3-77b1-43e2-9360-c2f0d7354f4f","Type":"ContainerStarted","Data":"d1340e513c14d2ff3c17b645f7fc9f96421722398480a55a35e69da460d2f0a5"} Oct 14 10:26:42 crc kubenswrapper[4698]: I1014 10:26:42.397403 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-xhpjx" event={"ID":"57bb4dc3-77b1-43e2-9360-c2f0d7354f4f","Type":"ContainerStarted","Data":"e94303b6bbe2fcabb0381e4eb242481296419c294bccf2bbee75528d35fd81e9"} Oct 14 10:26:42 crc kubenswrapper[4698]: I1014 10:26:42.422283 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-xhpjx" podStartSLOduration=1.950695979 podStartE2EDuration="2.422260918s" podCreationTimestamp="2025-10-14 10:26:40 +0000 UTC" firstStartedPulling="2025-10-14 10:26:41.377558522 +0000 UTC m=+1783.074857938" lastFinishedPulling="2025-10-14 10:26:41.849123451 +0000 UTC m=+1783.546422877" observedRunningTime="2025-10-14 10:26:42.414078727 +0000 UTC m=+1784.111378173" watchObservedRunningTime="2025-10-14 10:26:42.422260918 +0000 UTC m=+1784.119560354" Oct 14 10:26:49 crc kubenswrapper[4698]: I1014 10:26:49.025915 4698 scope.go:117] "RemoveContainer" containerID="87bba9ea9ee8dd67f565bc4ea643f9394d7612f02668f96d9a0e0f0371644851" Oct 14 10:26:49 crc kubenswrapper[4698]: E1014 10:26:49.026982 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with 
CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lp4sk_openshift-machine-config-operator(c359a8fc-1e2f-49af-8da2-719d52bd969a)\"" pod="openshift-machine-config-operator/machine-config-daemon-lp4sk" podUID="c359a8fc-1e2f-49af-8da2-719d52bd969a" Oct 14 10:26:49 crc kubenswrapper[4698]: I1014 10:26:49.068375 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-dhgcp"] Oct 14 10:26:49 crc kubenswrapper[4698]: I1014 10:26:49.077899 4698 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-dhgcp"] Oct 14 10:26:51 crc kubenswrapper[4698]: I1014 10:26:51.029366 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="53f7e7ff-7e35-433a-a39f-556b716eaf21" path="/var/lib/kubelet/pods/53f7e7ff-7e35-433a-a39f-556b716eaf21/volumes" Oct 14 10:27:00 crc kubenswrapper[4698]: I1014 10:27:00.017887 4698 scope.go:117] "RemoveContainer" containerID="87bba9ea9ee8dd67f565bc4ea643f9394d7612f02668f96d9a0e0f0371644851" Oct 14 10:27:00 crc kubenswrapper[4698]: E1014 10:27:00.018898 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lp4sk_openshift-machine-config-operator(c359a8fc-1e2f-49af-8da2-719d52bd969a)\"" pod="openshift-machine-config-operator/machine-config-daemon-lp4sk" podUID="c359a8fc-1e2f-49af-8da2-719d52bd969a" Oct 14 10:27:08 crc kubenswrapper[4698]: I1014 10:27:08.048382 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-cell-mapping-7xsm4"] Oct 14 10:27:08 crc kubenswrapper[4698]: I1014 10:27:08.056558 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-z5wlp"] Oct 14 10:27:08 crc kubenswrapper[4698]: I1014 10:27:08.064881 4698 kubelet.go:2431] "SyncLoop 
REMOVE" source="api" pods=["openstack/nova-cell0-cell-mapping-7xsm4"] Oct 14 10:27:08 crc kubenswrapper[4698]: I1014 10:27:08.071833 4698 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-z5wlp"] Oct 14 10:27:09 crc kubenswrapper[4698]: I1014 10:27:09.063408 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="80807b13-41b4-4c40-9acb-a84851f3595f" path="/var/lib/kubelet/pods/80807b13-41b4-4c40-9acb-a84851f3595f/volumes" Oct 14 10:27:09 crc kubenswrapper[4698]: I1014 10:27:09.067672 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cb2cd329-5063-4f5b-8903-02fdfce19aca" path="/var/lib/kubelet/pods/cb2cd329-5063-4f5b-8903-02fdfce19aca/volumes" Oct 14 10:27:10 crc kubenswrapper[4698]: I1014 10:27:10.317574 4698 scope.go:117] "RemoveContainer" containerID="97c4e4d29c7413be4428f6be1cdcf31582102328cea19b8d6e75c02aa4027b8d" Oct 14 10:27:10 crc kubenswrapper[4698]: I1014 10:27:10.361358 4698 scope.go:117] "RemoveContainer" containerID="d1372aed996a809cb62d5d55afbb796697fddd14ec5345d0ec5231e3afbb8435" Oct 14 10:27:10 crc kubenswrapper[4698]: I1014 10:27:10.423970 4698 scope.go:117] "RemoveContainer" containerID="154ad7e03ab5e052963bdd224a7429e79f33c4338f9986ef44c43612e30c5334" Oct 14 10:27:10 crc kubenswrapper[4698]: I1014 10:27:10.492475 4698 scope.go:117] "RemoveContainer" containerID="5c1066f88fb585f884296272a7048feeebf77a1f3f7e3dce70e07b04d1be0841" Oct 14 10:27:10 crc kubenswrapper[4698]: I1014 10:27:10.557510 4698 scope.go:117] "RemoveContainer" containerID="a8ce9e0c5b06287b9c791ea486587d18fb2bf613b1875753a8b2b8eb22a93538" Oct 14 10:27:10 crc kubenswrapper[4698]: I1014 10:27:10.593795 4698 scope.go:117] "RemoveContainer" containerID="2985d6c6d98ec7a0d58b99ee1eb3c7ee8b4e401f34d6694992bbc5fdb7d331f0" Oct 14 10:27:10 crc kubenswrapper[4698]: I1014 10:27:10.622222 4698 scope.go:117] "RemoveContainer" containerID="f7fa8866ebf19f71ff0cd89e6833d8a263efa5ee516dfa3dbc8377962bc320c1" Oct 14 
10:27:10 crc kubenswrapper[4698]: I1014 10:27:10.645939 4698 scope.go:117] "RemoveContainer" containerID="9822da67473007e557874afff68b151887144fb73ce6c21a5dc2d59dedbe6b2a" Oct 14 10:27:10 crc kubenswrapper[4698]: I1014 10:27:10.683623 4698 scope.go:117] "RemoveContainer" containerID="3a9edb6b1991449d9568664cd3002f692cc3dc57da901729986975ec1b9d5367" Oct 14 10:27:12 crc kubenswrapper[4698]: I1014 10:27:12.017678 4698 scope.go:117] "RemoveContainer" containerID="87bba9ea9ee8dd67f565bc4ea643f9394d7612f02668f96d9a0e0f0371644851" Oct 14 10:27:12 crc kubenswrapper[4698]: E1014 10:27:12.018618 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lp4sk_openshift-machine-config-operator(c359a8fc-1e2f-49af-8da2-719d52bd969a)\"" pod="openshift-machine-config-operator/machine-config-daemon-lp4sk" podUID="c359a8fc-1e2f-49af-8da2-719d52bd969a" Oct 14 10:27:24 crc kubenswrapper[4698]: I1014 10:27:24.018008 4698 scope.go:117] "RemoveContainer" containerID="87bba9ea9ee8dd67f565bc4ea643f9394d7612f02668f96d9a0e0f0371644851" Oct 14 10:27:24 crc kubenswrapper[4698]: E1014 10:27:24.019519 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lp4sk_openshift-machine-config-operator(c359a8fc-1e2f-49af-8da2-719d52bd969a)\"" pod="openshift-machine-config-operator/machine-config-daemon-lp4sk" podUID="c359a8fc-1e2f-49af-8da2-719d52bd969a" Oct 14 10:27:37 crc kubenswrapper[4698]: I1014 10:27:37.018754 4698 scope.go:117] "RemoveContainer" containerID="87bba9ea9ee8dd67f565bc4ea643f9394d7612f02668f96d9a0e0f0371644851" Oct 14 10:27:37 crc kubenswrapper[4698]: E1014 10:27:37.019965 4698 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lp4sk_openshift-machine-config-operator(c359a8fc-1e2f-49af-8da2-719d52bd969a)\"" pod="openshift-machine-config-operator/machine-config-daemon-lp4sk" podUID="c359a8fc-1e2f-49af-8da2-719d52bd969a" Oct 14 10:27:38 crc kubenswrapper[4698]: I1014 10:27:38.000426 4698 generic.go:334] "Generic (PLEG): container finished" podID="57bb4dc3-77b1-43e2-9360-c2f0d7354f4f" containerID="e94303b6bbe2fcabb0381e4eb242481296419c294bccf2bbee75528d35fd81e9" exitCode=2 Oct 14 10:27:38 crc kubenswrapper[4698]: I1014 10:27:38.000491 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-xhpjx" event={"ID":"57bb4dc3-77b1-43e2-9360-c2f0d7354f4f","Type":"ContainerDied","Data":"e94303b6bbe2fcabb0381e4eb242481296419c294bccf2bbee75528d35fd81e9"} Oct 14 10:27:39 crc kubenswrapper[4698]: I1014 10:27:39.489647 4698 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-xhpjx" Oct 14 10:27:39 crc kubenswrapper[4698]: I1014 10:27:39.651440 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/57bb4dc3-77b1-43e2-9360-c2f0d7354f4f-ssh-key\") pod \"57bb4dc3-77b1-43e2-9360-c2f0d7354f4f\" (UID: \"57bb4dc3-77b1-43e2-9360-c2f0d7354f4f\") " Oct 14 10:27:39 crc kubenswrapper[4698]: I1014 10:27:39.651826 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/57bb4dc3-77b1-43e2-9360-c2f0d7354f4f-inventory\") pod \"57bb4dc3-77b1-43e2-9360-c2f0d7354f4f\" (UID: \"57bb4dc3-77b1-43e2-9360-c2f0d7354f4f\") " Oct 14 10:27:39 crc kubenswrapper[4698]: I1014 10:27:39.651938 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p8lks\" (UniqueName: \"kubernetes.io/projected/57bb4dc3-77b1-43e2-9360-c2f0d7354f4f-kube-api-access-p8lks\") pod \"57bb4dc3-77b1-43e2-9360-c2f0d7354f4f\" (UID: \"57bb4dc3-77b1-43e2-9360-c2f0d7354f4f\") " Oct 14 10:27:39 crc kubenswrapper[4698]: I1014 10:27:39.659799 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57bb4dc3-77b1-43e2-9360-c2f0d7354f4f-kube-api-access-p8lks" (OuterVolumeSpecName: "kube-api-access-p8lks") pod "57bb4dc3-77b1-43e2-9360-c2f0d7354f4f" (UID: "57bb4dc3-77b1-43e2-9360-c2f0d7354f4f"). InnerVolumeSpecName "kube-api-access-p8lks". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 10:27:39 crc kubenswrapper[4698]: I1014 10:27:39.687497 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/57bb4dc3-77b1-43e2-9360-c2f0d7354f4f-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "57bb4dc3-77b1-43e2-9360-c2f0d7354f4f" (UID: "57bb4dc3-77b1-43e2-9360-c2f0d7354f4f"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 10:27:39 crc kubenswrapper[4698]: I1014 10:27:39.706382 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/57bb4dc3-77b1-43e2-9360-c2f0d7354f4f-inventory" (OuterVolumeSpecName: "inventory") pod "57bb4dc3-77b1-43e2-9360-c2f0d7354f4f" (UID: "57bb4dc3-77b1-43e2-9360-c2f0d7354f4f"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 10:27:39 crc kubenswrapper[4698]: I1014 10:27:39.754726 4698 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/57bb4dc3-77b1-43e2-9360-c2f0d7354f4f-ssh-key\") on node \"crc\" DevicePath \"\"" Oct 14 10:27:39 crc kubenswrapper[4698]: I1014 10:27:39.754784 4698 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/57bb4dc3-77b1-43e2-9360-c2f0d7354f4f-inventory\") on node \"crc\" DevicePath \"\"" Oct 14 10:27:39 crc kubenswrapper[4698]: I1014 10:27:39.754800 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p8lks\" (UniqueName: \"kubernetes.io/projected/57bb4dc3-77b1-43e2-9360-c2f0d7354f4f-kube-api-access-p8lks\") on node \"crc\" DevicePath \"\"" Oct 14 10:27:40 crc kubenswrapper[4698]: I1014 10:27:40.029300 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-xhpjx" event={"ID":"57bb4dc3-77b1-43e2-9360-c2f0d7354f4f","Type":"ContainerDied","Data":"d1340e513c14d2ff3c17b645f7fc9f96421722398480a55a35e69da460d2f0a5"} Oct 14 10:27:40 crc kubenswrapper[4698]: I1014 10:27:40.029625 4698 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d1340e513c14d2ff3c17b645f7fc9f96421722398480a55a35e69da460d2f0a5" Oct 14 10:27:40 crc kubenswrapper[4698]: I1014 10:27:40.029361 4698 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-xhpjx" Oct 14 10:27:47 crc kubenswrapper[4698]: I1014 10:27:47.037506 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-tbpt4"] Oct 14 10:27:47 crc kubenswrapper[4698]: E1014 10:27:47.038981 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="57bb4dc3-77b1-43e2-9360-c2f0d7354f4f" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Oct 14 10:27:47 crc kubenswrapper[4698]: I1014 10:27:47.039013 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="57bb4dc3-77b1-43e2-9360-c2f0d7354f4f" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Oct 14 10:27:47 crc kubenswrapper[4698]: I1014 10:27:47.039425 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="57bb4dc3-77b1-43e2-9360-c2f0d7354f4f" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Oct 14 10:27:47 crc kubenswrapper[4698]: I1014 10:27:47.040568 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-tbpt4" Oct 14 10:27:47 crc kubenswrapper[4698]: I1014 10:27:47.044726 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Oct 14 10:27:47 crc kubenswrapper[4698]: I1014 10:27:47.045431 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Oct 14 10:27:47 crc kubenswrapper[4698]: I1014 10:27:47.045874 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Oct 14 10:27:47 crc kubenswrapper[4698]: I1014 10:27:47.046284 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-q5blv" Oct 14 10:27:47 crc kubenswrapper[4698]: I1014 10:27:47.050279 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-tbpt4"] Oct 14 10:27:47 crc kubenswrapper[4698]: I1014 10:27:47.224311 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/601bc78a-d499-4391-ada7-44e34c35c547-ssh-key\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-tbpt4\" (UID: \"601bc78a-d499-4391-ada7-44e34c35c547\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-tbpt4" Oct 14 10:27:47 crc kubenswrapper[4698]: I1014 10:27:47.224414 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/601bc78a-d499-4391-ada7-44e34c35c547-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-tbpt4\" (UID: \"601bc78a-d499-4391-ada7-44e34c35c547\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-tbpt4" Oct 14 10:27:47 crc kubenswrapper[4698]: I1014 10:27:47.224449 4698 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4dkq7\" (UniqueName: \"kubernetes.io/projected/601bc78a-d499-4391-ada7-44e34c35c547-kube-api-access-4dkq7\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-tbpt4\" (UID: \"601bc78a-d499-4391-ada7-44e34c35c547\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-tbpt4" Oct 14 10:27:47 crc kubenswrapper[4698]: I1014 10:27:47.326352 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/601bc78a-d499-4391-ada7-44e34c35c547-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-tbpt4\" (UID: \"601bc78a-d499-4391-ada7-44e34c35c547\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-tbpt4" Oct 14 10:27:47 crc kubenswrapper[4698]: I1014 10:27:47.326444 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4dkq7\" (UniqueName: \"kubernetes.io/projected/601bc78a-d499-4391-ada7-44e34c35c547-kube-api-access-4dkq7\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-tbpt4\" (UID: \"601bc78a-d499-4391-ada7-44e34c35c547\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-tbpt4" Oct 14 10:27:47 crc kubenswrapper[4698]: I1014 10:27:47.326710 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/601bc78a-d499-4391-ada7-44e34c35c547-ssh-key\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-tbpt4\" (UID: \"601bc78a-d499-4391-ada7-44e34c35c547\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-tbpt4" Oct 14 10:27:47 crc kubenswrapper[4698]: I1014 10:27:47.335210 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/601bc78a-d499-4391-ada7-44e34c35c547-ssh-key\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-tbpt4\" (UID: 
\"601bc78a-d499-4391-ada7-44e34c35c547\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-tbpt4" Oct 14 10:27:47 crc kubenswrapper[4698]: I1014 10:27:47.341420 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/601bc78a-d499-4391-ada7-44e34c35c547-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-tbpt4\" (UID: \"601bc78a-d499-4391-ada7-44e34c35c547\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-tbpt4" Oct 14 10:27:47 crc kubenswrapper[4698]: I1014 10:27:47.349964 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4dkq7\" (UniqueName: \"kubernetes.io/projected/601bc78a-d499-4391-ada7-44e34c35c547-kube-api-access-4dkq7\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-tbpt4\" (UID: \"601bc78a-d499-4391-ada7-44e34c35c547\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-tbpt4" Oct 14 10:27:47 crc kubenswrapper[4698]: I1014 10:27:47.375398 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-tbpt4" Oct 14 10:27:47 crc kubenswrapper[4698]: I1014 10:27:47.952910 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-tbpt4"] Oct 14 10:27:47 crc kubenswrapper[4698]: I1014 10:27:47.958988 4698 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Oct 14 10:27:48 crc kubenswrapper[4698]: I1014 10:27:48.114801 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-tbpt4" event={"ID":"601bc78a-d499-4391-ada7-44e34c35c547","Type":"ContainerStarted","Data":"fe8166ad85cf5e595df81127cbec1d7f5966348630359ecb9c2fb1b005fdbb0a"} Oct 14 10:27:49 crc kubenswrapper[4698]: I1014 10:27:49.023650 4698 scope.go:117] "RemoveContainer" containerID="87bba9ea9ee8dd67f565bc4ea643f9394d7612f02668f96d9a0e0f0371644851" Oct 14 10:27:49 crc kubenswrapper[4698]: E1014 10:27:49.024344 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lp4sk_openshift-machine-config-operator(c359a8fc-1e2f-49af-8da2-719d52bd969a)\"" pod="openshift-machine-config-operator/machine-config-daemon-lp4sk" podUID="c359a8fc-1e2f-49af-8da2-719d52bd969a" Oct 14 10:27:49 crc kubenswrapper[4698]: I1014 10:27:49.137298 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-tbpt4" event={"ID":"601bc78a-d499-4391-ada7-44e34c35c547","Type":"ContainerStarted","Data":"f242257a036a13fa8789f893e15fa318ab3e35f23ad5d062f80b878c661252cb"} Oct 14 10:27:49 crc kubenswrapper[4698]: I1014 10:27:49.159334 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-tbpt4" podStartSLOduration=1.665817203 podStartE2EDuration="2.159309353s" podCreationTimestamp="2025-10-14 10:27:47 +0000 UTC" firstStartedPulling="2025-10-14 10:27:47.958491244 +0000 UTC m=+1849.655790660" lastFinishedPulling="2025-10-14 10:27:48.451983384 +0000 UTC m=+1850.149282810" observedRunningTime="2025-10-14 10:27:49.157481851 +0000 UTC m=+1850.854781267" watchObservedRunningTime="2025-10-14 10:27:49.159309353 +0000 UTC m=+1850.856608759" Oct 14 10:27:52 crc kubenswrapper[4698]: I1014 10:27:52.062741 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-cell-mapping-xhnxp"] Oct 14 10:27:52 crc kubenswrapper[4698]: I1014 10:27:52.075894 4698 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-cell-mapping-xhnxp"] Oct 14 10:27:53 crc kubenswrapper[4698]: I1014 10:27:53.035451 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd9d031c-8581-44fa-b804-fbbc75403d88" path="/var/lib/kubelet/pods/cd9d031c-8581-44fa-b804-fbbc75403d88/volumes" Oct 14 10:28:00 crc kubenswrapper[4698]: I1014 10:28:00.017668 4698 scope.go:117] "RemoveContainer" containerID="87bba9ea9ee8dd67f565bc4ea643f9394d7612f02668f96d9a0e0f0371644851" Oct 14 10:28:00 crc kubenswrapper[4698]: E1014 10:28:00.019522 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lp4sk_openshift-machine-config-operator(c359a8fc-1e2f-49af-8da2-719d52bd969a)\"" pod="openshift-machine-config-operator/machine-config-daemon-lp4sk" podUID="c359a8fc-1e2f-49af-8da2-719d52bd969a" Oct 14 10:28:10 crc kubenswrapper[4698]: I1014 10:28:10.915609 4698 scope.go:117] "RemoveContainer" containerID="019be955801c29102debe6e56c82e001c62e15b86355620f3e70148397d1e2a4" Oct 14 10:28:13 crc kubenswrapper[4698]: I1014 
10:28:13.017405 4698 scope.go:117] "RemoveContainer" containerID="87bba9ea9ee8dd67f565bc4ea643f9394d7612f02668f96d9a0e0f0371644851" Oct 14 10:28:13 crc kubenswrapper[4698]: E1014 10:28:13.018095 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lp4sk_openshift-machine-config-operator(c359a8fc-1e2f-49af-8da2-719d52bd969a)\"" pod="openshift-machine-config-operator/machine-config-daemon-lp4sk" podUID="c359a8fc-1e2f-49af-8da2-719d52bd969a" Oct 14 10:28:25 crc kubenswrapper[4698]: I1014 10:28:25.017066 4698 scope.go:117] "RemoveContainer" containerID="87bba9ea9ee8dd67f565bc4ea643f9394d7612f02668f96d9a0e0f0371644851" Oct 14 10:28:25 crc kubenswrapper[4698]: E1014 10:28:25.018005 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lp4sk_openshift-machine-config-operator(c359a8fc-1e2f-49af-8da2-719d52bd969a)\"" pod="openshift-machine-config-operator/machine-config-daemon-lp4sk" podUID="c359a8fc-1e2f-49af-8da2-719d52bd969a" Oct 14 10:28:36 crc kubenswrapper[4698]: I1014 10:28:36.608301 4698 generic.go:334] "Generic (PLEG): container finished" podID="601bc78a-d499-4391-ada7-44e34c35c547" containerID="f242257a036a13fa8789f893e15fa318ab3e35f23ad5d062f80b878c661252cb" exitCode=0 Oct 14 10:28:36 crc kubenswrapper[4698]: I1014 10:28:36.608523 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-tbpt4" event={"ID":"601bc78a-d499-4391-ada7-44e34c35c547","Type":"ContainerDied","Data":"f242257a036a13fa8789f893e15fa318ab3e35f23ad5d062f80b878c661252cb"} Oct 14 10:28:37 crc kubenswrapper[4698]: I1014 10:28:37.018167 4698 scope.go:117] "RemoveContainer" 
containerID="87bba9ea9ee8dd67f565bc4ea643f9394d7612f02668f96d9a0e0f0371644851" Oct 14 10:28:37 crc kubenswrapper[4698]: E1014 10:28:37.019225 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lp4sk_openshift-machine-config-operator(c359a8fc-1e2f-49af-8da2-719d52bd969a)\"" pod="openshift-machine-config-operator/machine-config-daemon-lp4sk" podUID="c359a8fc-1e2f-49af-8da2-719d52bd969a" Oct 14 10:28:38 crc kubenswrapper[4698]: I1014 10:28:38.086518 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-tbpt4" Oct 14 10:28:38 crc kubenswrapper[4698]: I1014 10:28:38.266191 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4dkq7\" (UniqueName: \"kubernetes.io/projected/601bc78a-d499-4391-ada7-44e34c35c547-kube-api-access-4dkq7\") pod \"601bc78a-d499-4391-ada7-44e34c35c547\" (UID: \"601bc78a-d499-4391-ada7-44e34c35c547\") " Oct 14 10:28:38 crc kubenswrapper[4698]: I1014 10:28:38.266263 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/601bc78a-d499-4391-ada7-44e34c35c547-ssh-key\") pod \"601bc78a-d499-4391-ada7-44e34c35c547\" (UID: \"601bc78a-d499-4391-ada7-44e34c35c547\") " Oct 14 10:28:38 crc kubenswrapper[4698]: I1014 10:28:38.266375 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/601bc78a-d499-4391-ada7-44e34c35c547-inventory\") pod \"601bc78a-d499-4391-ada7-44e34c35c547\" (UID: \"601bc78a-d499-4391-ada7-44e34c35c547\") " Oct 14 10:28:38 crc kubenswrapper[4698]: I1014 10:28:38.272008 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/601bc78a-d499-4391-ada7-44e34c35c547-kube-api-access-4dkq7" (OuterVolumeSpecName: "kube-api-access-4dkq7") pod "601bc78a-d499-4391-ada7-44e34c35c547" (UID: "601bc78a-d499-4391-ada7-44e34c35c547"). InnerVolumeSpecName "kube-api-access-4dkq7". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 10:28:38 crc kubenswrapper[4698]: I1014 10:28:38.296193 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/601bc78a-d499-4391-ada7-44e34c35c547-inventory" (OuterVolumeSpecName: "inventory") pod "601bc78a-d499-4391-ada7-44e34c35c547" (UID: "601bc78a-d499-4391-ada7-44e34c35c547"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 10:28:38 crc kubenswrapper[4698]: I1014 10:28:38.295744 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/601bc78a-d499-4391-ada7-44e34c35c547-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "601bc78a-d499-4391-ada7-44e34c35c547" (UID: "601bc78a-d499-4391-ada7-44e34c35c547"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 10:28:38 crc kubenswrapper[4698]: I1014 10:28:38.368810 4698 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/601bc78a-d499-4391-ada7-44e34c35c547-inventory\") on node \"crc\" DevicePath \"\"" Oct 14 10:28:38 crc kubenswrapper[4698]: I1014 10:28:38.368853 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4dkq7\" (UniqueName: \"kubernetes.io/projected/601bc78a-d499-4391-ada7-44e34c35c547-kube-api-access-4dkq7\") on node \"crc\" DevicePath \"\"" Oct 14 10:28:38 crc kubenswrapper[4698]: I1014 10:28:38.368868 4698 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/601bc78a-d499-4391-ada7-44e34c35c547-ssh-key\") on node \"crc\" DevicePath \"\"" Oct 14 10:28:38 crc kubenswrapper[4698]: I1014 10:28:38.630683 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-tbpt4" event={"ID":"601bc78a-d499-4391-ada7-44e34c35c547","Type":"ContainerDied","Data":"fe8166ad85cf5e595df81127cbec1d7f5966348630359ecb9c2fb1b005fdbb0a"} Oct 14 10:28:38 crc kubenswrapper[4698]: I1014 10:28:38.630732 4698 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fe8166ad85cf5e595df81127cbec1d7f5966348630359ecb9c2fb1b005fdbb0a" Oct 14 10:28:38 crc kubenswrapper[4698]: I1014 10:28:38.630893 4698 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-tbpt4" Oct 14 10:28:38 crc kubenswrapper[4698]: I1014 10:28:38.857162 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-49gfb"] Oct 14 10:28:38 crc kubenswrapper[4698]: E1014 10:28:38.857702 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="601bc78a-d499-4391-ada7-44e34c35c547" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Oct 14 10:28:38 crc kubenswrapper[4698]: I1014 10:28:38.857725 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="601bc78a-d499-4391-ada7-44e34c35c547" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Oct 14 10:28:38 crc kubenswrapper[4698]: I1014 10:28:38.857978 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="601bc78a-d499-4391-ada7-44e34c35c547" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Oct 14 10:28:38 crc kubenswrapper[4698]: I1014 10:28:38.858817 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-49gfb" Oct 14 10:28:38 crc kubenswrapper[4698]: I1014 10:28:38.861415 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-q5blv" Oct 14 10:28:38 crc kubenswrapper[4698]: I1014 10:28:38.864582 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Oct 14 10:28:38 crc kubenswrapper[4698]: I1014 10:28:38.864757 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Oct 14 10:28:38 crc kubenswrapper[4698]: I1014 10:28:38.867228 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Oct 14 10:28:38 crc kubenswrapper[4698]: I1014 10:28:38.890914 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-49gfb"] Oct 14 10:28:38 crc kubenswrapper[4698]: I1014 10:28:38.987984 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s2dr5\" (UniqueName: \"kubernetes.io/projected/86317787-19aa-4ea7-a4ff-3e604d9c0497-kube-api-access-s2dr5\") pod \"ssh-known-hosts-edpm-deployment-49gfb\" (UID: \"86317787-19aa-4ea7-a4ff-3e604d9c0497\") " pod="openstack/ssh-known-hosts-edpm-deployment-49gfb" Oct 14 10:28:38 crc kubenswrapper[4698]: I1014 10:28:38.988580 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/86317787-19aa-4ea7-a4ff-3e604d9c0497-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-49gfb\" (UID: \"86317787-19aa-4ea7-a4ff-3e604d9c0497\") " pod="openstack/ssh-known-hosts-edpm-deployment-49gfb" Oct 14 10:28:38 crc kubenswrapper[4698]: I1014 10:28:38.988933 4698 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/86317787-19aa-4ea7-a4ff-3e604d9c0497-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-49gfb\" (UID: \"86317787-19aa-4ea7-a4ff-3e604d9c0497\") " pod="openstack/ssh-known-hosts-edpm-deployment-49gfb" Oct 14 10:28:39 crc kubenswrapper[4698]: I1014 10:28:39.091048 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/86317787-19aa-4ea7-a4ff-3e604d9c0497-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-49gfb\" (UID: \"86317787-19aa-4ea7-a4ff-3e604d9c0497\") " pod="openstack/ssh-known-hosts-edpm-deployment-49gfb" Oct 14 10:28:39 crc kubenswrapper[4698]: I1014 10:28:39.091180 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/86317787-19aa-4ea7-a4ff-3e604d9c0497-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-49gfb\" (UID: \"86317787-19aa-4ea7-a4ff-3e604d9c0497\") " pod="openstack/ssh-known-hosts-edpm-deployment-49gfb" Oct 14 10:28:39 crc kubenswrapper[4698]: I1014 10:28:39.091262 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dr5\" (UniqueName: \"kubernetes.io/projected/86317787-19aa-4ea7-a4ff-3e604d9c0497-kube-api-access-s2dr5\") pod \"ssh-known-hosts-edpm-deployment-49gfb\" (UID: \"86317787-19aa-4ea7-a4ff-3e604d9c0497\") " pod="openstack/ssh-known-hosts-edpm-deployment-49gfb" Oct 14 10:28:39 crc kubenswrapper[4698]: I1014 10:28:39.097562 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/86317787-19aa-4ea7-a4ff-3e604d9c0497-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-49gfb\" (UID: \"86317787-19aa-4ea7-a4ff-3e604d9c0497\") " pod="openstack/ssh-known-hosts-edpm-deployment-49gfb" Oct 14 10:28:39 crc 
kubenswrapper[4698]: I1014 10:28:39.110341 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/86317787-19aa-4ea7-a4ff-3e604d9c0497-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-49gfb\" (UID: \"86317787-19aa-4ea7-a4ff-3e604d9c0497\") " pod="openstack/ssh-known-hosts-edpm-deployment-49gfb" Oct 14 10:28:39 crc kubenswrapper[4698]: I1014 10:28:39.114859 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2dr5\" (UniqueName: \"kubernetes.io/projected/86317787-19aa-4ea7-a4ff-3e604d9c0497-kube-api-access-s2dr5\") pod \"ssh-known-hosts-edpm-deployment-49gfb\" (UID: \"86317787-19aa-4ea7-a4ff-3e604d9c0497\") " pod="openstack/ssh-known-hosts-edpm-deployment-49gfb" Oct 14 10:28:39 crc kubenswrapper[4698]: I1014 10:28:39.192925 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-49gfb" Oct 14 10:28:39 crc kubenswrapper[4698]: I1014 10:28:39.790998 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-49gfb"] Oct 14 10:28:40 crc kubenswrapper[4698]: I1014 10:28:40.655322 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-49gfb" event={"ID":"86317787-19aa-4ea7-a4ff-3e604d9c0497","Type":"ContainerStarted","Data":"606df5f5320dfcceb0e26688ec89cb292dcc10f154a0a3a7402d4ca7ebcfbae7"} Oct 14 10:28:40 crc kubenswrapper[4698]: I1014 10:28:40.655666 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-49gfb" event={"ID":"86317787-19aa-4ea7-a4ff-3e604d9c0497","Type":"ContainerStarted","Data":"5ec32795c71545f13dee0249cdd1f8b190521457408097fc6b06a4e7c62f1229"} Oct 14 10:28:40 crc kubenswrapper[4698]: I1014 10:28:40.681407 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/ssh-known-hosts-edpm-deployment-49gfb" podStartSLOduration=2.11261497 podStartE2EDuration="2.681380483s" podCreationTimestamp="2025-10-14 10:28:38 +0000 UTC" firstStartedPulling="2025-10-14 10:28:39.800406935 +0000 UTC m=+1901.497706391" lastFinishedPulling="2025-10-14 10:28:40.369172488 +0000 UTC m=+1902.066471904" observedRunningTime="2025-10-14 10:28:40.680794776 +0000 UTC m=+1902.378094212" watchObservedRunningTime="2025-10-14 10:28:40.681380483 +0000 UTC m=+1902.378679929" Oct 14 10:28:48 crc kubenswrapper[4698]: I1014 10:28:48.743362 4698 generic.go:334] "Generic (PLEG): container finished" podID="86317787-19aa-4ea7-a4ff-3e604d9c0497" containerID="606df5f5320dfcceb0e26688ec89cb292dcc10f154a0a3a7402d4ca7ebcfbae7" exitCode=0 Oct 14 10:28:48 crc kubenswrapper[4698]: I1014 10:28:48.743475 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-49gfb" event={"ID":"86317787-19aa-4ea7-a4ff-3e604d9c0497","Type":"ContainerDied","Data":"606df5f5320dfcceb0e26688ec89cb292dcc10f154a0a3a7402d4ca7ebcfbae7"} Oct 14 10:28:50 crc kubenswrapper[4698]: I1014 10:28:50.194464 4698 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-49gfb" Oct 14 10:28:50 crc kubenswrapper[4698]: I1014 10:28:50.289707 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/86317787-19aa-4ea7-a4ff-3e604d9c0497-ssh-key-openstack-edpm-ipam\") pod \"86317787-19aa-4ea7-a4ff-3e604d9c0497\" (UID: \"86317787-19aa-4ea7-a4ff-3e604d9c0497\") " Oct 14 10:28:50 crc kubenswrapper[4698]: I1014 10:28:50.289840 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s2dr5\" (UniqueName: \"kubernetes.io/projected/86317787-19aa-4ea7-a4ff-3e604d9c0497-kube-api-access-s2dr5\") pod \"86317787-19aa-4ea7-a4ff-3e604d9c0497\" (UID: \"86317787-19aa-4ea7-a4ff-3e604d9c0497\") " Oct 14 10:28:50 crc kubenswrapper[4698]: I1014 10:28:50.289929 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/86317787-19aa-4ea7-a4ff-3e604d9c0497-inventory-0\") pod \"86317787-19aa-4ea7-a4ff-3e604d9c0497\" (UID: \"86317787-19aa-4ea7-a4ff-3e604d9c0497\") " Oct 14 10:28:50 crc kubenswrapper[4698]: I1014 10:28:50.296333 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/86317787-19aa-4ea7-a4ff-3e604d9c0497-kube-api-access-s2dr5" (OuterVolumeSpecName: "kube-api-access-s2dr5") pod "86317787-19aa-4ea7-a4ff-3e604d9c0497" (UID: "86317787-19aa-4ea7-a4ff-3e604d9c0497"). InnerVolumeSpecName "kube-api-access-s2dr5". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 10:28:50 crc kubenswrapper[4698]: I1014 10:28:50.321399 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/86317787-19aa-4ea7-a4ff-3e604d9c0497-inventory-0" (OuterVolumeSpecName: "inventory-0") pod "86317787-19aa-4ea7-a4ff-3e604d9c0497" (UID: "86317787-19aa-4ea7-a4ff-3e604d9c0497"). 
InnerVolumeSpecName "inventory-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 10:28:50 crc kubenswrapper[4698]: I1014 10:28:50.331558 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/86317787-19aa-4ea7-a4ff-3e604d9c0497-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "86317787-19aa-4ea7-a4ff-3e604d9c0497" (UID: "86317787-19aa-4ea7-a4ff-3e604d9c0497"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 10:28:50 crc kubenswrapper[4698]: I1014 10:28:50.392611 4698 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/86317787-19aa-4ea7-a4ff-3e604d9c0497-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Oct 14 10:28:50 crc kubenswrapper[4698]: I1014 10:28:50.392641 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s2dr5\" (UniqueName: \"kubernetes.io/projected/86317787-19aa-4ea7-a4ff-3e604d9c0497-kube-api-access-s2dr5\") on node \"crc\" DevicePath \"\"" Oct 14 10:28:50 crc kubenswrapper[4698]: I1014 10:28:50.392651 4698 reconciler_common.go:293] "Volume detached for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/86317787-19aa-4ea7-a4ff-3e604d9c0497-inventory-0\") on node \"crc\" DevicePath \"\"" Oct 14 10:28:50 crc kubenswrapper[4698]: I1014 10:28:50.761734 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-49gfb" event={"ID":"86317787-19aa-4ea7-a4ff-3e604d9c0497","Type":"ContainerDied","Data":"5ec32795c71545f13dee0249cdd1f8b190521457408097fc6b06a4e7c62f1229"} Oct 14 10:28:50 crc kubenswrapper[4698]: I1014 10:28:50.761808 4698 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5ec32795c71545f13dee0249cdd1f8b190521457408097fc6b06a4e7c62f1229" Oct 14 10:28:50 crc kubenswrapper[4698]: I1014 10:28:50.761881 
4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-49gfb" Oct 14 10:28:50 crc kubenswrapper[4698]: I1014 10:28:50.847381 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-xc794"] Oct 14 10:28:50 crc kubenswrapper[4698]: E1014 10:28:50.847868 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="86317787-19aa-4ea7-a4ff-3e604d9c0497" containerName="ssh-known-hosts-edpm-deployment" Oct 14 10:28:50 crc kubenswrapper[4698]: I1014 10:28:50.847884 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="86317787-19aa-4ea7-a4ff-3e604d9c0497" containerName="ssh-known-hosts-edpm-deployment" Oct 14 10:28:50 crc kubenswrapper[4698]: I1014 10:28:50.848097 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="86317787-19aa-4ea7-a4ff-3e604d9c0497" containerName="ssh-known-hosts-edpm-deployment" Oct 14 10:28:50 crc kubenswrapper[4698]: I1014 10:28:50.848843 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-xc794" Oct 14 10:28:50 crc kubenswrapper[4698]: I1014 10:28:50.850909 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Oct 14 10:28:50 crc kubenswrapper[4698]: I1014 10:28:50.850915 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Oct 14 10:28:50 crc kubenswrapper[4698]: I1014 10:28:50.851155 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-q5blv" Oct 14 10:28:50 crc kubenswrapper[4698]: I1014 10:28:50.851400 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Oct 14 10:28:50 crc kubenswrapper[4698]: I1014 10:28:50.858384 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-xc794"] Oct 14 10:28:50 crc kubenswrapper[4698]: I1014 10:28:50.905544 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/374455db-3111-424a-82eb-0960266ac879-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-xc794\" (UID: \"374455db-3111-424a-82eb-0960266ac879\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-xc794" Oct 14 10:28:50 crc kubenswrapper[4698]: I1014 10:28:50.905627 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hrcmb\" (UniqueName: \"kubernetes.io/projected/374455db-3111-424a-82eb-0960266ac879-kube-api-access-hrcmb\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-xc794\" (UID: \"374455db-3111-424a-82eb-0960266ac879\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-xc794" Oct 14 10:28:50 crc kubenswrapper[4698]: I1014 10:28:50.905825 4698 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/374455db-3111-424a-82eb-0960266ac879-ssh-key\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-xc794\" (UID: \"374455db-3111-424a-82eb-0960266ac879\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-xc794" Oct 14 10:28:51 crc kubenswrapper[4698]: I1014 10:28:51.009115 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/374455db-3111-424a-82eb-0960266ac879-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-xc794\" (UID: \"374455db-3111-424a-82eb-0960266ac879\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-xc794" Oct 14 10:28:51 crc kubenswrapper[4698]: I1014 10:28:51.009340 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hrcmb\" (UniqueName: \"kubernetes.io/projected/374455db-3111-424a-82eb-0960266ac879-kube-api-access-hrcmb\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-xc794\" (UID: \"374455db-3111-424a-82eb-0960266ac879\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-xc794" Oct 14 10:28:51 crc kubenswrapper[4698]: I1014 10:28:51.009493 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/374455db-3111-424a-82eb-0960266ac879-ssh-key\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-xc794\" (UID: \"374455db-3111-424a-82eb-0960266ac879\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-xc794" Oct 14 10:28:51 crc kubenswrapper[4698]: I1014 10:28:51.014899 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/374455db-3111-424a-82eb-0960266ac879-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-xc794\" (UID: \"374455db-3111-424a-82eb-0960266ac879\") " 
pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-xc794" Oct 14 10:28:51 crc kubenswrapper[4698]: I1014 10:28:51.015254 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/374455db-3111-424a-82eb-0960266ac879-ssh-key\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-xc794\" (UID: \"374455db-3111-424a-82eb-0960266ac879\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-xc794" Oct 14 10:28:51 crc kubenswrapper[4698]: I1014 10:28:51.019403 4698 scope.go:117] "RemoveContainer" containerID="87bba9ea9ee8dd67f565bc4ea643f9394d7612f02668f96d9a0e0f0371644851" Oct 14 10:28:51 crc kubenswrapper[4698]: E1014 10:28:51.019737 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lp4sk_openshift-machine-config-operator(c359a8fc-1e2f-49af-8da2-719d52bd969a)\"" pod="openshift-machine-config-operator/machine-config-daemon-lp4sk" podUID="c359a8fc-1e2f-49af-8da2-719d52bd969a" Oct 14 10:28:51 crc kubenswrapper[4698]: I1014 10:28:51.040073 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hrcmb\" (UniqueName: \"kubernetes.io/projected/374455db-3111-424a-82eb-0960266ac879-kube-api-access-hrcmb\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-xc794\" (UID: \"374455db-3111-424a-82eb-0960266ac879\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-xc794" Oct 14 10:28:51 crc kubenswrapper[4698]: I1014 10:28:51.170202 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-xc794" Oct 14 10:28:51 crc kubenswrapper[4698]: I1014 10:28:51.783109 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-xc794"] Oct 14 10:28:52 crc kubenswrapper[4698]: I1014 10:28:52.785050 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-xc794" event={"ID":"374455db-3111-424a-82eb-0960266ac879","Type":"ContainerStarted","Data":"c756e2ff8bafb003b0f6693667be5ad7fbc71583df3bcde2cc06fbb4ecb97605"} Oct 14 10:28:52 crc kubenswrapper[4698]: I1014 10:28:52.785103 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-xc794" event={"ID":"374455db-3111-424a-82eb-0960266ac879","Type":"ContainerStarted","Data":"cfcbe5532de463718305b0fb8376ff2edc52afc58c3f0b4f62630e6be0a2d97f"} Oct 14 10:28:52 crc kubenswrapper[4698]: I1014 10:28:52.820689 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-xc794" podStartSLOduration=2.379634696 podStartE2EDuration="2.820668461s" podCreationTimestamp="2025-10-14 10:28:50 +0000 UTC" firstStartedPulling="2025-10-14 10:28:51.789893288 +0000 UTC m=+1913.487192744" lastFinishedPulling="2025-10-14 10:28:52.230927083 +0000 UTC m=+1913.928226509" observedRunningTime="2025-10-14 10:28:52.810470922 +0000 UTC m=+1914.507770338" watchObservedRunningTime="2025-10-14 10:28:52.820668461 +0000 UTC m=+1914.517967877" Oct 14 10:29:00 crc kubenswrapper[4698]: I1014 10:29:00.853012 4698 generic.go:334] "Generic (PLEG): container finished" podID="374455db-3111-424a-82eb-0960266ac879" containerID="c756e2ff8bafb003b0f6693667be5ad7fbc71583df3bcde2cc06fbb4ecb97605" exitCode=0 Oct 14 10:29:00 crc kubenswrapper[4698]: I1014 10:29:00.853159 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-xc794" event={"ID":"374455db-3111-424a-82eb-0960266ac879","Type":"ContainerDied","Data":"c756e2ff8bafb003b0f6693667be5ad7fbc71583df3bcde2cc06fbb4ecb97605"} Oct 14 10:29:02 crc kubenswrapper[4698]: I1014 10:29:02.370941 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-xc794" Oct 14 10:29:02 crc kubenswrapper[4698]: I1014 10:29:02.412332 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/374455db-3111-424a-82eb-0960266ac879-inventory\") pod \"374455db-3111-424a-82eb-0960266ac879\" (UID: \"374455db-3111-424a-82eb-0960266ac879\") " Oct 14 10:29:02 crc kubenswrapper[4698]: I1014 10:29:02.412459 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hrcmb\" (UniqueName: \"kubernetes.io/projected/374455db-3111-424a-82eb-0960266ac879-kube-api-access-hrcmb\") pod \"374455db-3111-424a-82eb-0960266ac879\" (UID: \"374455db-3111-424a-82eb-0960266ac879\") " Oct 14 10:29:02 crc kubenswrapper[4698]: I1014 10:29:02.412522 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/374455db-3111-424a-82eb-0960266ac879-ssh-key\") pod \"374455db-3111-424a-82eb-0960266ac879\" (UID: \"374455db-3111-424a-82eb-0960266ac879\") " Oct 14 10:29:02 crc kubenswrapper[4698]: I1014 10:29:02.418253 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/374455db-3111-424a-82eb-0960266ac879-kube-api-access-hrcmb" (OuterVolumeSpecName: "kube-api-access-hrcmb") pod "374455db-3111-424a-82eb-0960266ac879" (UID: "374455db-3111-424a-82eb-0960266ac879"). InnerVolumeSpecName "kube-api-access-hrcmb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 10:29:02 crc kubenswrapper[4698]: I1014 10:29:02.442228 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/374455db-3111-424a-82eb-0960266ac879-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "374455db-3111-424a-82eb-0960266ac879" (UID: "374455db-3111-424a-82eb-0960266ac879"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 10:29:02 crc kubenswrapper[4698]: I1014 10:29:02.451532 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/374455db-3111-424a-82eb-0960266ac879-inventory" (OuterVolumeSpecName: "inventory") pod "374455db-3111-424a-82eb-0960266ac879" (UID: "374455db-3111-424a-82eb-0960266ac879"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 10:29:02 crc kubenswrapper[4698]: I1014 10:29:02.514696 4698 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/374455db-3111-424a-82eb-0960266ac879-ssh-key\") on node \"crc\" DevicePath \"\"" Oct 14 10:29:02 crc kubenswrapper[4698]: I1014 10:29:02.514728 4698 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/374455db-3111-424a-82eb-0960266ac879-inventory\") on node \"crc\" DevicePath \"\"" Oct 14 10:29:02 crc kubenswrapper[4698]: I1014 10:29:02.514738 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hrcmb\" (UniqueName: \"kubernetes.io/projected/374455db-3111-424a-82eb-0960266ac879-kube-api-access-hrcmb\") on node \"crc\" DevicePath \"\"" Oct 14 10:29:02 crc kubenswrapper[4698]: I1014 10:29:02.876810 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-xc794" 
event={"ID":"374455db-3111-424a-82eb-0960266ac879","Type":"ContainerDied","Data":"cfcbe5532de463718305b0fb8376ff2edc52afc58c3f0b4f62630e6be0a2d97f"} Oct 14 10:29:02 crc kubenswrapper[4698]: I1014 10:29:02.877087 4698 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cfcbe5532de463718305b0fb8376ff2edc52afc58c3f0b4f62630e6be0a2d97f" Oct 14 10:29:02 crc kubenswrapper[4698]: I1014 10:29:02.876882 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-xc794" Oct 14 10:29:02 crc kubenswrapper[4698]: I1014 10:29:02.949289 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-spcw8"] Oct 14 10:29:02 crc kubenswrapper[4698]: E1014 10:29:02.949673 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="374455db-3111-424a-82eb-0960266ac879" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Oct 14 10:29:02 crc kubenswrapper[4698]: I1014 10:29:02.949689 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="374455db-3111-424a-82eb-0960266ac879" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Oct 14 10:29:02 crc kubenswrapper[4698]: I1014 10:29:02.949934 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="374455db-3111-424a-82eb-0960266ac879" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Oct 14 10:29:02 crc kubenswrapper[4698]: I1014 10:29:02.950584 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-spcw8" Oct 14 10:29:02 crc kubenswrapper[4698]: I1014 10:29:02.955571 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Oct 14 10:29:02 crc kubenswrapper[4698]: I1014 10:29:02.955605 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Oct 14 10:29:02 crc kubenswrapper[4698]: I1014 10:29:02.955832 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Oct 14 10:29:02 crc kubenswrapper[4698]: I1014 10:29:02.961711 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-q5blv" Oct 14 10:29:02 crc kubenswrapper[4698]: I1014 10:29:02.963430 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-spcw8"] Oct 14 10:29:03 crc kubenswrapper[4698]: I1014 10:29:03.017213 4698 scope.go:117] "RemoveContainer" containerID="87bba9ea9ee8dd67f565bc4ea643f9394d7612f02668f96d9a0e0f0371644851" Oct 14 10:29:03 crc kubenswrapper[4698]: E1014 10:29:03.017542 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lp4sk_openshift-machine-config-operator(c359a8fc-1e2f-49af-8da2-719d52bd969a)\"" pod="openshift-machine-config-operator/machine-config-daemon-lp4sk" podUID="c359a8fc-1e2f-49af-8da2-719d52bd969a" Oct 14 10:29:03 crc kubenswrapper[4698]: I1014 10:29:03.024938 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qtctd\" (UniqueName: \"kubernetes.io/projected/310c1648-fc92-4008-8e7c-ff410b890a2b-kube-api-access-qtctd\") pod 
\"reboot-os-edpm-deployment-openstack-edpm-ipam-spcw8\" (UID: \"310c1648-fc92-4008-8e7c-ff410b890a2b\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-spcw8" Oct 14 10:29:03 crc kubenswrapper[4698]: I1014 10:29:03.025083 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/310c1648-fc92-4008-8e7c-ff410b890a2b-ssh-key\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-spcw8\" (UID: \"310c1648-fc92-4008-8e7c-ff410b890a2b\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-spcw8" Oct 14 10:29:03 crc kubenswrapper[4698]: I1014 10:29:03.025156 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/310c1648-fc92-4008-8e7c-ff410b890a2b-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-spcw8\" (UID: \"310c1648-fc92-4008-8e7c-ff410b890a2b\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-spcw8" Oct 14 10:29:03 crc kubenswrapper[4698]: I1014 10:29:03.126728 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/310c1648-fc92-4008-8e7c-ff410b890a2b-ssh-key\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-spcw8\" (UID: \"310c1648-fc92-4008-8e7c-ff410b890a2b\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-spcw8" Oct 14 10:29:03 crc kubenswrapper[4698]: I1014 10:29:03.126906 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/310c1648-fc92-4008-8e7c-ff410b890a2b-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-spcw8\" (UID: \"310c1648-fc92-4008-8e7c-ff410b890a2b\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-spcw8" Oct 14 10:29:03 crc kubenswrapper[4698]: I1014 10:29:03.127108 4698 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-qtctd\" (UniqueName: \"kubernetes.io/projected/310c1648-fc92-4008-8e7c-ff410b890a2b-kube-api-access-qtctd\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-spcw8\" (UID: \"310c1648-fc92-4008-8e7c-ff410b890a2b\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-spcw8" Oct 14 10:29:03 crc kubenswrapper[4698]: I1014 10:29:03.130789 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/310c1648-fc92-4008-8e7c-ff410b890a2b-ssh-key\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-spcw8\" (UID: \"310c1648-fc92-4008-8e7c-ff410b890a2b\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-spcw8" Oct 14 10:29:03 crc kubenswrapper[4698]: I1014 10:29:03.132970 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/310c1648-fc92-4008-8e7c-ff410b890a2b-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-spcw8\" (UID: \"310c1648-fc92-4008-8e7c-ff410b890a2b\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-spcw8" Oct 14 10:29:03 crc kubenswrapper[4698]: I1014 10:29:03.146180 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qtctd\" (UniqueName: \"kubernetes.io/projected/310c1648-fc92-4008-8e7c-ff410b890a2b-kube-api-access-qtctd\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-spcw8\" (UID: \"310c1648-fc92-4008-8e7c-ff410b890a2b\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-spcw8" Oct 14 10:29:03 crc kubenswrapper[4698]: I1014 10:29:03.281423 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-spcw8" Oct 14 10:29:03 crc kubenswrapper[4698]: I1014 10:29:03.782815 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-spcw8"] Oct 14 10:29:03 crc kubenswrapper[4698]: I1014 10:29:03.885811 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-spcw8" event={"ID":"310c1648-fc92-4008-8e7c-ff410b890a2b","Type":"ContainerStarted","Data":"5147cb7f2b4253fa5fd1a76cc25078c7b1504860e2b82e2f06c525fdf7d97552"} Oct 14 10:29:04 crc kubenswrapper[4698]: I1014 10:29:04.897546 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-spcw8" event={"ID":"310c1648-fc92-4008-8e7c-ff410b890a2b","Type":"ContainerStarted","Data":"73d681f451fc95787b5d06c0c2a2e97f914b25cc3155780c1108ca0896a2e332"} Oct 14 10:29:04 crc kubenswrapper[4698]: I1014 10:29:04.924450 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-spcw8" podStartSLOduration=2.489620981 podStartE2EDuration="2.92442651s" podCreationTimestamp="2025-10-14 10:29:02 +0000 UTC" firstStartedPulling="2025-10-14 10:29:03.788801858 +0000 UTC m=+1925.486101284" lastFinishedPulling="2025-10-14 10:29:04.223607397 +0000 UTC m=+1925.920906813" observedRunningTime="2025-10-14 10:29:04.916842125 +0000 UTC m=+1926.614141561" watchObservedRunningTime="2025-10-14 10:29:04.92442651 +0000 UTC m=+1926.621725966" Oct 14 10:29:14 crc kubenswrapper[4698]: I1014 10:29:14.981749 4698 generic.go:334] "Generic (PLEG): container finished" podID="310c1648-fc92-4008-8e7c-ff410b890a2b" containerID="73d681f451fc95787b5d06c0c2a2e97f914b25cc3155780c1108ca0896a2e332" exitCode=0 Oct 14 10:29:14 crc kubenswrapper[4698]: I1014 10:29:14.981846 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-spcw8" event={"ID":"310c1648-fc92-4008-8e7c-ff410b890a2b","Type":"ContainerDied","Data":"73d681f451fc95787b5d06c0c2a2e97f914b25cc3155780c1108ca0896a2e332"} Oct 14 10:29:16 crc kubenswrapper[4698]: I1014 10:29:16.535669 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-spcw8" Oct 14 10:29:16 crc kubenswrapper[4698]: I1014 10:29:16.579418 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/310c1648-fc92-4008-8e7c-ff410b890a2b-inventory\") pod \"310c1648-fc92-4008-8e7c-ff410b890a2b\" (UID: \"310c1648-fc92-4008-8e7c-ff410b890a2b\") " Oct 14 10:29:16 crc kubenswrapper[4698]: I1014 10:29:16.579589 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/310c1648-fc92-4008-8e7c-ff410b890a2b-ssh-key\") pod \"310c1648-fc92-4008-8e7c-ff410b890a2b\" (UID: \"310c1648-fc92-4008-8e7c-ff410b890a2b\") " Oct 14 10:29:16 crc kubenswrapper[4698]: I1014 10:29:16.579649 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qtctd\" (UniqueName: \"kubernetes.io/projected/310c1648-fc92-4008-8e7c-ff410b890a2b-kube-api-access-qtctd\") pod \"310c1648-fc92-4008-8e7c-ff410b890a2b\" (UID: \"310c1648-fc92-4008-8e7c-ff410b890a2b\") " Oct 14 10:29:16 crc kubenswrapper[4698]: I1014 10:29:16.590915 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/310c1648-fc92-4008-8e7c-ff410b890a2b-kube-api-access-qtctd" (OuterVolumeSpecName: "kube-api-access-qtctd") pod "310c1648-fc92-4008-8e7c-ff410b890a2b" (UID: "310c1648-fc92-4008-8e7c-ff410b890a2b"). InnerVolumeSpecName "kube-api-access-qtctd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 10:29:16 crc kubenswrapper[4698]: I1014 10:29:16.608281 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/310c1648-fc92-4008-8e7c-ff410b890a2b-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "310c1648-fc92-4008-8e7c-ff410b890a2b" (UID: "310c1648-fc92-4008-8e7c-ff410b890a2b"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 10:29:16 crc kubenswrapper[4698]: I1014 10:29:16.624100 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/310c1648-fc92-4008-8e7c-ff410b890a2b-inventory" (OuterVolumeSpecName: "inventory") pod "310c1648-fc92-4008-8e7c-ff410b890a2b" (UID: "310c1648-fc92-4008-8e7c-ff410b890a2b"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 10:29:16 crc kubenswrapper[4698]: I1014 10:29:16.681819 4698 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/310c1648-fc92-4008-8e7c-ff410b890a2b-inventory\") on node \"crc\" DevicePath \"\"" Oct 14 10:29:16 crc kubenswrapper[4698]: I1014 10:29:16.681865 4698 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/310c1648-fc92-4008-8e7c-ff410b890a2b-ssh-key\") on node \"crc\" DevicePath \"\"" Oct 14 10:29:16 crc kubenswrapper[4698]: I1014 10:29:16.681878 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qtctd\" (UniqueName: \"kubernetes.io/projected/310c1648-fc92-4008-8e7c-ff410b890a2b-kube-api-access-qtctd\") on node \"crc\" DevicePath \"\"" Oct 14 10:29:17 crc kubenswrapper[4698]: I1014 10:29:17.006805 4698 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-spcw8" Oct 14 10:29:17 crc kubenswrapper[4698]: I1014 10:29:17.006744 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-spcw8" event={"ID":"310c1648-fc92-4008-8e7c-ff410b890a2b","Type":"ContainerDied","Data":"5147cb7f2b4253fa5fd1a76cc25078c7b1504860e2b82e2f06c525fdf7d97552"} Oct 14 10:29:17 crc kubenswrapper[4698]: I1014 10:29:17.007063 4698 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5147cb7f2b4253fa5fd1a76cc25078c7b1504860e2b82e2f06c525fdf7d97552" Oct 14 10:29:17 crc kubenswrapper[4698]: I1014 10:29:17.131932 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-vc6lb"] Oct 14 10:29:17 crc kubenswrapper[4698]: E1014 10:29:17.132884 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="310c1648-fc92-4008-8e7c-ff410b890a2b" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Oct 14 10:29:17 crc kubenswrapper[4698]: I1014 10:29:17.132918 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="310c1648-fc92-4008-8e7c-ff410b890a2b" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Oct 14 10:29:17 crc kubenswrapper[4698]: I1014 10:29:17.133253 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="310c1648-fc92-4008-8e7c-ff410b890a2b" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Oct 14 10:29:17 crc kubenswrapper[4698]: I1014 10:29:17.134196 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-vc6lb" Oct 14 10:29:17 crc kubenswrapper[4698]: I1014 10:29:17.138142 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-libvirt-default-certs-0" Oct 14 10:29:17 crc kubenswrapper[4698]: I1014 10:29:17.138291 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-neutron-metadata-default-certs-0" Oct 14 10:29:17 crc kubenswrapper[4698]: I1014 10:29:17.138441 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Oct 14 10:29:17 crc kubenswrapper[4698]: I1014 10:29:17.138179 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-ovn-default-certs-0" Oct 14 10:29:17 crc kubenswrapper[4698]: I1014 10:29:17.138619 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Oct 14 10:29:17 crc kubenswrapper[4698]: I1014 10:29:17.138663 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Oct 14 10:29:17 crc kubenswrapper[4698]: I1014 10:29:17.138228 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-telemetry-default-certs-0" Oct 14 10:29:17 crc kubenswrapper[4698]: I1014 10:29:17.138390 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-q5blv" Oct 14 10:29:17 crc kubenswrapper[4698]: I1014 10:29:17.142750 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-vc6lb"] Oct 14 10:29:17 crc kubenswrapper[4698]: I1014 10:29:17.195034 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: 
\"kubernetes.io/projected/1c6a03cb-a418-4b77-b5d0-51fe6ef1ce53-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-vc6lb\" (UID: \"1c6a03cb-a418-4b77-b5d0-51fe6ef1ce53\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-vc6lb" Oct 14 10:29:17 crc kubenswrapper[4698]: I1014 10:29:17.195527 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1c6a03cb-a418-4b77-b5d0-51fe6ef1ce53-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-vc6lb\" (UID: \"1c6a03cb-a418-4b77-b5d0-51fe6ef1ce53\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-vc6lb" Oct 14 10:29:17 crc kubenswrapper[4698]: I1014 10:29:17.195585 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/1c6a03cb-a418-4b77-b5d0-51fe6ef1ce53-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-vc6lb\" (UID: \"1c6a03cb-a418-4b77-b5d0-51fe6ef1ce53\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-vc6lb" Oct 14 10:29:17 crc kubenswrapper[4698]: I1014 10:29:17.195656 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1c6a03cb-a418-4b77-b5d0-51fe6ef1ce53-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-vc6lb\" (UID: \"1c6a03cb-a418-4b77-b5d0-51fe6ef1ce53\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-vc6lb" Oct 14 10:29:17 crc kubenswrapper[4698]: I1014 10:29:17.195805 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/1c6a03cb-a418-4b77-b5d0-51fe6ef1ce53-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-vc6lb\" (UID: \"1c6a03cb-a418-4b77-b5d0-51fe6ef1ce53\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-vc6lb" Oct 14 10:29:17 crc kubenswrapper[4698]: I1014 10:29:17.195857 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1c6a03cb-a418-4b77-b5d0-51fe6ef1ce53-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-vc6lb\" (UID: \"1c6a03cb-a418-4b77-b5d0-51fe6ef1ce53\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-vc6lb" Oct 14 10:29:17 crc kubenswrapper[4698]: I1014 10:29:17.195927 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/1c6a03cb-a418-4b77-b5d0-51fe6ef1ce53-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-vc6lb\" (UID: \"1c6a03cb-a418-4b77-b5d0-51fe6ef1ce53\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-vc6lb" Oct 14 10:29:17 crc kubenswrapper[4698]: I1014 10:29:17.196072 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/1c6a03cb-a418-4b77-b5d0-51fe6ef1ce53-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-vc6lb\" (UID: \"1c6a03cb-a418-4b77-b5d0-51fe6ef1ce53\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-vc6lb" Oct 14 10:29:17 crc kubenswrapper[4698]: I1014 10:29:17.196499 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-f5jnj\" (UniqueName: \"kubernetes.io/projected/1c6a03cb-a418-4b77-b5d0-51fe6ef1ce53-kube-api-access-f5jnj\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-vc6lb\" (UID: \"1c6a03cb-a418-4b77-b5d0-51fe6ef1ce53\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-vc6lb" Oct 14 10:29:17 crc kubenswrapper[4698]: I1014 10:29:17.196597 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1c6a03cb-a418-4b77-b5d0-51fe6ef1ce53-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-vc6lb\" (UID: \"1c6a03cb-a418-4b77-b5d0-51fe6ef1ce53\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-vc6lb" Oct 14 10:29:17 crc kubenswrapper[4698]: I1014 10:29:17.196703 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1c6a03cb-a418-4b77-b5d0-51fe6ef1ce53-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-vc6lb\" (UID: \"1c6a03cb-a418-4b77-b5d0-51fe6ef1ce53\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-vc6lb" Oct 14 10:29:17 crc kubenswrapper[4698]: I1014 10:29:17.196794 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1c6a03cb-a418-4b77-b5d0-51fe6ef1ce53-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-vc6lb\" (UID: \"1c6a03cb-a418-4b77-b5d0-51fe6ef1ce53\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-vc6lb" Oct 14 10:29:17 crc kubenswrapper[4698]: I1014 10:29:17.196916 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: 
\"kubernetes.io/secret/1c6a03cb-a418-4b77-b5d0-51fe6ef1ce53-ssh-key\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-vc6lb\" (UID: \"1c6a03cb-a418-4b77-b5d0-51fe6ef1ce53\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-vc6lb" Oct 14 10:29:17 crc kubenswrapper[4698]: I1014 10:29:17.197061 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1c6a03cb-a418-4b77-b5d0-51fe6ef1ce53-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-vc6lb\" (UID: \"1c6a03cb-a418-4b77-b5d0-51fe6ef1ce53\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-vc6lb" Oct 14 10:29:17 crc kubenswrapper[4698]: I1014 10:29:17.299206 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/1c6a03cb-a418-4b77-b5d0-51fe6ef1ce53-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-vc6lb\" (UID: \"1c6a03cb-a418-4b77-b5d0-51fe6ef1ce53\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-vc6lb" Oct 14 10:29:17 crc kubenswrapper[4698]: I1014 10:29:17.299267 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/1c6a03cb-a418-4b77-b5d0-51fe6ef1ce53-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-vc6lb\" (UID: \"1c6a03cb-a418-4b77-b5d0-51fe6ef1ce53\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-vc6lb" Oct 14 10:29:17 crc kubenswrapper[4698]: I1014 10:29:17.299350 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f5jnj\" (UniqueName: 
\"kubernetes.io/projected/1c6a03cb-a418-4b77-b5d0-51fe6ef1ce53-kube-api-access-f5jnj\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-vc6lb\" (UID: \"1c6a03cb-a418-4b77-b5d0-51fe6ef1ce53\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-vc6lb" Oct 14 10:29:17 crc kubenswrapper[4698]: I1014 10:29:17.299382 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1c6a03cb-a418-4b77-b5d0-51fe6ef1ce53-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-vc6lb\" (UID: \"1c6a03cb-a418-4b77-b5d0-51fe6ef1ce53\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-vc6lb" Oct 14 10:29:17 crc kubenswrapper[4698]: I1014 10:29:17.299422 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1c6a03cb-a418-4b77-b5d0-51fe6ef1ce53-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-vc6lb\" (UID: \"1c6a03cb-a418-4b77-b5d0-51fe6ef1ce53\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-vc6lb" Oct 14 10:29:17 crc kubenswrapper[4698]: I1014 10:29:17.299445 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1c6a03cb-a418-4b77-b5d0-51fe6ef1ce53-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-vc6lb\" (UID: \"1c6a03cb-a418-4b77-b5d0-51fe6ef1ce53\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-vc6lb" Oct 14 10:29:17 crc kubenswrapper[4698]: I1014 10:29:17.299478 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/1c6a03cb-a418-4b77-b5d0-51fe6ef1ce53-ssh-key\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-vc6lb\" (UID: 
\"1c6a03cb-a418-4b77-b5d0-51fe6ef1ce53\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-vc6lb" Oct 14 10:29:17 crc kubenswrapper[4698]: I1014 10:29:17.299535 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1c6a03cb-a418-4b77-b5d0-51fe6ef1ce53-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-vc6lb\" (UID: \"1c6a03cb-a418-4b77-b5d0-51fe6ef1ce53\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-vc6lb" Oct 14 10:29:17 crc kubenswrapper[4698]: I1014 10:29:17.299573 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/1c6a03cb-a418-4b77-b5d0-51fe6ef1ce53-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-vc6lb\" (UID: \"1c6a03cb-a418-4b77-b5d0-51fe6ef1ce53\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-vc6lb" Oct 14 10:29:17 crc kubenswrapper[4698]: I1014 10:29:17.299607 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1c6a03cb-a418-4b77-b5d0-51fe6ef1ce53-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-vc6lb\" (UID: \"1c6a03cb-a418-4b77-b5d0-51fe6ef1ce53\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-vc6lb" Oct 14 10:29:17 crc kubenswrapper[4698]: I1014 10:29:17.299640 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/1c6a03cb-a418-4b77-b5d0-51fe6ef1ce53-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-vc6lb\" (UID: \"1c6a03cb-a418-4b77-b5d0-51fe6ef1ce53\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-vc6lb" Oct 
14 10:29:17 crc kubenswrapper[4698]: I1014 10:29:17.299673 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1c6a03cb-a418-4b77-b5d0-51fe6ef1ce53-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-vc6lb\" (UID: \"1c6a03cb-a418-4b77-b5d0-51fe6ef1ce53\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-vc6lb" Oct 14 10:29:17 crc kubenswrapper[4698]: I1014 10:29:17.299719 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1c6a03cb-a418-4b77-b5d0-51fe6ef1ce53-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-vc6lb\" (UID: \"1c6a03cb-a418-4b77-b5d0-51fe6ef1ce53\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-vc6lb" Oct 14 10:29:17 crc kubenswrapper[4698]: I1014 10:29:17.299749 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1c6a03cb-a418-4b77-b5d0-51fe6ef1ce53-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-vc6lb\" (UID: \"1c6a03cb-a418-4b77-b5d0-51fe6ef1ce53\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-vc6lb" Oct 14 10:29:17 crc kubenswrapper[4698]: I1014 10:29:17.305724 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1c6a03cb-a418-4b77-b5d0-51fe6ef1ce53-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-vc6lb\" (UID: \"1c6a03cb-a418-4b77-b5d0-51fe6ef1ce53\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-vc6lb" Oct 14 10:29:17 crc kubenswrapper[4698]: I1014 10:29:17.305932 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"ssh-key\" (UniqueName: \"kubernetes.io/secret/1c6a03cb-a418-4b77-b5d0-51fe6ef1ce53-ssh-key\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-vc6lb\" (UID: \"1c6a03cb-a418-4b77-b5d0-51fe6ef1ce53\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-vc6lb" Oct 14 10:29:17 crc kubenswrapper[4698]: I1014 10:29:17.305835 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1c6a03cb-a418-4b77-b5d0-51fe6ef1ce53-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-vc6lb\" (UID: \"1c6a03cb-a418-4b77-b5d0-51fe6ef1ce53\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-vc6lb" Oct 14 10:29:17 crc kubenswrapper[4698]: I1014 10:29:17.306834 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1c6a03cb-a418-4b77-b5d0-51fe6ef1ce53-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-vc6lb\" (UID: \"1c6a03cb-a418-4b77-b5d0-51fe6ef1ce53\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-vc6lb" Oct 14 10:29:17 crc kubenswrapper[4698]: I1014 10:29:17.307787 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1c6a03cb-a418-4b77-b5d0-51fe6ef1ce53-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-vc6lb\" (UID: \"1c6a03cb-a418-4b77-b5d0-51fe6ef1ce53\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-vc6lb" Oct 14 10:29:17 crc kubenswrapper[4698]: I1014 10:29:17.308063 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1c6a03cb-a418-4b77-b5d0-51fe6ef1ce53-telemetry-combined-ca-bundle\") pod 
\"install-certs-edpm-deployment-openstack-edpm-ipam-vc6lb\" (UID: \"1c6a03cb-a418-4b77-b5d0-51fe6ef1ce53\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-vc6lb" Oct 14 10:29:17 crc kubenswrapper[4698]: I1014 10:29:17.308186 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/1c6a03cb-a418-4b77-b5d0-51fe6ef1ce53-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-vc6lb\" (UID: \"1c6a03cb-a418-4b77-b5d0-51fe6ef1ce53\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-vc6lb" Oct 14 10:29:17 crc kubenswrapper[4698]: I1014 10:29:17.308397 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1c6a03cb-a418-4b77-b5d0-51fe6ef1ce53-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-vc6lb\" (UID: \"1c6a03cb-a418-4b77-b5d0-51fe6ef1ce53\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-vc6lb" Oct 14 10:29:17 crc kubenswrapper[4698]: I1014 10:29:17.308794 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1c6a03cb-a418-4b77-b5d0-51fe6ef1ce53-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-vc6lb\" (UID: \"1c6a03cb-a418-4b77-b5d0-51fe6ef1ce53\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-vc6lb" Oct 14 10:29:17 crc kubenswrapper[4698]: I1014 10:29:17.309430 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/1c6a03cb-a418-4b77-b5d0-51fe6ef1ce53-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-vc6lb\" (UID: \"1c6a03cb-a418-4b77-b5d0-51fe6ef1ce53\") " 
pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-vc6lb" Oct 14 10:29:17 crc kubenswrapper[4698]: I1014 10:29:17.309880 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/1c6a03cb-a418-4b77-b5d0-51fe6ef1ce53-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-vc6lb\" (UID: \"1c6a03cb-a418-4b77-b5d0-51fe6ef1ce53\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-vc6lb" Oct 14 10:29:17 crc kubenswrapper[4698]: I1014 10:29:17.311116 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/1c6a03cb-a418-4b77-b5d0-51fe6ef1ce53-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-vc6lb\" (UID: \"1c6a03cb-a418-4b77-b5d0-51fe6ef1ce53\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-vc6lb" Oct 14 10:29:17 crc kubenswrapper[4698]: I1014 10:29:17.311409 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1c6a03cb-a418-4b77-b5d0-51fe6ef1ce53-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-vc6lb\" (UID: \"1c6a03cb-a418-4b77-b5d0-51fe6ef1ce53\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-vc6lb" Oct 14 10:29:17 crc kubenswrapper[4698]: I1014 10:29:17.321903 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f5jnj\" (UniqueName: \"kubernetes.io/projected/1c6a03cb-a418-4b77-b5d0-51fe6ef1ce53-kube-api-access-f5jnj\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-vc6lb\" (UID: \"1c6a03cb-a418-4b77-b5d0-51fe6ef1ce53\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-vc6lb" Oct 14 
10:29:17 crc kubenswrapper[4698]: I1014 10:29:17.459855 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-vc6lb" Oct 14 10:29:18 crc kubenswrapper[4698]: I1014 10:29:18.015662 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-vc6lb"] Oct 14 10:29:18 crc kubenswrapper[4698]: I1014 10:29:18.016668 4698 scope.go:117] "RemoveContainer" containerID="87bba9ea9ee8dd67f565bc4ea643f9394d7612f02668f96d9a0e0f0371644851" Oct 14 10:29:18 crc kubenswrapper[4698]: E1014 10:29:18.017089 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lp4sk_openshift-machine-config-operator(c359a8fc-1e2f-49af-8da2-719d52bd969a)\"" pod="openshift-machine-config-operator/machine-config-daemon-lp4sk" podUID="c359a8fc-1e2f-49af-8da2-719d52bd969a" Oct 14 10:29:18 crc kubenswrapper[4698]: I1014 10:29:18.018549 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-vc6lb" event={"ID":"1c6a03cb-a418-4b77-b5d0-51fe6ef1ce53","Type":"ContainerStarted","Data":"e74f1b445af4381ed54009c1218ec72985929ab0fa5b960f253039cd19839271"} Oct 14 10:29:19 crc kubenswrapper[4698]: I1014 10:29:19.033263 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-vc6lb" event={"ID":"1c6a03cb-a418-4b77-b5d0-51fe6ef1ce53","Type":"ContainerStarted","Data":"9f942546744d0310ce58a43028e190a8c0d4283c4eff5890262d93e54d5f7239"} Oct 14 10:29:19 crc kubenswrapper[4698]: I1014 10:29:19.087886 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-vc6lb" podStartSLOduration=1.4642680160000001 
podStartE2EDuration="2.087859302s" podCreationTimestamp="2025-10-14 10:29:17 +0000 UTC" firstStartedPulling="2025-10-14 10:29:18.004988805 +0000 UTC m=+1939.702288221" lastFinishedPulling="2025-10-14 10:29:18.628580091 +0000 UTC m=+1940.325879507" observedRunningTime="2025-10-14 10:29:19.068673328 +0000 UTC m=+1940.765972754" watchObservedRunningTime="2025-10-14 10:29:19.087859302 +0000 UTC m=+1940.785158738" Oct 14 10:29:30 crc kubenswrapper[4698]: I1014 10:29:30.018605 4698 scope.go:117] "RemoveContainer" containerID="87bba9ea9ee8dd67f565bc4ea643f9394d7612f02668f96d9a0e0f0371644851" Oct 14 10:29:30 crc kubenswrapper[4698]: E1014 10:29:30.019855 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lp4sk_openshift-machine-config-operator(c359a8fc-1e2f-49af-8da2-719d52bd969a)\"" pod="openshift-machine-config-operator/machine-config-daemon-lp4sk" podUID="c359a8fc-1e2f-49af-8da2-719d52bd969a" Oct 14 10:29:43 crc kubenswrapper[4698]: I1014 10:29:43.017665 4698 scope.go:117] "RemoveContainer" containerID="87bba9ea9ee8dd67f565bc4ea643f9394d7612f02668f96d9a0e0f0371644851" Oct 14 10:29:43 crc kubenswrapper[4698]: E1014 10:29:43.018654 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lp4sk_openshift-machine-config-operator(c359a8fc-1e2f-49af-8da2-719d52bd969a)\"" pod="openshift-machine-config-operator/machine-config-daemon-lp4sk" podUID="c359a8fc-1e2f-49af-8da2-719d52bd969a" Oct 14 10:29:55 crc kubenswrapper[4698]: I1014 10:29:55.016970 4698 scope.go:117] "RemoveContainer" containerID="87bba9ea9ee8dd67f565bc4ea643f9394d7612f02668f96d9a0e0f0371644851" Oct 14 10:29:55 crc kubenswrapper[4698]: I1014 10:29:55.438150 
4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-lp4sk" event={"ID":"c359a8fc-1e2f-49af-8da2-719d52bd969a","Type":"ContainerStarted","Data":"767e1d443156381bb7f70fbe387aad4a1fb034afb977892926a7c60b7cf8b968"} Oct 14 10:29:59 crc kubenswrapper[4698]: I1014 10:29:59.479524 4698 generic.go:334] "Generic (PLEG): container finished" podID="1c6a03cb-a418-4b77-b5d0-51fe6ef1ce53" containerID="9f942546744d0310ce58a43028e190a8c0d4283c4eff5890262d93e54d5f7239" exitCode=0 Oct 14 10:29:59 crc kubenswrapper[4698]: I1014 10:29:59.479706 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-vc6lb" event={"ID":"1c6a03cb-a418-4b77-b5d0-51fe6ef1ce53","Type":"ContainerDied","Data":"9f942546744d0310ce58a43028e190a8c0d4283c4eff5890262d93e54d5f7239"} Oct 14 10:30:00 crc kubenswrapper[4698]: I1014 10:30:00.145146 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29340630-mmb2d"] Oct 14 10:30:00 crc kubenswrapper[4698]: I1014 10:30:00.146516 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29340630-mmb2d" Oct 14 10:30:00 crc kubenswrapper[4698]: I1014 10:30:00.149356 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Oct 14 10:30:00 crc kubenswrapper[4698]: I1014 10:30:00.149953 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Oct 14 10:30:00 crc kubenswrapper[4698]: I1014 10:30:00.158488 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29340630-mmb2d"] Oct 14 10:30:00 crc kubenswrapper[4698]: I1014 10:30:00.262446 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/8ef3450d-a85d-4fed-a424-1a19143c8845-secret-volume\") pod \"collect-profiles-29340630-mmb2d\" (UID: \"8ef3450d-a85d-4fed-a424-1a19143c8845\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29340630-mmb2d" Oct 14 10:30:00 crc kubenswrapper[4698]: I1014 10:30:00.262574 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8ef3450d-a85d-4fed-a424-1a19143c8845-config-volume\") pod \"collect-profiles-29340630-mmb2d\" (UID: \"8ef3450d-a85d-4fed-a424-1a19143c8845\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29340630-mmb2d" Oct 14 10:30:00 crc kubenswrapper[4698]: I1014 10:30:00.262629 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2qh6z\" (UniqueName: \"kubernetes.io/projected/8ef3450d-a85d-4fed-a424-1a19143c8845-kube-api-access-2qh6z\") pod \"collect-profiles-29340630-mmb2d\" (UID: \"8ef3450d-a85d-4fed-a424-1a19143c8845\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29340630-mmb2d" Oct 14 10:30:00 crc kubenswrapper[4698]: I1014 10:30:00.364270 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/8ef3450d-a85d-4fed-a424-1a19143c8845-secret-volume\") pod \"collect-profiles-29340630-mmb2d\" (UID: \"8ef3450d-a85d-4fed-a424-1a19143c8845\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29340630-mmb2d" Oct 14 10:30:00 crc kubenswrapper[4698]: I1014 10:30:00.364368 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8ef3450d-a85d-4fed-a424-1a19143c8845-config-volume\") pod \"collect-profiles-29340630-mmb2d\" (UID: \"8ef3450d-a85d-4fed-a424-1a19143c8845\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29340630-mmb2d" Oct 14 10:30:00 crc kubenswrapper[4698]: I1014 10:30:00.364421 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2qh6z\" (UniqueName: \"kubernetes.io/projected/8ef3450d-a85d-4fed-a424-1a19143c8845-kube-api-access-2qh6z\") pod \"collect-profiles-29340630-mmb2d\" (UID: \"8ef3450d-a85d-4fed-a424-1a19143c8845\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29340630-mmb2d" Oct 14 10:30:00 crc kubenswrapper[4698]: I1014 10:30:00.365517 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8ef3450d-a85d-4fed-a424-1a19143c8845-config-volume\") pod \"collect-profiles-29340630-mmb2d\" (UID: \"8ef3450d-a85d-4fed-a424-1a19143c8845\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29340630-mmb2d" Oct 14 10:30:00 crc kubenswrapper[4698]: I1014 10:30:00.378068 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/8ef3450d-a85d-4fed-a424-1a19143c8845-secret-volume\") pod \"collect-profiles-29340630-mmb2d\" (UID: \"8ef3450d-a85d-4fed-a424-1a19143c8845\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29340630-mmb2d" Oct 14 10:30:00 crc kubenswrapper[4698]: I1014 10:30:00.381283 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2qh6z\" (UniqueName: \"kubernetes.io/projected/8ef3450d-a85d-4fed-a424-1a19143c8845-kube-api-access-2qh6z\") pod \"collect-profiles-29340630-mmb2d\" (UID: \"8ef3450d-a85d-4fed-a424-1a19143c8845\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29340630-mmb2d" Oct 14 10:30:00 crc kubenswrapper[4698]: I1014 10:30:00.467383 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29340630-mmb2d" Oct 14 10:30:00 crc kubenswrapper[4698]: I1014 10:30:00.914388 4698 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-vc6lb" Oct 14 10:30:00 crc kubenswrapper[4698]: I1014 10:30:00.917731 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29340630-mmb2d"] Oct 14 10:30:00 crc kubenswrapper[4698]: W1014 10:30:00.921674 4698 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8ef3450d_a85d_4fed_a424_1a19143c8845.slice/crio-9d87f4b44d73be0254024f01b086be8519ebdb8ec65bdd8862ccf3fcb6ff1d6d WatchSource:0}: Error finding container 9d87f4b44d73be0254024f01b086be8519ebdb8ec65bdd8862ccf3fcb6ff1d6d: Status 404 returned error can't find the container with id 9d87f4b44d73be0254024f01b086be8519ebdb8ec65bdd8862ccf3fcb6ff1d6d Oct 14 10:30:00 crc kubenswrapper[4698]: I1014 10:30:00.978055 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/1c6a03cb-a418-4b77-b5d0-51fe6ef1ce53-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"1c6a03cb-a418-4b77-b5d0-51fe6ef1ce53\" (UID: \"1c6a03cb-a418-4b77-b5d0-51fe6ef1ce53\") " Oct 14 10:30:00 crc kubenswrapper[4698]: I1014 10:30:00.978281 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1c6a03cb-a418-4b77-b5d0-51fe6ef1ce53-neutron-metadata-combined-ca-bundle\") pod \"1c6a03cb-a418-4b77-b5d0-51fe6ef1ce53\" (UID: \"1c6a03cb-a418-4b77-b5d0-51fe6ef1ce53\") " Oct 14 10:30:00 crc kubenswrapper[4698]: I1014 10:30:00.978326 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1c6a03cb-a418-4b77-b5d0-51fe6ef1ce53-nova-combined-ca-bundle\") pod \"1c6a03cb-a418-4b77-b5d0-51fe6ef1ce53\" (UID: 
\"1c6a03cb-a418-4b77-b5d0-51fe6ef1ce53\") " Oct 14 10:30:00 crc kubenswrapper[4698]: I1014 10:30:00.978379 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1c6a03cb-a418-4b77-b5d0-51fe6ef1ce53-telemetry-combined-ca-bundle\") pod \"1c6a03cb-a418-4b77-b5d0-51fe6ef1ce53\" (UID: \"1c6a03cb-a418-4b77-b5d0-51fe6ef1ce53\") " Oct 14 10:30:00 crc kubenswrapper[4698]: I1014 10:30:00.978446 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/1c6a03cb-a418-4b77-b5d0-51fe6ef1ce53-openstack-edpm-ipam-ovn-default-certs-0\") pod \"1c6a03cb-a418-4b77-b5d0-51fe6ef1ce53\" (UID: \"1c6a03cb-a418-4b77-b5d0-51fe6ef1ce53\") " Oct 14 10:30:00 crc kubenswrapper[4698]: I1014 10:30:00.978471 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1c6a03cb-a418-4b77-b5d0-51fe6ef1ce53-libvirt-combined-ca-bundle\") pod \"1c6a03cb-a418-4b77-b5d0-51fe6ef1ce53\" (UID: \"1c6a03cb-a418-4b77-b5d0-51fe6ef1ce53\") " Oct 14 10:30:00 crc kubenswrapper[4698]: I1014 10:30:00.978491 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/1c6a03cb-a418-4b77-b5d0-51fe6ef1ce53-ssh-key\") pod \"1c6a03cb-a418-4b77-b5d0-51fe6ef1ce53\" (UID: \"1c6a03cb-a418-4b77-b5d0-51fe6ef1ce53\") " Oct 14 10:30:00 crc kubenswrapper[4698]: I1014 10:30:00.979061 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/1c6a03cb-a418-4b77-b5d0-51fe6ef1ce53-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"1c6a03cb-a418-4b77-b5d0-51fe6ef1ce53\" (UID: \"1c6a03cb-a418-4b77-b5d0-51fe6ef1ce53\") " Oct 14 10:30:00 crc 
kubenswrapper[4698]: I1014 10:30:00.979120 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1c6a03cb-a418-4b77-b5d0-51fe6ef1ce53-inventory\") pod \"1c6a03cb-a418-4b77-b5d0-51fe6ef1ce53\" (UID: \"1c6a03cb-a418-4b77-b5d0-51fe6ef1ce53\") " Oct 14 10:30:00 crc kubenswrapper[4698]: I1014 10:30:00.979138 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f5jnj\" (UniqueName: \"kubernetes.io/projected/1c6a03cb-a418-4b77-b5d0-51fe6ef1ce53-kube-api-access-f5jnj\") pod \"1c6a03cb-a418-4b77-b5d0-51fe6ef1ce53\" (UID: \"1c6a03cb-a418-4b77-b5d0-51fe6ef1ce53\") " Oct 14 10:30:00 crc kubenswrapper[4698]: I1014 10:30:00.979424 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1c6a03cb-a418-4b77-b5d0-51fe6ef1ce53-bootstrap-combined-ca-bundle\") pod \"1c6a03cb-a418-4b77-b5d0-51fe6ef1ce53\" (UID: \"1c6a03cb-a418-4b77-b5d0-51fe6ef1ce53\") " Oct 14 10:30:00 crc kubenswrapper[4698]: I1014 10:30:00.979495 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1c6a03cb-a418-4b77-b5d0-51fe6ef1ce53-repo-setup-combined-ca-bundle\") pod \"1c6a03cb-a418-4b77-b5d0-51fe6ef1ce53\" (UID: \"1c6a03cb-a418-4b77-b5d0-51fe6ef1ce53\") " Oct 14 10:30:00 crc kubenswrapper[4698]: I1014 10:30:00.979554 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/1c6a03cb-a418-4b77-b5d0-51fe6ef1ce53-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"1c6a03cb-a418-4b77-b5d0-51fe6ef1ce53\" (UID: \"1c6a03cb-a418-4b77-b5d0-51fe6ef1ce53\") " Oct 14 10:30:00 crc kubenswrapper[4698]: I1014 10:30:00.979579 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started 
for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1c6a03cb-a418-4b77-b5d0-51fe6ef1ce53-ovn-combined-ca-bundle\") pod \"1c6a03cb-a418-4b77-b5d0-51fe6ef1ce53\" (UID: \"1c6a03cb-a418-4b77-b5d0-51fe6ef1ce53\") " Oct 14 10:30:00 crc kubenswrapper[4698]: I1014 10:30:00.984380 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1c6a03cb-a418-4b77-b5d0-51fe6ef1ce53-openstack-edpm-ipam-libvirt-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-libvirt-default-certs-0") pod "1c6a03cb-a418-4b77-b5d0-51fe6ef1ce53" (UID: "1c6a03cb-a418-4b77-b5d0-51fe6ef1ce53"). InnerVolumeSpecName "openstack-edpm-ipam-libvirt-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 10:30:00 crc kubenswrapper[4698]: I1014 10:30:00.984440 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1c6a03cb-a418-4b77-b5d0-51fe6ef1ce53-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "1c6a03cb-a418-4b77-b5d0-51fe6ef1ce53" (UID: "1c6a03cb-a418-4b77-b5d0-51fe6ef1ce53"). InnerVolumeSpecName "libvirt-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 10:30:00 crc kubenswrapper[4698]: I1014 10:30:00.985088 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1c6a03cb-a418-4b77-b5d0-51fe6ef1ce53-telemetry-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-combined-ca-bundle") pod "1c6a03cb-a418-4b77-b5d0-51fe6ef1ce53" (UID: "1c6a03cb-a418-4b77-b5d0-51fe6ef1ce53"). InnerVolumeSpecName "telemetry-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 10:30:00 crc kubenswrapper[4698]: I1014 10:30:00.985149 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1c6a03cb-a418-4b77-b5d0-51fe6ef1ce53-nova-combined-ca-bundle" (OuterVolumeSpecName: "nova-combined-ca-bundle") pod "1c6a03cb-a418-4b77-b5d0-51fe6ef1ce53" (UID: "1c6a03cb-a418-4b77-b5d0-51fe6ef1ce53"). InnerVolumeSpecName "nova-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 10:30:00 crc kubenswrapper[4698]: I1014 10:30:00.985154 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1c6a03cb-a418-4b77-b5d0-51fe6ef1ce53-kube-api-access-f5jnj" (OuterVolumeSpecName: "kube-api-access-f5jnj") pod "1c6a03cb-a418-4b77-b5d0-51fe6ef1ce53" (UID: "1c6a03cb-a418-4b77-b5d0-51fe6ef1ce53"). InnerVolumeSpecName "kube-api-access-f5jnj". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 10:30:00 crc kubenswrapper[4698]: I1014 10:30:00.985823 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1c6a03cb-a418-4b77-b5d0-51fe6ef1ce53-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "1c6a03cb-a418-4b77-b5d0-51fe6ef1ce53" (UID: "1c6a03cb-a418-4b77-b5d0-51fe6ef1ce53"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 10:30:00 crc kubenswrapper[4698]: I1014 10:30:00.986296 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1c6a03cb-a418-4b77-b5d0-51fe6ef1ce53-openstack-edpm-ipam-ovn-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-ovn-default-certs-0") pod "1c6a03cb-a418-4b77-b5d0-51fe6ef1ce53" (UID: "1c6a03cb-a418-4b77-b5d0-51fe6ef1ce53"). InnerVolumeSpecName "openstack-edpm-ipam-ovn-default-certs-0". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 10:30:00 crc kubenswrapper[4698]: I1014 10:30:00.986390 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1c6a03cb-a418-4b77-b5d0-51fe6ef1ce53-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "1c6a03cb-a418-4b77-b5d0-51fe6ef1ce53" (UID: "1c6a03cb-a418-4b77-b5d0-51fe6ef1ce53"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 10:30:00 crc kubenswrapper[4698]: I1014 10:30:00.986887 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1c6a03cb-a418-4b77-b5d0-51fe6ef1ce53-openstack-edpm-ipam-neutron-metadata-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-neutron-metadata-default-certs-0") pod "1c6a03cb-a418-4b77-b5d0-51fe6ef1ce53" (UID: "1c6a03cb-a418-4b77-b5d0-51fe6ef1ce53"). InnerVolumeSpecName "openstack-edpm-ipam-neutron-metadata-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 10:30:00 crc kubenswrapper[4698]: I1014 10:30:00.988431 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1c6a03cb-a418-4b77-b5d0-51fe6ef1ce53-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "1c6a03cb-a418-4b77-b5d0-51fe6ef1ce53" (UID: "1c6a03cb-a418-4b77-b5d0-51fe6ef1ce53"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 10:30:00 crc kubenswrapper[4698]: I1014 10:30:00.988863 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1c6a03cb-a418-4b77-b5d0-51fe6ef1ce53-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "1c6a03cb-a418-4b77-b5d0-51fe6ef1ce53" (UID: "1c6a03cb-a418-4b77-b5d0-51fe6ef1ce53"). InnerVolumeSpecName "ovn-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 10:30:00 crc kubenswrapper[4698]: I1014 10:30:00.989301 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1c6a03cb-a418-4b77-b5d0-51fe6ef1ce53-openstack-edpm-ipam-telemetry-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-telemetry-default-certs-0") pod "1c6a03cb-a418-4b77-b5d0-51fe6ef1ce53" (UID: "1c6a03cb-a418-4b77-b5d0-51fe6ef1ce53"). InnerVolumeSpecName "openstack-edpm-ipam-telemetry-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 10:30:01 crc kubenswrapper[4698]: I1014 10:30:01.016472 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1c6a03cb-a418-4b77-b5d0-51fe6ef1ce53-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "1c6a03cb-a418-4b77-b5d0-51fe6ef1ce53" (UID: "1c6a03cb-a418-4b77-b5d0-51fe6ef1ce53"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 10:30:01 crc kubenswrapper[4698]: I1014 10:30:01.019144 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1c6a03cb-a418-4b77-b5d0-51fe6ef1ce53-inventory" (OuterVolumeSpecName: "inventory") pod "1c6a03cb-a418-4b77-b5d0-51fe6ef1ce53" (UID: "1c6a03cb-a418-4b77-b5d0-51fe6ef1ce53"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 10:30:01 crc kubenswrapper[4698]: I1014 10:30:01.083711 4698 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1c6a03cb-a418-4b77-b5d0-51fe6ef1ce53-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 14 10:30:01 crc kubenswrapper[4698]: I1014 10:30:01.083750 4698 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1c6a03cb-a418-4b77-b5d0-51fe6ef1ce53-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 14 10:30:01 crc kubenswrapper[4698]: I1014 10:30:01.083779 4698 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1c6a03cb-a418-4b77-b5d0-51fe6ef1ce53-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 14 10:30:01 crc kubenswrapper[4698]: I1014 10:30:01.083793 4698 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/1c6a03cb-a418-4b77-b5d0-51fe6ef1ce53-openstack-edpm-ipam-telemetry-default-certs-0\") on node \"crc\" DevicePath \"\"" Oct 14 10:30:01 crc kubenswrapper[4698]: I1014 10:30:01.083811 4698 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/1c6a03cb-a418-4b77-b5d0-51fe6ef1ce53-openstack-edpm-ipam-libvirt-default-certs-0\") on node \"crc\" DevicePath \"\"" Oct 14 10:30:01 crc kubenswrapper[4698]: I1014 10:30:01.083824 4698 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1c6a03cb-a418-4b77-b5d0-51fe6ef1ce53-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 14 10:30:01 crc kubenswrapper[4698]: I1014 10:30:01.083836 4698 reconciler_common.go:293] "Volume detached for 
volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1c6a03cb-a418-4b77-b5d0-51fe6ef1ce53-nova-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 14 10:30:01 crc kubenswrapper[4698]: I1014 10:30:01.083847 4698 reconciler_common.go:293] "Volume detached for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1c6a03cb-a418-4b77-b5d0-51fe6ef1ce53-telemetry-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 14 10:30:01 crc kubenswrapper[4698]: I1014 10:30:01.083861 4698 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/1c6a03cb-a418-4b77-b5d0-51fe6ef1ce53-openstack-edpm-ipam-ovn-default-certs-0\") on node \"crc\" DevicePath \"\"" Oct 14 10:30:01 crc kubenswrapper[4698]: I1014 10:30:01.083872 4698 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1c6a03cb-a418-4b77-b5d0-51fe6ef1ce53-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 14 10:30:01 crc kubenswrapper[4698]: I1014 10:30:01.083884 4698 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/1c6a03cb-a418-4b77-b5d0-51fe6ef1ce53-ssh-key\") on node \"crc\" DevicePath \"\"" Oct 14 10:30:01 crc kubenswrapper[4698]: I1014 10:30:01.083894 4698 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1c6a03cb-a418-4b77-b5d0-51fe6ef1ce53-inventory\") on node \"crc\" DevicePath \"\"" Oct 14 10:30:01 crc kubenswrapper[4698]: I1014 10:30:01.083905 4698 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/1c6a03cb-a418-4b77-b5d0-51fe6ef1ce53-openstack-edpm-ipam-neutron-metadata-default-certs-0\") on node \"crc\" DevicePath \"\"" Oct 14 10:30:01 crc kubenswrapper[4698]: I1014 10:30:01.083919 4698 
reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f5jnj\" (UniqueName: \"kubernetes.io/projected/1c6a03cb-a418-4b77-b5d0-51fe6ef1ce53-kube-api-access-f5jnj\") on node \"crc\" DevicePath \"\"" Oct 14 10:30:01 crc kubenswrapper[4698]: I1014 10:30:01.500181 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-vc6lb" event={"ID":"1c6a03cb-a418-4b77-b5d0-51fe6ef1ce53","Type":"ContainerDied","Data":"e74f1b445af4381ed54009c1218ec72985929ab0fa5b960f253039cd19839271"} Oct 14 10:30:01 crc kubenswrapper[4698]: I1014 10:30:01.500236 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-vc6lb" Oct 14 10:30:01 crc kubenswrapper[4698]: I1014 10:30:01.500265 4698 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e74f1b445af4381ed54009c1218ec72985929ab0fa5b960f253039cd19839271" Oct 14 10:30:01 crc kubenswrapper[4698]: I1014 10:30:01.502406 4698 generic.go:334] "Generic (PLEG): container finished" podID="8ef3450d-a85d-4fed-a424-1a19143c8845" containerID="269d4c0281b54d453accf3b80550d63c37aa11acdb476b310d3cbdba921c5cff" exitCode=0 Oct 14 10:30:01 crc kubenswrapper[4698]: I1014 10:30:01.502448 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29340630-mmb2d" event={"ID":"8ef3450d-a85d-4fed-a424-1a19143c8845","Type":"ContainerDied","Data":"269d4c0281b54d453accf3b80550d63c37aa11acdb476b310d3cbdba921c5cff"} Oct 14 10:30:01 crc kubenswrapper[4698]: I1014 10:30:01.502476 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29340630-mmb2d" event={"ID":"8ef3450d-a85d-4fed-a424-1a19143c8845","Type":"ContainerStarted","Data":"9d87f4b44d73be0254024f01b086be8519ebdb8ec65bdd8862ccf3fcb6ff1d6d"} Oct 14 10:30:01 crc kubenswrapper[4698]: I1014 10:30:01.624681 4698 
kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-qwfqd"] Oct 14 10:30:01 crc kubenswrapper[4698]: E1014 10:30:01.640131 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1c6a03cb-a418-4b77-b5d0-51fe6ef1ce53" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Oct 14 10:30:01 crc kubenswrapper[4698]: I1014 10:30:01.640167 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="1c6a03cb-a418-4b77-b5d0-51fe6ef1ce53" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Oct 14 10:30:01 crc kubenswrapper[4698]: I1014 10:30:01.641617 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="1c6a03cb-a418-4b77-b5d0-51fe6ef1ce53" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Oct 14 10:30:01 crc kubenswrapper[4698]: I1014 10:30:01.643524 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-qwfqd" Oct 14 10:30:01 crc kubenswrapper[4698]: I1014 10:30:01.648353 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Oct 14 10:30:01 crc kubenswrapper[4698]: I1014 10:30:01.648856 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Oct 14 10:30:01 crc kubenswrapper[4698]: I1014 10:30:01.649080 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-q5blv" Oct 14 10:30:01 crc kubenswrapper[4698]: I1014 10:30:01.649420 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-qwfqd"] Oct 14 10:30:01 crc kubenswrapper[4698]: I1014 10:30:01.649628 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Oct 14 10:30:01 crc kubenswrapper[4698]: I1014 10:30:01.649987 4698 reflector.go:368] 
Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-config" Oct 14 10:30:01 crc kubenswrapper[4698]: I1014 10:30:01.697061 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/464b6c8a-27cc-4899-a7ed-5e2d022e91da-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-qwfqd\" (UID: \"464b6c8a-27cc-4899-a7ed-5e2d022e91da\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-qwfqd" Oct 14 10:30:01 crc kubenswrapper[4698]: I1014 10:30:01.697413 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/464b6c8a-27cc-4899-a7ed-5e2d022e91da-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-qwfqd\" (UID: \"464b6c8a-27cc-4899-a7ed-5e2d022e91da\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-qwfqd" Oct 14 10:30:01 crc kubenswrapper[4698]: I1014 10:30:01.697482 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/464b6c8a-27cc-4899-a7ed-5e2d022e91da-ssh-key\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-qwfqd\" (UID: \"464b6c8a-27cc-4899-a7ed-5e2d022e91da\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-qwfqd" Oct 14 10:30:01 crc kubenswrapper[4698]: I1014 10:30:01.697702 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/464b6c8a-27cc-4899-a7ed-5e2d022e91da-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-qwfqd\" (UID: \"464b6c8a-27cc-4899-a7ed-5e2d022e91da\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-qwfqd" Oct 14 10:30:01 crc kubenswrapper[4698]: I1014 10:30:01.697772 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"kube-api-access-wpmk9\" (UniqueName: \"kubernetes.io/projected/464b6c8a-27cc-4899-a7ed-5e2d022e91da-kube-api-access-wpmk9\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-qwfqd\" (UID: \"464b6c8a-27cc-4899-a7ed-5e2d022e91da\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-qwfqd" Oct 14 10:30:01 crc kubenswrapper[4698]: I1014 10:30:01.799357 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/464b6c8a-27cc-4899-a7ed-5e2d022e91da-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-qwfqd\" (UID: \"464b6c8a-27cc-4899-a7ed-5e2d022e91da\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-qwfqd" Oct 14 10:30:01 crc kubenswrapper[4698]: I1014 10:30:01.799463 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/464b6c8a-27cc-4899-a7ed-5e2d022e91da-ssh-key\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-qwfqd\" (UID: \"464b6c8a-27cc-4899-a7ed-5e2d022e91da\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-qwfqd" Oct 14 10:30:01 crc kubenswrapper[4698]: I1014 10:30:01.799525 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/464b6c8a-27cc-4899-a7ed-5e2d022e91da-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-qwfqd\" (UID: \"464b6c8a-27cc-4899-a7ed-5e2d022e91da\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-qwfqd" Oct 14 10:30:01 crc kubenswrapper[4698]: I1014 10:30:01.799553 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wpmk9\" (UniqueName: \"kubernetes.io/projected/464b6c8a-27cc-4899-a7ed-5e2d022e91da-kube-api-access-wpmk9\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-qwfqd\" (UID: \"464b6c8a-27cc-4899-a7ed-5e2d022e91da\") " 
pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-qwfqd" Oct 14 10:30:01 crc kubenswrapper[4698]: I1014 10:30:01.799585 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/464b6c8a-27cc-4899-a7ed-5e2d022e91da-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-qwfqd\" (UID: \"464b6c8a-27cc-4899-a7ed-5e2d022e91da\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-qwfqd" Oct 14 10:30:01 crc kubenswrapper[4698]: I1014 10:30:01.800523 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/464b6c8a-27cc-4899-a7ed-5e2d022e91da-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-qwfqd\" (UID: \"464b6c8a-27cc-4899-a7ed-5e2d022e91da\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-qwfqd" Oct 14 10:30:01 crc kubenswrapper[4698]: I1014 10:30:01.806406 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/464b6c8a-27cc-4899-a7ed-5e2d022e91da-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-qwfqd\" (UID: \"464b6c8a-27cc-4899-a7ed-5e2d022e91da\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-qwfqd" Oct 14 10:30:01 crc kubenswrapper[4698]: I1014 10:30:01.806456 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/464b6c8a-27cc-4899-a7ed-5e2d022e91da-ssh-key\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-qwfqd\" (UID: \"464b6c8a-27cc-4899-a7ed-5e2d022e91da\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-qwfqd" Oct 14 10:30:01 crc kubenswrapper[4698]: I1014 10:30:01.806827 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/464b6c8a-27cc-4899-a7ed-5e2d022e91da-ovn-combined-ca-bundle\") pod 
\"ovn-edpm-deployment-openstack-edpm-ipam-qwfqd\" (UID: \"464b6c8a-27cc-4899-a7ed-5e2d022e91da\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-qwfqd" Oct 14 10:30:01 crc kubenswrapper[4698]: I1014 10:30:01.820092 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wpmk9\" (UniqueName: \"kubernetes.io/projected/464b6c8a-27cc-4899-a7ed-5e2d022e91da-kube-api-access-wpmk9\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-qwfqd\" (UID: \"464b6c8a-27cc-4899-a7ed-5e2d022e91da\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-qwfqd" Oct 14 10:30:01 crc kubenswrapper[4698]: I1014 10:30:01.967678 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-qwfqd" Oct 14 10:30:02 crc kubenswrapper[4698]: I1014 10:30:02.465942 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-qwfqd"] Oct 14 10:30:02 crc kubenswrapper[4698]: W1014 10:30:02.469728 4698 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod464b6c8a_27cc_4899_a7ed_5e2d022e91da.slice/crio-712fa771e5c62627faae8ad5ec6268041da3fc71b07af73f6188e217f74224de WatchSource:0}: Error finding container 712fa771e5c62627faae8ad5ec6268041da3fc71b07af73f6188e217f74224de: Status 404 returned error can't find the container with id 712fa771e5c62627faae8ad5ec6268041da3fc71b07af73f6188e217f74224de Oct 14 10:30:02 crc kubenswrapper[4698]: I1014 10:30:02.516822 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-qwfqd" event={"ID":"464b6c8a-27cc-4899-a7ed-5e2d022e91da","Type":"ContainerStarted","Data":"712fa771e5c62627faae8ad5ec6268041da3fc71b07af73f6188e217f74224de"} Oct 14 10:30:02 crc kubenswrapper[4698]: I1014 10:30:02.862833 4698 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29340630-mmb2d" Oct 14 10:30:02 crc kubenswrapper[4698]: I1014 10:30:02.950839 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2qh6z\" (UniqueName: \"kubernetes.io/projected/8ef3450d-a85d-4fed-a424-1a19143c8845-kube-api-access-2qh6z\") pod \"8ef3450d-a85d-4fed-a424-1a19143c8845\" (UID: \"8ef3450d-a85d-4fed-a424-1a19143c8845\") " Oct 14 10:30:02 crc kubenswrapper[4698]: I1014 10:30:02.951033 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8ef3450d-a85d-4fed-a424-1a19143c8845-config-volume\") pod \"8ef3450d-a85d-4fed-a424-1a19143c8845\" (UID: \"8ef3450d-a85d-4fed-a424-1a19143c8845\") " Oct 14 10:30:02 crc kubenswrapper[4698]: I1014 10:30:02.951149 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/8ef3450d-a85d-4fed-a424-1a19143c8845-secret-volume\") pod \"8ef3450d-a85d-4fed-a424-1a19143c8845\" (UID: \"8ef3450d-a85d-4fed-a424-1a19143c8845\") " Oct 14 10:30:02 crc kubenswrapper[4698]: I1014 10:30:02.951979 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8ef3450d-a85d-4fed-a424-1a19143c8845-config-volume" (OuterVolumeSpecName: "config-volume") pod "8ef3450d-a85d-4fed-a424-1a19143c8845" (UID: "8ef3450d-a85d-4fed-a424-1a19143c8845"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 10:30:02 crc kubenswrapper[4698]: I1014 10:30:02.958220 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8ef3450d-a85d-4fed-a424-1a19143c8845-kube-api-access-2qh6z" (OuterVolumeSpecName: "kube-api-access-2qh6z") pod "8ef3450d-a85d-4fed-a424-1a19143c8845" (UID: "8ef3450d-a85d-4fed-a424-1a19143c8845"). 
InnerVolumeSpecName "kube-api-access-2qh6z". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 10:30:02 crc kubenswrapper[4698]: I1014 10:30:02.958903 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8ef3450d-a85d-4fed-a424-1a19143c8845-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "8ef3450d-a85d-4fed-a424-1a19143c8845" (UID: "8ef3450d-a85d-4fed-a424-1a19143c8845"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 10:30:03 crc kubenswrapper[4698]: I1014 10:30:03.055260 4698 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8ef3450d-a85d-4fed-a424-1a19143c8845-config-volume\") on node \"crc\" DevicePath \"\"" Oct 14 10:30:03 crc kubenswrapper[4698]: I1014 10:30:03.055334 4698 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/8ef3450d-a85d-4fed-a424-1a19143c8845-secret-volume\") on node \"crc\" DevicePath \"\"" Oct 14 10:30:03 crc kubenswrapper[4698]: I1014 10:30:03.055369 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2qh6z\" (UniqueName: \"kubernetes.io/projected/8ef3450d-a85d-4fed-a424-1a19143c8845-kube-api-access-2qh6z\") on node \"crc\" DevicePath \"\"" Oct 14 10:30:03 crc kubenswrapper[4698]: I1014 10:30:03.536332 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29340630-mmb2d" event={"ID":"8ef3450d-a85d-4fed-a424-1a19143c8845","Type":"ContainerDied","Data":"9d87f4b44d73be0254024f01b086be8519ebdb8ec65bdd8862ccf3fcb6ff1d6d"} Oct 14 10:30:03 crc kubenswrapper[4698]: I1014 10:30:03.536865 4698 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9d87f4b44d73be0254024f01b086be8519ebdb8ec65bdd8862ccf3fcb6ff1d6d" Oct 14 10:30:03 crc kubenswrapper[4698]: I1014 10:30:03.536948 4698 util.go:48] "No ready 
sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29340630-mmb2d" Oct 14 10:30:03 crc kubenswrapper[4698]: I1014 10:30:03.541704 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-qwfqd" event={"ID":"464b6c8a-27cc-4899-a7ed-5e2d022e91da","Type":"ContainerStarted","Data":"641fb233f285ead768407c777435796755e03be15ca859ddf6755df2b152dba8"} Oct 14 10:30:03 crc kubenswrapper[4698]: I1014 10:30:03.566327 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-qwfqd" podStartSLOduration=1.8915448860000001 podStartE2EDuration="2.566289909s" podCreationTimestamp="2025-10-14 10:30:01 +0000 UTC" firstStartedPulling="2025-10-14 10:30:02.472456501 +0000 UTC m=+1984.169755917" lastFinishedPulling="2025-10-14 10:30:03.147201524 +0000 UTC m=+1984.844500940" observedRunningTime="2025-10-14 10:30:03.564021764 +0000 UTC m=+1985.261321210" watchObservedRunningTime="2025-10-14 10:30:03.566289909 +0000 UTC m=+1985.263589355" Oct 14 10:30:03 crc kubenswrapper[4698]: I1014 10:30:03.938736 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29340585-4sc7c"] Oct 14 10:30:03 crc kubenswrapper[4698]: I1014 10:30:03.950361 4698 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29340585-4sc7c"] Oct 14 10:30:05 crc kubenswrapper[4698]: I1014 10:30:05.039632 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="aa6a1a6a-f7f3-402b-9568-89c9415eaaa4" path="/var/lib/kubelet/pods/aa6a1a6a-f7f3-402b-9568-89c9415eaaa4/volumes" Oct 14 10:30:11 crc kubenswrapper[4698]: I1014 10:30:11.045505 4698 scope.go:117] "RemoveContainer" containerID="e44571d8c0581614f90fc1c2803d9e04b635a448b30211d6b555fd1ea83951f4" Oct 14 10:31:09 crc kubenswrapper[4698]: I1014 10:31:09.340196 4698 
generic.go:334] "Generic (PLEG): container finished" podID="464b6c8a-27cc-4899-a7ed-5e2d022e91da" containerID="641fb233f285ead768407c777435796755e03be15ca859ddf6755df2b152dba8" exitCode=0 Oct 14 10:31:09 crc kubenswrapper[4698]: I1014 10:31:09.340289 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-qwfqd" event={"ID":"464b6c8a-27cc-4899-a7ed-5e2d022e91da","Type":"ContainerDied","Data":"641fb233f285ead768407c777435796755e03be15ca859ddf6755df2b152dba8"} Oct 14 10:31:10 crc kubenswrapper[4698]: I1014 10:31:10.775424 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-qwfqd" Oct 14 10:31:10 crc kubenswrapper[4698]: I1014 10:31:10.876407 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/464b6c8a-27cc-4899-a7ed-5e2d022e91da-inventory\") pod \"464b6c8a-27cc-4899-a7ed-5e2d022e91da\" (UID: \"464b6c8a-27cc-4899-a7ed-5e2d022e91da\") " Oct 14 10:31:10 crc kubenswrapper[4698]: I1014 10:31:10.876475 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/464b6c8a-27cc-4899-a7ed-5e2d022e91da-ovn-combined-ca-bundle\") pod \"464b6c8a-27cc-4899-a7ed-5e2d022e91da\" (UID: \"464b6c8a-27cc-4899-a7ed-5e2d022e91da\") " Oct 14 10:31:10 crc kubenswrapper[4698]: I1014 10:31:10.876540 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wpmk9\" (UniqueName: \"kubernetes.io/projected/464b6c8a-27cc-4899-a7ed-5e2d022e91da-kube-api-access-wpmk9\") pod \"464b6c8a-27cc-4899-a7ed-5e2d022e91da\" (UID: \"464b6c8a-27cc-4899-a7ed-5e2d022e91da\") " Oct 14 10:31:10 crc kubenswrapper[4698]: I1014 10:31:10.876622 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: 
\"kubernetes.io/secret/464b6c8a-27cc-4899-a7ed-5e2d022e91da-ssh-key\") pod \"464b6c8a-27cc-4899-a7ed-5e2d022e91da\" (UID: \"464b6c8a-27cc-4899-a7ed-5e2d022e91da\") " Oct 14 10:31:10 crc kubenswrapper[4698]: I1014 10:31:10.876651 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/464b6c8a-27cc-4899-a7ed-5e2d022e91da-ovncontroller-config-0\") pod \"464b6c8a-27cc-4899-a7ed-5e2d022e91da\" (UID: \"464b6c8a-27cc-4899-a7ed-5e2d022e91da\") " Oct 14 10:31:10 crc kubenswrapper[4698]: I1014 10:31:10.882199 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/464b6c8a-27cc-4899-a7ed-5e2d022e91da-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "464b6c8a-27cc-4899-a7ed-5e2d022e91da" (UID: "464b6c8a-27cc-4899-a7ed-5e2d022e91da"). InnerVolumeSpecName "ovn-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 10:31:10 crc kubenswrapper[4698]: I1014 10:31:10.882762 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/464b6c8a-27cc-4899-a7ed-5e2d022e91da-kube-api-access-wpmk9" (OuterVolumeSpecName: "kube-api-access-wpmk9") pod "464b6c8a-27cc-4899-a7ed-5e2d022e91da" (UID: "464b6c8a-27cc-4899-a7ed-5e2d022e91da"). InnerVolumeSpecName "kube-api-access-wpmk9". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 10:31:10 crc kubenswrapper[4698]: I1014 10:31:10.908033 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/464b6c8a-27cc-4899-a7ed-5e2d022e91da-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "464b6c8a-27cc-4899-a7ed-5e2d022e91da" (UID: "464b6c8a-27cc-4899-a7ed-5e2d022e91da"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 10:31:10 crc kubenswrapper[4698]: I1014 10:31:10.910410 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/464b6c8a-27cc-4899-a7ed-5e2d022e91da-ovncontroller-config-0" (OuterVolumeSpecName: "ovncontroller-config-0") pod "464b6c8a-27cc-4899-a7ed-5e2d022e91da" (UID: "464b6c8a-27cc-4899-a7ed-5e2d022e91da"). InnerVolumeSpecName "ovncontroller-config-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 10:31:10 crc kubenswrapper[4698]: I1014 10:31:10.913000 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/464b6c8a-27cc-4899-a7ed-5e2d022e91da-inventory" (OuterVolumeSpecName: "inventory") pod "464b6c8a-27cc-4899-a7ed-5e2d022e91da" (UID: "464b6c8a-27cc-4899-a7ed-5e2d022e91da"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 10:31:10 crc kubenswrapper[4698]: I1014 10:31:10.979047 4698 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/464b6c8a-27cc-4899-a7ed-5e2d022e91da-inventory\") on node \"crc\" DevicePath \"\"" Oct 14 10:31:10 crc kubenswrapper[4698]: I1014 10:31:10.979080 4698 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/464b6c8a-27cc-4899-a7ed-5e2d022e91da-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 14 10:31:10 crc kubenswrapper[4698]: I1014 10:31:10.979090 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wpmk9\" (UniqueName: \"kubernetes.io/projected/464b6c8a-27cc-4899-a7ed-5e2d022e91da-kube-api-access-wpmk9\") on node \"crc\" DevicePath \"\"" Oct 14 10:31:10 crc kubenswrapper[4698]: I1014 10:31:10.979098 4698 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/464b6c8a-27cc-4899-a7ed-5e2d022e91da-ssh-key\") on node \"crc\" 
DevicePath \"\"" Oct 14 10:31:10 crc kubenswrapper[4698]: I1014 10:31:10.979107 4698 reconciler_common.go:293] "Volume detached for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/464b6c8a-27cc-4899-a7ed-5e2d022e91da-ovncontroller-config-0\") on node \"crc\" DevicePath \"\"" Oct 14 10:31:11 crc kubenswrapper[4698]: I1014 10:31:11.360087 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-qwfqd" event={"ID":"464b6c8a-27cc-4899-a7ed-5e2d022e91da","Type":"ContainerDied","Data":"712fa771e5c62627faae8ad5ec6268041da3fc71b07af73f6188e217f74224de"} Oct 14 10:31:11 crc kubenswrapper[4698]: I1014 10:31:11.360130 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-qwfqd" Oct 14 10:31:11 crc kubenswrapper[4698]: I1014 10:31:11.360151 4698 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="712fa771e5c62627faae8ad5ec6268041da3fc71b07af73f6188e217f74224de" Oct 14 10:31:11 crc kubenswrapper[4698]: I1014 10:31:11.443584 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-fvlb6"] Oct 14 10:31:11 crc kubenswrapper[4698]: E1014 10:31:11.444619 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8ef3450d-a85d-4fed-a424-1a19143c8845" containerName="collect-profiles" Oct 14 10:31:11 crc kubenswrapper[4698]: I1014 10:31:11.444643 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="8ef3450d-a85d-4fed-a424-1a19143c8845" containerName="collect-profiles" Oct 14 10:31:11 crc kubenswrapper[4698]: E1014 10:31:11.444677 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="464b6c8a-27cc-4899-a7ed-5e2d022e91da" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Oct 14 10:31:11 crc kubenswrapper[4698]: I1014 10:31:11.444691 4698 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="464b6c8a-27cc-4899-a7ed-5e2d022e91da" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Oct 14 10:31:11 crc kubenswrapper[4698]: I1014 10:31:11.444969 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="8ef3450d-a85d-4fed-a424-1a19143c8845" containerName="collect-profiles" Oct 14 10:31:11 crc kubenswrapper[4698]: I1014 10:31:11.445002 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="464b6c8a-27cc-4899-a7ed-5e2d022e91da" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Oct 14 10:31:11 crc kubenswrapper[4698]: I1014 10:31:11.445904 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-fvlb6" Oct 14 10:31:11 crc kubenswrapper[4698]: I1014 10:31:11.447808 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Oct 14 10:31:11 crc kubenswrapper[4698]: I1014 10:31:11.447770 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-ovn-metadata-agent-neutron-config" Oct 14 10:31:11 crc kubenswrapper[4698]: I1014 10:31:11.447892 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Oct 14 10:31:11 crc kubenswrapper[4698]: I1014 10:31:11.448313 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Oct 14 10:31:11 crc kubenswrapper[4698]: I1014 10:31:11.448349 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-q5blv" Oct 14 10:31:11 crc kubenswrapper[4698]: I1014 10:31:11.451473 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-neutron-config" Oct 14 10:31:11 crc kubenswrapper[4698]: I1014 10:31:11.453524 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-fvlb6"] Oct 14 10:31:11 crc kubenswrapper[4698]: I1014 10:31:11.589270 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/9f3eaa62-6c1e-406d-acec-135973addacf-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-fvlb6\" (UID: \"9f3eaa62-6c1e-406d-acec-135973addacf\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-fvlb6" Oct 14 10:31:11 crc kubenswrapper[4698]: I1014 10:31:11.589378 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/9f3eaa62-6c1e-406d-acec-135973addacf-ssh-key\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-fvlb6\" (UID: \"9f3eaa62-6c1e-406d-acec-135973addacf\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-fvlb6" Oct 14 10:31:11 crc kubenswrapper[4698]: I1014 10:31:11.589415 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9f3eaa62-6c1e-406d-acec-135973addacf-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-fvlb6\" (UID: \"9f3eaa62-6c1e-406d-acec-135973addacf\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-fvlb6" Oct 14 10:31:11 crc kubenswrapper[4698]: I1014 10:31:11.589469 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4dc8l\" (UniqueName: \"kubernetes.io/projected/9f3eaa62-6c1e-406d-acec-135973addacf-kube-api-access-4dc8l\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-fvlb6\" (UID: \"9f3eaa62-6c1e-406d-acec-135973addacf\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-fvlb6" Oct 14 10:31:11 crc 
kubenswrapper[4698]: I1014 10:31:11.589542 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9f3eaa62-6c1e-406d-acec-135973addacf-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-fvlb6\" (UID: \"9f3eaa62-6c1e-406d-acec-135973addacf\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-fvlb6" Oct 14 10:31:11 crc kubenswrapper[4698]: I1014 10:31:11.589922 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/9f3eaa62-6c1e-406d-acec-135973addacf-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-fvlb6\" (UID: \"9f3eaa62-6c1e-406d-acec-135973addacf\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-fvlb6" Oct 14 10:31:11 crc kubenswrapper[4698]: I1014 10:31:11.692266 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/9f3eaa62-6c1e-406d-acec-135973addacf-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-fvlb6\" (UID: \"9f3eaa62-6c1e-406d-acec-135973addacf\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-fvlb6" Oct 14 10:31:11 crc kubenswrapper[4698]: I1014 10:31:11.692382 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/9f3eaa62-6c1e-406d-acec-135973addacf-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-fvlb6\" (UID: \"9f3eaa62-6c1e-406d-acec-135973addacf\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-fvlb6" Oct 14 
10:31:11 crc kubenswrapper[4698]: I1014 10:31:11.692444 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/9f3eaa62-6c1e-406d-acec-135973addacf-ssh-key\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-fvlb6\" (UID: \"9f3eaa62-6c1e-406d-acec-135973addacf\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-fvlb6" Oct 14 10:31:11 crc kubenswrapper[4698]: I1014 10:31:11.692491 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9f3eaa62-6c1e-406d-acec-135973addacf-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-fvlb6\" (UID: \"9f3eaa62-6c1e-406d-acec-135973addacf\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-fvlb6" Oct 14 10:31:11 crc kubenswrapper[4698]: I1014 10:31:11.692573 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4dc8l\" (UniqueName: \"kubernetes.io/projected/9f3eaa62-6c1e-406d-acec-135973addacf-kube-api-access-4dc8l\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-fvlb6\" (UID: \"9f3eaa62-6c1e-406d-acec-135973addacf\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-fvlb6" Oct 14 10:31:11 crc kubenswrapper[4698]: I1014 10:31:11.692666 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9f3eaa62-6c1e-406d-acec-135973addacf-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-fvlb6\" (UID: \"9f3eaa62-6c1e-406d-acec-135973addacf\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-fvlb6" Oct 14 10:31:11 crc kubenswrapper[4698]: I1014 10:31:11.699002 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: 
\"kubernetes.io/secret/9f3eaa62-6c1e-406d-acec-135973addacf-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-fvlb6\" (UID: \"9f3eaa62-6c1e-406d-acec-135973addacf\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-fvlb6" Oct 14 10:31:11 crc kubenswrapper[4698]: I1014 10:31:11.700341 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/9f3eaa62-6c1e-406d-acec-135973addacf-ssh-key\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-fvlb6\" (UID: \"9f3eaa62-6c1e-406d-acec-135973addacf\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-fvlb6" Oct 14 10:31:11 crc kubenswrapper[4698]: I1014 10:31:11.700996 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9f3eaa62-6c1e-406d-acec-135973addacf-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-fvlb6\" (UID: \"9f3eaa62-6c1e-406d-acec-135973addacf\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-fvlb6" Oct 14 10:31:11 crc kubenswrapper[4698]: I1014 10:31:11.702316 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/9f3eaa62-6c1e-406d-acec-135973addacf-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-fvlb6\" (UID: \"9f3eaa62-6c1e-406d-acec-135973addacf\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-fvlb6" Oct 14 10:31:11 crc kubenswrapper[4698]: I1014 10:31:11.702638 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/9f3eaa62-6c1e-406d-acec-135973addacf-neutron-ovn-metadata-agent-neutron-config-0\") pod 
\"neutron-metadata-edpm-deployment-openstack-edpm-ipam-fvlb6\" (UID: \"9f3eaa62-6c1e-406d-acec-135973addacf\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-fvlb6" Oct 14 10:31:11 crc kubenswrapper[4698]: I1014 10:31:11.712375 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4dc8l\" (UniqueName: \"kubernetes.io/projected/9f3eaa62-6c1e-406d-acec-135973addacf-kube-api-access-4dc8l\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-fvlb6\" (UID: \"9f3eaa62-6c1e-406d-acec-135973addacf\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-fvlb6" Oct 14 10:31:11 crc kubenswrapper[4698]: I1014 10:31:11.760402 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-fvlb6" Oct 14 10:31:12 crc kubenswrapper[4698]: I1014 10:31:12.350220 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-fvlb6"] Oct 14 10:31:12 crc kubenswrapper[4698]: I1014 10:31:12.378890 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-fvlb6" event={"ID":"9f3eaa62-6c1e-406d-acec-135973addacf","Type":"ContainerStarted","Data":"a4c0bf5ea759bed5d884e1ec780839785182ba3f70398fbaa6edf5ddc3de1fc5"} Oct 14 10:31:13 crc kubenswrapper[4698]: I1014 10:31:13.390990 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-fvlb6" event={"ID":"9f3eaa62-6c1e-406d-acec-135973addacf","Type":"ContainerStarted","Data":"acc18e4a48407dda82d0a398d1ae073348adbb7e8addf9dcd83afa4562c184e7"} Oct 14 10:31:13 crc kubenswrapper[4698]: I1014 10:31:13.417524 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-fvlb6" podStartSLOduration=1.975782062 
podStartE2EDuration="2.417496427s" podCreationTimestamp="2025-10-14 10:31:11 +0000 UTC" firstStartedPulling="2025-10-14 10:31:12.363424048 +0000 UTC m=+2054.060723474" lastFinishedPulling="2025-10-14 10:31:12.805138433 +0000 UTC m=+2054.502437839" observedRunningTime="2025-10-14 10:31:13.406887176 +0000 UTC m=+2055.104186602" watchObservedRunningTime="2025-10-14 10:31:13.417496427 +0000 UTC m=+2055.114795853" Oct 14 10:32:00 crc kubenswrapper[4698]: I1014 10:32:00.187538 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-89zrv"] Oct 14 10:32:00 crc kubenswrapper[4698]: I1014 10:32:00.192224 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-89zrv" Oct 14 10:32:00 crc kubenswrapper[4698]: I1014 10:32:00.200021 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-89zrv"] Oct 14 10:32:00 crc kubenswrapper[4698]: I1014 10:32:00.381876 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/48677471-a50b-4942-8fda-e286d69efbff-catalog-content\") pod \"redhat-marketplace-89zrv\" (UID: \"48677471-a50b-4942-8fda-e286d69efbff\") " pod="openshift-marketplace/redhat-marketplace-89zrv" Oct 14 10:32:00 crc kubenswrapper[4698]: I1014 10:32:00.382006 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/48677471-a50b-4942-8fda-e286d69efbff-utilities\") pod \"redhat-marketplace-89zrv\" (UID: \"48677471-a50b-4942-8fda-e286d69efbff\") " pod="openshift-marketplace/redhat-marketplace-89zrv" Oct 14 10:32:00 crc kubenswrapper[4698]: I1014 10:32:00.382039 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4qc9k\" (UniqueName: 
\"kubernetes.io/projected/48677471-a50b-4942-8fda-e286d69efbff-kube-api-access-4qc9k\") pod \"redhat-marketplace-89zrv\" (UID: \"48677471-a50b-4942-8fda-e286d69efbff\") " pod="openshift-marketplace/redhat-marketplace-89zrv" Oct 14 10:32:00 crc kubenswrapper[4698]: I1014 10:32:00.483810 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/48677471-a50b-4942-8fda-e286d69efbff-catalog-content\") pod \"redhat-marketplace-89zrv\" (UID: \"48677471-a50b-4942-8fda-e286d69efbff\") " pod="openshift-marketplace/redhat-marketplace-89zrv" Oct 14 10:32:00 crc kubenswrapper[4698]: I1014 10:32:00.483900 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/48677471-a50b-4942-8fda-e286d69efbff-utilities\") pod \"redhat-marketplace-89zrv\" (UID: \"48677471-a50b-4942-8fda-e286d69efbff\") " pod="openshift-marketplace/redhat-marketplace-89zrv" Oct 14 10:32:00 crc kubenswrapper[4698]: I1014 10:32:00.483927 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4qc9k\" (UniqueName: \"kubernetes.io/projected/48677471-a50b-4942-8fda-e286d69efbff-kube-api-access-4qc9k\") pod \"redhat-marketplace-89zrv\" (UID: \"48677471-a50b-4942-8fda-e286d69efbff\") " pod="openshift-marketplace/redhat-marketplace-89zrv" Oct 14 10:32:00 crc kubenswrapper[4698]: I1014 10:32:00.484805 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/48677471-a50b-4942-8fda-e286d69efbff-catalog-content\") pod \"redhat-marketplace-89zrv\" (UID: \"48677471-a50b-4942-8fda-e286d69efbff\") " pod="openshift-marketplace/redhat-marketplace-89zrv" Oct 14 10:32:00 crc kubenswrapper[4698]: I1014 10:32:00.485027 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/48677471-a50b-4942-8fda-e286d69efbff-utilities\") pod \"redhat-marketplace-89zrv\" (UID: \"48677471-a50b-4942-8fda-e286d69efbff\") " pod="openshift-marketplace/redhat-marketplace-89zrv" Oct 14 10:32:00 crc kubenswrapper[4698]: I1014 10:32:00.502559 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4qc9k\" (UniqueName: \"kubernetes.io/projected/48677471-a50b-4942-8fda-e286d69efbff-kube-api-access-4qc9k\") pod \"redhat-marketplace-89zrv\" (UID: \"48677471-a50b-4942-8fda-e286d69efbff\") " pod="openshift-marketplace/redhat-marketplace-89zrv" Oct 14 10:32:00 crc kubenswrapper[4698]: I1014 10:32:00.570587 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-89zrv" Oct 14 10:32:00 crc kubenswrapper[4698]: I1014 10:32:00.906130 4698 generic.go:334] "Generic (PLEG): container finished" podID="9f3eaa62-6c1e-406d-acec-135973addacf" containerID="acc18e4a48407dda82d0a398d1ae073348adbb7e8addf9dcd83afa4562c184e7" exitCode=0 Oct 14 10:32:00 crc kubenswrapper[4698]: I1014 10:32:00.906177 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-fvlb6" event={"ID":"9f3eaa62-6c1e-406d-acec-135973addacf","Type":"ContainerDied","Data":"acc18e4a48407dda82d0a398d1ae073348adbb7e8addf9dcd83afa4562c184e7"} Oct 14 10:32:01 crc kubenswrapper[4698]: I1014 10:32:01.045046 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-89zrv"] Oct 14 10:32:01 crc kubenswrapper[4698]: I1014 10:32:01.916853 4698 generic.go:334] "Generic (PLEG): container finished" podID="48677471-a50b-4942-8fda-e286d69efbff" containerID="920b5efae5bde5597bf5dbf001c1ce3f0b4b88e31e7b7b8d7e5143e543530f7a" exitCode=0 Oct 14 10:32:01 crc kubenswrapper[4698]: I1014 10:32:01.916914 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-89zrv" 
event={"ID":"48677471-a50b-4942-8fda-e286d69efbff","Type":"ContainerDied","Data":"920b5efae5bde5597bf5dbf001c1ce3f0b4b88e31e7b7b8d7e5143e543530f7a"} Oct 14 10:32:01 crc kubenswrapper[4698]: I1014 10:32:01.917206 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-89zrv" event={"ID":"48677471-a50b-4942-8fda-e286d69efbff","Type":"ContainerStarted","Data":"dacdd0420af01927d0a9a92a1fd9b78ad5caf7a37e86eee68a996daf06cef902"} Oct 14 10:32:02 crc kubenswrapper[4698]: I1014 10:32:02.329760 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-fvlb6" Oct 14 10:32:02 crc kubenswrapper[4698]: I1014 10:32:02.421949 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/9f3eaa62-6c1e-406d-acec-135973addacf-ssh-key\") pod \"9f3eaa62-6c1e-406d-acec-135973addacf\" (UID: \"9f3eaa62-6c1e-406d-acec-135973addacf\") " Oct 14 10:32:02 crc kubenswrapper[4698]: I1014 10:32:02.422044 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/9f3eaa62-6c1e-406d-acec-135973addacf-neutron-ovn-metadata-agent-neutron-config-0\") pod \"9f3eaa62-6c1e-406d-acec-135973addacf\" (UID: \"9f3eaa62-6c1e-406d-acec-135973addacf\") " Oct 14 10:32:02 crc kubenswrapper[4698]: I1014 10:32:02.422065 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9f3eaa62-6c1e-406d-acec-135973addacf-inventory\") pod \"9f3eaa62-6c1e-406d-acec-135973addacf\" (UID: \"9f3eaa62-6c1e-406d-acec-135973addacf\") " Oct 14 10:32:02 crc kubenswrapper[4698]: I1014 10:32:02.422095 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4dc8l\" (UniqueName: 
\"kubernetes.io/projected/9f3eaa62-6c1e-406d-acec-135973addacf-kube-api-access-4dc8l\") pod \"9f3eaa62-6c1e-406d-acec-135973addacf\" (UID: \"9f3eaa62-6c1e-406d-acec-135973addacf\") " Oct 14 10:32:02 crc kubenswrapper[4698]: I1014 10:32:02.422123 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/9f3eaa62-6c1e-406d-acec-135973addacf-nova-metadata-neutron-config-0\") pod \"9f3eaa62-6c1e-406d-acec-135973addacf\" (UID: \"9f3eaa62-6c1e-406d-acec-135973addacf\") " Oct 14 10:32:02 crc kubenswrapper[4698]: I1014 10:32:02.422157 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9f3eaa62-6c1e-406d-acec-135973addacf-neutron-metadata-combined-ca-bundle\") pod \"9f3eaa62-6c1e-406d-acec-135973addacf\" (UID: \"9f3eaa62-6c1e-406d-acec-135973addacf\") " Oct 14 10:32:02 crc kubenswrapper[4698]: I1014 10:32:02.429225 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9f3eaa62-6c1e-406d-acec-135973addacf-kube-api-access-4dc8l" (OuterVolumeSpecName: "kube-api-access-4dc8l") pod "9f3eaa62-6c1e-406d-acec-135973addacf" (UID: "9f3eaa62-6c1e-406d-acec-135973addacf"). InnerVolumeSpecName "kube-api-access-4dc8l". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 10:32:02 crc kubenswrapper[4698]: I1014 10:32:02.432658 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9f3eaa62-6c1e-406d-acec-135973addacf-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "9f3eaa62-6c1e-406d-acec-135973addacf" (UID: "9f3eaa62-6c1e-406d-acec-135973addacf"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 10:32:02 crc kubenswrapper[4698]: I1014 10:32:02.451379 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9f3eaa62-6c1e-406d-acec-135973addacf-neutron-ovn-metadata-agent-neutron-config-0" (OuterVolumeSpecName: "neutron-ovn-metadata-agent-neutron-config-0") pod "9f3eaa62-6c1e-406d-acec-135973addacf" (UID: "9f3eaa62-6c1e-406d-acec-135973addacf"). InnerVolumeSpecName "neutron-ovn-metadata-agent-neutron-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 10:32:02 crc kubenswrapper[4698]: I1014 10:32:02.459667 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9f3eaa62-6c1e-406d-acec-135973addacf-inventory" (OuterVolumeSpecName: "inventory") pod "9f3eaa62-6c1e-406d-acec-135973addacf" (UID: "9f3eaa62-6c1e-406d-acec-135973addacf"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 10:32:02 crc kubenswrapper[4698]: I1014 10:32:02.459780 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9f3eaa62-6c1e-406d-acec-135973addacf-nova-metadata-neutron-config-0" (OuterVolumeSpecName: "nova-metadata-neutron-config-0") pod "9f3eaa62-6c1e-406d-acec-135973addacf" (UID: "9f3eaa62-6c1e-406d-acec-135973addacf"). InnerVolumeSpecName "nova-metadata-neutron-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 10:32:02 crc kubenswrapper[4698]: I1014 10:32:02.459889 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9f3eaa62-6c1e-406d-acec-135973addacf-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "9f3eaa62-6c1e-406d-acec-135973addacf" (UID: "9f3eaa62-6c1e-406d-acec-135973addacf"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 10:32:02 crc kubenswrapper[4698]: I1014 10:32:02.523955 4698 reconciler_common.go:293] "Volume detached for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/9f3eaa62-6c1e-406d-acec-135973addacf-neutron-ovn-metadata-agent-neutron-config-0\") on node \"crc\" DevicePath \"\"" Oct 14 10:32:02 crc kubenswrapper[4698]: I1014 10:32:02.524000 4698 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9f3eaa62-6c1e-406d-acec-135973addacf-inventory\") on node \"crc\" DevicePath \"\"" Oct 14 10:32:02 crc kubenswrapper[4698]: I1014 10:32:02.524015 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4dc8l\" (UniqueName: \"kubernetes.io/projected/9f3eaa62-6c1e-406d-acec-135973addacf-kube-api-access-4dc8l\") on node \"crc\" DevicePath \"\"" Oct 14 10:32:02 crc kubenswrapper[4698]: I1014 10:32:02.524030 4698 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/9f3eaa62-6c1e-406d-acec-135973addacf-nova-metadata-neutron-config-0\") on node \"crc\" DevicePath \"\"" Oct 14 10:32:02 crc kubenswrapper[4698]: I1014 10:32:02.524043 4698 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9f3eaa62-6c1e-406d-acec-135973addacf-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 14 10:32:02 crc kubenswrapper[4698]: I1014 10:32:02.524056 4698 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/9f3eaa62-6c1e-406d-acec-135973addacf-ssh-key\") on node \"crc\" DevicePath \"\"" Oct 14 10:32:02 crc kubenswrapper[4698]: I1014 10:32:02.928628 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-89zrv" 
event={"ID":"48677471-a50b-4942-8fda-e286d69efbff","Type":"ContainerStarted","Data":"a395ae12d18cd4fad094303430e4302556de866c59bb4d3fe8d348a2dfdebc3a"} Oct 14 10:32:02 crc kubenswrapper[4698]: I1014 10:32:02.932394 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-fvlb6" event={"ID":"9f3eaa62-6c1e-406d-acec-135973addacf","Type":"ContainerDied","Data":"a4c0bf5ea759bed5d884e1ec780839785182ba3f70398fbaa6edf5ddc3de1fc5"} Oct 14 10:32:02 crc kubenswrapper[4698]: I1014 10:32:02.932443 4698 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a4c0bf5ea759bed5d884e1ec780839785182ba3f70398fbaa6edf5ddc3de1fc5" Oct 14 10:32:02 crc kubenswrapper[4698]: I1014 10:32:02.932518 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-fvlb6" Oct 14 10:32:03 crc kubenswrapper[4698]: I1014 10:32:03.036241 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-kj2jl"] Oct 14 10:32:03 crc kubenswrapper[4698]: E1014 10:32:03.036649 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9f3eaa62-6c1e-406d-acec-135973addacf" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Oct 14 10:32:03 crc kubenswrapper[4698]: I1014 10:32:03.036673 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="9f3eaa62-6c1e-406d-acec-135973addacf" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Oct 14 10:32:03 crc kubenswrapper[4698]: I1014 10:32:03.036980 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="9f3eaa62-6c1e-406d-acec-135973addacf" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Oct 14 10:32:03 crc kubenswrapper[4698]: I1014 10:32:03.037666 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-kj2jl" Oct 14 10:32:03 crc kubenswrapper[4698]: I1014 10:32:03.043844 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"libvirt-secret" Oct 14 10:32:03 crc kubenswrapper[4698]: I1014 10:32:03.043939 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Oct 14 10:32:03 crc kubenswrapper[4698]: I1014 10:32:03.044066 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Oct 14 10:32:03 crc kubenswrapper[4698]: I1014 10:32:03.044433 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-kj2jl"] Oct 14 10:32:03 crc kubenswrapper[4698]: I1014 10:32:03.045080 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-q5blv" Oct 14 10:32:03 crc kubenswrapper[4698]: I1014 10:32:03.045294 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Oct 14 10:32:03 crc kubenswrapper[4698]: I1014 10:32:03.238799 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6czb8\" (UniqueName: \"kubernetes.io/projected/141d36f8-e9f9-4959-8f0c-09c649350547-kube-api-access-6czb8\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-kj2jl\" (UID: \"141d36f8-e9f9-4959-8f0c-09c649350547\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-kj2jl" Oct 14 10:32:03 crc kubenswrapper[4698]: I1014 10:32:03.239496 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/141d36f8-e9f9-4959-8f0c-09c649350547-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-kj2jl\" (UID: \"141d36f8-e9f9-4959-8f0c-09c649350547\") " 
pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-kj2jl" Oct 14 10:32:03 crc kubenswrapper[4698]: I1014 10:32:03.239846 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/141d36f8-e9f9-4959-8f0c-09c649350547-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-kj2jl\" (UID: \"141d36f8-e9f9-4959-8f0c-09c649350547\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-kj2jl" Oct 14 10:32:03 crc kubenswrapper[4698]: I1014 10:32:03.240074 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/141d36f8-e9f9-4959-8f0c-09c649350547-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-kj2jl\" (UID: \"141d36f8-e9f9-4959-8f0c-09c649350547\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-kj2jl" Oct 14 10:32:03 crc kubenswrapper[4698]: I1014 10:32:03.240205 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/141d36f8-e9f9-4959-8f0c-09c649350547-ssh-key\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-kj2jl\" (UID: \"141d36f8-e9f9-4959-8f0c-09c649350547\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-kj2jl" Oct 14 10:32:03 crc kubenswrapper[4698]: I1014 10:32:03.341801 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/141d36f8-e9f9-4959-8f0c-09c649350547-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-kj2jl\" (UID: \"141d36f8-e9f9-4959-8f0c-09c649350547\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-kj2jl" Oct 14 10:32:03 crc kubenswrapper[4698]: I1014 10:32:03.341927 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/141d36f8-e9f9-4959-8f0c-09c649350547-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-kj2jl\" (UID: \"141d36f8-e9f9-4959-8f0c-09c649350547\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-kj2jl" Oct 14 10:32:03 crc kubenswrapper[4698]: I1014 10:32:03.341965 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/141d36f8-e9f9-4959-8f0c-09c649350547-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-kj2jl\" (UID: \"141d36f8-e9f9-4959-8f0c-09c649350547\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-kj2jl" Oct 14 10:32:03 crc kubenswrapper[4698]: I1014 10:32:03.341982 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/141d36f8-e9f9-4959-8f0c-09c649350547-ssh-key\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-kj2jl\" (UID: \"141d36f8-e9f9-4959-8f0c-09c649350547\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-kj2jl" Oct 14 10:32:03 crc kubenswrapper[4698]: I1014 10:32:03.342028 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6czb8\" (UniqueName: \"kubernetes.io/projected/141d36f8-e9f9-4959-8f0c-09c649350547-kube-api-access-6czb8\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-kj2jl\" (UID: \"141d36f8-e9f9-4959-8f0c-09c649350547\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-kj2jl" Oct 14 10:32:03 crc kubenswrapper[4698]: I1014 10:32:03.346338 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/141d36f8-e9f9-4959-8f0c-09c649350547-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-kj2jl\" (UID: \"141d36f8-e9f9-4959-8f0c-09c649350547\") " 
pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-kj2jl" Oct 14 10:32:03 crc kubenswrapper[4698]: I1014 10:32:03.347114 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/141d36f8-e9f9-4959-8f0c-09c649350547-ssh-key\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-kj2jl\" (UID: \"141d36f8-e9f9-4959-8f0c-09c649350547\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-kj2jl" Oct 14 10:32:03 crc kubenswrapper[4698]: I1014 10:32:03.347198 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/141d36f8-e9f9-4959-8f0c-09c649350547-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-kj2jl\" (UID: \"141d36f8-e9f9-4959-8f0c-09c649350547\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-kj2jl" Oct 14 10:32:03 crc kubenswrapper[4698]: I1014 10:32:03.349195 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/141d36f8-e9f9-4959-8f0c-09c649350547-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-kj2jl\" (UID: \"141d36f8-e9f9-4959-8f0c-09c649350547\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-kj2jl" Oct 14 10:32:03 crc kubenswrapper[4698]: I1014 10:32:03.363916 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6czb8\" (UniqueName: \"kubernetes.io/projected/141d36f8-e9f9-4959-8f0c-09c649350547-kube-api-access-6czb8\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-kj2jl\" (UID: \"141d36f8-e9f9-4959-8f0c-09c649350547\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-kj2jl" Oct 14 10:32:03 crc kubenswrapper[4698]: I1014 10:32:03.364863 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-kj2jl" Oct 14 10:32:03 crc kubenswrapper[4698]: I1014 10:32:03.776304 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-8qgcm"] Oct 14 10:32:03 crc kubenswrapper[4698]: I1014 10:32:03.778859 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8qgcm" Oct 14 10:32:03 crc kubenswrapper[4698]: I1014 10:32:03.801086 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-8qgcm"] Oct 14 10:32:03 crc kubenswrapper[4698]: I1014 10:32:03.854278 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9pq2h\" (UniqueName: \"kubernetes.io/projected/a05b8344-c3cc-41ea-88c6-e13f29ebedbb-kube-api-access-9pq2h\") pod \"community-operators-8qgcm\" (UID: \"a05b8344-c3cc-41ea-88c6-e13f29ebedbb\") " pod="openshift-marketplace/community-operators-8qgcm" Oct 14 10:32:03 crc kubenswrapper[4698]: I1014 10:32:03.854341 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a05b8344-c3cc-41ea-88c6-e13f29ebedbb-utilities\") pod \"community-operators-8qgcm\" (UID: \"a05b8344-c3cc-41ea-88c6-e13f29ebedbb\") " pod="openshift-marketplace/community-operators-8qgcm" Oct 14 10:32:03 crc kubenswrapper[4698]: I1014 10:32:03.854367 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a05b8344-c3cc-41ea-88c6-e13f29ebedbb-catalog-content\") pod \"community-operators-8qgcm\" (UID: \"a05b8344-c3cc-41ea-88c6-e13f29ebedbb\") " pod="openshift-marketplace/community-operators-8qgcm" Oct 14 10:32:03 crc kubenswrapper[4698]: I1014 10:32:03.904404 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-kj2jl"] Oct 14 10:32:03 crc kubenswrapper[4698]: W1014 10:32:03.912960 4698 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod141d36f8_e9f9_4959_8f0c_09c649350547.slice/crio-2ed4c51f485003da24aee0aa78dd2c2948576ddba7d7975ab4898077bb74f848 WatchSource:0}: Error finding container 2ed4c51f485003da24aee0aa78dd2c2948576ddba7d7975ab4898077bb74f848: Status 404 returned error can't find the container with id 2ed4c51f485003da24aee0aa78dd2c2948576ddba7d7975ab4898077bb74f848 Oct 14 10:32:03 crc kubenswrapper[4698]: I1014 10:32:03.942494 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-kj2jl" event={"ID":"141d36f8-e9f9-4959-8f0c-09c649350547","Type":"ContainerStarted","Data":"2ed4c51f485003da24aee0aa78dd2c2948576ddba7d7975ab4898077bb74f848"} Oct 14 10:32:03 crc kubenswrapper[4698]: I1014 10:32:03.945206 4698 generic.go:334] "Generic (PLEG): container finished" podID="48677471-a50b-4942-8fda-e286d69efbff" containerID="a395ae12d18cd4fad094303430e4302556de866c59bb4d3fe8d348a2dfdebc3a" exitCode=0 Oct 14 10:32:03 crc kubenswrapper[4698]: I1014 10:32:03.945269 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-89zrv" event={"ID":"48677471-a50b-4942-8fda-e286d69efbff","Type":"ContainerDied","Data":"a395ae12d18cd4fad094303430e4302556de866c59bb4d3fe8d348a2dfdebc3a"} Oct 14 10:32:03 crc kubenswrapper[4698]: I1014 10:32:03.956744 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a05b8344-c3cc-41ea-88c6-e13f29ebedbb-catalog-content\") pod \"community-operators-8qgcm\" (UID: \"a05b8344-c3cc-41ea-88c6-e13f29ebedbb\") " pod="openshift-marketplace/community-operators-8qgcm" Oct 14 10:32:03 crc kubenswrapper[4698]: I1014 10:32:03.956943 4698 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9pq2h\" (UniqueName: \"kubernetes.io/projected/a05b8344-c3cc-41ea-88c6-e13f29ebedbb-kube-api-access-9pq2h\") pod \"community-operators-8qgcm\" (UID: \"a05b8344-c3cc-41ea-88c6-e13f29ebedbb\") " pod="openshift-marketplace/community-operators-8qgcm" Oct 14 10:32:03 crc kubenswrapper[4698]: I1014 10:32:03.957032 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a05b8344-c3cc-41ea-88c6-e13f29ebedbb-utilities\") pod \"community-operators-8qgcm\" (UID: \"a05b8344-c3cc-41ea-88c6-e13f29ebedbb\") " pod="openshift-marketplace/community-operators-8qgcm" Oct 14 10:32:03 crc kubenswrapper[4698]: I1014 10:32:03.957592 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a05b8344-c3cc-41ea-88c6-e13f29ebedbb-utilities\") pod \"community-operators-8qgcm\" (UID: \"a05b8344-c3cc-41ea-88c6-e13f29ebedbb\") " pod="openshift-marketplace/community-operators-8qgcm" Oct 14 10:32:03 crc kubenswrapper[4698]: I1014 10:32:03.958275 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a05b8344-c3cc-41ea-88c6-e13f29ebedbb-catalog-content\") pod \"community-operators-8qgcm\" (UID: \"a05b8344-c3cc-41ea-88c6-e13f29ebedbb\") " pod="openshift-marketplace/community-operators-8qgcm" Oct 14 10:32:03 crc kubenswrapper[4698]: I1014 10:32:03.980308 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9pq2h\" (UniqueName: \"kubernetes.io/projected/a05b8344-c3cc-41ea-88c6-e13f29ebedbb-kube-api-access-9pq2h\") pod \"community-operators-8qgcm\" (UID: \"a05b8344-c3cc-41ea-88c6-e13f29ebedbb\") " pod="openshift-marketplace/community-operators-8qgcm" Oct 14 10:32:04 crc kubenswrapper[4698]: I1014 10:32:04.149783 4698 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="openshift-marketplace/community-operators-8qgcm" Oct 14 10:32:04 crc kubenswrapper[4698]: I1014 10:32:04.661989 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-8qgcm"] Oct 14 10:32:04 crc kubenswrapper[4698]: I1014 10:32:04.969633 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-89zrv" event={"ID":"48677471-a50b-4942-8fda-e286d69efbff","Type":"ContainerStarted","Data":"f7ae120160a75bf6f0ca1913da15eac57df264121b3acd82be0a8318ac516f5b"} Oct 14 10:32:04 crc kubenswrapper[4698]: I1014 10:32:04.980287 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-kj2jl" event={"ID":"141d36f8-e9f9-4959-8f0c-09c649350547","Type":"ContainerStarted","Data":"5072f12cab676bc7f6bb8a112932eeee350b93cd3c69abd4a86065e9bffcca00"} Oct 14 10:32:05 crc kubenswrapper[4698]: I1014 10:32:05.002527 4698 generic.go:334] "Generic (PLEG): container finished" podID="a05b8344-c3cc-41ea-88c6-e13f29ebedbb" containerID="5757c9b08ef4da8a4465c51f13c62ff6d338a5ed8a2d58b62050846a0a66086c" exitCode=0 Oct 14 10:32:05 crc kubenswrapper[4698]: I1014 10:32:05.002580 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8qgcm" event={"ID":"a05b8344-c3cc-41ea-88c6-e13f29ebedbb","Type":"ContainerDied","Data":"5757c9b08ef4da8a4465c51f13c62ff6d338a5ed8a2d58b62050846a0a66086c"} Oct 14 10:32:05 crc kubenswrapper[4698]: I1014 10:32:05.002609 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8qgcm" event={"ID":"a05b8344-c3cc-41ea-88c6-e13f29ebedbb","Type":"ContainerStarted","Data":"3e81194317d0b2c8ef7e9b7d65d25148fba27a44115ddc64e7228153f0cf494d"} Oct 14 10:32:05 crc kubenswrapper[4698]: I1014 10:32:05.012233 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-marketplace/redhat-marketplace-89zrv" podStartSLOduration=2.485421851 podStartE2EDuration="5.012191262s" podCreationTimestamp="2025-10-14 10:32:00 +0000 UTC" firstStartedPulling="2025-10-14 10:32:01.920314627 +0000 UTC m=+2103.617614043" lastFinishedPulling="2025-10-14 10:32:04.447084038 +0000 UTC m=+2106.144383454" observedRunningTime="2025-10-14 10:32:05.001805838 +0000 UTC m=+2106.699105274" watchObservedRunningTime="2025-10-14 10:32:05.012191262 +0000 UTC m=+2106.709490678" Oct 14 10:32:05 crc kubenswrapper[4698]: I1014 10:32:05.027117 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-kj2jl" podStartSLOduration=1.415958705 podStartE2EDuration="2.027096695s" podCreationTimestamp="2025-10-14 10:32:03 +0000 UTC" firstStartedPulling="2025-10-14 10:32:03.916217844 +0000 UTC m=+2105.613517270" lastFinishedPulling="2025-10-14 10:32:04.527355834 +0000 UTC m=+2106.224655260" observedRunningTime="2025-10-14 10:32:05.018627795 +0000 UTC m=+2106.715927211" watchObservedRunningTime="2025-10-14 10:32:05.027096695 +0000 UTC m=+2106.724396111" Oct 14 10:32:09 crc kubenswrapper[4698]: I1014 10:32:09.066977 4698 generic.go:334] "Generic (PLEG): container finished" podID="a05b8344-c3cc-41ea-88c6-e13f29ebedbb" containerID="d74ea55b584e02d301e98dcd4785aea1f7d3ebc32587ea68f1585a4f86c0f9e9" exitCode=0 Oct 14 10:32:09 crc kubenswrapper[4698]: I1014 10:32:09.067157 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8qgcm" event={"ID":"a05b8344-c3cc-41ea-88c6-e13f29ebedbb","Type":"ContainerDied","Data":"d74ea55b584e02d301e98dcd4785aea1f7d3ebc32587ea68f1585a4f86c0f9e9"} Oct 14 10:32:10 crc kubenswrapper[4698]: I1014 10:32:10.571322 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-89zrv" Oct 14 10:32:10 crc kubenswrapper[4698]: I1014 10:32:10.571885 4698 kubelet.go:2542] 
"SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-89zrv" Oct 14 10:32:10 crc kubenswrapper[4698]: I1014 10:32:10.622658 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-89zrv" Oct 14 10:32:11 crc kubenswrapper[4698]: I1014 10:32:11.088064 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8qgcm" event={"ID":"a05b8344-c3cc-41ea-88c6-e13f29ebedbb","Type":"ContainerStarted","Data":"06eb137cce262092cefa8dae5c2cc1fc7bef9dd644f334b397d25e9a68ae7a1f"} Oct 14 10:32:11 crc kubenswrapper[4698]: I1014 10:32:11.108358 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-8qgcm" podStartSLOduration=3.235244613 podStartE2EDuration="8.108340649s" podCreationTimestamp="2025-10-14 10:32:03 +0000 UTC" firstStartedPulling="2025-10-14 10:32:05.004897205 +0000 UTC m=+2106.702196621" lastFinishedPulling="2025-10-14 10:32:09.877993221 +0000 UTC m=+2111.575292657" observedRunningTime="2025-10-14 10:32:11.104585462 +0000 UTC m=+2112.801884878" watchObservedRunningTime="2025-10-14 10:32:11.108340649 +0000 UTC m=+2112.805640065" Oct 14 10:32:11 crc kubenswrapper[4698]: I1014 10:32:11.140725 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-89zrv" Oct 14 10:32:12 crc kubenswrapper[4698]: I1014 10:32:12.802184 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-89zrv"] Oct 14 10:32:13 crc kubenswrapper[4698]: I1014 10:32:13.104185 4698 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-89zrv" podUID="48677471-a50b-4942-8fda-e286d69efbff" containerName="registry-server" containerID="cri-o://f7ae120160a75bf6f0ca1913da15eac57df264121b3acd82be0a8318ac516f5b" gracePeriod=2 Oct 14 
10:32:13 crc kubenswrapper[4698]: I1014 10:32:13.576683 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-89zrv" Oct 14 10:32:13 crc kubenswrapper[4698]: I1014 10:32:13.671917 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/48677471-a50b-4942-8fda-e286d69efbff-utilities\") pod \"48677471-a50b-4942-8fda-e286d69efbff\" (UID: \"48677471-a50b-4942-8fda-e286d69efbff\") " Oct 14 10:32:13 crc kubenswrapper[4698]: I1014 10:32:13.672043 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/48677471-a50b-4942-8fda-e286d69efbff-catalog-content\") pod \"48677471-a50b-4942-8fda-e286d69efbff\" (UID: \"48677471-a50b-4942-8fda-e286d69efbff\") " Oct 14 10:32:13 crc kubenswrapper[4698]: I1014 10:32:13.672150 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4qc9k\" (UniqueName: \"kubernetes.io/projected/48677471-a50b-4942-8fda-e286d69efbff-kube-api-access-4qc9k\") pod \"48677471-a50b-4942-8fda-e286d69efbff\" (UID: \"48677471-a50b-4942-8fda-e286d69efbff\") " Oct 14 10:32:13 crc kubenswrapper[4698]: I1014 10:32:13.673540 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/48677471-a50b-4942-8fda-e286d69efbff-utilities" (OuterVolumeSpecName: "utilities") pod "48677471-a50b-4942-8fda-e286d69efbff" (UID: "48677471-a50b-4942-8fda-e286d69efbff"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 10:32:13 crc kubenswrapper[4698]: I1014 10:32:13.686653 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/48677471-a50b-4942-8fda-e286d69efbff-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "48677471-a50b-4942-8fda-e286d69efbff" (UID: "48677471-a50b-4942-8fda-e286d69efbff"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 10:32:13 crc kubenswrapper[4698]: I1014 10:32:13.687470 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/48677471-a50b-4942-8fda-e286d69efbff-kube-api-access-4qc9k" (OuterVolumeSpecName: "kube-api-access-4qc9k") pod "48677471-a50b-4942-8fda-e286d69efbff" (UID: "48677471-a50b-4942-8fda-e286d69efbff"). InnerVolumeSpecName "kube-api-access-4qc9k". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 10:32:13 crc kubenswrapper[4698]: I1014 10:32:13.773599 4698 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/48677471-a50b-4942-8fda-e286d69efbff-utilities\") on node \"crc\" DevicePath \"\"" Oct 14 10:32:13 crc kubenswrapper[4698]: I1014 10:32:13.773639 4698 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/48677471-a50b-4942-8fda-e286d69efbff-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 14 10:32:13 crc kubenswrapper[4698]: I1014 10:32:13.773654 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4qc9k\" (UniqueName: \"kubernetes.io/projected/48677471-a50b-4942-8fda-e286d69efbff-kube-api-access-4qc9k\") on node \"crc\" DevicePath \"\"" Oct 14 10:32:14 crc kubenswrapper[4698]: I1014 10:32:14.117023 4698 generic.go:334] "Generic (PLEG): container finished" podID="48677471-a50b-4942-8fda-e286d69efbff" 
containerID="f7ae120160a75bf6f0ca1913da15eac57df264121b3acd82be0a8318ac516f5b" exitCode=0 Oct 14 10:32:14 crc kubenswrapper[4698]: I1014 10:32:14.117090 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-89zrv" Oct 14 10:32:14 crc kubenswrapper[4698]: I1014 10:32:14.117078 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-89zrv" event={"ID":"48677471-a50b-4942-8fda-e286d69efbff","Type":"ContainerDied","Data":"f7ae120160a75bf6f0ca1913da15eac57df264121b3acd82be0a8318ac516f5b"} Oct 14 10:32:14 crc kubenswrapper[4698]: I1014 10:32:14.117253 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-89zrv" event={"ID":"48677471-a50b-4942-8fda-e286d69efbff","Type":"ContainerDied","Data":"dacdd0420af01927d0a9a92a1fd9b78ad5caf7a37e86eee68a996daf06cef902"} Oct 14 10:32:14 crc kubenswrapper[4698]: I1014 10:32:14.117277 4698 scope.go:117] "RemoveContainer" containerID="f7ae120160a75bf6f0ca1913da15eac57df264121b3acd82be0a8318ac516f5b" Oct 14 10:32:14 crc kubenswrapper[4698]: I1014 10:32:14.148564 4698 scope.go:117] "RemoveContainer" containerID="a395ae12d18cd4fad094303430e4302556de866c59bb4d3fe8d348a2dfdebc3a" Oct 14 10:32:14 crc kubenswrapper[4698]: I1014 10:32:14.150139 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-8qgcm" Oct 14 10:32:14 crc kubenswrapper[4698]: I1014 10:32:14.150546 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-8qgcm" Oct 14 10:32:14 crc kubenswrapper[4698]: I1014 10:32:14.158127 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-89zrv"] Oct 14 10:32:14 crc kubenswrapper[4698]: I1014 10:32:14.167256 4698 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openshift-marketplace/redhat-marketplace-89zrv"]
Oct 14 10:32:14 crc kubenswrapper[4698]: I1014 10:32:14.179186 4698 scope.go:117] "RemoveContainer" containerID="920b5efae5bde5597bf5dbf001c1ce3f0b4b88e31e7b7b8d7e5143e543530f7a"
Oct 14 10:32:14 crc kubenswrapper[4698]: I1014 10:32:14.211154 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-8qgcm"
Oct 14 10:32:14 crc kubenswrapper[4698]: I1014 10:32:14.247583 4698 scope.go:117] "RemoveContainer" containerID="f7ae120160a75bf6f0ca1913da15eac57df264121b3acd82be0a8318ac516f5b"
Oct 14 10:32:14 crc kubenswrapper[4698]: E1014 10:32:14.248158 4698 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f7ae120160a75bf6f0ca1913da15eac57df264121b3acd82be0a8318ac516f5b\": container with ID starting with f7ae120160a75bf6f0ca1913da15eac57df264121b3acd82be0a8318ac516f5b not found: ID does not exist" containerID="f7ae120160a75bf6f0ca1913da15eac57df264121b3acd82be0a8318ac516f5b"
Oct 14 10:32:14 crc kubenswrapper[4698]: I1014 10:32:14.248191 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f7ae120160a75bf6f0ca1913da15eac57df264121b3acd82be0a8318ac516f5b"} err="failed to get container status \"f7ae120160a75bf6f0ca1913da15eac57df264121b3acd82be0a8318ac516f5b\": rpc error: code = NotFound desc = could not find container \"f7ae120160a75bf6f0ca1913da15eac57df264121b3acd82be0a8318ac516f5b\": container with ID starting with f7ae120160a75bf6f0ca1913da15eac57df264121b3acd82be0a8318ac516f5b not found: ID does not exist"
Oct 14 10:32:14 crc kubenswrapper[4698]: I1014 10:32:14.248213 4698 scope.go:117] "RemoveContainer" containerID="a395ae12d18cd4fad094303430e4302556de866c59bb4d3fe8d348a2dfdebc3a"
Oct 14 10:32:14 crc kubenswrapper[4698]: E1014 10:32:14.248577 4698 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a395ae12d18cd4fad094303430e4302556de866c59bb4d3fe8d348a2dfdebc3a\": container with ID starting with a395ae12d18cd4fad094303430e4302556de866c59bb4d3fe8d348a2dfdebc3a not found: ID does not exist" containerID="a395ae12d18cd4fad094303430e4302556de866c59bb4d3fe8d348a2dfdebc3a"
Oct 14 10:32:14 crc kubenswrapper[4698]: I1014 10:32:14.248600 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a395ae12d18cd4fad094303430e4302556de866c59bb4d3fe8d348a2dfdebc3a"} err="failed to get container status \"a395ae12d18cd4fad094303430e4302556de866c59bb4d3fe8d348a2dfdebc3a\": rpc error: code = NotFound desc = could not find container \"a395ae12d18cd4fad094303430e4302556de866c59bb4d3fe8d348a2dfdebc3a\": container with ID starting with a395ae12d18cd4fad094303430e4302556de866c59bb4d3fe8d348a2dfdebc3a not found: ID does not exist"
Oct 14 10:32:14 crc kubenswrapper[4698]: I1014 10:32:14.248614 4698 scope.go:117] "RemoveContainer" containerID="920b5efae5bde5597bf5dbf001c1ce3f0b4b88e31e7b7b8d7e5143e543530f7a"
Oct 14 10:32:14 crc kubenswrapper[4698]: E1014 10:32:14.249154 4698 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"920b5efae5bde5597bf5dbf001c1ce3f0b4b88e31e7b7b8d7e5143e543530f7a\": container with ID starting with 920b5efae5bde5597bf5dbf001c1ce3f0b4b88e31e7b7b8d7e5143e543530f7a not found: ID does not exist" containerID="920b5efae5bde5597bf5dbf001c1ce3f0b4b88e31e7b7b8d7e5143e543530f7a"
Oct 14 10:32:14 crc kubenswrapper[4698]: I1014 10:32:14.249210 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"920b5efae5bde5597bf5dbf001c1ce3f0b4b88e31e7b7b8d7e5143e543530f7a"} err="failed to get container status \"920b5efae5bde5597bf5dbf001c1ce3f0b4b88e31e7b7b8d7e5143e543530f7a\": rpc error: code = NotFound desc = could not find container \"920b5efae5bde5597bf5dbf001c1ce3f0b4b88e31e7b7b8d7e5143e543530f7a\": container with ID starting with 920b5efae5bde5597bf5dbf001c1ce3f0b4b88e31e7b7b8d7e5143e543530f7a not found: ID does not exist"
Oct 14 10:32:15 crc kubenswrapper[4698]: I1014 10:32:15.032366 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="48677471-a50b-4942-8fda-e286d69efbff" path="/var/lib/kubelet/pods/48677471-a50b-4942-8fda-e286d69efbff/volumes"
Oct 14 10:32:15 crc kubenswrapper[4698]: I1014 10:32:15.186294 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-8qgcm"
Oct 14 10:32:16 crc kubenswrapper[4698]: I1014 10:32:16.198844 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-8qgcm"]
Oct 14 10:32:16 crc kubenswrapper[4698]: I1014 10:32:16.569030 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-dj96p"]
Oct 14 10:32:16 crc kubenswrapper[4698]: I1014 10:32:16.569367 4698 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-dj96p" podUID="237bd431-a961-4f87-a13c-2278c27b67e0" containerName="registry-server" containerID="cri-o://7b77e86553db3348e2661f58c37a54fa73092428a96b6c1d16e5b71fb0042c75" gracePeriod=2
Oct 14 10:32:17 crc kubenswrapper[4698]: I1014 10:32:17.037879 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-dj96p"
Oct 14 10:32:17 crc kubenswrapper[4698]: I1014 10:32:17.136415 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/237bd431-a961-4f87-a13c-2278c27b67e0-catalog-content\") pod \"237bd431-a961-4f87-a13c-2278c27b67e0\" (UID: \"237bd431-a961-4f87-a13c-2278c27b67e0\") "
Oct 14 10:32:17 crc kubenswrapper[4698]: I1014 10:32:17.136487 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/237bd431-a961-4f87-a13c-2278c27b67e0-utilities\") pod \"237bd431-a961-4f87-a13c-2278c27b67e0\" (UID: \"237bd431-a961-4f87-a13c-2278c27b67e0\") "
Oct 14 10:32:17 crc kubenswrapper[4698]: I1014 10:32:17.136553 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bhsbz\" (UniqueName: \"kubernetes.io/projected/237bd431-a961-4f87-a13c-2278c27b67e0-kube-api-access-bhsbz\") pod \"237bd431-a961-4f87-a13c-2278c27b67e0\" (UID: \"237bd431-a961-4f87-a13c-2278c27b67e0\") "
Oct 14 10:32:17 crc kubenswrapper[4698]: I1014 10:32:17.137338 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/237bd431-a961-4f87-a13c-2278c27b67e0-utilities" (OuterVolumeSpecName: "utilities") pod "237bd431-a961-4f87-a13c-2278c27b67e0" (UID: "237bd431-a961-4f87-a13c-2278c27b67e0"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Oct 14 10:32:17 crc kubenswrapper[4698]: I1014 10:32:17.152880 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/237bd431-a961-4f87-a13c-2278c27b67e0-kube-api-access-bhsbz" (OuterVolumeSpecName: "kube-api-access-bhsbz") pod "237bd431-a961-4f87-a13c-2278c27b67e0" (UID: "237bd431-a961-4f87-a13c-2278c27b67e0"). InnerVolumeSpecName "kube-api-access-bhsbz". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 14 10:32:17 crc kubenswrapper[4698]: I1014 10:32:17.153665 4698 generic.go:334] "Generic (PLEG): container finished" podID="237bd431-a961-4f87-a13c-2278c27b67e0" containerID="7b77e86553db3348e2661f58c37a54fa73092428a96b6c1d16e5b71fb0042c75" exitCode=0
Oct 14 10:32:17 crc kubenswrapper[4698]: I1014 10:32:17.154072 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-dj96p"
Oct 14 10:32:17 crc kubenswrapper[4698]: I1014 10:32:17.154374 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dj96p" event={"ID":"237bd431-a961-4f87-a13c-2278c27b67e0","Type":"ContainerDied","Data":"7b77e86553db3348e2661f58c37a54fa73092428a96b6c1d16e5b71fb0042c75"}
Oct 14 10:32:17 crc kubenswrapper[4698]: I1014 10:32:17.154435 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dj96p" event={"ID":"237bd431-a961-4f87-a13c-2278c27b67e0","Type":"ContainerDied","Data":"8a59cc73c9eef710b25476591a2a38454169139ad49089138745fee66e31506f"}
Oct 14 10:32:17 crc kubenswrapper[4698]: I1014 10:32:17.154461 4698 scope.go:117] "RemoveContainer" containerID="7b77e86553db3348e2661f58c37a54fa73092428a96b6c1d16e5b71fb0042c75"
Oct 14 10:32:17 crc kubenswrapper[4698]: I1014 10:32:17.217054 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/237bd431-a961-4f87-a13c-2278c27b67e0-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "237bd431-a961-4f87-a13c-2278c27b67e0" (UID: "237bd431-a961-4f87-a13c-2278c27b67e0"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Oct 14 10:32:17 crc kubenswrapper[4698]: I1014 10:32:17.229563 4698 scope.go:117] "RemoveContainer" containerID="0deaf20d2720788024f4bd829e95b15d9e3740df01e18f2db5f553c0cfb881fc"
Oct 14 10:32:17 crc kubenswrapper[4698]: I1014 10:32:17.239330 4698 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/237bd431-a961-4f87-a13c-2278c27b67e0-catalog-content\") on node \"crc\" DevicePath \"\""
Oct 14 10:32:17 crc kubenswrapper[4698]: I1014 10:32:17.239603 4698 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/237bd431-a961-4f87-a13c-2278c27b67e0-utilities\") on node \"crc\" DevicePath \"\""
Oct 14 10:32:17 crc kubenswrapper[4698]: I1014 10:32:17.239696 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bhsbz\" (UniqueName: \"kubernetes.io/projected/237bd431-a961-4f87-a13c-2278c27b67e0-kube-api-access-bhsbz\") on node \"crc\" DevicePath \"\""
Oct 14 10:32:17 crc kubenswrapper[4698]: I1014 10:32:17.259928 4698 scope.go:117] "RemoveContainer" containerID="3f96ecc6bebd6407d051803a2d9772ec1e2616fcaf31c3e9ad66d57ba2b80c65"
Oct 14 10:32:17 crc kubenswrapper[4698]: I1014 10:32:17.303821 4698 scope.go:117] "RemoveContainer" containerID="7b77e86553db3348e2661f58c37a54fa73092428a96b6c1d16e5b71fb0042c75"
Oct 14 10:32:17 crc kubenswrapper[4698]: E1014 10:32:17.304445 4698 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7b77e86553db3348e2661f58c37a54fa73092428a96b6c1d16e5b71fb0042c75\": container with ID starting with 7b77e86553db3348e2661f58c37a54fa73092428a96b6c1d16e5b71fb0042c75 not found: ID does not exist" containerID="7b77e86553db3348e2661f58c37a54fa73092428a96b6c1d16e5b71fb0042c75"
Oct 14 10:32:17 crc kubenswrapper[4698]: I1014 10:32:17.304494 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7b77e86553db3348e2661f58c37a54fa73092428a96b6c1d16e5b71fb0042c75"} err="failed to get container status \"7b77e86553db3348e2661f58c37a54fa73092428a96b6c1d16e5b71fb0042c75\": rpc error: code = NotFound desc = could not find container \"7b77e86553db3348e2661f58c37a54fa73092428a96b6c1d16e5b71fb0042c75\": container with ID starting with 7b77e86553db3348e2661f58c37a54fa73092428a96b6c1d16e5b71fb0042c75 not found: ID does not exist"
Oct 14 10:32:17 crc kubenswrapper[4698]: I1014 10:32:17.304525 4698 scope.go:117] "RemoveContainer" containerID="0deaf20d2720788024f4bd829e95b15d9e3740df01e18f2db5f553c0cfb881fc"
Oct 14 10:32:17 crc kubenswrapper[4698]: E1014 10:32:17.305106 4698 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0deaf20d2720788024f4bd829e95b15d9e3740df01e18f2db5f553c0cfb881fc\": container with ID starting with 0deaf20d2720788024f4bd829e95b15d9e3740df01e18f2db5f553c0cfb881fc not found: ID does not exist" containerID="0deaf20d2720788024f4bd829e95b15d9e3740df01e18f2db5f553c0cfb881fc"
Oct 14 10:32:17 crc kubenswrapper[4698]: I1014 10:32:17.305162 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0deaf20d2720788024f4bd829e95b15d9e3740df01e18f2db5f553c0cfb881fc"} err="failed to get container status \"0deaf20d2720788024f4bd829e95b15d9e3740df01e18f2db5f553c0cfb881fc\": rpc error: code = NotFound desc = could not find container \"0deaf20d2720788024f4bd829e95b15d9e3740df01e18f2db5f553c0cfb881fc\": container with ID starting with 0deaf20d2720788024f4bd829e95b15d9e3740df01e18f2db5f553c0cfb881fc not found: ID does not exist"
Oct 14 10:32:17 crc kubenswrapper[4698]: I1014 10:32:17.305191 4698 scope.go:117] "RemoveContainer" containerID="3f96ecc6bebd6407d051803a2d9772ec1e2616fcaf31c3e9ad66d57ba2b80c65"
Oct 14 10:32:17 crc kubenswrapper[4698]: E1014 10:32:17.305558 4698 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3f96ecc6bebd6407d051803a2d9772ec1e2616fcaf31c3e9ad66d57ba2b80c65\": container with ID starting with 3f96ecc6bebd6407d051803a2d9772ec1e2616fcaf31c3e9ad66d57ba2b80c65 not found: ID does not exist" containerID="3f96ecc6bebd6407d051803a2d9772ec1e2616fcaf31c3e9ad66d57ba2b80c65"
Oct 14 10:32:17 crc kubenswrapper[4698]: I1014 10:32:17.305602 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3f96ecc6bebd6407d051803a2d9772ec1e2616fcaf31c3e9ad66d57ba2b80c65"} err="failed to get container status \"3f96ecc6bebd6407d051803a2d9772ec1e2616fcaf31c3e9ad66d57ba2b80c65\": rpc error: code = NotFound desc = could not find container \"3f96ecc6bebd6407d051803a2d9772ec1e2616fcaf31c3e9ad66d57ba2b80c65\": container with ID starting with 3f96ecc6bebd6407d051803a2d9772ec1e2616fcaf31c3e9ad66d57ba2b80c65 not found: ID does not exist"
Oct 14 10:32:17 crc kubenswrapper[4698]: I1014 10:32:17.492435 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-dj96p"]
Oct 14 10:32:17 crc kubenswrapper[4698]: I1014 10:32:17.504154 4698 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-dj96p"]
Oct 14 10:32:19 crc kubenswrapper[4698]: I1014 10:32:19.027123 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="237bd431-a961-4f87-a13c-2278c27b67e0" path="/var/lib/kubelet/pods/237bd431-a961-4f87-a13c-2278c27b67e0/volumes"
Oct 14 10:32:23 crc kubenswrapper[4698]: I1014 10:32:23.907978 4698 patch_prober.go:28] interesting pod/machine-config-daemon-lp4sk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Oct 14 10:32:23 crc kubenswrapper[4698]: I1014 10:32:23.908499 4698 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-lp4sk" podUID="c359a8fc-1e2f-49af-8da2-719d52bd969a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Oct 14 10:32:27 crc kubenswrapper[4698]: I1014 10:32:27.383305 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-97zr5"]
Oct 14 10:32:27 crc kubenswrapper[4698]: E1014 10:32:27.384434 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="48677471-a50b-4942-8fda-e286d69efbff" containerName="extract-utilities"
Oct 14 10:32:27 crc kubenswrapper[4698]: I1014 10:32:27.384483 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="48677471-a50b-4942-8fda-e286d69efbff" containerName="extract-utilities"
Oct 14 10:32:27 crc kubenswrapper[4698]: E1014 10:32:27.384516 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="48677471-a50b-4942-8fda-e286d69efbff" containerName="registry-server"
Oct 14 10:32:27 crc kubenswrapper[4698]: I1014 10:32:27.384524 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="48677471-a50b-4942-8fda-e286d69efbff" containerName="registry-server"
Oct 14 10:32:27 crc kubenswrapper[4698]: E1014 10:32:27.384542 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="48677471-a50b-4942-8fda-e286d69efbff" containerName="extract-content"
Oct 14 10:32:27 crc kubenswrapper[4698]: I1014 10:32:27.384550 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="48677471-a50b-4942-8fda-e286d69efbff" containerName="extract-content"
Oct 14 10:32:27 crc kubenswrapper[4698]: E1014 10:32:27.384561 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="237bd431-a961-4f87-a13c-2278c27b67e0" containerName="extract-utilities"
Oct 14 10:32:27 crc kubenswrapper[4698]: I1014 10:32:27.384569 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="237bd431-a961-4f87-a13c-2278c27b67e0" containerName="extract-utilities"
Oct 14 10:32:27 crc kubenswrapper[4698]: E1014 10:32:27.384589 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="237bd431-a961-4f87-a13c-2278c27b67e0" containerName="registry-server"
Oct 14 10:32:27 crc kubenswrapper[4698]: I1014 10:32:27.384597 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="237bd431-a961-4f87-a13c-2278c27b67e0" containerName="registry-server"
Oct 14 10:32:27 crc kubenswrapper[4698]: E1014 10:32:27.384617 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="237bd431-a961-4f87-a13c-2278c27b67e0" containerName="extract-content"
Oct 14 10:32:27 crc kubenswrapper[4698]: I1014 10:32:27.384626 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="237bd431-a961-4f87-a13c-2278c27b67e0" containerName="extract-content"
Oct 14 10:32:27 crc kubenswrapper[4698]: I1014 10:32:27.384892 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="48677471-a50b-4942-8fda-e286d69efbff" containerName="registry-server"
Oct 14 10:32:27 crc kubenswrapper[4698]: I1014 10:32:27.384932 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="237bd431-a961-4f87-a13c-2278c27b67e0" containerName="registry-server"
Oct 14 10:32:27 crc kubenswrapper[4698]: I1014 10:32:27.386841 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-97zr5"
Oct 14 10:32:27 crc kubenswrapper[4698]: I1014 10:32:27.406296 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-97zr5"]
Oct 14 10:32:27 crc kubenswrapper[4698]: I1014 10:32:27.459535 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a6d6c363-baf0-41f5-958d-324ee1f97bbb-utilities\") pod \"certified-operators-97zr5\" (UID: \"a6d6c363-baf0-41f5-958d-324ee1f97bbb\") " pod="openshift-marketplace/certified-operators-97zr5"
Oct 14 10:32:27 crc kubenswrapper[4698]: I1014 10:32:27.459614 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w4qxz\" (UniqueName: \"kubernetes.io/projected/a6d6c363-baf0-41f5-958d-324ee1f97bbb-kube-api-access-w4qxz\") pod \"certified-operators-97zr5\" (UID: \"a6d6c363-baf0-41f5-958d-324ee1f97bbb\") " pod="openshift-marketplace/certified-operators-97zr5"
Oct 14 10:32:27 crc kubenswrapper[4698]: I1014 10:32:27.459637 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a6d6c363-baf0-41f5-958d-324ee1f97bbb-catalog-content\") pod \"certified-operators-97zr5\" (UID: \"a6d6c363-baf0-41f5-958d-324ee1f97bbb\") " pod="openshift-marketplace/certified-operators-97zr5"
Oct 14 10:32:27 crc kubenswrapper[4698]: I1014 10:32:27.561909 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a6d6c363-baf0-41f5-958d-324ee1f97bbb-utilities\") pod \"certified-operators-97zr5\" (UID: \"a6d6c363-baf0-41f5-958d-324ee1f97bbb\") " pod="openshift-marketplace/certified-operators-97zr5"
Oct 14 10:32:27 crc kubenswrapper[4698]: I1014 10:32:27.561990 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w4qxz\" (UniqueName: \"kubernetes.io/projected/a6d6c363-baf0-41f5-958d-324ee1f97bbb-kube-api-access-w4qxz\") pod \"certified-operators-97zr5\" (UID: \"a6d6c363-baf0-41f5-958d-324ee1f97bbb\") " pod="openshift-marketplace/certified-operators-97zr5"
Oct 14 10:32:27 crc kubenswrapper[4698]: I1014 10:32:27.562010 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a6d6c363-baf0-41f5-958d-324ee1f97bbb-catalog-content\") pod \"certified-operators-97zr5\" (UID: \"a6d6c363-baf0-41f5-958d-324ee1f97bbb\") " pod="openshift-marketplace/certified-operators-97zr5"
Oct 14 10:32:27 crc kubenswrapper[4698]: I1014 10:32:27.562459 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a6d6c363-baf0-41f5-958d-324ee1f97bbb-catalog-content\") pod \"certified-operators-97zr5\" (UID: \"a6d6c363-baf0-41f5-958d-324ee1f97bbb\") " pod="openshift-marketplace/certified-operators-97zr5"
Oct 14 10:32:27 crc kubenswrapper[4698]: I1014 10:32:27.562660 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a6d6c363-baf0-41f5-958d-324ee1f97bbb-utilities\") pod \"certified-operators-97zr5\" (UID: \"a6d6c363-baf0-41f5-958d-324ee1f97bbb\") " pod="openshift-marketplace/certified-operators-97zr5"
Oct 14 10:32:27 crc kubenswrapper[4698]: I1014 10:32:27.582838 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w4qxz\" (UniqueName: \"kubernetes.io/projected/a6d6c363-baf0-41f5-958d-324ee1f97bbb-kube-api-access-w4qxz\") pod \"certified-operators-97zr5\" (UID: \"a6d6c363-baf0-41f5-958d-324ee1f97bbb\") " pod="openshift-marketplace/certified-operators-97zr5"
Oct 14 10:32:27 crc kubenswrapper[4698]: I1014 10:32:27.705599 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-97zr5"
Oct 14 10:32:28 crc kubenswrapper[4698]: I1014 10:32:28.310926 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-97zr5"]
Oct 14 10:32:29 crc kubenswrapper[4698]: I1014 10:32:29.274856 4698 generic.go:334] "Generic (PLEG): container finished" podID="a6d6c363-baf0-41f5-958d-324ee1f97bbb" containerID="f1a91e32cd946b18645d895d97017d965472843cefff97c6a1325b357dffd66d" exitCode=0
Oct 14 10:32:29 crc kubenswrapper[4698]: I1014 10:32:29.274955 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-97zr5" event={"ID":"a6d6c363-baf0-41f5-958d-324ee1f97bbb","Type":"ContainerDied","Data":"f1a91e32cd946b18645d895d97017d965472843cefff97c6a1325b357dffd66d"}
Oct 14 10:32:29 crc kubenswrapper[4698]: I1014 10:32:29.277224 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-97zr5" event={"ID":"a6d6c363-baf0-41f5-958d-324ee1f97bbb","Type":"ContainerStarted","Data":"47180a70c4c51adef78cd1c8cf672cc94fddc5ceb5ef149a1b55fa8b6253bec0"}
Oct 14 10:32:30 crc kubenswrapper[4698]: I1014 10:32:30.291488 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-97zr5" event={"ID":"a6d6c363-baf0-41f5-958d-324ee1f97bbb","Type":"ContainerStarted","Data":"aa1dd50a54cd0d97b4a57b748a2f1d2ff5f8559b8e325a8c08cd9891ee5407a9"}
Oct 14 10:32:32 crc kubenswrapper[4698]: I1014 10:32:32.342371 4698 generic.go:334] "Generic (PLEG): container finished" podID="a6d6c363-baf0-41f5-958d-324ee1f97bbb" containerID="aa1dd50a54cd0d97b4a57b748a2f1d2ff5f8559b8e325a8c08cd9891ee5407a9" exitCode=0
Oct 14 10:32:32 crc kubenswrapper[4698]: I1014 10:32:32.342440 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-97zr5" event={"ID":"a6d6c363-baf0-41f5-958d-324ee1f97bbb","Type":"ContainerDied","Data":"aa1dd50a54cd0d97b4a57b748a2f1d2ff5f8559b8e325a8c08cd9891ee5407a9"}
Oct 14 10:32:33 crc kubenswrapper[4698]: I1014 10:32:33.360095 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-97zr5" event={"ID":"a6d6c363-baf0-41f5-958d-324ee1f97bbb","Type":"ContainerStarted","Data":"6c5533b4a4f4b34b7d357a4551ce270bcc6b8c95c13de15f809d9404db38a41f"}
Oct 14 10:32:33 crc kubenswrapper[4698]: I1014 10:32:33.393929 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-97zr5" podStartSLOduration=2.89545881 podStartE2EDuration="6.393898165s" podCreationTimestamp="2025-10-14 10:32:27 +0000 UTC" firstStartedPulling="2025-10-14 10:32:29.278071643 +0000 UTC m=+2130.975371069" lastFinishedPulling="2025-10-14 10:32:32.776510998 +0000 UTC m=+2134.473810424" observedRunningTime="2025-10-14 10:32:33.381698189 +0000 UTC m=+2135.078997605" watchObservedRunningTime="2025-10-14 10:32:33.393898165 +0000 UTC m=+2135.091197601"
Oct 14 10:32:37 crc kubenswrapper[4698]: I1014 10:32:37.706154 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-97zr5"
Oct 14 10:32:37 crc kubenswrapper[4698]: I1014 10:32:37.706820 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-97zr5"
Oct 14 10:32:37 crc kubenswrapper[4698]: I1014 10:32:37.770628 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-97zr5"
Oct 14 10:32:38 crc kubenswrapper[4698]: I1014 10:32:38.477861 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-97zr5"
Oct 14 10:32:39 crc kubenswrapper[4698]: I1014 10:32:39.991543 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-97zr5"]
Oct 14 10:32:40 crc kubenswrapper[4698]: I1014 10:32:40.430456 4698 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-97zr5" podUID="a6d6c363-baf0-41f5-958d-324ee1f97bbb" containerName="registry-server" containerID="cri-o://6c5533b4a4f4b34b7d357a4551ce270bcc6b8c95c13de15f809d9404db38a41f" gracePeriod=2
Oct 14 10:32:40 crc kubenswrapper[4698]: I1014 10:32:40.942242 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-97zr5"
Oct 14 10:32:41 crc kubenswrapper[4698]: I1014 10:32:41.020081 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a6d6c363-baf0-41f5-958d-324ee1f97bbb-catalog-content\") pod \"a6d6c363-baf0-41f5-958d-324ee1f97bbb\" (UID: \"a6d6c363-baf0-41f5-958d-324ee1f97bbb\") "
Oct 14 10:32:41 crc kubenswrapper[4698]: I1014 10:32:41.020236 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a6d6c363-baf0-41f5-958d-324ee1f97bbb-utilities\") pod \"a6d6c363-baf0-41f5-958d-324ee1f97bbb\" (UID: \"a6d6c363-baf0-41f5-958d-324ee1f97bbb\") "
Oct 14 10:32:41 crc kubenswrapper[4698]: I1014 10:32:41.020378 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w4qxz\" (UniqueName: \"kubernetes.io/projected/a6d6c363-baf0-41f5-958d-324ee1f97bbb-kube-api-access-w4qxz\") pod \"a6d6c363-baf0-41f5-958d-324ee1f97bbb\" (UID: \"a6d6c363-baf0-41f5-958d-324ee1f97bbb\") "
Oct 14 10:32:41 crc kubenswrapper[4698]: I1014 10:32:41.021306 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a6d6c363-baf0-41f5-958d-324ee1f97bbb-utilities" (OuterVolumeSpecName: "utilities") pod "a6d6c363-baf0-41f5-958d-324ee1f97bbb" (UID: "a6d6c363-baf0-41f5-958d-324ee1f97bbb"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Oct 14 10:32:41 crc kubenswrapper[4698]: I1014 10:32:41.026486 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a6d6c363-baf0-41f5-958d-324ee1f97bbb-kube-api-access-w4qxz" (OuterVolumeSpecName: "kube-api-access-w4qxz") pod "a6d6c363-baf0-41f5-958d-324ee1f97bbb" (UID: "a6d6c363-baf0-41f5-958d-324ee1f97bbb"). InnerVolumeSpecName "kube-api-access-w4qxz". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 14 10:32:41 crc kubenswrapper[4698]: I1014 10:32:41.069228 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a6d6c363-baf0-41f5-958d-324ee1f97bbb-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a6d6c363-baf0-41f5-958d-324ee1f97bbb" (UID: "a6d6c363-baf0-41f5-958d-324ee1f97bbb"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Oct 14 10:32:41 crc kubenswrapper[4698]: I1014 10:32:41.122428 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w4qxz\" (UniqueName: \"kubernetes.io/projected/a6d6c363-baf0-41f5-958d-324ee1f97bbb-kube-api-access-w4qxz\") on node \"crc\" DevicePath \"\""
Oct 14 10:32:41 crc kubenswrapper[4698]: I1014 10:32:41.122464 4698 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a6d6c363-baf0-41f5-958d-324ee1f97bbb-catalog-content\") on node \"crc\" DevicePath \"\""
Oct 14 10:32:41 crc kubenswrapper[4698]: I1014 10:32:41.122473 4698 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a6d6c363-baf0-41f5-958d-324ee1f97bbb-utilities\") on node \"crc\" DevicePath \"\""
Oct 14 10:32:41 crc kubenswrapper[4698]: I1014 10:32:41.444621 4698 generic.go:334] "Generic (PLEG): container finished" podID="a6d6c363-baf0-41f5-958d-324ee1f97bbb" containerID="6c5533b4a4f4b34b7d357a4551ce270bcc6b8c95c13de15f809d9404db38a41f" exitCode=0
Oct 14 10:32:41 crc kubenswrapper[4698]: I1014 10:32:41.444688 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-97zr5" event={"ID":"a6d6c363-baf0-41f5-958d-324ee1f97bbb","Type":"ContainerDied","Data":"6c5533b4a4f4b34b7d357a4551ce270bcc6b8c95c13de15f809d9404db38a41f"}
Oct 14 10:32:41 crc kubenswrapper[4698]: I1014 10:32:41.445628 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-97zr5" event={"ID":"a6d6c363-baf0-41f5-958d-324ee1f97bbb","Type":"ContainerDied","Data":"47180a70c4c51adef78cd1c8cf672cc94fddc5ceb5ef149a1b55fa8b6253bec0"}
Oct 14 10:32:41 crc kubenswrapper[4698]: I1014 10:32:41.444879 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-97zr5"
Oct 14 10:32:41 crc kubenswrapper[4698]: I1014 10:32:41.445710 4698 scope.go:117] "RemoveContainer" containerID="6c5533b4a4f4b34b7d357a4551ce270bcc6b8c95c13de15f809d9404db38a41f"
Oct 14 10:32:41 crc kubenswrapper[4698]: I1014 10:32:41.482628 4698 scope.go:117] "RemoveContainer" containerID="aa1dd50a54cd0d97b4a57b748a2f1d2ff5f8559b8e325a8c08cd9891ee5407a9"
Oct 14 10:32:41 crc kubenswrapper[4698]: I1014 10:32:41.516286 4698 scope.go:117] "RemoveContainer" containerID="f1a91e32cd946b18645d895d97017d965472843cefff97c6a1325b357dffd66d"
Oct 14 10:32:41 crc kubenswrapper[4698]: I1014 10:32:41.523073 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-97zr5"]
Oct 14 10:32:41 crc kubenswrapper[4698]: I1014 10:32:41.540290 4698 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-97zr5"]
Oct 14 10:32:41 crc kubenswrapper[4698]: I1014 10:32:41.572240 4698 scope.go:117] "RemoveContainer" containerID="6c5533b4a4f4b34b7d357a4551ce270bcc6b8c95c13de15f809d9404db38a41f"
Oct 14 10:32:41 crc kubenswrapper[4698]: E1014 10:32:41.573227 4698 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6c5533b4a4f4b34b7d357a4551ce270bcc6b8c95c13de15f809d9404db38a41f\": container with ID starting with 6c5533b4a4f4b34b7d357a4551ce270bcc6b8c95c13de15f809d9404db38a41f not found: ID does not exist" containerID="6c5533b4a4f4b34b7d357a4551ce270bcc6b8c95c13de15f809d9404db38a41f"
Oct 14 10:32:41 crc kubenswrapper[4698]: I1014 10:32:41.573286 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6c5533b4a4f4b34b7d357a4551ce270bcc6b8c95c13de15f809d9404db38a41f"} err="failed to get container status \"6c5533b4a4f4b34b7d357a4551ce270bcc6b8c95c13de15f809d9404db38a41f\": rpc error: code = NotFound desc = could not find container \"6c5533b4a4f4b34b7d357a4551ce270bcc6b8c95c13de15f809d9404db38a41f\": container with ID starting with 6c5533b4a4f4b34b7d357a4551ce270bcc6b8c95c13de15f809d9404db38a41f not found: ID does not exist"
Oct 14 10:32:41 crc kubenswrapper[4698]: I1014 10:32:41.573322 4698 scope.go:117] "RemoveContainer" containerID="aa1dd50a54cd0d97b4a57b748a2f1d2ff5f8559b8e325a8c08cd9891ee5407a9"
Oct 14 10:32:41 crc kubenswrapper[4698]: E1014 10:32:41.574180 4698 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"aa1dd50a54cd0d97b4a57b748a2f1d2ff5f8559b8e325a8c08cd9891ee5407a9\": container with ID starting with aa1dd50a54cd0d97b4a57b748a2f1d2ff5f8559b8e325a8c08cd9891ee5407a9 not found: ID does not exist" containerID="aa1dd50a54cd0d97b4a57b748a2f1d2ff5f8559b8e325a8c08cd9891ee5407a9"
Oct 14 10:32:41 crc kubenswrapper[4698]: I1014 10:32:41.574223 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"aa1dd50a54cd0d97b4a57b748a2f1d2ff5f8559b8e325a8c08cd9891ee5407a9"} err="failed to get container status \"aa1dd50a54cd0d97b4a57b748a2f1d2ff5f8559b8e325a8c08cd9891ee5407a9\": rpc error: code = NotFound desc = could not find container \"aa1dd50a54cd0d97b4a57b748a2f1d2ff5f8559b8e325a8c08cd9891ee5407a9\": container with ID starting with aa1dd50a54cd0d97b4a57b748a2f1d2ff5f8559b8e325a8c08cd9891ee5407a9 not found: ID does not exist"
Oct 14 10:32:41 crc kubenswrapper[4698]: I1014 10:32:41.574247 4698 scope.go:117] "RemoveContainer" containerID="f1a91e32cd946b18645d895d97017d965472843cefff97c6a1325b357dffd66d"
Oct 14 10:32:41 crc kubenswrapper[4698]: E1014 10:32:41.574698 4698 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f1a91e32cd946b18645d895d97017d965472843cefff97c6a1325b357dffd66d\": container with ID starting with f1a91e32cd946b18645d895d97017d965472843cefff97c6a1325b357dffd66d not found: ID does not exist" containerID="f1a91e32cd946b18645d895d97017d965472843cefff97c6a1325b357dffd66d"
Oct 14 10:32:41 crc kubenswrapper[4698]: I1014 10:32:41.574737 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f1a91e32cd946b18645d895d97017d965472843cefff97c6a1325b357dffd66d"} err="failed to get container status \"f1a91e32cd946b18645d895d97017d965472843cefff97c6a1325b357dffd66d\": rpc error: code = NotFound desc = could not find container \"f1a91e32cd946b18645d895d97017d965472843cefff97c6a1325b357dffd66d\": container with ID starting with f1a91e32cd946b18645d895d97017d965472843cefff97c6a1325b357dffd66d not found: ID does not exist"
Oct 14 10:32:43 crc kubenswrapper[4698]: I1014 10:32:43.035125 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a6d6c363-baf0-41f5-958d-324ee1f97bbb" path="/var/lib/kubelet/pods/a6d6c363-baf0-41f5-958d-324ee1f97bbb/volumes"
Oct 14 10:32:53 crc kubenswrapper[4698]: I1014 10:32:53.908100 4698 patch_prober.go:28] interesting pod/machine-config-daemon-lp4sk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Oct 14 10:32:53 crc kubenswrapper[4698]: I1014 10:32:53.908718 4698 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-lp4sk" podUID="c359a8fc-1e2f-49af-8da2-719d52bd969a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Oct 14 10:33:23 crc kubenswrapper[4698]: I1014 10:33:23.908199 4698 patch_prober.go:28] interesting pod/machine-config-daemon-lp4sk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Oct 14 10:33:23 crc kubenswrapper[4698]: I1014 10:33:23.909048 4698 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-lp4sk" podUID="c359a8fc-1e2f-49af-8da2-719d52bd969a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Oct 14 10:33:23 crc kubenswrapper[4698]: I1014 10:33:23.909118 4698 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-lp4sk"
Oct 14 10:33:23 crc kubenswrapper[4698]: I1014 10:33:23.910351 4698 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"767e1d443156381bb7f70fbe387aad4a1fb034afb977892926a7c60b7cf8b968"} pod="openshift-machine-config-operator/machine-config-daemon-lp4sk" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Oct 14 10:33:23 crc kubenswrapper[4698]: I1014 10:33:23.910418 4698 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-lp4sk" podUID="c359a8fc-1e2f-49af-8da2-719d52bd969a" containerName="machine-config-daemon" containerID="cri-o://767e1d443156381bb7f70fbe387aad4a1fb034afb977892926a7c60b7cf8b968" gracePeriod=600
Oct 14 10:33:24 crc kubenswrapper[4698]: I1014 10:33:24.894957 4698 generic.go:334] "Generic (PLEG): container finished" podID="c359a8fc-1e2f-49af-8da2-719d52bd969a" containerID="767e1d443156381bb7f70fbe387aad4a1fb034afb977892926a7c60b7cf8b968" exitCode=0
Oct 14 10:33:24 crc kubenswrapper[4698]: I1014 10:33:24.895031 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-lp4sk" event={"ID":"c359a8fc-1e2f-49af-8da2-719d52bd969a","Type":"ContainerDied","Data":"767e1d443156381bb7f70fbe387aad4a1fb034afb977892926a7c60b7cf8b968"}
Oct 14 10:33:24 crc kubenswrapper[4698]: I1014 10:33:24.896056 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-lp4sk" event={"ID":"c359a8fc-1e2f-49af-8da2-719d52bd969a","Type":"ContainerStarted","Data":"713fd828a91f058b8f8f5d1a0505fb27eb15167a5507115b649af61ccd244ca8"}
Oct 14 10:33:24 crc kubenswrapper[4698]: I1014 10:33:24.896097 4698 scope.go:117] "RemoveContainer" containerID="87bba9ea9ee8dd67f565bc4ea643f9394d7612f02668f96d9a0e0f0371644851"
Oct 14 10:34:47 crc kubenswrapper[4698]: I1014 10:34:47.379012 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-dct5n"]
Oct 14 10:34:47 crc kubenswrapper[4698]: E1014 10:34:47.380127 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a6d6c363-baf0-41f5-958d-324ee1f97bbb" containerName="extract-content"
Oct 14 10:34:47 crc 
kubenswrapper[4698]: I1014 10:34:47.380146 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="a6d6c363-baf0-41f5-958d-324ee1f97bbb" containerName="extract-content" Oct 14 10:34:47 crc kubenswrapper[4698]: E1014 10:34:47.380170 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a6d6c363-baf0-41f5-958d-324ee1f97bbb" containerName="registry-server" Oct 14 10:34:47 crc kubenswrapper[4698]: I1014 10:34:47.380177 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="a6d6c363-baf0-41f5-958d-324ee1f97bbb" containerName="registry-server" Oct 14 10:34:47 crc kubenswrapper[4698]: E1014 10:34:47.380198 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a6d6c363-baf0-41f5-958d-324ee1f97bbb" containerName="extract-utilities" Oct 14 10:34:47 crc kubenswrapper[4698]: I1014 10:34:47.380205 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="a6d6c363-baf0-41f5-958d-324ee1f97bbb" containerName="extract-utilities" Oct 14 10:34:47 crc kubenswrapper[4698]: I1014 10:34:47.380470 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="a6d6c363-baf0-41f5-958d-324ee1f97bbb" containerName="registry-server" Oct 14 10:34:47 crc kubenswrapper[4698]: I1014 10:34:47.382363 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-dct5n" Oct 14 10:34:47 crc kubenswrapper[4698]: I1014 10:34:47.396562 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-dct5n"] Oct 14 10:34:47 crc kubenswrapper[4698]: I1014 10:34:47.474912 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b80acc04-5fc0-4b28-a44b-6dafa9fe47c2-catalog-content\") pod \"redhat-operators-dct5n\" (UID: \"b80acc04-5fc0-4b28-a44b-6dafa9fe47c2\") " pod="openshift-marketplace/redhat-operators-dct5n" Oct 14 10:34:47 crc kubenswrapper[4698]: I1014 10:34:47.474990 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sbb4r\" (UniqueName: \"kubernetes.io/projected/b80acc04-5fc0-4b28-a44b-6dafa9fe47c2-kube-api-access-sbb4r\") pod \"redhat-operators-dct5n\" (UID: \"b80acc04-5fc0-4b28-a44b-6dafa9fe47c2\") " pod="openshift-marketplace/redhat-operators-dct5n" Oct 14 10:34:47 crc kubenswrapper[4698]: I1014 10:34:47.475029 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b80acc04-5fc0-4b28-a44b-6dafa9fe47c2-utilities\") pod \"redhat-operators-dct5n\" (UID: \"b80acc04-5fc0-4b28-a44b-6dafa9fe47c2\") " pod="openshift-marketplace/redhat-operators-dct5n" Oct 14 10:34:47 crc kubenswrapper[4698]: I1014 10:34:47.577301 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b80acc04-5fc0-4b28-a44b-6dafa9fe47c2-catalog-content\") pod \"redhat-operators-dct5n\" (UID: \"b80acc04-5fc0-4b28-a44b-6dafa9fe47c2\") " pod="openshift-marketplace/redhat-operators-dct5n" Oct 14 10:34:47 crc kubenswrapper[4698]: I1014 10:34:47.577373 4698 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"kube-api-access-sbb4r\" (UniqueName: \"kubernetes.io/projected/b80acc04-5fc0-4b28-a44b-6dafa9fe47c2-kube-api-access-sbb4r\") pod \"redhat-operators-dct5n\" (UID: \"b80acc04-5fc0-4b28-a44b-6dafa9fe47c2\") " pod="openshift-marketplace/redhat-operators-dct5n" Oct 14 10:34:47 crc kubenswrapper[4698]: I1014 10:34:47.577421 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b80acc04-5fc0-4b28-a44b-6dafa9fe47c2-utilities\") pod \"redhat-operators-dct5n\" (UID: \"b80acc04-5fc0-4b28-a44b-6dafa9fe47c2\") " pod="openshift-marketplace/redhat-operators-dct5n" Oct 14 10:34:47 crc kubenswrapper[4698]: I1014 10:34:47.577892 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b80acc04-5fc0-4b28-a44b-6dafa9fe47c2-catalog-content\") pod \"redhat-operators-dct5n\" (UID: \"b80acc04-5fc0-4b28-a44b-6dafa9fe47c2\") " pod="openshift-marketplace/redhat-operators-dct5n" Oct 14 10:34:47 crc kubenswrapper[4698]: I1014 10:34:47.578046 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b80acc04-5fc0-4b28-a44b-6dafa9fe47c2-utilities\") pod \"redhat-operators-dct5n\" (UID: \"b80acc04-5fc0-4b28-a44b-6dafa9fe47c2\") " pod="openshift-marketplace/redhat-operators-dct5n" Oct 14 10:34:47 crc kubenswrapper[4698]: I1014 10:34:47.600157 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sbb4r\" (UniqueName: \"kubernetes.io/projected/b80acc04-5fc0-4b28-a44b-6dafa9fe47c2-kube-api-access-sbb4r\") pod \"redhat-operators-dct5n\" (UID: \"b80acc04-5fc0-4b28-a44b-6dafa9fe47c2\") " pod="openshift-marketplace/redhat-operators-dct5n" Oct 14 10:34:47 crc kubenswrapper[4698]: I1014 10:34:47.707931 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-dct5n" Oct 14 10:34:48 crc kubenswrapper[4698]: I1014 10:34:48.190664 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-dct5n"] Oct 14 10:34:48 crc kubenswrapper[4698]: I1014 10:34:48.863102 4698 generic.go:334] "Generic (PLEG): container finished" podID="b80acc04-5fc0-4b28-a44b-6dafa9fe47c2" containerID="bffd4e4d9a58f38ddd1418ee7c95fc690b3827baa486d1844dac1e9d787e84fb" exitCode=0 Oct 14 10:34:48 crc kubenswrapper[4698]: I1014 10:34:48.863158 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dct5n" event={"ID":"b80acc04-5fc0-4b28-a44b-6dafa9fe47c2","Type":"ContainerDied","Data":"bffd4e4d9a58f38ddd1418ee7c95fc690b3827baa486d1844dac1e9d787e84fb"} Oct 14 10:34:48 crc kubenswrapper[4698]: I1014 10:34:48.863434 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dct5n" event={"ID":"b80acc04-5fc0-4b28-a44b-6dafa9fe47c2","Type":"ContainerStarted","Data":"3529c2a89f6df7bda51c3e3dac22b649938307771f8ed52465b31ac8e9a8e14b"} Oct 14 10:34:48 crc kubenswrapper[4698]: I1014 10:34:48.865753 4698 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Oct 14 10:34:49 crc kubenswrapper[4698]: I1014 10:34:49.879106 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dct5n" event={"ID":"b80acc04-5fc0-4b28-a44b-6dafa9fe47c2","Type":"ContainerStarted","Data":"6f01372184b0ba25af4d7626ab5f04fbebd74f65230c92700a3fda5ebeed5523"} Oct 14 10:34:50 crc kubenswrapper[4698]: I1014 10:34:50.892583 4698 generic.go:334] "Generic (PLEG): container finished" podID="b80acc04-5fc0-4b28-a44b-6dafa9fe47c2" containerID="6f01372184b0ba25af4d7626ab5f04fbebd74f65230c92700a3fda5ebeed5523" exitCode=0 Oct 14 10:34:50 crc kubenswrapper[4698]: I1014 10:34:50.892944 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-operators-dct5n" event={"ID":"b80acc04-5fc0-4b28-a44b-6dafa9fe47c2","Type":"ContainerDied","Data":"6f01372184b0ba25af4d7626ab5f04fbebd74f65230c92700a3fda5ebeed5523"} Oct 14 10:34:52 crc kubenswrapper[4698]: I1014 10:34:52.925492 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dct5n" event={"ID":"b80acc04-5fc0-4b28-a44b-6dafa9fe47c2","Type":"ContainerStarted","Data":"80f37aa10da90357c0aaadee3e813349c53a5cd272f397ab4a9c88d7b34418e7"} Oct 14 10:34:52 crc kubenswrapper[4698]: I1014 10:34:52.963256 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-dct5n" podStartSLOduration=3.5141882620000002 podStartE2EDuration="5.963221232s" podCreationTimestamp="2025-10-14 10:34:47 +0000 UTC" firstStartedPulling="2025-10-14 10:34:48.865538547 +0000 UTC m=+2270.562837963" lastFinishedPulling="2025-10-14 10:34:51.314571477 +0000 UTC m=+2273.011870933" observedRunningTime="2025-10-14 10:34:52.953542913 +0000 UTC m=+2274.650842369" watchObservedRunningTime="2025-10-14 10:34:52.963221232 +0000 UTC m=+2274.660520678" Oct 14 10:34:57 crc kubenswrapper[4698]: I1014 10:34:57.708130 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-dct5n" Oct 14 10:34:57 crc kubenswrapper[4698]: I1014 10:34:57.708878 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-dct5n" Oct 14 10:34:57 crc kubenswrapper[4698]: I1014 10:34:57.757938 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-dct5n" Oct 14 10:34:58 crc kubenswrapper[4698]: I1014 10:34:58.040658 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-dct5n" Oct 14 10:34:58 crc kubenswrapper[4698]: I1014 10:34:58.098411 4698 kubelet.go:2437] 
"SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-dct5n"] Oct 14 10:34:59 crc kubenswrapper[4698]: I1014 10:34:59.995569 4698 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-dct5n" podUID="b80acc04-5fc0-4b28-a44b-6dafa9fe47c2" containerName="registry-server" containerID="cri-o://80f37aa10da90357c0aaadee3e813349c53a5cd272f397ab4a9c88d7b34418e7" gracePeriod=2 Oct 14 10:35:00 crc kubenswrapper[4698]: E1014 10:35:00.218992 4698 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb80acc04_5fc0_4b28_a44b_6dafa9fe47c2.slice/crio-80f37aa10da90357c0aaadee3e813349c53a5cd272f397ab4a9c88d7b34418e7.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb80acc04_5fc0_4b28_a44b_6dafa9fe47c2.slice/crio-conmon-80f37aa10da90357c0aaadee3e813349c53a5cd272f397ab4a9c88d7b34418e7.scope\": RecentStats: unable to find data in memory cache]" Oct 14 10:35:00 crc kubenswrapper[4698]: I1014 10:35:00.507694 4698 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-dct5n" Oct 14 10:35:00 crc kubenswrapper[4698]: I1014 10:35:00.598811 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b80acc04-5fc0-4b28-a44b-6dafa9fe47c2-utilities\") pod \"b80acc04-5fc0-4b28-a44b-6dafa9fe47c2\" (UID: \"b80acc04-5fc0-4b28-a44b-6dafa9fe47c2\") " Oct 14 10:35:00 crc kubenswrapper[4698]: I1014 10:35:00.598921 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b80acc04-5fc0-4b28-a44b-6dafa9fe47c2-catalog-content\") pod \"b80acc04-5fc0-4b28-a44b-6dafa9fe47c2\" (UID: \"b80acc04-5fc0-4b28-a44b-6dafa9fe47c2\") " Oct 14 10:35:00 crc kubenswrapper[4698]: I1014 10:35:00.599071 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sbb4r\" (UniqueName: \"kubernetes.io/projected/b80acc04-5fc0-4b28-a44b-6dafa9fe47c2-kube-api-access-sbb4r\") pod \"b80acc04-5fc0-4b28-a44b-6dafa9fe47c2\" (UID: \"b80acc04-5fc0-4b28-a44b-6dafa9fe47c2\") " Oct 14 10:35:00 crc kubenswrapper[4698]: I1014 10:35:00.599706 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b80acc04-5fc0-4b28-a44b-6dafa9fe47c2-utilities" (OuterVolumeSpecName: "utilities") pod "b80acc04-5fc0-4b28-a44b-6dafa9fe47c2" (UID: "b80acc04-5fc0-4b28-a44b-6dafa9fe47c2"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 10:35:00 crc kubenswrapper[4698]: I1014 10:35:00.606750 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b80acc04-5fc0-4b28-a44b-6dafa9fe47c2-kube-api-access-sbb4r" (OuterVolumeSpecName: "kube-api-access-sbb4r") pod "b80acc04-5fc0-4b28-a44b-6dafa9fe47c2" (UID: "b80acc04-5fc0-4b28-a44b-6dafa9fe47c2"). InnerVolumeSpecName "kube-api-access-sbb4r". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 10:35:00 crc kubenswrapper[4698]: I1014 10:35:00.706395 4698 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b80acc04-5fc0-4b28-a44b-6dafa9fe47c2-utilities\") on node \"crc\" DevicePath \"\"" Oct 14 10:35:00 crc kubenswrapper[4698]: I1014 10:35:00.706437 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sbb4r\" (UniqueName: \"kubernetes.io/projected/b80acc04-5fc0-4b28-a44b-6dafa9fe47c2-kube-api-access-sbb4r\") on node \"crc\" DevicePath \"\"" Oct 14 10:35:00 crc kubenswrapper[4698]: I1014 10:35:00.742166 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b80acc04-5fc0-4b28-a44b-6dafa9fe47c2-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b80acc04-5fc0-4b28-a44b-6dafa9fe47c2" (UID: "b80acc04-5fc0-4b28-a44b-6dafa9fe47c2"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 10:35:00 crc kubenswrapper[4698]: I1014 10:35:00.808075 4698 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b80acc04-5fc0-4b28-a44b-6dafa9fe47c2-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 14 10:35:01 crc kubenswrapper[4698]: I1014 10:35:01.007537 4698 generic.go:334] "Generic (PLEG): container finished" podID="b80acc04-5fc0-4b28-a44b-6dafa9fe47c2" containerID="80f37aa10da90357c0aaadee3e813349c53a5cd272f397ab4a9c88d7b34418e7" exitCode=0 Oct 14 10:35:01 crc kubenswrapper[4698]: I1014 10:35:01.007580 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dct5n" event={"ID":"b80acc04-5fc0-4b28-a44b-6dafa9fe47c2","Type":"ContainerDied","Data":"80f37aa10da90357c0aaadee3e813349c53a5cd272f397ab4a9c88d7b34418e7"} Oct 14 10:35:01 crc kubenswrapper[4698]: I1014 10:35:01.007608 4698 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-marketplace/redhat-operators-dct5n" event={"ID":"b80acc04-5fc0-4b28-a44b-6dafa9fe47c2","Type":"ContainerDied","Data":"3529c2a89f6df7bda51c3e3dac22b649938307771f8ed52465b31ac8e9a8e14b"} Oct 14 10:35:01 crc kubenswrapper[4698]: I1014 10:35:01.007628 4698 scope.go:117] "RemoveContainer" containerID="80f37aa10da90357c0aaadee3e813349c53a5cd272f397ab4a9c88d7b34418e7" Oct 14 10:35:01 crc kubenswrapper[4698]: I1014 10:35:01.007744 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-dct5n" Oct 14 10:35:01 crc kubenswrapper[4698]: I1014 10:35:01.040367 4698 scope.go:117] "RemoveContainer" containerID="6f01372184b0ba25af4d7626ab5f04fbebd74f65230c92700a3fda5ebeed5523" Oct 14 10:35:01 crc kubenswrapper[4698]: I1014 10:35:01.044302 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-dct5n"] Oct 14 10:35:01 crc kubenswrapper[4698]: I1014 10:35:01.051821 4698 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-dct5n"] Oct 14 10:35:01 crc kubenswrapper[4698]: I1014 10:35:01.075251 4698 scope.go:117] "RemoveContainer" containerID="bffd4e4d9a58f38ddd1418ee7c95fc690b3827baa486d1844dac1e9d787e84fb" Oct 14 10:35:01 crc kubenswrapper[4698]: I1014 10:35:01.122226 4698 scope.go:117] "RemoveContainer" containerID="80f37aa10da90357c0aaadee3e813349c53a5cd272f397ab4a9c88d7b34418e7" Oct 14 10:35:01 crc kubenswrapper[4698]: E1014 10:35:01.123488 4698 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"80f37aa10da90357c0aaadee3e813349c53a5cd272f397ab4a9c88d7b34418e7\": container with ID starting with 80f37aa10da90357c0aaadee3e813349c53a5cd272f397ab4a9c88d7b34418e7 not found: ID does not exist" containerID="80f37aa10da90357c0aaadee3e813349c53a5cd272f397ab4a9c88d7b34418e7" Oct 14 10:35:01 crc kubenswrapper[4698]: I1014 10:35:01.123717 4698 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"80f37aa10da90357c0aaadee3e813349c53a5cd272f397ab4a9c88d7b34418e7"} err="failed to get container status \"80f37aa10da90357c0aaadee3e813349c53a5cd272f397ab4a9c88d7b34418e7\": rpc error: code = NotFound desc = could not find container \"80f37aa10da90357c0aaadee3e813349c53a5cd272f397ab4a9c88d7b34418e7\": container with ID starting with 80f37aa10da90357c0aaadee3e813349c53a5cd272f397ab4a9c88d7b34418e7 not found: ID does not exist" Oct 14 10:35:01 crc kubenswrapper[4698]: I1014 10:35:01.123872 4698 scope.go:117] "RemoveContainer" containerID="6f01372184b0ba25af4d7626ab5f04fbebd74f65230c92700a3fda5ebeed5523" Oct 14 10:35:01 crc kubenswrapper[4698]: E1014 10:35:01.124699 4698 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6f01372184b0ba25af4d7626ab5f04fbebd74f65230c92700a3fda5ebeed5523\": container with ID starting with 6f01372184b0ba25af4d7626ab5f04fbebd74f65230c92700a3fda5ebeed5523 not found: ID does not exist" containerID="6f01372184b0ba25af4d7626ab5f04fbebd74f65230c92700a3fda5ebeed5523" Oct 14 10:35:01 crc kubenswrapper[4698]: I1014 10:35:01.124891 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6f01372184b0ba25af4d7626ab5f04fbebd74f65230c92700a3fda5ebeed5523"} err="failed to get container status \"6f01372184b0ba25af4d7626ab5f04fbebd74f65230c92700a3fda5ebeed5523\": rpc error: code = NotFound desc = could not find container \"6f01372184b0ba25af4d7626ab5f04fbebd74f65230c92700a3fda5ebeed5523\": container with ID starting with 6f01372184b0ba25af4d7626ab5f04fbebd74f65230c92700a3fda5ebeed5523 not found: ID does not exist" Oct 14 10:35:01 crc kubenswrapper[4698]: I1014 10:35:01.125031 4698 scope.go:117] "RemoveContainer" containerID="bffd4e4d9a58f38ddd1418ee7c95fc690b3827baa486d1844dac1e9d787e84fb" Oct 14 10:35:01 crc kubenswrapper[4698]: E1014 
10:35:01.125665 4698 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bffd4e4d9a58f38ddd1418ee7c95fc690b3827baa486d1844dac1e9d787e84fb\": container with ID starting with bffd4e4d9a58f38ddd1418ee7c95fc690b3827baa486d1844dac1e9d787e84fb not found: ID does not exist" containerID="bffd4e4d9a58f38ddd1418ee7c95fc690b3827baa486d1844dac1e9d787e84fb" Oct 14 10:35:01 crc kubenswrapper[4698]: I1014 10:35:01.125822 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bffd4e4d9a58f38ddd1418ee7c95fc690b3827baa486d1844dac1e9d787e84fb"} err="failed to get container status \"bffd4e4d9a58f38ddd1418ee7c95fc690b3827baa486d1844dac1e9d787e84fb\": rpc error: code = NotFound desc = could not find container \"bffd4e4d9a58f38ddd1418ee7c95fc690b3827baa486d1844dac1e9d787e84fb\": container with ID starting with bffd4e4d9a58f38ddd1418ee7c95fc690b3827baa486d1844dac1e9d787e84fb not found: ID does not exist" Oct 14 10:35:03 crc kubenswrapper[4698]: I1014 10:35:03.029533 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b80acc04-5fc0-4b28-a44b-6dafa9fe47c2" path="/var/lib/kubelet/pods/b80acc04-5fc0-4b28-a44b-6dafa9fe47c2/volumes" Oct 14 10:35:53 crc kubenswrapper[4698]: I1014 10:35:53.921099 4698 patch_prober.go:28] interesting pod/machine-config-daemon-lp4sk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 14 10:35:53 crc kubenswrapper[4698]: I1014 10:35:53.921696 4698 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-lp4sk" podUID="c359a8fc-1e2f-49af-8da2-719d52bd969a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection 
refused" Oct 14 10:36:11 crc kubenswrapper[4698]: I1014 10:36:11.791204 4698 generic.go:334] "Generic (PLEG): container finished" podID="141d36f8-e9f9-4959-8f0c-09c649350547" containerID="5072f12cab676bc7f6bb8a112932eeee350b93cd3c69abd4a86065e9bffcca00" exitCode=0 Oct 14 10:36:11 crc kubenswrapper[4698]: I1014 10:36:11.791351 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-kj2jl" event={"ID":"141d36f8-e9f9-4959-8f0c-09c649350547","Type":"ContainerDied","Data":"5072f12cab676bc7f6bb8a112932eeee350b93cd3c69abd4a86065e9bffcca00"} Oct 14 10:36:13 crc kubenswrapper[4698]: I1014 10:36:13.252167 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-kj2jl" Oct 14 10:36:13 crc kubenswrapper[4698]: I1014 10:36:13.262113 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/141d36f8-e9f9-4959-8f0c-09c649350547-libvirt-secret-0\") pod \"141d36f8-e9f9-4959-8f0c-09c649350547\" (UID: \"141d36f8-e9f9-4959-8f0c-09c649350547\") " Oct 14 10:36:13 crc kubenswrapper[4698]: I1014 10:36:13.262171 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/141d36f8-e9f9-4959-8f0c-09c649350547-ssh-key\") pod \"141d36f8-e9f9-4959-8f0c-09c649350547\" (UID: \"141d36f8-e9f9-4959-8f0c-09c649350547\") " Oct 14 10:36:13 crc kubenswrapper[4698]: I1014 10:36:13.262221 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/141d36f8-e9f9-4959-8f0c-09c649350547-libvirt-combined-ca-bundle\") pod \"141d36f8-e9f9-4959-8f0c-09c649350547\" (UID: \"141d36f8-e9f9-4959-8f0c-09c649350547\") " Oct 14 10:36:13 crc kubenswrapper[4698]: I1014 10:36:13.262370 4698 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-6czb8\" (UniqueName: \"kubernetes.io/projected/141d36f8-e9f9-4959-8f0c-09c649350547-kube-api-access-6czb8\") pod \"141d36f8-e9f9-4959-8f0c-09c649350547\" (UID: \"141d36f8-e9f9-4959-8f0c-09c649350547\") " Oct 14 10:36:13 crc kubenswrapper[4698]: I1014 10:36:13.271533 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/141d36f8-e9f9-4959-8f0c-09c649350547-kube-api-access-6czb8" (OuterVolumeSpecName: "kube-api-access-6czb8") pod "141d36f8-e9f9-4959-8f0c-09c649350547" (UID: "141d36f8-e9f9-4959-8f0c-09c649350547"). InnerVolumeSpecName "kube-api-access-6czb8". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 10:36:13 crc kubenswrapper[4698]: I1014 10:36:13.284977 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/141d36f8-e9f9-4959-8f0c-09c649350547-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "141d36f8-e9f9-4959-8f0c-09c649350547" (UID: "141d36f8-e9f9-4959-8f0c-09c649350547"). InnerVolumeSpecName "libvirt-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 10:36:13 crc kubenswrapper[4698]: I1014 10:36:13.296408 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/141d36f8-e9f9-4959-8f0c-09c649350547-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "141d36f8-e9f9-4959-8f0c-09c649350547" (UID: "141d36f8-e9f9-4959-8f0c-09c649350547"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 10:36:13 crc kubenswrapper[4698]: I1014 10:36:13.296546 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/141d36f8-e9f9-4959-8f0c-09c649350547-libvirt-secret-0" (OuterVolumeSpecName: "libvirt-secret-0") pod "141d36f8-e9f9-4959-8f0c-09c649350547" (UID: "141d36f8-e9f9-4959-8f0c-09c649350547"). 
InnerVolumeSpecName "libvirt-secret-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 10:36:13 crc kubenswrapper[4698]: I1014 10:36:13.364436 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/141d36f8-e9f9-4959-8f0c-09c649350547-inventory\") pod \"141d36f8-e9f9-4959-8f0c-09c649350547\" (UID: \"141d36f8-e9f9-4959-8f0c-09c649350547\") " Oct 14 10:36:13 crc kubenswrapper[4698]: I1014 10:36:13.365507 4698 reconciler_common.go:293] "Volume detached for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/141d36f8-e9f9-4959-8f0c-09c649350547-libvirt-secret-0\") on node \"crc\" DevicePath \"\"" Oct 14 10:36:13 crc kubenswrapper[4698]: I1014 10:36:13.365550 4698 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/141d36f8-e9f9-4959-8f0c-09c649350547-ssh-key\") on node \"crc\" DevicePath \"\"" Oct 14 10:36:13 crc kubenswrapper[4698]: I1014 10:36:13.365580 4698 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/141d36f8-e9f9-4959-8f0c-09c649350547-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 14 10:36:13 crc kubenswrapper[4698]: I1014 10:36:13.365606 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6czb8\" (UniqueName: \"kubernetes.io/projected/141d36f8-e9f9-4959-8f0c-09c649350547-kube-api-access-6czb8\") on node \"crc\" DevicePath \"\"" Oct 14 10:36:13 crc kubenswrapper[4698]: I1014 10:36:13.388723 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/141d36f8-e9f9-4959-8f0c-09c649350547-inventory" (OuterVolumeSpecName: "inventory") pod "141d36f8-e9f9-4959-8f0c-09c649350547" (UID: "141d36f8-e9f9-4959-8f0c-09c649350547"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 10:36:13 crc kubenswrapper[4698]: I1014 10:36:13.467317 4698 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/141d36f8-e9f9-4959-8f0c-09c649350547-inventory\") on node \"crc\" DevicePath \"\"" Oct 14 10:36:13 crc kubenswrapper[4698]: I1014 10:36:13.812683 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-kj2jl" event={"ID":"141d36f8-e9f9-4959-8f0c-09c649350547","Type":"ContainerDied","Data":"2ed4c51f485003da24aee0aa78dd2c2948576ddba7d7975ab4898077bb74f848"} Oct 14 10:36:13 crc kubenswrapper[4698]: I1014 10:36:13.812734 4698 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2ed4c51f485003da24aee0aa78dd2c2948576ddba7d7975ab4898077bb74f848" Oct 14 10:36:13 crc kubenswrapper[4698]: I1014 10:36:13.812803 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-kj2jl" Oct 14 10:36:13 crc kubenswrapper[4698]: I1014 10:36:13.926188 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-pt8b6"] Oct 14 10:36:13 crc kubenswrapper[4698]: E1014 10:36:13.927051 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b80acc04-5fc0-4b28-a44b-6dafa9fe47c2" containerName="extract-content" Oct 14 10:36:13 crc kubenswrapper[4698]: I1014 10:36:13.927078 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="b80acc04-5fc0-4b28-a44b-6dafa9fe47c2" containerName="extract-content" Oct 14 10:36:13 crc kubenswrapper[4698]: E1014 10:36:13.927097 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="141d36f8-e9f9-4959-8f0c-09c649350547" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Oct 14 10:36:13 crc kubenswrapper[4698]: I1014 10:36:13.927107 4698 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="141d36f8-e9f9-4959-8f0c-09c649350547" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Oct 14 10:36:13 crc kubenswrapper[4698]: E1014 10:36:13.927132 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b80acc04-5fc0-4b28-a44b-6dafa9fe47c2" containerName="registry-server" Oct 14 10:36:13 crc kubenswrapper[4698]: I1014 10:36:13.927141 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="b80acc04-5fc0-4b28-a44b-6dafa9fe47c2" containerName="registry-server" Oct 14 10:36:13 crc kubenswrapper[4698]: E1014 10:36:13.927201 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b80acc04-5fc0-4b28-a44b-6dafa9fe47c2" containerName="extract-utilities" Oct 14 10:36:13 crc kubenswrapper[4698]: I1014 10:36:13.927210 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="b80acc04-5fc0-4b28-a44b-6dafa9fe47c2" containerName="extract-utilities" Oct 14 10:36:13 crc kubenswrapper[4698]: I1014 10:36:13.927416 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="141d36f8-e9f9-4959-8f0c-09c649350547" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Oct 14 10:36:13 crc kubenswrapper[4698]: I1014 10:36:13.927441 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="b80acc04-5fc0-4b28-a44b-6dafa9fe47c2" containerName="registry-server" Oct 14 10:36:13 crc kubenswrapper[4698]: I1014 10:36:13.928139 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-pt8b6" Oct 14 10:36:13 crc kubenswrapper[4698]: I1014 10:36:13.930122 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-migration-ssh-key" Oct 14 10:36:13 crc kubenswrapper[4698]: I1014 10:36:13.930322 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Oct 14 10:36:13 crc kubenswrapper[4698]: I1014 10:36:13.930695 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"nova-extra-config" Oct 14 10:36:13 crc kubenswrapper[4698]: I1014 10:36:13.930905 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Oct 14 10:36:13 crc kubenswrapper[4698]: I1014 10:36:13.931437 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Oct 14 10:36:13 crc kubenswrapper[4698]: I1014 10:36:13.931682 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-q5blv" Oct 14 10:36:13 crc kubenswrapper[4698]: I1014 10:36:13.932395 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-compute-config" Oct 14 10:36:13 crc kubenswrapper[4698]: I1014 10:36:13.946729 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-pt8b6"] Oct 14 10:36:14 crc kubenswrapper[4698]: I1014 10:36:14.100525 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kpwzb\" (UniqueName: \"kubernetes.io/projected/b25db8a8-2e32-4634-b5e6-b21d7497c0ca-kube-api-access-kpwzb\") pod \"nova-edpm-deployment-openstack-edpm-ipam-pt8b6\" (UID: \"b25db8a8-2e32-4634-b5e6-b21d7497c0ca\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-pt8b6" Oct 14 10:36:14 crc kubenswrapper[4698]: I1014 
10:36:14.100826 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/b25db8a8-2e32-4634-b5e6-b21d7497c0ca-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-pt8b6\" (UID: \"b25db8a8-2e32-4634-b5e6-b21d7497c0ca\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-pt8b6" Oct 14 10:36:14 crc kubenswrapper[4698]: I1014 10:36:14.100906 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/b25db8a8-2e32-4634-b5e6-b21d7497c0ca-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-pt8b6\" (UID: \"b25db8a8-2e32-4634-b5e6-b21d7497c0ca\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-pt8b6" Oct 14 10:36:14 crc kubenswrapper[4698]: I1014 10:36:14.100942 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/b25db8a8-2e32-4634-b5e6-b21d7497c0ca-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-pt8b6\" (UID: \"b25db8a8-2e32-4634-b5e6-b21d7497c0ca\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-pt8b6" Oct 14 10:36:14 crc kubenswrapper[4698]: I1014 10:36:14.101080 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/b25db8a8-2e32-4634-b5e6-b21d7497c0ca-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-pt8b6\" (UID: \"b25db8a8-2e32-4634-b5e6-b21d7497c0ca\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-pt8b6" Oct 14 10:36:14 crc kubenswrapper[4698]: I1014 10:36:14.101206 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: 
\"kubernetes.io/secret/b25db8a8-2e32-4634-b5e6-b21d7497c0ca-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-pt8b6\" (UID: \"b25db8a8-2e32-4634-b5e6-b21d7497c0ca\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-pt8b6" Oct 14 10:36:14 crc kubenswrapper[4698]: I1014 10:36:14.101266 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b25db8a8-2e32-4634-b5e6-b21d7497c0ca-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-pt8b6\" (UID: \"b25db8a8-2e32-4634-b5e6-b21d7497c0ca\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-pt8b6" Oct 14 10:36:14 crc kubenswrapper[4698]: I1014 10:36:14.101422 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/b25db8a8-2e32-4634-b5e6-b21d7497c0ca-ssh-key\") pod \"nova-edpm-deployment-openstack-edpm-ipam-pt8b6\" (UID: \"b25db8a8-2e32-4634-b5e6-b21d7497c0ca\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-pt8b6" Oct 14 10:36:14 crc kubenswrapper[4698]: I1014 10:36:14.101461 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/b25db8a8-2e32-4634-b5e6-b21d7497c0ca-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-pt8b6\" (UID: \"b25db8a8-2e32-4634-b5e6-b21d7497c0ca\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-pt8b6" Oct 14 10:36:14 crc kubenswrapper[4698]: I1014 10:36:14.202906 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kpwzb\" (UniqueName: \"kubernetes.io/projected/b25db8a8-2e32-4634-b5e6-b21d7497c0ca-kube-api-access-kpwzb\") pod \"nova-edpm-deployment-openstack-edpm-ipam-pt8b6\" (UID: \"b25db8a8-2e32-4634-b5e6-b21d7497c0ca\") " 
pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-pt8b6" Oct 14 10:36:14 crc kubenswrapper[4698]: I1014 10:36:14.202971 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/b25db8a8-2e32-4634-b5e6-b21d7497c0ca-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-pt8b6\" (UID: \"b25db8a8-2e32-4634-b5e6-b21d7497c0ca\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-pt8b6" Oct 14 10:36:14 crc kubenswrapper[4698]: I1014 10:36:14.202999 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/b25db8a8-2e32-4634-b5e6-b21d7497c0ca-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-pt8b6\" (UID: \"b25db8a8-2e32-4634-b5e6-b21d7497c0ca\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-pt8b6" Oct 14 10:36:14 crc kubenswrapper[4698]: I1014 10:36:14.203016 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/b25db8a8-2e32-4634-b5e6-b21d7497c0ca-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-pt8b6\" (UID: \"b25db8a8-2e32-4634-b5e6-b21d7497c0ca\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-pt8b6" Oct 14 10:36:14 crc kubenswrapper[4698]: I1014 10:36:14.203057 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/b25db8a8-2e32-4634-b5e6-b21d7497c0ca-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-pt8b6\" (UID: \"b25db8a8-2e32-4634-b5e6-b21d7497c0ca\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-pt8b6" Oct 14 10:36:14 crc kubenswrapper[4698]: I1014 10:36:14.203187 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" 
(UniqueName: \"kubernetes.io/secret/b25db8a8-2e32-4634-b5e6-b21d7497c0ca-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-pt8b6\" (UID: \"b25db8a8-2e32-4634-b5e6-b21d7497c0ca\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-pt8b6" Oct 14 10:36:14 crc kubenswrapper[4698]: I1014 10:36:14.203791 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/b25db8a8-2e32-4634-b5e6-b21d7497c0ca-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-pt8b6\" (UID: \"b25db8a8-2e32-4634-b5e6-b21d7497c0ca\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-pt8b6" Oct 14 10:36:14 crc kubenswrapper[4698]: I1014 10:36:14.204121 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b25db8a8-2e32-4634-b5e6-b21d7497c0ca-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-pt8b6\" (UID: \"b25db8a8-2e32-4634-b5e6-b21d7497c0ca\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-pt8b6" Oct 14 10:36:14 crc kubenswrapper[4698]: I1014 10:36:14.204204 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/b25db8a8-2e32-4634-b5e6-b21d7497c0ca-ssh-key\") pod \"nova-edpm-deployment-openstack-edpm-ipam-pt8b6\" (UID: \"b25db8a8-2e32-4634-b5e6-b21d7497c0ca\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-pt8b6" Oct 14 10:36:14 crc kubenswrapper[4698]: I1014 10:36:14.204227 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/b25db8a8-2e32-4634-b5e6-b21d7497c0ca-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-pt8b6\" (UID: \"b25db8a8-2e32-4634-b5e6-b21d7497c0ca\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-pt8b6" Oct 14 10:36:14 crc 
kubenswrapper[4698]: I1014 10:36:14.207886 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/b25db8a8-2e32-4634-b5e6-b21d7497c0ca-ssh-key\") pod \"nova-edpm-deployment-openstack-edpm-ipam-pt8b6\" (UID: \"b25db8a8-2e32-4634-b5e6-b21d7497c0ca\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-pt8b6" Oct 14 10:36:14 crc kubenswrapper[4698]: I1014 10:36:14.207908 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/b25db8a8-2e32-4634-b5e6-b21d7497c0ca-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-pt8b6\" (UID: \"b25db8a8-2e32-4634-b5e6-b21d7497c0ca\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-pt8b6" Oct 14 10:36:14 crc kubenswrapper[4698]: I1014 10:36:14.208389 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b25db8a8-2e32-4634-b5e6-b21d7497c0ca-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-pt8b6\" (UID: \"b25db8a8-2e32-4634-b5e6-b21d7497c0ca\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-pt8b6" Oct 14 10:36:14 crc kubenswrapper[4698]: I1014 10:36:14.209170 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b25db8a8-2e32-4634-b5e6-b21d7497c0ca-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-pt8b6\" (UID: \"b25db8a8-2e32-4634-b5e6-b21d7497c0ca\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-pt8b6" Oct 14 10:36:14 crc kubenswrapper[4698]: I1014 10:36:14.210048 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/b25db8a8-2e32-4634-b5e6-b21d7497c0ca-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-pt8b6\" (UID: 
\"b25db8a8-2e32-4634-b5e6-b21d7497c0ca\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-pt8b6" Oct 14 10:36:14 crc kubenswrapper[4698]: I1014 10:36:14.215374 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/b25db8a8-2e32-4634-b5e6-b21d7497c0ca-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-pt8b6\" (UID: \"b25db8a8-2e32-4634-b5e6-b21d7497c0ca\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-pt8b6" Oct 14 10:36:14 crc kubenswrapper[4698]: I1014 10:36:14.221264 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/b25db8a8-2e32-4634-b5e6-b21d7497c0ca-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-pt8b6\" (UID: \"b25db8a8-2e32-4634-b5e6-b21d7497c0ca\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-pt8b6" Oct 14 10:36:14 crc kubenswrapper[4698]: I1014 10:36:14.223459 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kpwzb\" (UniqueName: \"kubernetes.io/projected/b25db8a8-2e32-4634-b5e6-b21d7497c0ca-kube-api-access-kpwzb\") pod \"nova-edpm-deployment-openstack-edpm-ipam-pt8b6\" (UID: \"b25db8a8-2e32-4634-b5e6-b21d7497c0ca\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-pt8b6" Oct 14 10:36:14 crc kubenswrapper[4698]: I1014 10:36:14.259359 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-pt8b6" Oct 14 10:36:14 crc kubenswrapper[4698]: I1014 10:36:14.791292 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-pt8b6"] Oct 14 10:36:14 crc kubenswrapper[4698]: I1014 10:36:14.823515 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-pt8b6" event={"ID":"b25db8a8-2e32-4634-b5e6-b21d7497c0ca","Type":"ContainerStarted","Data":"3ca57aa3dffc5f6945c2d69701ab9db5fa066b0a4b041e8d0bd14059e423867a"} Oct 14 10:36:15 crc kubenswrapper[4698]: I1014 10:36:15.836431 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-pt8b6" event={"ID":"b25db8a8-2e32-4634-b5e6-b21d7497c0ca","Type":"ContainerStarted","Data":"4b1e2909f1279db8b64e74fe52a1f1c04efdef6c0090a898bc5e1749e5befbfc"} Oct 14 10:36:15 crc kubenswrapper[4698]: I1014 10:36:15.877192 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-pt8b6" podStartSLOduration=2.330530232 podStartE2EDuration="2.877165226s" podCreationTimestamp="2025-10-14 10:36:13 +0000 UTC" firstStartedPulling="2025-10-14 10:36:14.799545061 +0000 UTC m=+2356.496844477" lastFinishedPulling="2025-10-14 10:36:15.346180045 +0000 UTC m=+2357.043479471" observedRunningTime="2025-10-14 10:36:15.86937528 +0000 UTC m=+2357.566674726" watchObservedRunningTime="2025-10-14 10:36:15.877165226 +0000 UTC m=+2357.574464652" Oct 14 10:36:23 crc kubenswrapper[4698]: I1014 10:36:23.909055 4698 patch_prober.go:28] interesting pod/machine-config-daemon-lp4sk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 14 10:36:23 crc kubenswrapper[4698]: I1014 10:36:23.910035 
4698 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-lp4sk" podUID="c359a8fc-1e2f-49af-8da2-719d52bd969a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 14 10:36:53 crc kubenswrapper[4698]: I1014 10:36:53.908887 4698 patch_prober.go:28] interesting pod/machine-config-daemon-lp4sk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 14 10:36:53 crc kubenswrapper[4698]: I1014 10:36:53.909709 4698 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-lp4sk" podUID="c359a8fc-1e2f-49af-8da2-719d52bd969a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 14 10:36:53 crc kubenswrapper[4698]: I1014 10:36:53.909866 4698 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-lp4sk" Oct 14 10:36:53 crc kubenswrapper[4698]: I1014 10:36:53.910965 4698 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"713fd828a91f058b8f8f5d1a0505fb27eb15167a5507115b649af61ccd244ca8"} pod="openshift-machine-config-operator/machine-config-daemon-lp4sk" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Oct 14 10:36:53 crc kubenswrapper[4698]: I1014 10:36:53.911064 4698 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-lp4sk" podUID="c359a8fc-1e2f-49af-8da2-719d52bd969a" 
containerName="machine-config-daemon" containerID="cri-o://713fd828a91f058b8f8f5d1a0505fb27eb15167a5507115b649af61ccd244ca8" gracePeriod=600 Oct 14 10:36:54 crc kubenswrapper[4698]: E1014 10:36:54.042517 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lp4sk_openshift-machine-config-operator(c359a8fc-1e2f-49af-8da2-719d52bd969a)\"" pod="openshift-machine-config-operator/machine-config-daemon-lp4sk" podUID="c359a8fc-1e2f-49af-8da2-719d52bd969a" Oct 14 10:36:54 crc kubenswrapper[4698]: I1014 10:36:54.306232 4698 generic.go:334] "Generic (PLEG): container finished" podID="c359a8fc-1e2f-49af-8da2-719d52bd969a" containerID="713fd828a91f058b8f8f5d1a0505fb27eb15167a5507115b649af61ccd244ca8" exitCode=0 Oct 14 10:36:54 crc kubenswrapper[4698]: I1014 10:36:54.306297 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-lp4sk" event={"ID":"c359a8fc-1e2f-49af-8da2-719d52bd969a","Type":"ContainerDied","Data":"713fd828a91f058b8f8f5d1a0505fb27eb15167a5507115b649af61ccd244ca8"} Oct 14 10:36:54 crc kubenswrapper[4698]: I1014 10:36:54.306339 4698 scope.go:117] "RemoveContainer" containerID="767e1d443156381bb7f70fbe387aad4a1fb034afb977892926a7c60b7cf8b968" Oct 14 10:36:54 crc kubenswrapper[4698]: I1014 10:36:54.307242 4698 scope.go:117] "RemoveContainer" containerID="713fd828a91f058b8f8f5d1a0505fb27eb15167a5507115b649af61ccd244ca8" Oct 14 10:36:54 crc kubenswrapper[4698]: E1014 10:36:54.307687 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lp4sk_openshift-machine-config-operator(c359a8fc-1e2f-49af-8da2-719d52bd969a)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-lp4sk" podUID="c359a8fc-1e2f-49af-8da2-719d52bd969a" Oct 14 10:37:09 crc kubenswrapper[4698]: I1014 10:37:09.024272 4698 scope.go:117] "RemoveContainer" containerID="713fd828a91f058b8f8f5d1a0505fb27eb15167a5507115b649af61ccd244ca8" Oct 14 10:37:09 crc kubenswrapper[4698]: E1014 10:37:09.025099 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lp4sk_openshift-machine-config-operator(c359a8fc-1e2f-49af-8da2-719d52bd969a)\"" pod="openshift-machine-config-operator/machine-config-daemon-lp4sk" podUID="c359a8fc-1e2f-49af-8da2-719d52bd969a" Oct 14 10:37:24 crc kubenswrapper[4698]: I1014 10:37:24.017300 4698 scope.go:117] "RemoveContainer" containerID="713fd828a91f058b8f8f5d1a0505fb27eb15167a5507115b649af61ccd244ca8" Oct 14 10:37:24 crc kubenswrapper[4698]: E1014 10:37:24.018115 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lp4sk_openshift-machine-config-operator(c359a8fc-1e2f-49af-8da2-719d52bd969a)\"" pod="openshift-machine-config-operator/machine-config-daemon-lp4sk" podUID="c359a8fc-1e2f-49af-8da2-719d52bd969a" Oct 14 10:37:36 crc kubenswrapper[4698]: I1014 10:37:36.017565 4698 scope.go:117] "RemoveContainer" containerID="713fd828a91f058b8f8f5d1a0505fb27eb15167a5507115b649af61ccd244ca8" Oct 14 10:37:36 crc kubenswrapper[4698]: E1014 10:37:36.020165 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-lp4sk_openshift-machine-config-operator(c359a8fc-1e2f-49af-8da2-719d52bd969a)\"" pod="openshift-machine-config-operator/machine-config-daemon-lp4sk" podUID="c359a8fc-1e2f-49af-8da2-719d52bd969a" Oct 14 10:37:50 crc kubenswrapper[4698]: I1014 10:37:50.018011 4698 scope.go:117] "RemoveContainer" containerID="713fd828a91f058b8f8f5d1a0505fb27eb15167a5507115b649af61ccd244ca8" Oct 14 10:37:50 crc kubenswrapper[4698]: E1014 10:37:50.019505 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lp4sk_openshift-machine-config-operator(c359a8fc-1e2f-49af-8da2-719d52bd969a)\"" pod="openshift-machine-config-operator/machine-config-daemon-lp4sk" podUID="c359a8fc-1e2f-49af-8da2-719d52bd969a" Oct 14 10:38:05 crc kubenswrapper[4698]: I1014 10:38:05.017158 4698 scope.go:117] "RemoveContainer" containerID="713fd828a91f058b8f8f5d1a0505fb27eb15167a5507115b649af61ccd244ca8" Oct 14 10:38:05 crc kubenswrapper[4698]: E1014 10:38:05.017940 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lp4sk_openshift-machine-config-operator(c359a8fc-1e2f-49af-8da2-719d52bd969a)\"" pod="openshift-machine-config-operator/machine-config-daemon-lp4sk" podUID="c359a8fc-1e2f-49af-8da2-719d52bd969a" Oct 14 10:38:16 crc kubenswrapper[4698]: I1014 10:38:16.017368 4698 scope.go:117] "RemoveContainer" containerID="713fd828a91f058b8f8f5d1a0505fb27eb15167a5507115b649af61ccd244ca8" Oct 14 10:38:16 crc kubenswrapper[4698]: E1014 10:38:16.018440 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-lp4sk_openshift-machine-config-operator(c359a8fc-1e2f-49af-8da2-719d52bd969a)\"" pod="openshift-machine-config-operator/machine-config-daemon-lp4sk" podUID="c359a8fc-1e2f-49af-8da2-719d52bd969a" Oct 14 10:38:31 crc kubenswrapper[4698]: I1014 10:38:31.017868 4698 scope.go:117] "RemoveContainer" containerID="713fd828a91f058b8f8f5d1a0505fb27eb15167a5507115b649af61ccd244ca8" Oct 14 10:38:31 crc kubenswrapper[4698]: E1014 10:38:31.018869 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lp4sk_openshift-machine-config-operator(c359a8fc-1e2f-49af-8da2-719d52bd969a)\"" pod="openshift-machine-config-operator/machine-config-daemon-lp4sk" podUID="c359a8fc-1e2f-49af-8da2-719d52bd969a" Oct 14 10:38:46 crc kubenswrapper[4698]: I1014 10:38:46.016723 4698 scope.go:117] "RemoveContainer" containerID="713fd828a91f058b8f8f5d1a0505fb27eb15167a5507115b649af61ccd244ca8" Oct 14 10:38:46 crc kubenswrapper[4698]: E1014 10:38:46.018863 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lp4sk_openshift-machine-config-operator(c359a8fc-1e2f-49af-8da2-719d52bd969a)\"" pod="openshift-machine-config-operator/machine-config-daemon-lp4sk" podUID="c359a8fc-1e2f-49af-8da2-719d52bd969a" Oct 14 10:39:01 crc kubenswrapper[4698]: I1014 10:39:01.017233 4698 scope.go:117] "RemoveContainer" containerID="713fd828a91f058b8f8f5d1a0505fb27eb15167a5507115b649af61ccd244ca8" Oct 14 10:39:01 crc kubenswrapper[4698]: E1014 10:39:01.018408 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s 
restarting failed container=machine-config-daemon pod=machine-config-daemon-lp4sk_openshift-machine-config-operator(c359a8fc-1e2f-49af-8da2-719d52bd969a)\"" pod="openshift-machine-config-operator/machine-config-daemon-lp4sk" podUID="c359a8fc-1e2f-49af-8da2-719d52bd969a" Oct 14 10:39:12 crc kubenswrapper[4698]: I1014 10:39:12.018149 4698 scope.go:117] "RemoveContainer" containerID="713fd828a91f058b8f8f5d1a0505fb27eb15167a5507115b649af61ccd244ca8" Oct 14 10:39:12 crc kubenswrapper[4698]: E1014 10:39:12.019005 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lp4sk_openshift-machine-config-operator(c359a8fc-1e2f-49af-8da2-719d52bd969a)\"" pod="openshift-machine-config-operator/machine-config-daemon-lp4sk" podUID="c359a8fc-1e2f-49af-8da2-719d52bd969a" Oct 14 10:39:27 crc kubenswrapper[4698]: I1014 10:39:27.017444 4698 scope.go:117] "RemoveContainer" containerID="713fd828a91f058b8f8f5d1a0505fb27eb15167a5507115b649af61ccd244ca8" Oct 14 10:39:27 crc kubenswrapper[4698]: E1014 10:39:27.018353 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lp4sk_openshift-machine-config-operator(c359a8fc-1e2f-49af-8da2-719d52bd969a)\"" pod="openshift-machine-config-operator/machine-config-daemon-lp4sk" podUID="c359a8fc-1e2f-49af-8da2-719d52bd969a" Oct 14 10:39:28 crc kubenswrapper[4698]: I1014 10:39:28.840890 4698 generic.go:334] "Generic (PLEG): container finished" podID="b25db8a8-2e32-4634-b5e6-b21d7497c0ca" containerID="4b1e2909f1279db8b64e74fe52a1f1c04efdef6c0090a898bc5e1749e5befbfc" exitCode=0 Oct 14 10:39:28 crc kubenswrapper[4698]: I1014 10:39:28.840994 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-pt8b6" event={"ID":"b25db8a8-2e32-4634-b5e6-b21d7497c0ca","Type":"ContainerDied","Data":"4b1e2909f1279db8b64e74fe52a1f1c04efdef6c0090a898bc5e1749e5befbfc"} Oct 14 10:39:30 crc kubenswrapper[4698]: I1014 10:39:30.351268 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-pt8b6" Oct 14 10:39:30 crc kubenswrapper[4698]: I1014 10:39:30.540661 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kpwzb\" (UniqueName: \"kubernetes.io/projected/b25db8a8-2e32-4634-b5e6-b21d7497c0ca-kube-api-access-kpwzb\") pod \"b25db8a8-2e32-4634-b5e6-b21d7497c0ca\" (UID: \"b25db8a8-2e32-4634-b5e6-b21d7497c0ca\") " Oct 14 10:39:30 crc kubenswrapper[4698]: I1014 10:39:30.540796 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b25db8a8-2e32-4634-b5e6-b21d7497c0ca-nova-combined-ca-bundle\") pod \"b25db8a8-2e32-4634-b5e6-b21d7497c0ca\" (UID: \"b25db8a8-2e32-4634-b5e6-b21d7497c0ca\") " Oct 14 10:39:30 crc kubenswrapper[4698]: I1014 10:39:30.540820 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/b25db8a8-2e32-4634-b5e6-b21d7497c0ca-nova-cell1-compute-config-0\") pod \"b25db8a8-2e32-4634-b5e6-b21d7497c0ca\" (UID: \"b25db8a8-2e32-4634-b5e6-b21d7497c0ca\") " Oct 14 10:39:30 crc kubenswrapper[4698]: I1014 10:39:30.541802 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/b25db8a8-2e32-4634-b5e6-b21d7497c0ca-nova-cell1-compute-config-1\") pod \"b25db8a8-2e32-4634-b5e6-b21d7497c0ca\" (UID: \"b25db8a8-2e32-4634-b5e6-b21d7497c0ca\") " Oct 14 10:39:30 crc kubenswrapper[4698]: I1014 10:39:30.541849 4698 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/b25db8a8-2e32-4634-b5e6-b21d7497c0ca-nova-extra-config-0\") pod \"b25db8a8-2e32-4634-b5e6-b21d7497c0ca\" (UID: \"b25db8a8-2e32-4634-b5e6-b21d7497c0ca\") " Oct 14 10:39:30 crc kubenswrapper[4698]: I1014 10:39:30.541869 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/b25db8a8-2e32-4634-b5e6-b21d7497c0ca-nova-migration-ssh-key-1\") pod \"b25db8a8-2e32-4634-b5e6-b21d7497c0ca\" (UID: \"b25db8a8-2e32-4634-b5e6-b21d7497c0ca\") " Oct 14 10:39:30 crc kubenswrapper[4698]: I1014 10:39:30.541909 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/b25db8a8-2e32-4634-b5e6-b21d7497c0ca-nova-migration-ssh-key-0\") pod \"b25db8a8-2e32-4634-b5e6-b21d7497c0ca\" (UID: \"b25db8a8-2e32-4634-b5e6-b21d7497c0ca\") " Oct 14 10:39:30 crc kubenswrapper[4698]: I1014 10:39:30.541953 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b25db8a8-2e32-4634-b5e6-b21d7497c0ca-inventory\") pod \"b25db8a8-2e32-4634-b5e6-b21d7497c0ca\" (UID: \"b25db8a8-2e32-4634-b5e6-b21d7497c0ca\") " Oct 14 10:39:30 crc kubenswrapper[4698]: I1014 10:39:30.541976 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/b25db8a8-2e32-4634-b5e6-b21d7497c0ca-ssh-key\") pod \"b25db8a8-2e32-4634-b5e6-b21d7497c0ca\" (UID: \"b25db8a8-2e32-4634-b5e6-b21d7497c0ca\") " Oct 14 10:39:30 crc kubenswrapper[4698]: I1014 10:39:30.547848 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b25db8a8-2e32-4634-b5e6-b21d7497c0ca-nova-combined-ca-bundle" (OuterVolumeSpecName: "nova-combined-ca-bundle") pod 
"b25db8a8-2e32-4634-b5e6-b21d7497c0ca" (UID: "b25db8a8-2e32-4634-b5e6-b21d7497c0ca"). InnerVolumeSpecName "nova-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 10:39:30 crc kubenswrapper[4698]: I1014 10:39:30.549053 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b25db8a8-2e32-4634-b5e6-b21d7497c0ca-kube-api-access-kpwzb" (OuterVolumeSpecName: "kube-api-access-kpwzb") pod "b25db8a8-2e32-4634-b5e6-b21d7497c0ca" (UID: "b25db8a8-2e32-4634-b5e6-b21d7497c0ca"). InnerVolumeSpecName "kube-api-access-kpwzb". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 10:39:30 crc kubenswrapper[4698]: I1014 10:39:30.573458 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b25db8a8-2e32-4634-b5e6-b21d7497c0ca-nova-cell1-compute-config-0" (OuterVolumeSpecName: "nova-cell1-compute-config-0") pod "b25db8a8-2e32-4634-b5e6-b21d7497c0ca" (UID: "b25db8a8-2e32-4634-b5e6-b21d7497c0ca"). InnerVolumeSpecName "nova-cell1-compute-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 10:39:30 crc kubenswrapper[4698]: I1014 10:39:30.585189 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b25db8a8-2e32-4634-b5e6-b21d7497c0ca-inventory" (OuterVolumeSpecName: "inventory") pod "b25db8a8-2e32-4634-b5e6-b21d7497c0ca" (UID: "b25db8a8-2e32-4634-b5e6-b21d7497c0ca"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 10:39:30 crc kubenswrapper[4698]: I1014 10:39:30.590597 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b25db8a8-2e32-4634-b5e6-b21d7497c0ca-nova-extra-config-0" (OuterVolumeSpecName: "nova-extra-config-0") pod "b25db8a8-2e32-4634-b5e6-b21d7497c0ca" (UID: "b25db8a8-2e32-4634-b5e6-b21d7497c0ca"). InnerVolumeSpecName "nova-extra-config-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 10:39:30 crc kubenswrapper[4698]: I1014 10:39:30.600063 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b25db8a8-2e32-4634-b5e6-b21d7497c0ca-nova-migration-ssh-key-0" (OuterVolumeSpecName: "nova-migration-ssh-key-0") pod "b25db8a8-2e32-4634-b5e6-b21d7497c0ca" (UID: "b25db8a8-2e32-4634-b5e6-b21d7497c0ca"). InnerVolumeSpecName "nova-migration-ssh-key-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 10:39:30 crc kubenswrapper[4698]: I1014 10:39:30.604228 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b25db8a8-2e32-4634-b5e6-b21d7497c0ca-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "b25db8a8-2e32-4634-b5e6-b21d7497c0ca" (UID: "b25db8a8-2e32-4634-b5e6-b21d7497c0ca"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 10:39:30 crc kubenswrapper[4698]: I1014 10:39:30.606207 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b25db8a8-2e32-4634-b5e6-b21d7497c0ca-nova-migration-ssh-key-1" (OuterVolumeSpecName: "nova-migration-ssh-key-1") pod "b25db8a8-2e32-4634-b5e6-b21d7497c0ca" (UID: "b25db8a8-2e32-4634-b5e6-b21d7497c0ca"). InnerVolumeSpecName "nova-migration-ssh-key-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 10:39:30 crc kubenswrapper[4698]: I1014 10:39:30.612215 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b25db8a8-2e32-4634-b5e6-b21d7497c0ca-nova-cell1-compute-config-1" (OuterVolumeSpecName: "nova-cell1-compute-config-1") pod "b25db8a8-2e32-4634-b5e6-b21d7497c0ca" (UID: "b25db8a8-2e32-4634-b5e6-b21d7497c0ca"). InnerVolumeSpecName "nova-cell1-compute-config-1". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 10:39:30 crc kubenswrapper[4698]: I1014 10:39:30.644080 4698 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/b25db8a8-2e32-4634-b5e6-b21d7497c0ca-nova-cell1-compute-config-0\") on node \"crc\" DevicePath \"\"" Oct 14 10:39:30 crc kubenswrapper[4698]: I1014 10:39:30.644116 4698 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/b25db8a8-2e32-4634-b5e6-b21d7497c0ca-nova-cell1-compute-config-1\") on node \"crc\" DevicePath \"\"" Oct 14 10:39:30 crc kubenswrapper[4698]: I1014 10:39:30.644131 4698 reconciler_common.go:293] "Volume detached for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/b25db8a8-2e32-4634-b5e6-b21d7497c0ca-nova-extra-config-0\") on node \"crc\" DevicePath \"\"" Oct 14 10:39:30 crc kubenswrapper[4698]: I1014 10:39:30.644146 4698 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/b25db8a8-2e32-4634-b5e6-b21d7497c0ca-nova-migration-ssh-key-1\") on node \"crc\" DevicePath \"\"" Oct 14 10:39:30 crc kubenswrapper[4698]: I1014 10:39:30.644158 4698 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/b25db8a8-2e32-4634-b5e6-b21d7497c0ca-nova-migration-ssh-key-0\") on node \"crc\" DevicePath \"\"" Oct 14 10:39:30 crc kubenswrapper[4698]: I1014 10:39:30.644169 4698 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b25db8a8-2e32-4634-b5e6-b21d7497c0ca-inventory\") on node \"crc\" DevicePath \"\"" Oct 14 10:39:30 crc kubenswrapper[4698]: I1014 10:39:30.644179 4698 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/b25db8a8-2e32-4634-b5e6-b21d7497c0ca-ssh-key\") on node \"crc\" DevicePath \"\"" Oct 14 
10:39:30 crc kubenswrapper[4698]: I1014 10:39:30.644191 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kpwzb\" (UniqueName: \"kubernetes.io/projected/b25db8a8-2e32-4634-b5e6-b21d7497c0ca-kube-api-access-kpwzb\") on node \"crc\" DevicePath \"\"" Oct 14 10:39:30 crc kubenswrapper[4698]: I1014 10:39:30.644203 4698 reconciler_common.go:293] "Volume detached for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b25db8a8-2e32-4634-b5e6-b21d7497c0ca-nova-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 14 10:39:30 crc kubenswrapper[4698]: I1014 10:39:30.859291 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-pt8b6" event={"ID":"b25db8a8-2e32-4634-b5e6-b21d7497c0ca","Type":"ContainerDied","Data":"3ca57aa3dffc5f6945c2d69701ab9db5fa066b0a4b041e8d0bd14059e423867a"} Oct 14 10:39:30 crc kubenswrapper[4698]: I1014 10:39:30.859617 4698 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3ca57aa3dffc5f6945c2d69701ab9db5fa066b0a4b041e8d0bd14059e423867a" Oct 14 10:39:30 crc kubenswrapper[4698]: I1014 10:39:30.859395 4698 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-pt8b6" Oct 14 10:39:30 crc kubenswrapper[4698]: I1014 10:39:30.955266 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-pstgf"] Oct 14 10:39:30 crc kubenswrapper[4698]: E1014 10:39:30.956026 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b25db8a8-2e32-4634-b5e6-b21d7497c0ca" containerName="nova-edpm-deployment-openstack-edpm-ipam" Oct 14 10:39:30 crc kubenswrapper[4698]: I1014 10:39:30.956047 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="b25db8a8-2e32-4634-b5e6-b21d7497c0ca" containerName="nova-edpm-deployment-openstack-edpm-ipam" Oct 14 10:39:30 crc kubenswrapper[4698]: I1014 10:39:30.956227 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="b25db8a8-2e32-4634-b5e6-b21d7497c0ca" containerName="nova-edpm-deployment-openstack-edpm-ipam" Oct 14 10:39:30 crc kubenswrapper[4698]: I1014 10:39:30.956910 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-pstgf" Oct 14 10:39:30 crc kubenswrapper[4698]: I1014 10:39:30.958716 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Oct 14 10:39:30 crc kubenswrapper[4698]: I1014 10:39:30.958957 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-compute-config-data" Oct 14 10:39:30 crc kubenswrapper[4698]: I1014 10:39:30.959357 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Oct 14 10:39:30 crc kubenswrapper[4698]: I1014 10:39:30.959508 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Oct 14 10:39:30 crc kubenswrapper[4698]: I1014 10:39:30.963403 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-q5blv" Oct 14 10:39:30 crc kubenswrapper[4698]: I1014 10:39:30.969711 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-pstgf"] Oct 14 10:39:31 crc kubenswrapper[4698]: I1014 10:39:31.153212 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/45519f65-bf50-47f3-a645-8d64d05ab523-ssh-key\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-pstgf\" (UID: \"45519f65-bf50-47f3-a645-8d64d05ab523\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-pstgf" Oct 14 10:39:31 crc kubenswrapper[4698]: I1014 10:39:31.153266 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/45519f65-bf50-47f3-a645-8d64d05ab523-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-pstgf\" (UID: 
\"45519f65-bf50-47f3-a645-8d64d05ab523\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-pstgf" Oct 14 10:39:31 crc kubenswrapper[4698]: I1014 10:39:31.153403 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/45519f65-bf50-47f3-a645-8d64d05ab523-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-pstgf\" (UID: \"45519f65-bf50-47f3-a645-8d64d05ab523\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-pstgf" Oct 14 10:39:31 crc kubenswrapper[4698]: I1014 10:39:31.153449 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/45519f65-bf50-47f3-a645-8d64d05ab523-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-pstgf\" (UID: \"45519f65-bf50-47f3-a645-8d64d05ab523\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-pstgf" Oct 14 10:39:31 crc kubenswrapper[4698]: I1014 10:39:31.153504 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-776m2\" (UniqueName: \"kubernetes.io/projected/45519f65-bf50-47f3-a645-8d64d05ab523-kube-api-access-776m2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-pstgf\" (UID: \"45519f65-bf50-47f3-a645-8d64d05ab523\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-pstgf" Oct 14 10:39:31 crc kubenswrapper[4698]: I1014 10:39:31.153566 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/45519f65-bf50-47f3-a645-8d64d05ab523-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-pstgf\" (UID: \"45519f65-bf50-47f3-a645-8d64d05ab523\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-pstgf" Oct 14 
10:39:31 crc kubenswrapper[4698]: I1014 10:39:31.153606 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/45519f65-bf50-47f3-a645-8d64d05ab523-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-pstgf\" (UID: \"45519f65-bf50-47f3-a645-8d64d05ab523\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-pstgf" Oct 14 10:39:31 crc kubenswrapper[4698]: I1014 10:39:31.256166 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-776m2\" (UniqueName: \"kubernetes.io/projected/45519f65-bf50-47f3-a645-8d64d05ab523-kube-api-access-776m2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-pstgf\" (UID: \"45519f65-bf50-47f3-a645-8d64d05ab523\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-pstgf" Oct 14 10:39:31 crc kubenswrapper[4698]: I1014 10:39:31.256345 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/45519f65-bf50-47f3-a645-8d64d05ab523-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-pstgf\" (UID: \"45519f65-bf50-47f3-a645-8d64d05ab523\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-pstgf" Oct 14 10:39:31 crc kubenswrapper[4698]: I1014 10:39:31.257339 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/45519f65-bf50-47f3-a645-8d64d05ab523-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-pstgf\" (UID: \"45519f65-bf50-47f3-a645-8d64d05ab523\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-pstgf" Oct 14 10:39:31 crc kubenswrapper[4698]: I1014 10:39:31.257458 4698 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/45519f65-bf50-47f3-a645-8d64d05ab523-ssh-key\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-pstgf\" (UID: \"45519f65-bf50-47f3-a645-8d64d05ab523\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-pstgf" Oct 14 10:39:31 crc kubenswrapper[4698]: I1014 10:39:31.257490 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/45519f65-bf50-47f3-a645-8d64d05ab523-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-pstgf\" (UID: \"45519f65-bf50-47f3-a645-8d64d05ab523\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-pstgf" Oct 14 10:39:31 crc kubenswrapper[4698]: I1014 10:39:31.257527 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/45519f65-bf50-47f3-a645-8d64d05ab523-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-pstgf\" (UID: \"45519f65-bf50-47f3-a645-8d64d05ab523\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-pstgf" Oct 14 10:39:31 crc kubenswrapper[4698]: I1014 10:39:31.257598 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/45519f65-bf50-47f3-a645-8d64d05ab523-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-pstgf\" (UID: \"45519f65-bf50-47f3-a645-8d64d05ab523\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-pstgf" Oct 14 10:39:31 crc kubenswrapper[4698]: I1014 10:39:31.260807 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/45519f65-bf50-47f3-a645-8d64d05ab523-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-pstgf\" (UID: 
\"45519f65-bf50-47f3-a645-8d64d05ab523\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-pstgf" Oct 14 10:39:31 crc kubenswrapper[4698]: I1014 10:39:31.261039 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/45519f65-bf50-47f3-a645-8d64d05ab523-ssh-key\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-pstgf\" (UID: \"45519f65-bf50-47f3-a645-8d64d05ab523\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-pstgf" Oct 14 10:39:31 crc kubenswrapper[4698]: I1014 10:39:31.261101 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/45519f65-bf50-47f3-a645-8d64d05ab523-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-pstgf\" (UID: \"45519f65-bf50-47f3-a645-8d64d05ab523\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-pstgf" Oct 14 10:39:31 crc kubenswrapper[4698]: I1014 10:39:31.262342 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/45519f65-bf50-47f3-a645-8d64d05ab523-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-pstgf\" (UID: \"45519f65-bf50-47f3-a645-8d64d05ab523\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-pstgf" Oct 14 10:39:31 crc kubenswrapper[4698]: I1014 10:39:31.262548 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/45519f65-bf50-47f3-a645-8d64d05ab523-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-pstgf\" (UID: \"45519f65-bf50-47f3-a645-8d64d05ab523\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-pstgf" Oct 14 10:39:31 crc kubenswrapper[4698]: I1014 10:39:31.263007 4698 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/45519f65-bf50-47f3-a645-8d64d05ab523-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-pstgf\" (UID: \"45519f65-bf50-47f3-a645-8d64d05ab523\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-pstgf" Oct 14 10:39:31 crc kubenswrapper[4698]: I1014 10:39:31.280189 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-776m2\" (UniqueName: \"kubernetes.io/projected/45519f65-bf50-47f3-a645-8d64d05ab523-kube-api-access-776m2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-pstgf\" (UID: \"45519f65-bf50-47f3-a645-8d64d05ab523\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-pstgf" Oct 14 10:39:31 crc kubenswrapper[4698]: I1014 10:39:31.282222 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-pstgf" Oct 14 10:39:31 crc kubenswrapper[4698]: I1014 10:39:31.769457 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-pstgf"] Oct 14 10:39:31 crc kubenswrapper[4698]: I1014 10:39:31.868806 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-pstgf" event={"ID":"45519f65-bf50-47f3-a645-8d64d05ab523","Type":"ContainerStarted","Data":"d01783351fa239bc0404eb0b4b8421231ec88a275ba0ee484df5f7b1103f8be7"} Oct 14 10:39:32 crc kubenswrapper[4698]: I1014 10:39:32.880786 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-pstgf" event={"ID":"45519f65-bf50-47f3-a645-8d64d05ab523","Type":"ContainerStarted","Data":"b40ff382baebb28ab676a4a371c233bb51079924ef2db5d1985cccc442a03854"} Oct 14 10:39:32 crc kubenswrapper[4698]: I1014 10:39:32.918958 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-pstgf" podStartSLOduration=2.401726589 podStartE2EDuration="2.918933176s" podCreationTimestamp="2025-10-14 10:39:30 +0000 UTC" firstStartedPulling="2025-10-14 10:39:31.775779447 +0000 UTC m=+2553.473078863" lastFinishedPulling="2025-10-14 10:39:32.292986044 +0000 UTC m=+2553.990285450" observedRunningTime="2025-10-14 10:39:32.903900615 +0000 UTC m=+2554.601200051" watchObservedRunningTime="2025-10-14 10:39:32.918933176 +0000 UTC m=+2554.616232602" Oct 14 10:39:40 crc kubenswrapper[4698]: I1014 10:39:40.017448 4698 scope.go:117] "RemoveContainer" containerID="713fd828a91f058b8f8f5d1a0505fb27eb15167a5507115b649af61ccd244ca8" Oct 14 10:39:40 crc kubenswrapper[4698]: E1014 10:39:40.018393 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lp4sk_openshift-machine-config-operator(c359a8fc-1e2f-49af-8da2-719d52bd969a)\"" pod="openshift-machine-config-operator/machine-config-daemon-lp4sk" podUID="c359a8fc-1e2f-49af-8da2-719d52bd969a" Oct 14 10:39:54 crc kubenswrapper[4698]: I1014 10:39:54.017663 4698 scope.go:117] "RemoveContainer" containerID="713fd828a91f058b8f8f5d1a0505fb27eb15167a5507115b649af61ccd244ca8" Oct 14 10:39:54 crc kubenswrapper[4698]: E1014 10:39:54.018510 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lp4sk_openshift-machine-config-operator(c359a8fc-1e2f-49af-8da2-719d52bd969a)\"" pod="openshift-machine-config-operator/machine-config-daemon-lp4sk" podUID="c359a8fc-1e2f-49af-8da2-719d52bd969a" Oct 14 10:40:06 crc kubenswrapper[4698]: I1014 10:40:06.017872 4698 scope.go:117] "RemoveContainer" 
containerID="713fd828a91f058b8f8f5d1a0505fb27eb15167a5507115b649af61ccd244ca8" Oct 14 10:40:06 crc kubenswrapper[4698]: E1014 10:40:06.019177 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lp4sk_openshift-machine-config-operator(c359a8fc-1e2f-49af-8da2-719d52bd969a)\"" pod="openshift-machine-config-operator/machine-config-daemon-lp4sk" podUID="c359a8fc-1e2f-49af-8da2-719d52bd969a" Oct 14 10:40:19 crc kubenswrapper[4698]: I1014 10:40:19.029537 4698 scope.go:117] "RemoveContainer" containerID="713fd828a91f058b8f8f5d1a0505fb27eb15167a5507115b649af61ccd244ca8" Oct 14 10:40:19 crc kubenswrapper[4698]: E1014 10:40:19.030554 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lp4sk_openshift-machine-config-operator(c359a8fc-1e2f-49af-8da2-719d52bd969a)\"" pod="openshift-machine-config-operator/machine-config-daemon-lp4sk" podUID="c359a8fc-1e2f-49af-8da2-719d52bd969a" Oct 14 10:40:34 crc kubenswrapper[4698]: I1014 10:40:34.017048 4698 scope.go:117] "RemoveContainer" containerID="713fd828a91f058b8f8f5d1a0505fb27eb15167a5507115b649af61ccd244ca8" Oct 14 10:40:34 crc kubenswrapper[4698]: E1014 10:40:34.017991 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lp4sk_openshift-machine-config-operator(c359a8fc-1e2f-49af-8da2-719d52bd969a)\"" pod="openshift-machine-config-operator/machine-config-daemon-lp4sk" podUID="c359a8fc-1e2f-49af-8da2-719d52bd969a" Oct 14 10:40:46 crc kubenswrapper[4698]: I1014 10:40:46.016934 4698 scope.go:117] 
"RemoveContainer" containerID="713fd828a91f058b8f8f5d1a0505fb27eb15167a5507115b649af61ccd244ca8" Oct 14 10:40:46 crc kubenswrapper[4698]: E1014 10:40:46.018105 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lp4sk_openshift-machine-config-operator(c359a8fc-1e2f-49af-8da2-719d52bd969a)\"" pod="openshift-machine-config-operator/machine-config-daemon-lp4sk" podUID="c359a8fc-1e2f-49af-8da2-719d52bd969a" Oct 14 10:41:01 crc kubenswrapper[4698]: I1014 10:41:01.017432 4698 scope.go:117] "RemoveContainer" containerID="713fd828a91f058b8f8f5d1a0505fb27eb15167a5507115b649af61ccd244ca8" Oct 14 10:41:01 crc kubenswrapper[4698]: E1014 10:41:01.018844 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lp4sk_openshift-machine-config-operator(c359a8fc-1e2f-49af-8da2-719d52bd969a)\"" pod="openshift-machine-config-operator/machine-config-daemon-lp4sk" podUID="c359a8fc-1e2f-49af-8da2-719d52bd969a" Oct 14 10:41:15 crc kubenswrapper[4698]: I1014 10:41:15.018322 4698 scope.go:117] "RemoveContainer" containerID="713fd828a91f058b8f8f5d1a0505fb27eb15167a5507115b649af61ccd244ca8" Oct 14 10:41:15 crc kubenswrapper[4698]: E1014 10:41:15.019138 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lp4sk_openshift-machine-config-operator(c359a8fc-1e2f-49af-8da2-719d52bd969a)\"" pod="openshift-machine-config-operator/machine-config-daemon-lp4sk" podUID="c359a8fc-1e2f-49af-8da2-719d52bd969a" Oct 14 10:41:26 crc kubenswrapper[4698]: I1014 10:41:26.017840 
4698 scope.go:117] "RemoveContainer" containerID="713fd828a91f058b8f8f5d1a0505fb27eb15167a5507115b649af61ccd244ca8" Oct 14 10:41:26 crc kubenswrapper[4698]: E1014 10:41:26.018700 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lp4sk_openshift-machine-config-operator(c359a8fc-1e2f-49af-8da2-719d52bd969a)\"" pod="openshift-machine-config-operator/machine-config-daemon-lp4sk" podUID="c359a8fc-1e2f-49af-8da2-719d52bd969a" Oct 14 10:41:38 crc kubenswrapper[4698]: I1014 10:41:38.017376 4698 scope.go:117] "RemoveContainer" containerID="713fd828a91f058b8f8f5d1a0505fb27eb15167a5507115b649af61ccd244ca8" Oct 14 10:41:38 crc kubenswrapper[4698]: E1014 10:41:38.019244 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lp4sk_openshift-machine-config-operator(c359a8fc-1e2f-49af-8da2-719d52bd969a)\"" pod="openshift-machine-config-operator/machine-config-daemon-lp4sk" podUID="c359a8fc-1e2f-49af-8da2-719d52bd969a" Oct 14 10:41:53 crc kubenswrapper[4698]: I1014 10:41:53.017343 4698 scope.go:117] "RemoveContainer" containerID="713fd828a91f058b8f8f5d1a0505fb27eb15167a5507115b649af61ccd244ca8" Oct 14 10:41:53 crc kubenswrapper[4698]: E1014 10:41:53.018463 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lp4sk_openshift-machine-config-operator(c359a8fc-1e2f-49af-8da2-719d52bd969a)\"" pod="openshift-machine-config-operator/machine-config-daemon-lp4sk" podUID="c359a8fc-1e2f-49af-8da2-719d52bd969a" Oct 14 10:42:00 crc kubenswrapper[4698]: I1014 
10:42:00.421236 4698 generic.go:334] "Generic (PLEG): container finished" podID="45519f65-bf50-47f3-a645-8d64d05ab523" containerID="b40ff382baebb28ab676a4a371c233bb51079924ef2db5d1985cccc442a03854" exitCode=0 Oct 14 10:42:00 crc kubenswrapper[4698]: I1014 10:42:00.421357 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-pstgf" event={"ID":"45519f65-bf50-47f3-a645-8d64d05ab523","Type":"ContainerDied","Data":"b40ff382baebb28ab676a4a371c233bb51079924ef2db5d1985cccc442a03854"} Oct 14 10:42:01 crc kubenswrapper[4698]: I1014 10:42:01.884266 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-pstgf" Oct 14 10:42:02 crc kubenswrapper[4698]: I1014 10:42:02.058429 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/45519f65-bf50-47f3-a645-8d64d05ab523-ceilometer-compute-config-data-0\") pod \"45519f65-bf50-47f3-a645-8d64d05ab523\" (UID: \"45519f65-bf50-47f3-a645-8d64d05ab523\") " Oct 14 10:42:02 crc kubenswrapper[4698]: I1014 10:42:02.058492 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-776m2\" (UniqueName: \"kubernetes.io/projected/45519f65-bf50-47f3-a645-8d64d05ab523-kube-api-access-776m2\") pod \"45519f65-bf50-47f3-a645-8d64d05ab523\" (UID: \"45519f65-bf50-47f3-a645-8d64d05ab523\") " Oct 14 10:42:02 crc kubenswrapper[4698]: I1014 10:42:02.058678 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/45519f65-bf50-47f3-a645-8d64d05ab523-ceilometer-compute-config-data-2\") pod \"45519f65-bf50-47f3-a645-8d64d05ab523\" (UID: \"45519f65-bf50-47f3-a645-8d64d05ab523\") " Oct 14 10:42:02 crc kubenswrapper[4698]: I1014 10:42:02.058759 4698 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/45519f65-bf50-47f3-a645-8d64d05ab523-telemetry-combined-ca-bundle\") pod \"45519f65-bf50-47f3-a645-8d64d05ab523\" (UID: \"45519f65-bf50-47f3-a645-8d64d05ab523\") "
Oct 14 10:42:02 crc kubenswrapper[4698]: I1014 10:42:02.058819 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/45519f65-bf50-47f3-a645-8d64d05ab523-ssh-key\") pod \"45519f65-bf50-47f3-a645-8d64d05ab523\" (UID: \"45519f65-bf50-47f3-a645-8d64d05ab523\") "
Oct 14 10:42:02 crc kubenswrapper[4698]: I1014 10:42:02.058864 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/45519f65-bf50-47f3-a645-8d64d05ab523-inventory\") pod \"45519f65-bf50-47f3-a645-8d64d05ab523\" (UID: \"45519f65-bf50-47f3-a645-8d64d05ab523\") "
Oct 14 10:42:02 crc kubenswrapper[4698]: I1014 10:42:02.058933 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/45519f65-bf50-47f3-a645-8d64d05ab523-ceilometer-compute-config-data-1\") pod \"45519f65-bf50-47f3-a645-8d64d05ab523\" (UID: \"45519f65-bf50-47f3-a645-8d64d05ab523\") "
Oct 14 10:42:02 crc kubenswrapper[4698]: I1014 10:42:02.069125 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/45519f65-bf50-47f3-a645-8d64d05ab523-telemetry-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-combined-ca-bundle") pod "45519f65-bf50-47f3-a645-8d64d05ab523" (UID: "45519f65-bf50-47f3-a645-8d64d05ab523"). InnerVolumeSpecName "telemetry-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 14 10:42:02 crc kubenswrapper[4698]: I1014 10:42:02.072026 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/45519f65-bf50-47f3-a645-8d64d05ab523-kube-api-access-776m2" (OuterVolumeSpecName: "kube-api-access-776m2") pod "45519f65-bf50-47f3-a645-8d64d05ab523" (UID: "45519f65-bf50-47f3-a645-8d64d05ab523"). InnerVolumeSpecName "kube-api-access-776m2". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 14 10:42:02 crc kubenswrapper[4698]: I1014 10:42:02.087560 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/45519f65-bf50-47f3-a645-8d64d05ab523-inventory" (OuterVolumeSpecName: "inventory") pod "45519f65-bf50-47f3-a645-8d64d05ab523" (UID: "45519f65-bf50-47f3-a645-8d64d05ab523"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 14 10:42:02 crc kubenswrapper[4698]: I1014 10:42:02.088451 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/45519f65-bf50-47f3-a645-8d64d05ab523-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "45519f65-bf50-47f3-a645-8d64d05ab523" (UID: "45519f65-bf50-47f3-a645-8d64d05ab523"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 14 10:42:02 crc kubenswrapper[4698]: I1014 10:42:02.089471 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/45519f65-bf50-47f3-a645-8d64d05ab523-ceilometer-compute-config-data-2" (OuterVolumeSpecName: "ceilometer-compute-config-data-2") pod "45519f65-bf50-47f3-a645-8d64d05ab523" (UID: "45519f65-bf50-47f3-a645-8d64d05ab523"). InnerVolumeSpecName "ceilometer-compute-config-data-2". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 14 10:42:02 crc kubenswrapper[4698]: I1014 10:42:02.090650 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/45519f65-bf50-47f3-a645-8d64d05ab523-ceilometer-compute-config-data-1" (OuterVolumeSpecName: "ceilometer-compute-config-data-1") pod "45519f65-bf50-47f3-a645-8d64d05ab523" (UID: "45519f65-bf50-47f3-a645-8d64d05ab523"). InnerVolumeSpecName "ceilometer-compute-config-data-1". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 14 10:42:02 crc kubenswrapper[4698]: I1014 10:42:02.094399 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/45519f65-bf50-47f3-a645-8d64d05ab523-ceilometer-compute-config-data-0" (OuterVolumeSpecName: "ceilometer-compute-config-data-0") pod "45519f65-bf50-47f3-a645-8d64d05ab523" (UID: "45519f65-bf50-47f3-a645-8d64d05ab523"). InnerVolumeSpecName "ceilometer-compute-config-data-0". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 14 10:42:02 crc kubenswrapper[4698]: I1014 10:42:02.161487 4698 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/45519f65-bf50-47f3-a645-8d64d05ab523-ceilometer-compute-config-data-2\") on node \"crc\" DevicePath \"\""
Oct 14 10:42:02 crc kubenswrapper[4698]: I1014 10:42:02.162033 4698 reconciler_common.go:293] "Volume detached for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/45519f65-bf50-47f3-a645-8d64d05ab523-telemetry-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Oct 14 10:42:02 crc kubenswrapper[4698]: I1014 10:42:02.162045 4698 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/45519f65-bf50-47f3-a645-8d64d05ab523-ssh-key\") on node \"crc\" DevicePath \"\""
Oct 14 10:42:02 crc kubenswrapper[4698]: I1014 10:42:02.162056 4698 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/45519f65-bf50-47f3-a645-8d64d05ab523-inventory\") on node \"crc\" DevicePath \"\""
Oct 14 10:42:02 crc kubenswrapper[4698]: I1014 10:42:02.162064 4698 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/45519f65-bf50-47f3-a645-8d64d05ab523-ceilometer-compute-config-data-1\") on node \"crc\" DevicePath \"\""
Oct 14 10:42:02 crc kubenswrapper[4698]: I1014 10:42:02.162076 4698 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/45519f65-bf50-47f3-a645-8d64d05ab523-ceilometer-compute-config-data-0\") on node \"crc\" DevicePath \"\""
Oct 14 10:42:02 crc kubenswrapper[4698]: I1014 10:42:02.162111 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-776m2\" (UniqueName: \"kubernetes.io/projected/45519f65-bf50-47f3-a645-8d64d05ab523-kube-api-access-776m2\") on node \"crc\" DevicePath \"\""
Oct 14 10:42:02 crc kubenswrapper[4698]: I1014 10:42:02.444084 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-pstgf" event={"ID":"45519f65-bf50-47f3-a645-8d64d05ab523","Type":"ContainerDied","Data":"d01783351fa239bc0404eb0b4b8421231ec88a275ba0ee484df5f7b1103f8be7"}
Oct 14 10:42:02 crc kubenswrapper[4698]: I1014 10:42:02.444137 4698 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d01783351fa239bc0404eb0b4b8421231ec88a275ba0ee484df5f7b1103f8be7"
Oct 14 10:42:02 crc kubenswrapper[4698]: I1014 10:42:02.444217 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-pstgf"
Oct 14 10:42:06 crc kubenswrapper[4698]: I1014 10:42:06.017157 4698 scope.go:117] "RemoveContainer" containerID="713fd828a91f058b8f8f5d1a0505fb27eb15167a5507115b649af61ccd244ca8"
Oct 14 10:42:06 crc kubenswrapper[4698]: I1014 10:42:06.479830 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-lp4sk" event={"ID":"c359a8fc-1e2f-49af-8da2-719d52bd969a","Type":"ContainerStarted","Data":"e610d29adee37c5fe425db4b50b070ff822000d8418cacd4862506db9590760b"}
Oct 14 10:42:42 crc kubenswrapper[4698]: E1014 10:42:42.652052 4698 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.102.83.188:54840->38.102.83.188:44569: write tcp 38.102.83.188:54840->38.102.83.188:44569: write: broken pipe
Oct 14 10:43:07 crc kubenswrapper[4698]: I1014 10:43:07.174192 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/tempest-tests-tempest"]
Oct 14 10:43:07 crc kubenswrapper[4698]: E1014 10:43:07.176171 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="45519f65-bf50-47f3-a645-8d64d05ab523" containerName="telemetry-edpm-deployment-openstack-edpm-ipam"
Oct 14 10:43:07 crc kubenswrapper[4698]: I1014 10:43:07.176196 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="45519f65-bf50-47f3-a645-8d64d05ab523" containerName="telemetry-edpm-deployment-openstack-edpm-ipam"
Oct 14 10:43:07 crc kubenswrapper[4698]: I1014 10:43:07.176420 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="45519f65-bf50-47f3-a645-8d64d05ab523" containerName="telemetry-edpm-deployment-openstack-edpm-ipam"
Oct 14 10:43:07 crc kubenswrapper[4698]: I1014 10:43:07.177355 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest"
Oct 14 10:43:07 crc kubenswrapper[4698]: I1014 10:43:07.179992 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"test-operator-controller-priv-key"
Oct 14 10:43:07 crc kubenswrapper[4698]: I1014 10:43:07.180349 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-env-vars-s0"
Oct 14 10:43:07 crc kubenswrapper[4698]: I1014 10:43:07.180552 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-custom-data-s0"
Oct 14 10:43:07 crc kubenswrapper[4698]: I1014 10:43:07.189389 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest"]
Oct 14 10:43:07 crc kubenswrapper[4698]: I1014 10:43:07.263588 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/e5a71af4-fdf3-4a49-9ada-2d4836409022-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"e5a71af4-fdf3-4a49-9ada-2d4836409022\") " pod="openstack/tempest-tests-tempest"
Oct 14 10:43:07 crc kubenswrapper[4698]: I1014 10:43:07.263971 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/e5a71af4-fdf3-4a49-9ada-2d4836409022-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"e5a71af4-fdf3-4a49-9ada-2d4836409022\") " pod="openstack/tempest-tests-tempest"
Oct 14 10:43:07 crc kubenswrapper[4698]: I1014 10:43:07.264008 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"tempest-tests-tempest\" (UID: \"e5a71af4-fdf3-4a49-9ada-2d4836409022\") " pod="openstack/tempest-tests-tempest"
Oct 14 10:43:07 crc kubenswrapper[4698]: I1014 10:43:07.264030 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/e5a71af4-fdf3-4a49-9ada-2d4836409022-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"e5a71af4-fdf3-4a49-9ada-2d4836409022\") " pod="openstack/tempest-tests-tempest"
Oct 14 10:43:07 crc kubenswrapper[4698]: I1014 10:43:07.264142 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/e5a71af4-fdf3-4a49-9ada-2d4836409022-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"e5a71af4-fdf3-4a49-9ada-2d4836409022\") " pod="openstack/tempest-tests-tempest"
Oct 14 10:43:07 crc kubenswrapper[4698]: I1014 10:43:07.264679 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/e5a71af4-fdf3-4a49-9ada-2d4836409022-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"e5a71af4-fdf3-4a49-9ada-2d4836409022\") " pod="openstack/tempest-tests-tempest"
Oct 14 10:43:07 crc kubenswrapper[4698]: I1014 10:43:07.265005 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/e5a71af4-fdf3-4a49-9ada-2d4836409022-config-data\") pod \"tempest-tests-tempest\" (UID: \"e5a71af4-fdf3-4a49-9ada-2d4836409022\") " pod="openstack/tempest-tests-tempest"
Oct 14 10:43:07 crc kubenswrapper[4698]: I1014 10:43:07.265170 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/e5a71af4-fdf3-4a49-9ada-2d4836409022-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"e5a71af4-fdf3-4a49-9ada-2d4836409022\") " pod="openstack/tempest-tests-tempest"
Oct 14 10:43:07 crc kubenswrapper[4698]: I1014 10:43:07.265244 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8ml2f\" (UniqueName: \"kubernetes.io/projected/e5a71af4-fdf3-4a49-9ada-2d4836409022-kube-api-access-8ml2f\") pod \"tempest-tests-tempest\" (UID: \"e5a71af4-fdf3-4a49-9ada-2d4836409022\") " pod="openstack/tempest-tests-tempest"
Oct 14 10:43:07 crc kubenswrapper[4698]: I1014 10:43:07.367392 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/e5a71af4-fdf3-4a49-9ada-2d4836409022-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"e5a71af4-fdf3-4a49-9ada-2d4836409022\") " pod="openstack/tempest-tests-tempest"
Oct 14 10:43:07 crc kubenswrapper[4698]: I1014 10:43:07.367444 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/e5a71af4-fdf3-4a49-9ada-2d4836409022-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"e5a71af4-fdf3-4a49-9ada-2d4836409022\") " pod="openstack/tempest-tests-tempest"
Oct 14 10:43:07 crc kubenswrapper[4698]: I1014 10:43:07.367481 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/e5a71af4-fdf3-4a49-9ada-2d4836409022-config-data\") pod \"tempest-tests-tempest\" (UID: \"e5a71af4-fdf3-4a49-9ada-2d4836409022\") " pod="openstack/tempest-tests-tempest"
Oct 14 10:43:07 crc kubenswrapper[4698]: I1014 10:43:07.367511 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/e5a71af4-fdf3-4a49-9ada-2d4836409022-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"e5a71af4-fdf3-4a49-9ada-2d4836409022\") " pod="openstack/tempest-tests-tempest"
Oct 14 10:43:07 crc kubenswrapper[4698]: I1014 10:43:07.367541 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8ml2f\" (UniqueName: \"kubernetes.io/projected/e5a71af4-fdf3-4a49-9ada-2d4836409022-kube-api-access-8ml2f\") pod \"tempest-tests-tempest\" (UID: \"e5a71af4-fdf3-4a49-9ada-2d4836409022\") " pod="openstack/tempest-tests-tempest"
Oct 14 10:43:07 crc kubenswrapper[4698]: I1014 10:43:07.367569 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/e5a71af4-fdf3-4a49-9ada-2d4836409022-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"e5a71af4-fdf3-4a49-9ada-2d4836409022\") " pod="openstack/tempest-tests-tempest"
Oct 14 10:43:07 crc kubenswrapper[4698]: I1014 10:43:07.367589 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/e5a71af4-fdf3-4a49-9ada-2d4836409022-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"e5a71af4-fdf3-4a49-9ada-2d4836409022\") " pod="openstack/tempest-tests-tempest"
Oct 14 10:43:07 crc kubenswrapper[4698]: I1014 10:43:07.367620 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"tempest-tests-tempest\" (UID: \"e5a71af4-fdf3-4a49-9ada-2d4836409022\") " pod="openstack/tempest-tests-tempest"
Oct 14 10:43:07 crc kubenswrapper[4698]: I1014 10:43:07.367641 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/e5a71af4-fdf3-4a49-9ada-2d4836409022-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"e5a71af4-fdf3-4a49-9ada-2d4836409022\") " pod="openstack/tempest-tests-tempest"
Oct 14 10:43:07 crc kubenswrapper[4698]: I1014 10:43:07.368564 4698 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"tempest-tests-tempest\" (UID: \"e5a71af4-fdf3-4a49-9ada-2d4836409022\") device mount path \"/mnt/openstack/pv03\"" pod="openstack/tempest-tests-tempest"
Oct 14 10:43:07 crc kubenswrapper[4698]: I1014 10:43:07.368713 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/e5a71af4-fdf3-4a49-9ada-2d4836409022-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"e5a71af4-fdf3-4a49-9ada-2d4836409022\") " pod="openstack/tempest-tests-tempest"
Oct 14 10:43:07 crc kubenswrapper[4698]: I1014 10:43:07.369561 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/e5a71af4-fdf3-4a49-9ada-2d4836409022-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"e5a71af4-fdf3-4a49-9ada-2d4836409022\") " pod="openstack/tempest-tests-tempest"
Oct 14 10:43:07 crc kubenswrapper[4698]: I1014 10:43:07.369894 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/e5a71af4-fdf3-4a49-9ada-2d4836409022-config-data\") pod \"tempest-tests-tempest\" (UID: \"e5a71af4-fdf3-4a49-9ada-2d4836409022\") " pod="openstack/tempest-tests-tempest"
Oct 14 10:43:07 crc kubenswrapper[4698]: I1014 10:43:07.371495 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/e5a71af4-fdf3-4a49-9ada-2d4836409022-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"e5a71af4-fdf3-4a49-9ada-2d4836409022\") " pod="openstack/tempest-tests-tempest"
Oct 14 10:43:07 crc kubenswrapper[4698]: I1014 10:43:07.374340 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/e5a71af4-fdf3-4a49-9ada-2d4836409022-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"e5a71af4-fdf3-4a49-9ada-2d4836409022\") " pod="openstack/tempest-tests-tempest"
Oct 14 10:43:07 crc kubenswrapper[4698]: I1014 10:43:07.374869 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/e5a71af4-fdf3-4a49-9ada-2d4836409022-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"e5a71af4-fdf3-4a49-9ada-2d4836409022\") " pod="openstack/tempest-tests-tempest"
Oct 14 10:43:07 crc kubenswrapper[4698]: I1014 10:43:07.377938 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/e5a71af4-fdf3-4a49-9ada-2d4836409022-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"e5a71af4-fdf3-4a49-9ada-2d4836409022\") " pod="openstack/tempest-tests-tempest"
Oct 14 10:43:07 crc kubenswrapper[4698]: I1014 10:43:07.388980 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8ml2f\" (UniqueName: \"kubernetes.io/projected/e5a71af4-fdf3-4a49-9ada-2d4836409022-kube-api-access-8ml2f\") pod \"tempest-tests-tempest\" (UID: \"e5a71af4-fdf3-4a49-9ada-2d4836409022\") " pod="openstack/tempest-tests-tempest"
Oct 14 10:43:07 crc kubenswrapper[4698]: I1014 10:43:07.405900 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"tempest-tests-tempest\" (UID: \"e5a71af4-fdf3-4a49-9ada-2d4836409022\") " pod="openstack/tempest-tests-tempest"
Oct 14 10:43:07 crc kubenswrapper[4698]: I1014 10:43:07.512214 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest"
Oct 14 10:43:08 crc kubenswrapper[4698]: W1014 10:43:08.007376 4698 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode5a71af4_fdf3_4a49_9ada_2d4836409022.slice/crio-1a375bee406cfd253649624b52b7950e7a18097f02eabc645e7c689870e701fc WatchSource:0}: Error finding container 1a375bee406cfd253649624b52b7950e7a18097f02eabc645e7c689870e701fc: Status 404 returned error can't find the container with id 1a375bee406cfd253649624b52b7950e7a18097f02eabc645e7c689870e701fc
Oct 14 10:43:08 crc kubenswrapper[4698]: I1014 10:43:08.007648 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest"]
Oct 14 10:43:08 crc kubenswrapper[4698]: I1014 10:43:08.010732 4698 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Oct 14 10:43:08 crc kubenswrapper[4698]: I1014 10:43:08.066088 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"e5a71af4-fdf3-4a49-9ada-2d4836409022","Type":"ContainerStarted","Data":"1a375bee406cfd253649624b52b7950e7a18097f02eabc645e7c689870e701fc"}
Oct 14 10:43:37 crc kubenswrapper[4698]: E1014 10:43:37.107942 4698 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified"
Oct 14 10:43:37 crc kubenswrapper[4698]: E1014 10:43:37.108733 4698 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:tempest-tests-tempest-tests-runner,Image:quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:test-operator-ephemeral-workdir,ReadOnly:false,MountPath:/var/lib/tempest,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-ephemeral-temporary,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/test_operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-logs,ReadOnly:false,MountPath:/var/lib/tempest/external_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config,ReadOnly:true,MountPath:/etc/openstack/clouds.yaml,SubPath:clouds.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config,ReadOnly:true,MountPath:/var/lib/tempest/.config/openstack/clouds.yaml,SubPath:clouds.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config-secret,ReadOnly:false,MountPath:/etc/openstack/secure.yaml,SubPath:secure.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ca-certs,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ssh-key,ReadOnly:false,MountPath:/var/lib/tempest/id_ecdsa,SubPath:ssh_key,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8ml2f,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42480,RunAsNonRoot:*false,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:*true,RunAsGroup:*42480,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{EnvFromSource{Prefix:,ConfigMapRef:&ConfigMapEnvSource{LocalObjectReference:LocalObjectReference{Name:tempest-tests-tempest-custom-data-s0,},Optional:nil,},SecretRef:nil,},EnvFromSource{Prefix:,ConfigMapRef:&ConfigMapEnvSource{LocalObjectReference:LocalObjectReference{Name:tempest-tests-tempest-env-vars-s0,},Optional:nil,},SecretRef:nil,},},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod tempest-tests-tempest_openstack(e5a71af4-fdf3-4a49-9ada-2d4836409022): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Oct 14 10:43:37 crc kubenswrapper[4698]: E1014 10:43:37.110232 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tempest-tests-tempest-tests-runner\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/tempest-tests-tempest" podUID="e5a71af4-fdf3-4a49-9ada-2d4836409022"
Oct 14 10:43:37 crc kubenswrapper[4698]: E1014 10:43:37.355005 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tempest-tests-tempest-tests-runner\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified\\\"\"" pod="openstack/tempest-tests-tempest" podUID="e5a71af4-fdf3-4a49-9ada-2d4836409022"
Oct 14 10:43:41 crc kubenswrapper[4698]: I1014 10:43:41.171605 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-2f4rn"]
Oct 14 10:43:41 crc kubenswrapper[4698]: I1014 10:43:41.174446 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-2f4rn"
Oct 14 10:43:41 crc kubenswrapper[4698]: I1014 10:43:41.195724 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-2f4rn"]
Oct 14 10:43:41 crc kubenswrapper[4698]: I1014 10:43:41.286549 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aa8e7ed0-4155-4886-a5bf-f64a8b9d344b-catalog-content\") pod \"certified-operators-2f4rn\" (UID: \"aa8e7ed0-4155-4886-a5bf-f64a8b9d344b\") " pod="openshift-marketplace/certified-operators-2f4rn"
Oct 14 10:43:41 crc kubenswrapper[4698]: I1014 10:43:41.286621 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z8jh6\" (UniqueName: \"kubernetes.io/projected/aa8e7ed0-4155-4886-a5bf-f64a8b9d344b-kube-api-access-z8jh6\") pod \"certified-operators-2f4rn\" (UID: \"aa8e7ed0-4155-4886-a5bf-f64a8b9d344b\") " pod="openshift-marketplace/certified-operators-2f4rn"
Oct 14 10:43:41 crc kubenswrapper[4698]: I1014 10:43:41.286728 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aa8e7ed0-4155-4886-a5bf-f64a8b9d344b-utilities\") pod \"certified-operators-2f4rn\" (UID: \"aa8e7ed0-4155-4886-a5bf-f64a8b9d344b\") " pod="openshift-marketplace/certified-operators-2f4rn"
Oct 14 10:43:41 crc kubenswrapper[4698]: I1014 10:43:41.388155 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z8jh6\" (UniqueName: \"kubernetes.io/projected/aa8e7ed0-4155-4886-a5bf-f64a8b9d344b-kube-api-access-z8jh6\") pod \"certified-operators-2f4rn\" (UID: \"aa8e7ed0-4155-4886-a5bf-f64a8b9d344b\") " pod="openshift-marketplace/certified-operators-2f4rn"
Oct 14 10:43:41 crc kubenswrapper[4698]: I1014 10:43:41.388300 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aa8e7ed0-4155-4886-a5bf-f64a8b9d344b-utilities\") pod \"certified-operators-2f4rn\" (UID: \"aa8e7ed0-4155-4886-a5bf-f64a8b9d344b\") " pod="openshift-marketplace/certified-operators-2f4rn"
Oct 14 10:43:41 crc kubenswrapper[4698]: I1014 10:43:41.388352 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aa8e7ed0-4155-4886-a5bf-f64a8b9d344b-catalog-content\") pod \"certified-operators-2f4rn\" (UID: \"aa8e7ed0-4155-4886-a5bf-f64a8b9d344b\") " pod="openshift-marketplace/certified-operators-2f4rn"
Oct 14 10:43:41 crc kubenswrapper[4698]: I1014 10:43:41.388929 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aa8e7ed0-4155-4886-a5bf-f64a8b9d344b-catalog-content\") pod \"certified-operators-2f4rn\" (UID: \"aa8e7ed0-4155-4886-a5bf-f64a8b9d344b\") " pod="openshift-marketplace/certified-operators-2f4rn"
Oct 14 10:43:41 crc kubenswrapper[4698]: I1014 10:43:41.389367 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aa8e7ed0-4155-4886-a5bf-f64a8b9d344b-utilities\") pod \"certified-operators-2f4rn\" (UID: \"aa8e7ed0-4155-4886-a5bf-f64a8b9d344b\") " pod="openshift-marketplace/certified-operators-2f4rn"
Oct 14 10:43:41 crc kubenswrapper[4698]: I1014 10:43:41.408753 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z8jh6\" (UniqueName: \"kubernetes.io/projected/aa8e7ed0-4155-4886-a5bf-f64a8b9d344b-kube-api-access-z8jh6\") pod \"certified-operators-2f4rn\" (UID: \"aa8e7ed0-4155-4886-a5bf-f64a8b9d344b\") " pod="openshift-marketplace/certified-operators-2f4rn"
Oct 14 10:43:41 crc kubenswrapper[4698]: I1014 10:43:41.506936 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-2f4rn"
Oct 14 10:43:42 crc kubenswrapper[4698]: I1014 10:43:42.052894 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-2f4rn"]
Oct 14 10:43:42 crc kubenswrapper[4698]: I1014 10:43:42.394648 4698 generic.go:334] "Generic (PLEG): container finished" podID="aa8e7ed0-4155-4886-a5bf-f64a8b9d344b" containerID="c77b85f4e7d99ec1ed2f41364027522f63be0d68f0edba94569601f1effd5639" exitCode=0
Oct 14 10:43:42 crc kubenswrapper[4698]: I1014 10:43:42.394950 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2f4rn" event={"ID":"aa8e7ed0-4155-4886-a5bf-f64a8b9d344b","Type":"ContainerDied","Data":"c77b85f4e7d99ec1ed2f41364027522f63be0d68f0edba94569601f1effd5639"}
Oct 14 10:43:42 crc kubenswrapper[4698]: I1014 10:43:42.396217 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2f4rn" event={"ID":"aa8e7ed0-4155-4886-a5bf-f64a8b9d344b","Type":"ContainerStarted","Data":"896af201828e239c8e13057e867786a7609ab7f661d074cce87734c17c64ce38"}
Oct 14 10:43:44 crc kubenswrapper[4698]: I1014 10:43:44.415328 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2f4rn" event={"ID":"aa8e7ed0-4155-4886-a5bf-f64a8b9d344b","Type":"ContainerStarted","Data":"32f802d84a1e9df1e17703c1336893e8be60c01f387dde301dbc85b6a002bcca"}
Oct 14 10:43:45 crc kubenswrapper[4698]: I1014 10:43:45.427386 4698 generic.go:334] "Generic (PLEG): container finished" podID="aa8e7ed0-4155-4886-a5bf-f64a8b9d344b" containerID="32f802d84a1e9df1e17703c1336893e8be60c01f387dde301dbc85b6a002bcca" exitCode=0
Oct 14 10:43:45 crc kubenswrapper[4698]: I1014 10:43:45.427498 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2f4rn" event={"ID":"aa8e7ed0-4155-4886-a5bf-f64a8b9d344b","Type":"ContainerDied","Data":"32f802d84a1e9df1e17703c1336893e8be60c01f387dde301dbc85b6a002bcca"}
Oct 14 10:43:46 crc kubenswrapper[4698]: I1014 10:43:46.438948 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2f4rn" event={"ID":"aa8e7ed0-4155-4886-a5bf-f64a8b9d344b","Type":"ContainerStarted","Data":"baa20c9f34f61cc6463b9cc884809eb2cd1fb2c3f95a4f948692203fdeac79f4"}
Oct 14 10:43:46 crc kubenswrapper[4698]: I1014 10:43:46.463469 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-2f4rn" podStartSLOduration=1.810943268 podStartE2EDuration="5.463449609s" podCreationTimestamp="2025-10-14 10:43:41 +0000 UTC" firstStartedPulling="2025-10-14 10:43:42.396857337 +0000 UTC m=+2804.094156753" lastFinishedPulling="2025-10-14 10:43:46.049363678 +0000 UTC m=+2807.746663094" observedRunningTime="2025-10-14 10:43:46.457550094 +0000 UTC m=+2808.154849530" watchObservedRunningTime="2025-10-14 10:43:46.463449609 +0000 UTC m=+2808.160749025"
Oct 14 10:43:51 crc kubenswrapper[4698]: I1014 10:43:51.508127 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-2f4rn"
Oct 14 10:43:51 crc kubenswrapper[4698]: I1014 10:43:51.508398 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-2f4rn"
Oct 14 10:43:51 crc kubenswrapper[4698]: I1014 10:43:51.565338 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-2f4rn"
Oct 14 10:43:52 crc kubenswrapper[4698]: I1014 10:43:52.501195 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"e5a71af4-fdf3-4a49-9ada-2d4836409022","Type":"ContainerStarted","Data":"4e5bec56a9447a709e1ddb26fd106cba4e0c71a850fff4cee80848beff0c4b43"}
Oct 14 10:43:52 crc kubenswrapper[4698]: I1014 10:43:52.529209 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/tempest-tests-tempest" podStartSLOduration=3.83545733 podStartE2EDuration="46.529188591s" podCreationTimestamp="2025-10-14 10:43:06 +0000 UTC" firstStartedPulling="2025-10-14 10:43:08.010327651 +0000 UTC m=+2769.707627117" lastFinishedPulling="2025-10-14 10:43:50.704058952 +0000 UTC m=+2812.401358378" observedRunningTime="2025-10-14 10:43:52.525498007 +0000 UTC m=+2814.222797433" watchObservedRunningTime="2025-10-14 10:43:52.529188591 +0000 UTC m=+2814.226488017"
Oct 14 10:43:52 crc kubenswrapper[4698]: I1014 10:43:52.559652 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-2f4rn"
Oct 14 10:43:52 crc kubenswrapper[4698]: I1014 10:43:52.605740 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-2f4rn"]
Oct 14 10:43:54 crc kubenswrapper[4698]: I1014 10:43:54.544110 4698 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-2f4rn" podUID="aa8e7ed0-4155-4886-a5bf-f64a8b9d344b" containerName="registry-server" containerID="cri-o://baa20c9f34f61cc6463b9cc884809eb2cd1fb2c3f95a4f948692203fdeac79f4" gracePeriod=2
Oct 14 10:43:54 crc kubenswrapper[4698]: E1014 10:43:54.780110 4698 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podaa8e7ed0_4155_4886_a5bf_f64a8b9d344b.slice/crio-baa20c9f34f61cc6463b9cc884809eb2cd1fb2c3f95a4f948692203fdeac79f4.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podaa8e7ed0_4155_4886_a5bf_f64a8b9d344b.slice/crio-conmon-baa20c9f34f61cc6463b9cc884809eb2cd1fb2c3f95a4f948692203fdeac79f4.scope\": RecentStats: unable to find data in memory cache]"
Oct 14 10:43:55 crc kubenswrapper[4698]: I1014 10:43:55.087169 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-2f4rn"
Oct 14 10:43:55 crc kubenswrapper[4698]: I1014 10:43:55.215856 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z8jh6\" (UniqueName: \"kubernetes.io/projected/aa8e7ed0-4155-4886-a5bf-f64a8b9d344b-kube-api-access-z8jh6\") pod \"aa8e7ed0-4155-4886-a5bf-f64a8b9d344b\" (UID: \"aa8e7ed0-4155-4886-a5bf-f64a8b9d344b\") "
Oct 14 10:43:55 crc kubenswrapper[4698]: I1014 10:43:55.215948 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aa8e7ed0-4155-4886-a5bf-f64a8b9d344b-utilities\") pod \"aa8e7ed0-4155-4886-a5bf-f64a8b9d344b\" (UID: \"aa8e7ed0-4155-4886-a5bf-f64a8b9d344b\") "
Oct 14 10:43:55 crc kubenswrapper[4698]: I1014 10:43:55.216109 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aa8e7ed0-4155-4886-a5bf-f64a8b9d344b-catalog-content\") pod \"aa8e7ed0-4155-4886-a5bf-f64a8b9d344b\" (UID: \"aa8e7ed0-4155-4886-a5bf-f64a8b9d344b\") "
Oct 14 10:43:55 crc kubenswrapper[4698]: I1014 10:43:55.217035 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/aa8e7ed0-4155-4886-a5bf-f64a8b9d344b-utilities" (OuterVolumeSpecName: "utilities") pod "aa8e7ed0-4155-4886-a5bf-f64a8b9d344b" (UID: "aa8e7ed0-4155-4886-a5bf-f64a8b9d344b"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Oct 14 10:43:55 crc kubenswrapper[4698]: I1014 10:43:55.222856 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aa8e7ed0-4155-4886-a5bf-f64a8b9d344b-kube-api-access-z8jh6" (OuterVolumeSpecName: "kube-api-access-z8jh6") pod "aa8e7ed0-4155-4886-a5bf-f64a8b9d344b" (UID: "aa8e7ed0-4155-4886-a5bf-f64a8b9d344b"). InnerVolumeSpecName "kube-api-access-z8jh6". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 14 10:43:55 crc kubenswrapper[4698]: I1014 10:43:55.263025 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/aa8e7ed0-4155-4886-a5bf-f64a8b9d344b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "aa8e7ed0-4155-4886-a5bf-f64a8b9d344b" (UID: "aa8e7ed0-4155-4886-a5bf-f64a8b9d344b"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Oct 14 10:43:55 crc kubenswrapper[4698]: I1014 10:43:55.319278 4698 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aa8e7ed0-4155-4886-a5bf-f64a8b9d344b-catalog-content\") on node \"crc\" DevicePath \"\""
Oct 14 10:43:55 crc kubenswrapper[4698]: I1014 10:43:55.319364 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z8jh6\" (UniqueName: \"kubernetes.io/projected/aa8e7ed0-4155-4886-a5bf-f64a8b9d344b-kube-api-access-z8jh6\") on node \"crc\" DevicePath \"\""
Oct 14 10:43:55 crc kubenswrapper[4698]: I1014 10:43:55.319384 4698 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aa8e7ed0-4155-4886-a5bf-f64a8b9d344b-utilities\") on node \"crc\" DevicePath \"\""
Oct 14 10:43:55 crc kubenswrapper[4698]: I1014 10:43:55.553286 4698 generic.go:334] "Generic (PLEG): container finished" podID="aa8e7ed0-4155-4886-a5bf-f64a8b9d344b" 
containerID="baa20c9f34f61cc6463b9cc884809eb2cd1fb2c3f95a4f948692203fdeac79f4" exitCode=0 Oct 14 10:43:55 crc kubenswrapper[4698]: I1014 10:43:55.553367 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-2f4rn" Oct 14 10:43:55 crc kubenswrapper[4698]: I1014 10:43:55.553364 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2f4rn" event={"ID":"aa8e7ed0-4155-4886-a5bf-f64a8b9d344b","Type":"ContainerDied","Data":"baa20c9f34f61cc6463b9cc884809eb2cd1fb2c3f95a4f948692203fdeac79f4"} Oct 14 10:43:55 crc kubenswrapper[4698]: I1014 10:43:55.553474 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2f4rn" event={"ID":"aa8e7ed0-4155-4886-a5bf-f64a8b9d344b","Type":"ContainerDied","Data":"896af201828e239c8e13057e867786a7609ab7f661d074cce87734c17c64ce38"} Oct 14 10:43:55 crc kubenswrapper[4698]: I1014 10:43:55.553539 4698 scope.go:117] "RemoveContainer" containerID="baa20c9f34f61cc6463b9cc884809eb2cd1fb2c3f95a4f948692203fdeac79f4" Oct 14 10:43:55 crc kubenswrapper[4698]: I1014 10:43:55.596574 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-2f4rn"] Oct 14 10:43:55 crc kubenswrapper[4698]: I1014 10:43:55.605975 4698 scope.go:117] "RemoveContainer" containerID="32f802d84a1e9df1e17703c1336893e8be60c01f387dde301dbc85b6a002bcca" Oct 14 10:43:55 crc kubenswrapper[4698]: I1014 10:43:55.606462 4698 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-2f4rn"] Oct 14 10:43:55 crc kubenswrapper[4698]: I1014 10:43:55.637503 4698 scope.go:117] "RemoveContainer" containerID="c77b85f4e7d99ec1ed2f41364027522f63be0d68f0edba94569601f1effd5639" Oct 14 10:43:55 crc kubenswrapper[4698]: I1014 10:43:55.703889 4698 scope.go:117] "RemoveContainer" containerID="baa20c9f34f61cc6463b9cc884809eb2cd1fb2c3f95a4f948692203fdeac79f4" Oct 14 
10:43:55 crc kubenswrapper[4698]: E1014 10:43:55.704425 4698 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"baa20c9f34f61cc6463b9cc884809eb2cd1fb2c3f95a4f948692203fdeac79f4\": container with ID starting with baa20c9f34f61cc6463b9cc884809eb2cd1fb2c3f95a4f948692203fdeac79f4 not found: ID does not exist" containerID="baa20c9f34f61cc6463b9cc884809eb2cd1fb2c3f95a4f948692203fdeac79f4" Oct 14 10:43:55 crc kubenswrapper[4698]: I1014 10:43:55.704458 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"baa20c9f34f61cc6463b9cc884809eb2cd1fb2c3f95a4f948692203fdeac79f4"} err="failed to get container status \"baa20c9f34f61cc6463b9cc884809eb2cd1fb2c3f95a4f948692203fdeac79f4\": rpc error: code = NotFound desc = could not find container \"baa20c9f34f61cc6463b9cc884809eb2cd1fb2c3f95a4f948692203fdeac79f4\": container with ID starting with baa20c9f34f61cc6463b9cc884809eb2cd1fb2c3f95a4f948692203fdeac79f4 not found: ID does not exist" Oct 14 10:43:55 crc kubenswrapper[4698]: I1014 10:43:55.704480 4698 scope.go:117] "RemoveContainer" containerID="32f802d84a1e9df1e17703c1336893e8be60c01f387dde301dbc85b6a002bcca" Oct 14 10:43:55 crc kubenswrapper[4698]: E1014 10:43:55.704855 4698 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"32f802d84a1e9df1e17703c1336893e8be60c01f387dde301dbc85b6a002bcca\": container with ID starting with 32f802d84a1e9df1e17703c1336893e8be60c01f387dde301dbc85b6a002bcca not found: ID does not exist" containerID="32f802d84a1e9df1e17703c1336893e8be60c01f387dde301dbc85b6a002bcca" Oct 14 10:43:55 crc kubenswrapper[4698]: I1014 10:43:55.704878 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"32f802d84a1e9df1e17703c1336893e8be60c01f387dde301dbc85b6a002bcca"} err="failed to get container status 
\"32f802d84a1e9df1e17703c1336893e8be60c01f387dde301dbc85b6a002bcca\": rpc error: code = NotFound desc = could not find container \"32f802d84a1e9df1e17703c1336893e8be60c01f387dde301dbc85b6a002bcca\": container with ID starting with 32f802d84a1e9df1e17703c1336893e8be60c01f387dde301dbc85b6a002bcca not found: ID does not exist" Oct 14 10:43:55 crc kubenswrapper[4698]: I1014 10:43:55.704893 4698 scope.go:117] "RemoveContainer" containerID="c77b85f4e7d99ec1ed2f41364027522f63be0d68f0edba94569601f1effd5639" Oct 14 10:43:55 crc kubenswrapper[4698]: E1014 10:43:55.705143 4698 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c77b85f4e7d99ec1ed2f41364027522f63be0d68f0edba94569601f1effd5639\": container with ID starting with c77b85f4e7d99ec1ed2f41364027522f63be0d68f0edba94569601f1effd5639 not found: ID does not exist" containerID="c77b85f4e7d99ec1ed2f41364027522f63be0d68f0edba94569601f1effd5639" Oct 14 10:43:55 crc kubenswrapper[4698]: I1014 10:43:55.705162 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c77b85f4e7d99ec1ed2f41364027522f63be0d68f0edba94569601f1effd5639"} err="failed to get container status \"c77b85f4e7d99ec1ed2f41364027522f63be0d68f0edba94569601f1effd5639\": rpc error: code = NotFound desc = could not find container \"c77b85f4e7d99ec1ed2f41364027522f63be0d68f0edba94569601f1effd5639\": container with ID starting with c77b85f4e7d99ec1ed2f41364027522f63be0d68f0edba94569601f1effd5639 not found: ID does not exist" Oct 14 10:43:57 crc kubenswrapper[4698]: I1014 10:43:57.038883 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="aa8e7ed0-4155-4886-a5bf-f64a8b9d344b" path="/var/lib/kubelet/pods/aa8e7ed0-4155-4886-a5bf-f64a8b9d344b/volumes" Oct 14 10:44:23 crc kubenswrapper[4698]: I1014 10:44:23.907787 4698 patch_prober.go:28] interesting pod/machine-config-daemon-lp4sk container/machine-config-daemon 
namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 14 10:44:23 crc kubenswrapper[4698]: I1014 10:44:23.908346 4698 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-lp4sk" podUID="c359a8fc-1e2f-49af-8da2-719d52bd969a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 14 10:44:53 crc kubenswrapper[4698]: I1014 10:44:53.908165 4698 patch_prober.go:28] interesting pod/machine-config-daemon-lp4sk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 14 10:44:53 crc kubenswrapper[4698]: I1014 10:44:53.908732 4698 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-lp4sk" podUID="c359a8fc-1e2f-49af-8da2-719d52bd969a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 14 10:45:00 crc kubenswrapper[4698]: I1014 10:45:00.146240 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29340645-zzj46"] Oct 14 10:45:00 crc kubenswrapper[4698]: E1014 10:45:00.147572 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aa8e7ed0-4155-4886-a5bf-f64a8b9d344b" containerName="extract-content" Oct 14 10:45:00 crc kubenswrapper[4698]: I1014 10:45:00.147590 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="aa8e7ed0-4155-4886-a5bf-f64a8b9d344b" containerName="extract-content" Oct 14 10:45:00 crc kubenswrapper[4698]: E1014 10:45:00.147602 4698 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aa8e7ed0-4155-4886-a5bf-f64a8b9d344b" containerName="extract-utilities" Oct 14 10:45:00 crc kubenswrapper[4698]: I1014 10:45:00.147608 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="aa8e7ed0-4155-4886-a5bf-f64a8b9d344b" containerName="extract-utilities" Oct 14 10:45:00 crc kubenswrapper[4698]: E1014 10:45:00.147618 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aa8e7ed0-4155-4886-a5bf-f64a8b9d344b" containerName="registry-server" Oct 14 10:45:00 crc kubenswrapper[4698]: I1014 10:45:00.147623 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="aa8e7ed0-4155-4886-a5bf-f64a8b9d344b" containerName="registry-server" Oct 14 10:45:00 crc kubenswrapper[4698]: I1014 10:45:00.147906 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="aa8e7ed0-4155-4886-a5bf-f64a8b9d344b" containerName="registry-server" Oct 14 10:45:00 crc kubenswrapper[4698]: I1014 10:45:00.148831 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29340645-zzj46" Oct 14 10:45:00 crc kubenswrapper[4698]: I1014 10:45:00.151032 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Oct 14 10:45:00 crc kubenswrapper[4698]: I1014 10:45:00.151262 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Oct 14 10:45:00 crc kubenswrapper[4698]: I1014 10:45:00.160536 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29340645-zzj46"] Oct 14 10:45:00 crc kubenswrapper[4698]: I1014 10:45:00.319267 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-frpzd\" (UniqueName: \"kubernetes.io/projected/7437e4cf-c81c-4ab5-a563-29517a0b5475-kube-api-access-frpzd\") pod \"collect-profiles-29340645-zzj46\" (UID: \"7437e4cf-c81c-4ab5-a563-29517a0b5475\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29340645-zzj46" Oct 14 10:45:00 crc kubenswrapper[4698]: I1014 10:45:00.319849 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7437e4cf-c81c-4ab5-a563-29517a0b5475-config-volume\") pod \"collect-profiles-29340645-zzj46\" (UID: \"7437e4cf-c81c-4ab5-a563-29517a0b5475\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29340645-zzj46" Oct 14 10:45:00 crc kubenswrapper[4698]: I1014 10:45:00.319905 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7437e4cf-c81c-4ab5-a563-29517a0b5475-secret-volume\") pod \"collect-profiles-29340645-zzj46\" (UID: \"7437e4cf-c81c-4ab5-a563-29517a0b5475\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29340645-zzj46" Oct 14 10:45:00 crc kubenswrapper[4698]: I1014 10:45:00.422228 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7437e4cf-c81c-4ab5-a563-29517a0b5475-config-volume\") pod \"collect-profiles-29340645-zzj46\" (UID: \"7437e4cf-c81c-4ab5-a563-29517a0b5475\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29340645-zzj46" Oct 14 10:45:00 crc kubenswrapper[4698]: I1014 10:45:00.422280 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7437e4cf-c81c-4ab5-a563-29517a0b5475-secret-volume\") pod \"collect-profiles-29340645-zzj46\" (UID: \"7437e4cf-c81c-4ab5-a563-29517a0b5475\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29340645-zzj46" Oct 14 10:45:00 crc kubenswrapper[4698]: I1014 10:45:00.422321 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-frpzd\" (UniqueName: \"kubernetes.io/projected/7437e4cf-c81c-4ab5-a563-29517a0b5475-kube-api-access-frpzd\") pod \"collect-profiles-29340645-zzj46\" (UID: \"7437e4cf-c81c-4ab5-a563-29517a0b5475\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29340645-zzj46" Oct 14 10:45:00 crc kubenswrapper[4698]: I1014 10:45:00.423515 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7437e4cf-c81c-4ab5-a563-29517a0b5475-config-volume\") pod \"collect-profiles-29340645-zzj46\" (UID: \"7437e4cf-c81c-4ab5-a563-29517a0b5475\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29340645-zzj46" Oct 14 10:45:00 crc kubenswrapper[4698]: I1014 10:45:00.437079 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/7437e4cf-c81c-4ab5-a563-29517a0b5475-secret-volume\") pod \"collect-profiles-29340645-zzj46\" (UID: \"7437e4cf-c81c-4ab5-a563-29517a0b5475\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29340645-zzj46" Oct 14 10:45:00 crc kubenswrapper[4698]: I1014 10:45:00.440896 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-frpzd\" (UniqueName: \"kubernetes.io/projected/7437e4cf-c81c-4ab5-a563-29517a0b5475-kube-api-access-frpzd\") pod \"collect-profiles-29340645-zzj46\" (UID: \"7437e4cf-c81c-4ab5-a563-29517a0b5475\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29340645-zzj46" Oct 14 10:45:00 crc kubenswrapper[4698]: I1014 10:45:00.485835 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29340645-zzj46" Oct 14 10:45:01 crc kubenswrapper[4698]: I1014 10:45:01.119104 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29340645-zzj46"] Oct 14 10:45:01 crc kubenswrapper[4698]: I1014 10:45:01.184642 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29340645-zzj46" event={"ID":"7437e4cf-c81c-4ab5-a563-29517a0b5475","Type":"ContainerStarted","Data":"10371c09cdc1d64d8643ab9aec04bfcb0ccd56df569e48d10320e6417eab9910"} Oct 14 10:45:02 crc kubenswrapper[4698]: I1014 10:45:02.194894 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29340645-zzj46" event={"ID":"7437e4cf-c81c-4ab5-a563-29517a0b5475","Type":"ContainerStarted","Data":"e61a4c15551adc9569c1d3dc2e5cf962f32efe0841c88ce6e4ce9237b216e6a0"} Oct 14 10:45:02 crc kubenswrapper[4698]: I1014 10:45:02.216785 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29340645-zzj46" 
podStartSLOduration=2.21675397 podStartE2EDuration="2.21675397s" podCreationTimestamp="2025-10-14 10:45:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 10:45:02.214523338 +0000 UTC m=+2883.911822764" watchObservedRunningTime="2025-10-14 10:45:02.21675397 +0000 UTC m=+2883.914053386" Oct 14 10:45:03 crc kubenswrapper[4698]: I1014 10:45:03.227454 4698 generic.go:334] "Generic (PLEG): container finished" podID="7437e4cf-c81c-4ab5-a563-29517a0b5475" containerID="e61a4c15551adc9569c1d3dc2e5cf962f32efe0841c88ce6e4ce9237b216e6a0" exitCode=0 Oct 14 10:45:03 crc kubenswrapper[4698]: I1014 10:45:03.227794 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29340645-zzj46" event={"ID":"7437e4cf-c81c-4ab5-a563-29517a0b5475","Type":"ContainerDied","Data":"e61a4c15551adc9569c1d3dc2e5cf962f32efe0841c88ce6e4ce9237b216e6a0"} Oct 14 10:45:04 crc kubenswrapper[4698]: I1014 10:45:04.811954 4698 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29340645-zzj46" Oct 14 10:45:04 crc kubenswrapper[4698]: I1014 10:45:04.909219 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7437e4cf-c81c-4ab5-a563-29517a0b5475-config-volume\") pod \"7437e4cf-c81c-4ab5-a563-29517a0b5475\" (UID: \"7437e4cf-c81c-4ab5-a563-29517a0b5475\") " Oct 14 10:45:04 crc kubenswrapper[4698]: I1014 10:45:04.909355 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7437e4cf-c81c-4ab5-a563-29517a0b5475-secret-volume\") pod \"7437e4cf-c81c-4ab5-a563-29517a0b5475\" (UID: \"7437e4cf-c81c-4ab5-a563-29517a0b5475\") " Oct 14 10:45:04 crc kubenswrapper[4698]: I1014 10:45:04.909532 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-frpzd\" (UniqueName: \"kubernetes.io/projected/7437e4cf-c81c-4ab5-a563-29517a0b5475-kube-api-access-frpzd\") pod \"7437e4cf-c81c-4ab5-a563-29517a0b5475\" (UID: \"7437e4cf-c81c-4ab5-a563-29517a0b5475\") " Oct 14 10:45:04 crc kubenswrapper[4698]: I1014 10:45:04.910944 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7437e4cf-c81c-4ab5-a563-29517a0b5475-config-volume" (OuterVolumeSpecName: "config-volume") pod "7437e4cf-c81c-4ab5-a563-29517a0b5475" (UID: "7437e4cf-c81c-4ab5-a563-29517a0b5475"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 10:45:04 crc kubenswrapper[4698]: I1014 10:45:04.937627 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7437e4cf-c81c-4ab5-a563-29517a0b5475-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "7437e4cf-c81c-4ab5-a563-29517a0b5475" (UID: "7437e4cf-c81c-4ab5-a563-29517a0b5475"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 10:45:04 crc kubenswrapper[4698]: I1014 10:45:04.944515 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7437e4cf-c81c-4ab5-a563-29517a0b5475-kube-api-access-frpzd" (OuterVolumeSpecName: "kube-api-access-frpzd") pod "7437e4cf-c81c-4ab5-a563-29517a0b5475" (UID: "7437e4cf-c81c-4ab5-a563-29517a0b5475"). InnerVolumeSpecName "kube-api-access-frpzd". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 10:45:05 crc kubenswrapper[4698]: I1014 10:45:05.011891 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-frpzd\" (UniqueName: \"kubernetes.io/projected/7437e4cf-c81c-4ab5-a563-29517a0b5475-kube-api-access-frpzd\") on node \"crc\" DevicePath \"\"" Oct 14 10:45:05 crc kubenswrapper[4698]: I1014 10:45:05.011939 4698 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7437e4cf-c81c-4ab5-a563-29517a0b5475-config-volume\") on node \"crc\" DevicePath \"\"" Oct 14 10:45:05 crc kubenswrapper[4698]: I1014 10:45:05.011952 4698 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7437e4cf-c81c-4ab5-a563-29517a0b5475-secret-volume\") on node \"crc\" DevicePath \"\"" Oct 14 10:45:05 crc kubenswrapper[4698]: I1014 10:45:05.248263 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29340645-zzj46" event={"ID":"7437e4cf-c81c-4ab5-a563-29517a0b5475","Type":"ContainerDied","Data":"10371c09cdc1d64d8643ab9aec04bfcb0ccd56df569e48d10320e6417eab9910"} Oct 14 10:45:05 crc kubenswrapper[4698]: I1014 10:45:05.248305 4698 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="10371c09cdc1d64d8643ab9aec04bfcb0ccd56df569e48d10320e6417eab9910" Oct 14 10:45:05 crc kubenswrapper[4698]: I1014 10:45:05.248357 4698 util.go:48] "No ready sandbox for pod can be 
found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29340645-zzj46" Oct 14 10:45:05 crc kubenswrapper[4698]: I1014 10:45:05.288968 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29340600-6tccj"] Oct 14 10:45:05 crc kubenswrapper[4698]: I1014 10:45:05.297346 4698 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29340600-6tccj"] Oct 14 10:45:07 crc kubenswrapper[4698]: I1014 10:45:07.032199 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="121bbec2-1aed-4e03-b35c-1c93b5dbddd2" path="/var/lib/kubelet/pods/121bbec2-1aed-4e03-b35c-1c93b5dbddd2/volumes" Oct 14 10:45:11 crc kubenswrapper[4698]: I1014 10:45:11.541734 4698 scope.go:117] "RemoveContainer" containerID="41d54af16289f8cafe6cb08a4797c7a589689d8568aeb4fbd04d8a1505495907" Oct 14 10:45:23 crc kubenswrapper[4698]: I1014 10:45:23.050176 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-gf462"] Oct 14 10:45:23 crc kubenswrapper[4698]: E1014 10:45:23.051129 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7437e4cf-c81c-4ab5-a563-29517a0b5475" containerName="collect-profiles" Oct 14 10:45:23 crc kubenswrapper[4698]: I1014 10:45:23.051142 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="7437e4cf-c81c-4ab5-a563-29517a0b5475" containerName="collect-profiles" Oct 14 10:45:23 crc kubenswrapper[4698]: I1014 10:45:23.051331 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="7437e4cf-c81c-4ab5-a563-29517a0b5475" containerName="collect-profiles" Oct 14 10:45:23 crc kubenswrapper[4698]: I1014 10:45:23.052707 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-gf462" Oct 14 10:45:23 crc kubenswrapper[4698]: I1014 10:45:23.078335 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-gf462"] Oct 14 10:45:23 crc kubenswrapper[4698]: I1014 10:45:23.221968 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c0ab6f3c-a862-42bd-853e-b9b54dbeb7eb-utilities\") pod \"redhat-operators-gf462\" (UID: \"c0ab6f3c-a862-42bd-853e-b9b54dbeb7eb\") " pod="openshift-marketplace/redhat-operators-gf462" Oct 14 10:45:23 crc kubenswrapper[4698]: I1014 10:45:23.222405 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nrwdl\" (UniqueName: \"kubernetes.io/projected/c0ab6f3c-a862-42bd-853e-b9b54dbeb7eb-kube-api-access-nrwdl\") pod \"redhat-operators-gf462\" (UID: \"c0ab6f3c-a862-42bd-853e-b9b54dbeb7eb\") " pod="openshift-marketplace/redhat-operators-gf462" Oct 14 10:45:23 crc kubenswrapper[4698]: I1014 10:45:23.222557 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c0ab6f3c-a862-42bd-853e-b9b54dbeb7eb-catalog-content\") pod \"redhat-operators-gf462\" (UID: \"c0ab6f3c-a862-42bd-853e-b9b54dbeb7eb\") " pod="openshift-marketplace/redhat-operators-gf462" Oct 14 10:45:23 crc kubenswrapper[4698]: I1014 10:45:23.325324 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c0ab6f3c-a862-42bd-853e-b9b54dbeb7eb-utilities\") pod \"redhat-operators-gf462\" (UID: \"c0ab6f3c-a862-42bd-853e-b9b54dbeb7eb\") " pod="openshift-marketplace/redhat-operators-gf462" Oct 14 10:45:23 crc kubenswrapper[4698]: I1014 10:45:23.325662 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-nrwdl\" (UniqueName: \"kubernetes.io/projected/c0ab6f3c-a862-42bd-853e-b9b54dbeb7eb-kube-api-access-nrwdl\") pod \"redhat-operators-gf462\" (UID: \"c0ab6f3c-a862-42bd-853e-b9b54dbeb7eb\") " pod="openshift-marketplace/redhat-operators-gf462" Oct 14 10:45:23 crc kubenswrapper[4698]: I1014 10:45:23.325877 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c0ab6f3c-a862-42bd-853e-b9b54dbeb7eb-catalog-content\") pod \"redhat-operators-gf462\" (UID: \"c0ab6f3c-a862-42bd-853e-b9b54dbeb7eb\") " pod="openshift-marketplace/redhat-operators-gf462" Oct 14 10:45:23 crc kubenswrapper[4698]: I1014 10:45:23.326001 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c0ab6f3c-a862-42bd-853e-b9b54dbeb7eb-utilities\") pod \"redhat-operators-gf462\" (UID: \"c0ab6f3c-a862-42bd-853e-b9b54dbeb7eb\") " pod="openshift-marketplace/redhat-operators-gf462" Oct 14 10:45:23 crc kubenswrapper[4698]: I1014 10:45:23.326410 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c0ab6f3c-a862-42bd-853e-b9b54dbeb7eb-catalog-content\") pod \"redhat-operators-gf462\" (UID: \"c0ab6f3c-a862-42bd-853e-b9b54dbeb7eb\") " pod="openshift-marketplace/redhat-operators-gf462" Oct 14 10:45:23 crc kubenswrapper[4698]: I1014 10:45:23.344989 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nrwdl\" (UniqueName: \"kubernetes.io/projected/c0ab6f3c-a862-42bd-853e-b9b54dbeb7eb-kube-api-access-nrwdl\") pod \"redhat-operators-gf462\" (UID: \"c0ab6f3c-a862-42bd-853e-b9b54dbeb7eb\") " pod="openshift-marketplace/redhat-operators-gf462" Oct 14 10:45:23 crc kubenswrapper[4698]: I1014 10:45:23.391146 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-gf462" Oct 14 10:45:23 crc kubenswrapper[4698]: I1014 10:45:23.898319 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-gf462"] Oct 14 10:45:23 crc kubenswrapper[4698]: I1014 10:45:23.908479 4698 patch_prober.go:28] interesting pod/machine-config-daemon-lp4sk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 14 10:45:23 crc kubenswrapper[4698]: I1014 10:45:23.908515 4698 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-lp4sk" podUID="c359a8fc-1e2f-49af-8da2-719d52bd969a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 14 10:45:23 crc kubenswrapper[4698]: I1014 10:45:23.908550 4698 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-lp4sk" Oct 14 10:45:23 crc kubenswrapper[4698]: I1014 10:45:23.909155 4698 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"e610d29adee37c5fe425db4b50b070ff822000d8418cacd4862506db9590760b"} pod="openshift-machine-config-operator/machine-config-daemon-lp4sk" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Oct 14 10:45:23 crc kubenswrapper[4698]: I1014 10:45:23.909210 4698 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-lp4sk" podUID="c359a8fc-1e2f-49af-8da2-719d52bd969a" containerName="machine-config-daemon" 
containerID="cri-o://e610d29adee37c5fe425db4b50b070ff822000d8418cacd4862506db9590760b" gracePeriod=600 Oct 14 10:45:24 crc kubenswrapper[4698]: I1014 10:45:24.406385 4698 generic.go:334] "Generic (PLEG): container finished" podID="c0ab6f3c-a862-42bd-853e-b9b54dbeb7eb" containerID="6252c9b08f85cbbe05b5dfa1f785616abddbcf92fff73de0b8c02696aa08ea86" exitCode=0 Oct 14 10:45:24 crc kubenswrapper[4698]: I1014 10:45:24.406442 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gf462" event={"ID":"c0ab6f3c-a862-42bd-853e-b9b54dbeb7eb","Type":"ContainerDied","Data":"6252c9b08f85cbbe05b5dfa1f785616abddbcf92fff73de0b8c02696aa08ea86"} Oct 14 10:45:24 crc kubenswrapper[4698]: I1014 10:45:24.406721 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gf462" event={"ID":"c0ab6f3c-a862-42bd-853e-b9b54dbeb7eb","Type":"ContainerStarted","Data":"dad045ffde4c3906bf23d7c165383d235698df385c931ae9a9620efb730fc6e7"} Oct 14 10:45:24 crc kubenswrapper[4698]: I1014 10:45:24.410741 4698 generic.go:334] "Generic (PLEG): container finished" podID="c359a8fc-1e2f-49af-8da2-719d52bd969a" containerID="e610d29adee37c5fe425db4b50b070ff822000d8418cacd4862506db9590760b" exitCode=0 Oct 14 10:45:24 crc kubenswrapper[4698]: I1014 10:45:24.410796 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-lp4sk" event={"ID":"c359a8fc-1e2f-49af-8da2-719d52bd969a","Type":"ContainerDied","Data":"e610d29adee37c5fe425db4b50b070ff822000d8418cacd4862506db9590760b"} Oct 14 10:45:24 crc kubenswrapper[4698]: I1014 10:45:24.410867 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-lp4sk" event={"ID":"c359a8fc-1e2f-49af-8da2-719d52bd969a","Type":"ContainerStarted","Data":"8a09d38c6e7b7c3a7641f91ee186188e4b178c8f5f54d6a3e46b0f3211abaf7c"} Oct 14 10:45:24 crc kubenswrapper[4698]: I1014 10:45:24.410901 4698 
scope.go:117] "RemoveContainer" containerID="713fd828a91f058b8f8f5d1a0505fb27eb15167a5507115b649af61ccd244ca8" Oct 14 10:45:25 crc kubenswrapper[4698]: I1014 10:45:25.424703 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gf462" event={"ID":"c0ab6f3c-a862-42bd-853e-b9b54dbeb7eb","Type":"ContainerStarted","Data":"a0b88fbf3ea34beae9c2c06e867c199bc88f3d6129724d117a08ab4ab461bd7e"} Oct 14 10:45:29 crc kubenswrapper[4698]: I1014 10:45:29.463368 4698 generic.go:334] "Generic (PLEG): container finished" podID="c0ab6f3c-a862-42bd-853e-b9b54dbeb7eb" containerID="a0b88fbf3ea34beae9c2c06e867c199bc88f3d6129724d117a08ab4ab461bd7e" exitCode=0 Oct 14 10:45:29 crc kubenswrapper[4698]: I1014 10:45:29.463431 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gf462" event={"ID":"c0ab6f3c-a862-42bd-853e-b9b54dbeb7eb","Type":"ContainerDied","Data":"a0b88fbf3ea34beae9c2c06e867c199bc88f3d6129724d117a08ab4ab461bd7e"} Oct 14 10:45:30 crc kubenswrapper[4698]: I1014 10:45:30.474391 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gf462" event={"ID":"c0ab6f3c-a862-42bd-853e-b9b54dbeb7eb","Type":"ContainerStarted","Data":"3787df921bfe345598ae568dde2dd925205ea3e90bf1cbf59707dd691fcb9bc1"} Oct 14 10:45:30 crc kubenswrapper[4698]: I1014 10:45:30.499888 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-gf462" podStartSLOduration=2.026871661 podStartE2EDuration="7.499871391s" podCreationTimestamp="2025-10-14 10:45:23 +0000 UTC" firstStartedPulling="2025-10-14 10:45:24.408481191 +0000 UTC m=+2906.105780607" lastFinishedPulling="2025-10-14 10:45:29.881480921 +0000 UTC m=+2911.578780337" observedRunningTime="2025-10-14 10:45:30.4959258 +0000 UTC m=+2912.193225216" watchObservedRunningTime="2025-10-14 10:45:30.499871391 +0000 UTC m=+2912.197170807" Oct 14 10:45:33 crc kubenswrapper[4698]: 
I1014 10:45:33.391886 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-gf462" Oct 14 10:45:33 crc kubenswrapper[4698]: I1014 10:45:33.392900 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-gf462" Oct 14 10:45:34 crc kubenswrapper[4698]: I1014 10:45:34.441903 4698 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-gf462" podUID="c0ab6f3c-a862-42bd-853e-b9b54dbeb7eb" containerName="registry-server" probeResult="failure" output=< Oct 14 10:45:34 crc kubenswrapper[4698]: timeout: failed to connect service ":50051" within 1s Oct 14 10:45:34 crc kubenswrapper[4698]: > Oct 14 10:45:43 crc kubenswrapper[4698]: I1014 10:45:43.452320 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-gf462" Oct 14 10:45:43 crc kubenswrapper[4698]: I1014 10:45:43.507436 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-gf462" Oct 14 10:45:43 crc kubenswrapper[4698]: I1014 10:45:43.692674 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-gf462"] Oct 14 10:45:44 crc kubenswrapper[4698]: I1014 10:45:44.592779 4698 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-gf462" podUID="c0ab6f3c-a862-42bd-853e-b9b54dbeb7eb" containerName="registry-server" containerID="cri-o://3787df921bfe345598ae568dde2dd925205ea3e90bf1cbf59707dd691fcb9bc1" gracePeriod=2 Oct 14 10:45:45 crc kubenswrapper[4698]: I1014 10:45:45.421357 4698 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-gf462" Oct 14 10:45:45 crc kubenswrapper[4698]: I1014 10:45:45.572305 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nrwdl\" (UniqueName: \"kubernetes.io/projected/c0ab6f3c-a862-42bd-853e-b9b54dbeb7eb-kube-api-access-nrwdl\") pod \"c0ab6f3c-a862-42bd-853e-b9b54dbeb7eb\" (UID: \"c0ab6f3c-a862-42bd-853e-b9b54dbeb7eb\") " Oct 14 10:45:45 crc kubenswrapper[4698]: I1014 10:45:45.572395 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c0ab6f3c-a862-42bd-853e-b9b54dbeb7eb-utilities\") pod \"c0ab6f3c-a862-42bd-853e-b9b54dbeb7eb\" (UID: \"c0ab6f3c-a862-42bd-853e-b9b54dbeb7eb\") " Oct 14 10:45:45 crc kubenswrapper[4698]: I1014 10:45:45.572533 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c0ab6f3c-a862-42bd-853e-b9b54dbeb7eb-catalog-content\") pod \"c0ab6f3c-a862-42bd-853e-b9b54dbeb7eb\" (UID: \"c0ab6f3c-a862-42bd-853e-b9b54dbeb7eb\") " Oct 14 10:45:45 crc kubenswrapper[4698]: I1014 10:45:45.573334 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c0ab6f3c-a862-42bd-853e-b9b54dbeb7eb-utilities" (OuterVolumeSpecName: "utilities") pod "c0ab6f3c-a862-42bd-853e-b9b54dbeb7eb" (UID: "c0ab6f3c-a862-42bd-853e-b9b54dbeb7eb"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 10:45:45 crc kubenswrapper[4698]: I1014 10:45:45.580301 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c0ab6f3c-a862-42bd-853e-b9b54dbeb7eb-kube-api-access-nrwdl" (OuterVolumeSpecName: "kube-api-access-nrwdl") pod "c0ab6f3c-a862-42bd-853e-b9b54dbeb7eb" (UID: "c0ab6f3c-a862-42bd-853e-b9b54dbeb7eb"). InnerVolumeSpecName "kube-api-access-nrwdl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 10:45:45 crc kubenswrapper[4698]: I1014 10:45:45.603508 4698 generic.go:334] "Generic (PLEG): container finished" podID="c0ab6f3c-a862-42bd-853e-b9b54dbeb7eb" containerID="3787df921bfe345598ae568dde2dd925205ea3e90bf1cbf59707dd691fcb9bc1" exitCode=0 Oct 14 10:45:45 crc kubenswrapper[4698]: I1014 10:45:45.603568 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gf462" event={"ID":"c0ab6f3c-a862-42bd-853e-b9b54dbeb7eb","Type":"ContainerDied","Data":"3787df921bfe345598ae568dde2dd925205ea3e90bf1cbf59707dd691fcb9bc1"} Oct 14 10:45:45 crc kubenswrapper[4698]: I1014 10:45:45.603638 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-gf462" Oct 14 10:45:45 crc kubenswrapper[4698]: I1014 10:45:45.604503 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gf462" event={"ID":"c0ab6f3c-a862-42bd-853e-b9b54dbeb7eb","Type":"ContainerDied","Data":"dad045ffde4c3906bf23d7c165383d235698df385c931ae9a9620efb730fc6e7"} Oct 14 10:45:45 crc kubenswrapper[4698]: I1014 10:45:45.604644 4698 scope.go:117] "RemoveContainer" containerID="3787df921bfe345598ae568dde2dd925205ea3e90bf1cbf59707dd691fcb9bc1" Oct 14 10:45:45 crc kubenswrapper[4698]: I1014 10:45:45.647715 4698 scope.go:117] "RemoveContainer" containerID="a0b88fbf3ea34beae9c2c06e867c199bc88f3d6129724d117a08ab4ab461bd7e" Oct 14 10:45:45 crc kubenswrapper[4698]: I1014 10:45:45.655214 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c0ab6f3c-a862-42bd-853e-b9b54dbeb7eb-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c0ab6f3c-a862-42bd-853e-b9b54dbeb7eb" (UID: "c0ab6f3c-a862-42bd-853e-b9b54dbeb7eb"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 10:45:45 crc kubenswrapper[4698]: I1014 10:45:45.672458 4698 scope.go:117] "RemoveContainer" containerID="6252c9b08f85cbbe05b5dfa1f785616abddbcf92fff73de0b8c02696aa08ea86" Oct 14 10:45:45 crc kubenswrapper[4698]: I1014 10:45:45.675040 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nrwdl\" (UniqueName: \"kubernetes.io/projected/c0ab6f3c-a862-42bd-853e-b9b54dbeb7eb-kube-api-access-nrwdl\") on node \"crc\" DevicePath \"\"" Oct 14 10:45:45 crc kubenswrapper[4698]: I1014 10:45:45.675064 4698 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c0ab6f3c-a862-42bd-853e-b9b54dbeb7eb-utilities\") on node \"crc\" DevicePath \"\"" Oct 14 10:45:45 crc kubenswrapper[4698]: I1014 10:45:45.675074 4698 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c0ab6f3c-a862-42bd-853e-b9b54dbeb7eb-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 14 10:45:45 crc kubenswrapper[4698]: I1014 10:45:45.713765 4698 scope.go:117] "RemoveContainer" containerID="3787df921bfe345598ae568dde2dd925205ea3e90bf1cbf59707dd691fcb9bc1" Oct 14 10:45:45 crc kubenswrapper[4698]: E1014 10:45:45.714277 4698 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3787df921bfe345598ae568dde2dd925205ea3e90bf1cbf59707dd691fcb9bc1\": container with ID starting with 3787df921bfe345598ae568dde2dd925205ea3e90bf1cbf59707dd691fcb9bc1 not found: ID does not exist" containerID="3787df921bfe345598ae568dde2dd925205ea3e90bf1cbf59707dd691fcb9bc1" Oct 14 10:45:45 crc kubenswrapper[4698]: I1014 10:45:45.714388 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3787df921bfe345598ae568dde2dd925205ea3e90bf1cbf59707dd691fcb9bc1"} err="failed to get container status 
\"3787df921bfe345598ae568dde2dd925205ea3e90bf1cbf59707dd691fcb9bc1\": rpc error: code = NotFound desc = could not find container \"3787df921bfe345598ae568dde2dd925205ea3e90bf1cbf59707dd691fcb9bc1\": container with ID starting with 3787df921bfe345598ae568dde2dd925205ea3e90bf1cbf59707dd691fcb9bc1 not found: ID does not exist" Oct 14 10:45:45 crc kubenswrapper[4698]: I1014 10:45:45.714484 4698 scope.go:117] "RemoveContainer" containerID="a0b88fbf3ea34beae9c2c06e867c199bc88f3d6129724d117a08ab4ab461bd7e" Oct 14 10:45:45 crc kubenswrapper[4698]: E1014 10:45:45.714847 4698 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a0b88fbf3ea34beae9c2c06e867c199bc88f3d6129724d117a08ab4ab461bd7e\": container with ID starting with a0b88fbf3ea34beae9c2c06e867c199bc88f3d6129724d117a08ab4ab461bd7e not found: ID does not exist" containerID="a0b88fbf3ea34beae9c2c06e867c199bc88f3d6129724d117a08ab4ab461bd7e" Oct 14 10:45:45 crc kubenswrapper[4698]: I1014 10:45:45.714869 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a0b88fbf3ea34beae9c2c06e867c199bc88f3d6129724d117a08ab4ab461bd7e"} err="failed to get container status \"a0b88fbf3ea34beae9c2c06e867c199bc88f3d6129724d117a08ab4ab461bd7e\": rpc error: code = NotFound desc = could not find container \"a0b88fbf3ea34beae9c2c06e867c199bc88f3d6129724d117a08ab4ab461bd7e\": container with ID starting with a0b88fbf3ea34beae9c2c06e867c199bc88f3d6129724d117a08ab4ab461bd7e not found: ID does not exist" Oct 14 10:45:45 crc kubenswrapper[4698]: I1014 10:45:45.714884 4698 scope.go:117] "RemoveContainer" containerID="6252c9b08f85cbbe05b5dfa1f785616abddbcf92fff73de0b8c02696aa08ea86" Oct 14 10:45:45 crc kubenswrapper[4698]: E1014 10:45:45.715183 4698 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"6252c9b08f85cbbe05b5dfa1f785616abddbcf92fff73de0b8c02696aa08ea86\": container with ID starting with 6252c9b08f85cbbe05b5dfa1f785616abddbcf92fff73de0b8c02696aa08ea86 not found: ID does not exist" containerID="6252c9b08f85cbbe05b5dfa1f785616abddbcf92fff73de0b8c02696aa08ea86" Oct 14 10:45:45 crc kubenswrapper[4698]: I1014 10:45:45.715264 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6252c9b08f85cbbe05b5dfa1f785616abddbcf92fff73de0b8c02696aa08ea86"} err="failed to get container status \"6252c9b08f85cbbe05b5dfa1f785616abddbcf92fff73de0b8c02696aa08ea86\": rpc error: code = NotFound desc = could not find container \"6252c9b08f85cbbe05b5dfa1f785616abddbcf92fff73de0b8c02696aa08ea86\": container with ID starting with 6252c9b08f85cbbe05b5dfa1f785616abddbcf92fff73de0b8c02696aa08ea86 not found: ID does not exist" Oct 14 10:45:45 crc kubenswrapper[4698]: I1014 10:45:45.937498 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-gf462"] Oct 14 10:45:45 crc kubenswrapper[4698]: I1014 10:45:45.944537 4698 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-gf462"] Oct 14 10:45:47 crc kubenswrapper[4698]: I1014 10:45:47.028802 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c0ab6f3c-a862-42bd-853e-b9b54dbeb7eb" path="/var/lib/kubelet/pods/c0ab6f3c-a862-42bd-853e-b9b54dbeb7eb/volumes" Oct 14 10:47:53 crc kubenswrapper[4698]: I1014 10:47:53.908576 4698 patch_prober.go:28] interesting pod/machine-config-daemon-lp4sk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 14 10:47:53 crc kubenswrapper[4698]: I1014 10:47:53.909376 4698 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-lp4sk" 
podUID="c359a8fc-1e2f-49af-8da2-719d52bd969a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 14 10:48:23 crc kubenswrapper[4698]: I1014 10:48:23.908821 4698 patch_prober.go:28] interesting pod/machine-config-daemon-lp4sk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 14 10:48:23 crc kubenswrapper[4698]: I1014 10:48:23.909468 4698 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-lp4sk" podUID="c359a8fc-1e2f-49af-8da2-719d52bd969a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 14 10:48:53 crc kubenswrapper[4698]: I1014 10:48:53.908009 4698 patch_prober.go:28] interesting pod/machine-config-daemon-lp4sk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 14 10:48:53 crc kubenswrapper[4698]: I1014 10:48:53.908543 4698 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-lp4sk" podUID="c359a8fc-1e2f-49af-8da2-719d52bd969a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 14 10:48:53 crc kubenswrapper[4698]: I1014 10:48:53.908589 4698 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-lp4sk" Oct 14 10:48:53 crc kubenswrapper[4698]: I1014 10:48:53.909155 4698 
kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"8a09d38c6e7b7c3a7641f91ee186188e4b178c8f5f54d6a3e46b0f3211abaf7c"} pod="openshift-machine-config-operator/machine-config-daemon-lp4sk" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Oct 14 10:48:53 crc kubenswrapper[4698]: I1014 10:48:53.909232 4698 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-lp4sk" podUID="c359a8fc-1e2f-49af-8da2-719d52bd969a" containerName="machine-config-daemon" containerID="cri-o://8a09d38c6e7b7c3a7641f91ee186188e4b178c8f5f54d6a3e46b0f3211abaf7c" gracePeriod=600 Oct 14 10:48:54 crc kubenswrapper[4698]: E1014 10:48:54.030056 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lp4sk_openshift-machine-config-operator(c359a8fc-1e2f-49af-8da2-719d52bd969a)\"" pod="openshift-machine-config-operator/machine-config-daemon-lp4sk" podUID="c359a8fc-1e2f-49af-8da2-719d52bd969a" Oct 14 10:48:54 crc kubenswrapper[4698]: I1014 10:48:54.287041 4698 generic.go:334] "Generic (PLEG): container finished" podID="c359a8fc-1e2f-49af-8da2-719d52bd969a" containerID="8a09d38c6e7b7c3a7641f91ee186188e4b178c8f5f54d6a3e46b0f3211abaf7c" exitCode=0 Oct 14 10:48:54 crc kubenswrapper[4698]: I1014 10:48:54.287096 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-lp4sk" event={"ID":"c359a8fc-1e2f-49af-8da2-719d52bd969a","Type":"ContainerDied","Data":"8a09d38c6e7b7c3a7641f91ee186188e4b178c8f5f54d6a3e46b0f3211abaf7c"} Oct 14 10:48:54 crc kubenswrapper[4698]: I1014 10:48:54.287136 4698 scope.go:117] "RemoveContainer" 
containerID="e610d29adee37c5fe425db4b50b070ff822000d8418cacd4862506db9590760b" Oct 14 10:48:54 crc kubenswrapper[4698]: I1014 10:48:54.287936 4698 scope.go:117] "RemoveContainer" containerID="8a09d38c6e7b7c3a7641f91ee186188e4b178c8f5f54d6a3e46b0f3211abaf7c" Oct 14 10:48:54 crc kubenswrapper[4698]: E1014 10:48:54.288282 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lp4sk_openshift-machine-config-operator(c359a8fc-1e2f-49af-8da2-719d52bd969a)\"" pod="openshift-machine-config-operator/machine-config-daemon-lp4sk" podUID="c359a8fc-1e2f-49af-8da2-719d52bd969a" Oct 14 10:48:55 crc kubenswrapper[4698]: I1014 10:48:55.812910 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-xkj8d"] Oct 14 10:48:55 crc kubenswrapper[4698]: E1014 10:48:55.814194 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c0ab6f3c-a862-42bd-853e-b9b54dbeb7eb" containerName="extract-content" Oct 14 10:48:55 crc kubenswrapper[4698]: I1014 10:48:55.814211 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="c0ab6f3c-a862-42bd-853e-b9b54dbeb7eb" containerName="extract-content" Oct 14 10:48:55 crc kubenswrapper[4698]: E1014 10:48:55.814232 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c0ab6f3c-a862-42bd-853e-b9b54dbeb7eb" containerName="registry-server" Oct 14 10:48:55 crc kubenswrapper[4698]: I1014 10:48:55.814239 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="c0ab6f3c-a862-42bd-853e-b9b54dbeb7eb" containerName="registry-server" Oct 14 10:48:55 crc kubenswrapper[4698]: E1014 10:48:55.814253 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c0ab6f3c-a862-42bd-853e-b9b54dbeb7eb" containerName="extract-utilities" Oct 14 10:48:55 crc kubenswrapper[4698]: I1014 10:48:55.814260 4698 
state_mem.go:107] "Deleted CPUSet assignment" podUID="c0ab6f3c-a862-42bd-853e-b9b54dbeb7eb" containerName="extract-utilities" Oct 14 10:48:55 crc kubenswrapper[4698]: I1014 10:48:55.814540 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="c0ab6f3c-a862-42bd-853e-b9b54dbeb7eb" containerName="registry-server" Oct 14 10:48:55 crc kubenswrapper[4698]: I1014 10:48:55.818973 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-xkj8d" Oct 14 10:48:55 crc kubenswrapper[4698]: I1014 10:48:55.824101 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-xkj8d"] Oct 14 10:48:55 crc kubenswrapper[4698]: I1014 10:48:55.871546 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7dc522d4-aacd-436e-95f1-9985817ae097-utilities\") pod \"community-operators-xkj8d\" (UID: \"7dc522d4-aacd-436e-95f1-9985817ae097\") " pod="openshift-marketplace/community-operators-xkj8d" Oct 14 10:48:55 crc kubenswrapper[4698]: I1014 10:48:55.871608 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7dc522d4-aacd-436e-95f1-9985817ae097-catalog-content\") pod \"community-operators-xkj8d\" (UID: \"7dc522d4-aacd-436e-95f1-9985817ae097\") " pod="openshift-marketplace/community-operators-xkj8d" Oct 14 10:48:55 crc kubenswrapper[4698]: I1014 10:48:55.871666 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5n6mz\" (UniqueName: \"kubernetes.io/projected/7dc522d4-aacd-436e-95f1-9985817ae097-kube-api-access-5n6mz\") pod \"community-operators-xkj8d\" (UID: \"7dc522d4-aacd-436e-95f1-9985817ae097\") " pod="openshift-marketplace/community-operators-xkj8d" Oct 14 10:48:55 crc kubenswrapper[4698]: I1014 
10:48:55.974239 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7dc522d4-aacd-436e-95f1-9985817ae097-utilities\") pod \"community-operators-xkj8d\" (UID: \"7dc522d4-aacd-436e-95f1-9985817ae097\") " pod="openshift-marketplace/community-operators-xkj8d" Oct 14 10:48:55 crc kubenswrapper[4698]: I1014 10:48:55.974301 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7dc522d4-aacd-436e-95f1-9985817ae097-catalog-content\") pod \"community-operators-xkj8d\" (UID: \"7dc522d4-aacd-436e-95f1-9985817ae097\") " pod="openshift-marketplace/community-operators-xkj8d" Oct 14 10:48:55 crc kubenswrapper[4698]: I1014 10:48:55.974346 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5n6mz\" (UniqueName: \"kubernetes.io/projected/7dc522d4-aacd-436e-95f1-9985817ae097-kube-api-access-5n6mz\") pod \"community-operators-xkj8d\" (UID: \"7dc522d4-aacd-436e-95f1-9985817ae097\") " pod="openshift-marketplace/community-operators-xkj8d" Oct 14 10:48:55 crc kubenswrapper[4698]: I1014 10:48:55.975146 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7dc522d4-aacd-436e-95f1-9985817ae097-catalog-content\") pod \"community-operators-xkj8d\" (UID: \"7dc522d4-aacd-436e-95f1-9985817ae097\") " pod="openshift-marketplace/community-operators-xkj8d" Oct 14 10:48:55 crc kubenswrapper[4698]: I1014 10:48:55.975390 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7dc522d4-aacd-436e-95f1-9985817ae097-utilities\") pod \"community-operators-xkj8d\" (UID: \"7dc522d4-aacd-436e-95f1-9985817ae097\") " pod="openshift-marketplace/community-operators-xkj8d" Oct 14 10:48:55 crc kubenswrapper[4698]: I1014 10:48:55.996136 4698 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5n6mz\" (UniqueName: \"kubernetes.io/projected/7dc522d4-aacd-436e-95f1-9985817ae097-kube-api-access-5n6mz\") pod \"community-operators-xkj8d\" (UID: \"7dc522d4-aacd-436e-95f1-9985817ae097\") " pod="openshift-marketplace/community-operators-xkj8d" Oct 14 10:48:56 crc kubenswrapper[4698]: I1014 10:48:56.168191 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-xkj8d" Oct 14 10:48:56 crc kubenswrapper[4698]: I1014 10:48:56.712539 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-xkj8d"] Oct 14 10:48:57 crc kubenswrapper[4698]: I1014 10:48:57.316222 4698 generic.go:334] "Generic (PLEG): container finished" podID="7dc522d4-aacd-436e-95f1-9985817ae097" containerID="010a1666e8e553408515b17971818f84a0cd517067f65e77a8c91552536bc24f" exitCode=0 Oct 14 10:48:57 crc kubenswrapper[4698]: I1014 10:48:57.316340 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xkj8d" event={"ID":"7dc522d4-aacd-436e-95f1-9985817ae097","Type":"ContainerDied","Data":"010a1666e8e553408515b17971818f84a0cd517067f65e77a8c91552536bc24f"} Oct 14 10:48:57 crc kubenswrapper[4698]: I1014 10:48:57.316495 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xkj8d" event={"ID":"7dc522d4-aacd-436e-95f1-9985817ae097","Type":"ContainerStarted","Data":"f8a21d5d5bd43164b03339a0dafbfff93de171ee8c76f2384ba08547ee4d1d3b"} Oct 14 10:48:57 crc kubenswrapper[4698]: I1014 10:48:57.319978 4698 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Oct 14 10:48:59 crc kubenswrapper[4698]: I1014 10:48:59.346283 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xkj8d" 
event={"ID":"7dc522d4-aacd-436e-95f1-9985817ae097","Type":"ContainerStarted","Data":"bb191ead8ae2a727ea3c53fedde194fb1eb4b1c5f411d8423c2dfd47ef678ea9"} Oct 14 10:49:00 crc kubenswrapper[4698]: I1014 10:49:00.001727 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-5l8db"] Oct 14 10:49:00 crc kubenswrapper[4698]: I1014 10:49:00.004107 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-5l8db" Oct 14 10:49:00 crc kubenswrapper[4698]: I1014 10:49:00.029722 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-5l8db"] Oct 14 10:49:00 crc kubenswrapper[4698]: I1014 10:49:00.066899 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/669899ab-8487-429a-9bba-91e525ce832e-utilities\") pod \"redhat-marketplace-5l8db\" (UID: \"669899ab-8487-429a-9bba-91e525ce832e\") " pod="openshift-marketplace/redhat-marketplace-5l8db" Oct 14 10:49:00 crc kubenswrapper[4698]: I1014 10:49:00.066942 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-947pv\" (UniqueName: \"kubernetes.io/projected/669899ab-8487-429a-9bba-91e525ce832e-kube-api-access-947pv\") pod \"redhat-marketplace-5l8db\" (UID: \"669899ab-8487-429a-9bba-91e525ce832e\") " pod="openshift-marketplace/redhat-marketplace-5l8db" Oct 14 10:49:00 crc kubenswrapper[4698]: I1014 10:49:00.067085 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/669899ab-8487-429a-9bba-91e525ce832e-catalog-content\") pod \"redhat-marketplace-5l8db\" (UID: \"669899ab-8487-429a-9bba-91e525ce832e\") " pod="openshift-marketplace/redhat-marketplace-5l8db" Oct 14 10:49:00 crc kubenswrapper[4698]: I1014 10:49:00.168973 4698 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/669899ab-8487-429a-9bba-91e525ce832e-catalog-content\") pod \"redhat-marketplace-5l8db\" (UID: \"669899ab-8487-429a-9bba-91e525ce832e\") " pod="openshift-marketplace/redhat-marketplace-5l8db" Oct 14 10:49:00 crc kubenswrapper[4698]: I1014 10:49:00.169138 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/669899ab-8487-429a-9bba-91e525ce832e-utilities\") pod \"redhat-marketplace-5l8db\" (UID: \"669899ab-8487-429a-9bba-91e525ce832e\") " pod="openshift-marketplace/redhat-marketplace-5l8db" Oct 14 10:49:00 crc kubenswrapper[4698]: I1014 10:49:00.169159 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-947pv\" (UniqueName: \"kubernetes.io/projected/669899ab-8487-429a-9bba-91e525ce832e-kube-api-access-947pv\") pod \"redhat-marketplace-5l8db\" (UID: \"669899ab-8487-429a-9bba-91e525ce832e\") " pod="openshift-marketplace/redhat-marketplace-5l8db" Oct 14 10:49:00 crc kubenswrapper[4698]: I1014 10:49:00.169514 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/669899ab-8487-429a-9bba-91e525ce832e-catalog-content\") pod \"redhat-marketplace-5l8db\" (UID: \"669899ab-8487-429a-9bba-91e525ce832e\") " pod="openshift-marketplace/redhat-marketplace-5l8db" Oct 14 10:49:00 crc kubenswrapper[4698]: I1014 10:49:00.169833 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/669899ab-8487-429a-9bba-91e525ce832e-utilities\") pod \"redhat-marketplace-5l8db\" (UID: \"669899ab-8487-429a-9bba-91e525ce832e\") " pod="openshift-marketplace/redhat-marketplace-5l8db" Oct 14 10:49:00 crc kubenswrapper[4698]: I1014 10:49:00.189656 4698 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-947pv\" (UniqueName: \"kubernetes.io/projected/669899ab-8487-429a-9bba-91e525ce832e-kube-api-access-947pv\") pod \"redhat-marketplace-5l8db\" (UID: \"669899ab-8487-429a-9bba-91e525ce832e\") " pod="openshift-marketplace/redhat-marketplace-5l8db" Oct 14 10:49:00 crc kubenswrapper[4698]: I1014 10:49:00.329491 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-5l8db" Oct 14 10:49:00 crc kubenswrapper[4698]: I1014 10:49:00.361050 4698 generic.go:334] "Generic (PLEG): container finished" podID="7dc522d4-aacd-436e-95f1-9985817ae097" containerID="bb191ead8ae2a727ea3c53fedde194fb1eb4b1c5f411d8423c2dfd47ef678ea9" exitCode=0 Oct 14 10:49:00 crc kubenswrapper[4698]: I1014 10:49:00.361110 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xkj8d" event={"ID":"7dc522d4-aacd-436e-95f1-9985817ae097","Type":"ContainerDied","Data":"bb191ead8ae2a727ea3c53fedde194fb1eb4b1c5f411d8423c2dfd47ef678ea9"} Oct 14 10:49:00 crc kubenswrapper[4698]: I1014 10:49:00.830773 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-5l8db"] Oct 14 10:49:00 crc kubenswrapper[4698]: W1014 10:49:00.836132 4698 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod669899ab_8487_429a_9bba_91e525ce832e.slice/crio-b4d8741645d369dbe148359d57f8919df92cab9b0fb3bfeb8b5ea5b37226929b WatchSource:0}: Error finding container b4d8741645d369dbe148359d57f8919df92cab9b0fb3bfeb8b5ea5b37226929b: Status 404 returned error can't find the container with id b4d8741645d369dbe148359d57f8919df92cab9b0fb3bfeb8b5ea5b37226929b Oct 14 10:49:01 crc kubenswrapper[4698]: I1014 10:49:01.372942 4698 generic.go:334] "Generic (PLEG): container finished" podID="669899ab-8487-429a-9bba-91e525ce832e" 
containerID="396bc060c071320a7b58dd2e16a40c9ba14ed9ec2f14cd274098d82f7399c103" exitCode=0 Oct 14 10:49:01 crc kubenswrapper[4698]: I1014 10:49:01.373076 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5l8db" event={"ID":"669899ab-8487-429a-9bba-91e525ce832e","Type":"ContainerDied","Data":"396bc060c071320a7b58dd2e16a40c9ba14ed9ec2f14cd274098d82f7399c103"} Oct 14 10:49:01 crc kubenswrapper[4698]: I1014 10:49:01.373448 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5l8db" event={"ID":"669899ab-8487-429a-9bba-91e525ce832e","Type":"ContainerStarted","Data":"b4d8741645d369dbe148359d57f8919df92cab9b0fb3bfeb8b5ea5b37226929b"} Oct 14 10:49:01 crc kubenswrapper[4698]: I1014 10:49:01.378881 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xkj8d" event={"ID":"7dc522d4-aacd-436e-95f1-9985817ae097","Type":"ContainerStarted","Data":"00dc363dffbbe38867b25809fa5c26e44e12a7193dd9b875857785be06973554"} Oct 14 10:49:01 crc kubenswrapper[4698]: I1014 10:49:01.414706 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-xkj8d" podStartSLOduration=2.884396556 podStartE2EDuration="6.41469037s" podCreationTimestamp="2025-10-14 10:48:55 +0000 UTC" firstStartedPulling="2025-10-14 10:48:57.319745409 +0000 UTC m=+3119.017044825" lastFinishedPulling="2025-10-14 10:49:00.850039223 +0000 UTC m=+3122.547338639" observedRunningTime="2025-10-14 10:49:01.412185492 +0000 UTC m=+3123.109484918" watchObservedRunningTime="2025-10-14 10:49:01.41469037 +0000 UTC m=+3123.111989786" Oct 14 10:49:03 crc kubenswrapper[4698]: I1014 10:49:03.397506 4698 generic.go:334] "Generic (PLEG): container finished" podID="669899ab-8487-429a-9bba-91e525ce832e" containerID="e38fef4f512e655a68fff1e6a13fc222e8c1d913a066b7b24eabc77321b0463a" exitCode=0 Oct 14 10:49:03 crc kubenswrapper[4698]: I1014 
10:49:03.397608 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5l8db" event={"ID":"669899ab-8487-429a-9bba-91e525ce832e","Type":"ContainerDied","Data":"e38fef4f512e655a68fff1e6a13fc222e8c1d913a066b7b24eabc77321b0463a"} Oct 14 10:49:04 crc kubenswrapper[4698]: I1014 10:49:04.408462 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5l8db" event={"ID":"669899ab-8487-429a-9bba-91e525ce832e","Type":"ContainerStarted","Data":"c47abe679242f57c108f2b5e1cf2545601e5506ed09a5c9ff47a7a9571fa968d"} Oct 14 10:49:04 crc kubenswrapper[4698]: I1014 10:49:04.429743 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-5l8db" podStartSLOduration=2.968369498 podStartE2EDuration="5.429724068s" podCreationTimestamp="2025-10-14 10:48:59 +0000 UTC" firstStartedPulling="2025-10-14 10:49:01.374604743 +0000 UTC m=+3123.071904159" lastFinishedPulling="2025-10-14 10:49:03.835959303 +0000 UTC m=+3125.533258729" observedRunningTime="2025-10-14 10:49:04.423099017 +0000 UTC m=+3126.120398473" watchObservedRunningTime="2025-10-14 10:49:04.429724068 +0000 UTC m=+3126.127023484" Oct 14 10:49:06 crc kubenswrapper[4698]: I1014 10:49:06.017399 4698 scope.go:117] "RemoveContainer" containerID="8a09d38c6e7b7c3a7641f91ee186188e4b178c8f5f54d6a3e46b0f3211abaf7c" Oct 14 10:49:06 crc kubenswrapper[4698]: E1014 10:49:06.018210 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lp4sk_openshift-machine-config-operator(c359a8fc-1e2f-49af-8da2-719d52bd969a)\"" pod="openshift-machine-config-operator/machine-config-daemon-lp4sk" podUID="c359a8fc-1e2f-49af-8da2-719d52bd969a" Oct 14 10:49:06 crc kubenswrapper[4698]: I1014 10:49:06.170974 4698 kubelet.go:2542] "SyncLoop 
(probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-xkj8d" Oct 14 10:49:06 crc kubenswrapper[4698]: I1014 10:49:06.173091 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-xkj8d" Oct 14 10:49:06 crc kubenswrapper[4698]: I1014 10:49:06.235996 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-xkj8d" Oct 14 10:49:06 crc kubenswrapper[4698]: I1014 10:49:06.480338 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-xkj8d" Oct 14 10:49:07 crc kubenswrapper[4698]: I1014 10:49:07.796590 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-xkj8d"] Oct 14 10:49:09 crc kubenswrapper[4698]: I1014 10:49:09.461977 4698 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-xkj8d" podUID="7dc522d4-aacd-436e-95f1-9985817ae097" containerName="registry-server" containerID="cri-o://00dc363dffbbe38867b25809fa5c26e44e12a7193dd9b875857785be06973554" gracePeriod=2 Oct 14 10:49:10 crc kubenswrapper[4698]: I1014 10:49:10.207982 4698 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-xkj8d" Oct 14 10:49:10 crc kubenswrapper[4698]: I1014 10:49:10.330197 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-5l8db" Oct 14 10:49:10 crc kubenswrapper[4698]: I1014 10:49:10.330247 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-5l8db" Oct 14 10:49:10 crc kubenswrapper[4698]: I1014 10:49:10.374281 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-5l8db" Oct 14 10:49:10 crc kubenswrapper[4698]: I1014 10:49:10.401031 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7dc522d4-aacd-436e-95f1-9985817ae097-catalog-content\") pod \"7dc522d4-aacd-436e-95f1-9985817ae097\" (UID: \"7dc522d4-aacd-436e-95f1-9985817ae097\") " Oct 14 10:49:10 crc kubenswrapper[4698]: I1014 10:49:10.401108 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5n6mz\" (UniqueName: \"kubernetes.io/projected/7dc522d4-aacd-436e-95f1-9985817ae097-kube-api-access-5n6mz\") pod \"7dc522d4-aacd-436e-95f1-9985817ae097\" (UID: \"7dc522d4-aacd-436e-95f1-9985817ae097\") " Oct 14 10:49:10 crc kubenswrapper[4698]: I1014 10:49:10.401239 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7dc522d4-aacd-436e-95f1-9985817ae097-utilities\") pod \"7dc522d4-aacd-436e-95f1-9985817ae097\" (UID: \"7dc522d4-aacd-436e-95f1-9985817ae097\") " Oct 14 10:49:10 crc kubenswrapper[4698]: I1014 10:49:10.402577 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7dc522d4-aacd-436e-95f1-9985817ae097-utilities" (OuterVolumeSpecName: "utilities") pod 
"7dc522d4-aacd-436e-95f1-9985817ae097" (UID: "7dc522d4-aacd-436e-95f1-9985817ae097"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 10:49:10 crc kubenswrapper[4698]: I1014 10:49:10.410839 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7dc522d4-aacd-436e-95f1-9985817ae097-kube-api-access-5n6mz" (OuterVolumeSpecName: "kube-api-access-5n6mz") pod "7dc522d4-aacd-436e-95f1-9985817ae097" (UID: "7dc522d4-aacd-436e-95f1-9985817ae097"). InnerVolumeSpecName "kube-api-access-5n6mz". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 10:49:10 crc kubenswrapper[4698]: I1014 10:49:10.450386 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7dc522d4-aacd-436e-95f1-9985817ae097-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "7dc522d4-aacd-436e-95f1-9985817ae097" (UID: "7dc522d4-aacd-436e-95f1-9985817ae097"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 10:49:10 crc kubenswrapper[4698]: I1014 10:49:10.472746 4698 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-xkj8d" Oct 14 10:49:10 crc kubenswrapper[4698]: I1014 10:49:10.472802 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xkj8d" event={"ID":"7dc522d4-aacd-436e-95f1-9985817ae097","Type":"ContainerDied","Data":"00dc363dffbbe38867b25809fa5c26e44e12a7193dd9b875857785be06973554"} Oct 14 10:49:10 crc kubenswrapper[4698]: I1014 10:49:10.472841 4698 generic.go:334] "Generic (PLEG): container finished" podID="7dc522d4-aacd-436e-95f1-9985817ae097" containerID="00dc363dffbbe38867b25809fa5c26e44e12a7193dd9b875857785be06973554" exitCode=0 Oct 14 10:49:10 crc kubenswrapper[4698]: I1014 10:49:10.472875 4698 scope.go:117] "RemoveContainer" containerID="00dc363dffbbe38867b25809fa5c26e44e12a7193dd9b875857785be06973554" Oct 14 10:49:10 crc kubenswrapper[4698]: I1014 10:49:10.472919 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xkj8d" event={"ID":"7dc522d4-aacd-436e-95f1-9985817ae097","Type":"ContainerDied","Data":"f8a21d5d5bd43164b03339a0dafbfff93de171ee8c76f2384ba08547ee4d1d3b"} Oct 14 10:49:10 crc kubenswrapper[4698]: I1014 10:49:10.504295 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5n6mz\" (UniqueName: \"kubernetes.io/projected/7dc522d4-aacd-436e-95f1-9985817ae097-kube-api-access-5n6mz\") on node \"crc\" DevicePath \"\"" Oct 14 10:49:10 crc kubenswrapper[4698]: I1014 10:49:10.504326 4698 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7dc522d4-aacd-436e-95f1-9985817ae097-utilities\") on node \"crc\" DevicePath \"\"" Oct 14 10:49:10 crc kubenswrapper[4698]: I1014 10:49:10.504335 4698 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7dc522d4-aacd-436e-95f1-9985817ae097-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 14 10:49:10 crc 
kubenswrapper[4698]: I1014 10:49:10.516987 4698 scope.go:117] "RemoveContainer" containerID="bb191ead8ae2a727ea3c53fedde194fb1eb4b1c5f411d8423c2dfd47ef678ea9" Oct 14 10:49:10 crc kubenswrapper[4698]: I1014 10:49:10.519058 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-xkj8d"] Oct 14 10:49:10 crc kubenswrapper[4698]: I1014 10:49:10.527812 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-5l8db" Oct 14 10:49:10 crc kubenswrapper[4698]: I1014 10:49:10.535717 4698 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-xkj8d"] Oct 14 10:49:10 crc kubenswrapper[4698]: I1014 10:49:10.540380 4698 scope.go:117] "RemoveContainer" containerID="010a1666e8e553408515b17971818f84a0cd517067f65e77a8c91552536bc24f" Oct 14 10:49:10 crc kubenswrapper[4698]: I1014 10:49:10.588538 4698 scope.go:117] "RemoveContainer" containerID="00dc363dffbbe38867b25809fa5c26e44e12a7193dd9b875857785be06973554" Oct 14 10:49:10 crc kubenswrapper[4698]: E1014 10:49:10.589498 4698 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"00dc363dffbbe38867b25809fa5c26e44e12a7193dd9b875857785be06973554\": container with ID starting with 00dc363dffbbe38867b25809fa5c26e44e12a7193dd9b875857785be06973554 not found: ID does not exist" containerID="00dc363dffbbe38867b25809fa5c26e44e12a7193dd9b875857785be06973554" Oct 14 10:49:10 crc kubenswrapper[4698]: I1014 10:49:10.589529 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"00dc363dffbbe38867b25809fa5c26e44e12a7193dd9b875857785be06973554"} err="failed to get container status \"00dc363dffbbe38867b25809fa5c26e44e12a7193dd9b875857785be06973554\": rpc error: code = NotFound desc = could not find container \"00dc363dffbbe38867b25809fa5c26e44e12a7193dd9b875857785be06973554\": container with 
ID starting with 00dc363dffbbe38867b25809fa5c26e44e12a7193dd9b875857785be06973554 not found: ID does not exist" Oct 14 10:49:10 crc kubenswrapper[4698]: I1014 10:49:10.589552 4698 scope.go:117] "RemoveContainer" containerID="bb191ead8ae2a727ea3c53fedde194fb1eb4b1c5f411d8423c2dfd47ef678ea9" Oct 14 10:49:10 crc kubenswrapper[4698]: E1014 10:49:10.593527 4698 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bb191ead8ae2a727ea3c53fedde194fb1eb4b1c5f411d8423c2dfd47ef678ea9\": container with ID starting with bb191ead8ae2a727ea3c53fedde194fb1eb4b1c5f411d8423c2dfd47ef678ea9 not found: ID does not exist" containerID="bb191ead8ae2a727ea3c53fedde194fb1eb4b1c5f411d8423c2dfd47ef678ea9" Oct 14 10:49:10 crc kubenswrapper[4698]: I1014 10:49:10.593554 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bb191ead8ae2a727ea3c53fedde194fb1eb4b1c5f411d8423c2dfd47ef678ea9"} err="failed to get container status \"bb191ead8ae2a727ea3c53fedde194fb1eb4b1c5f411d8423c2dfd47ef678ea9\": rpc error: code = NotFound desc = could not find container \"bb191ead8ae2a727ea3c53fedde194fb1eb4b1c5f411d8423c2dfd47ef678ea9\": container with ID starting with bb191ead8ae2a727ea3c53fedde194fb1eb4b1c5f411d8423c2dfd47ef678ea9 not found: ID does not exist" Oct 14 10:49:10 crc kubenswrapper[4698]: I1014 10:49:10.593571 4698 scope.go:117] "RemoveContainer" containerID="010a1666e8e553408515b17971818f84a0cd517067f65e77a8c91552536bc24f" Oct 14 10:49:10 crc kubenswrapper[4698]: E1014 10:49:10.594055 4698 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"010a1666e8e553408515b17971818f84a0cd517067f65e77a8c91552536bc24f\": container with ID starting with 010a1666e8e553408515b17971818f84a0cd517067f65e77a8c91552536bc24f not found: ID does not exist" containerID="010a1666e8e553408515b17971818f84a0cd517067f65e77a8c91552536bc24f" Oct 14 
10:49:10 crc kubenswrapper[4698]: I1014 10:49:10.594078 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"010a1666e8e553408515b17971818f84a0cd517067f65e77a8c91552536bc24f"} err="failed to get container status \"010a1666e8e553408515b17971818f84a0cd517067f65e77a8c91552536bc24f\": rpc error: code = NotFound desc = could not find container \"010a1666e8e553408515b17971818f84a0cd517067f65e77a8c91552536bc24f\": container with ID starting with 010a1666e8e553408515b17971818f84a0cd517067f65e77a8c91552536bc24f not found: ID does not exist" Oct 14 10:49:11 crc kubenswrapper[4698]: I1014 10:49:11.030213 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7dc522d4-aacd-436e-95f1-9985817ae097" path="/var/lib/kubelet/pods/7dc522d4-aacd-436e-95f1-9985817ae097/volumes" Oct 14 10:49:12 crc kubenswrapper[4698]: I1014 10:49:12.593083 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-5l8db"] Oct 14 10:49:12 crc kubenswrapper[4698]: I1014 10:49:12.593687 4698 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-5l8db" podUID="669899ab-8487-429a-9bba-91e525ce832e" containerName="registry-server" containerID="cri-o://c47abe679242f57c108f2b5e1cf2545601e5506ed09a5c9ff47a7a9571fa968d" gracePeriod=2 Oct 14 10:49:13 crc kubenswrapper[4698]: I1014 10:49:13.285520 4698 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-5l8db" Oct 14 10:49:13 crc kubenswrapper[4698]: I1014 10:49:13.472144 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/669899ab-8487-429a-9bba-91e525ce832e-utilities\") pod \"669899ab-8487-429a-9bba-91e525ce832e\" (UID: \"669899ab-8487-429a-9bba-91e525ce832e\") " Oct 14 10:49:13 crc kubenswrapper[4698]: I1014 10:49:13.472236 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/669899ab-8487-429a-9bba-91e525ce832e-catalog-content\") pod \"669899ab-8487-429a-9bba-91e525ce832e\" (UID: \"669899ab-8487-429a-9bba-91e525ce832e\") " Oct 14 10:49:13 crc kubenswrapper[4698]: I1014 10:49:13.472374 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-947pv\" (UniqueName: \"kubernetes.io/projected/669899ab-8487-429a-9bba-91e525ce832e-kube-api-access-947pv\") pod \"669899ab-8487-429a-9bba-91e525ce832e\" (UID: \"669899ab-8487-429a-9bba-91e525ce832e\") " Oct 14 10:49:13 crc kubenswrapper[4698]: I1014 10:49:13.473533 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/669899ab-8487-429a-9bba-91e525ce832e-utilities" (OuterVolumeSpecName: "utilities") pod "669899ab-8487-429a-9bba-91e525ce832e" (UID: "669899ab-8487-429a-9bba-91e525ce832e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 10:49:13 crc kubenswrapper[4698]: I1014 10:49:13.479456 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/669899ab-8487-429a-9bba-91e525ce832e-kube-api-access-947pv" (OuterVolumeSpecName: "kube-api-access-947pv") pod "669899ab-8487-429a-9bba-91e525ce832e" (UID: "669899ab-8487-429a-9bba-91e525ce832e"). InnerVolumeSpecName "kube-api-access-947pv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 10:49:13 crc kubenswrapper[4698]: I1014 10:49:13.493888 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/669899ab-8487-429a-9bba-91e525ce832e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "669899ab-8487-429a-9bba-91e525ce832e" (UID: "669899ab-8487-429a-9bba-91e525ce832e"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 10:49:13 crc kubenswrapper[4698]: I1014 10:49:13.505111 4698 generic.go:334] "Generic (PLEG): container finished" podID="669899ab-8487-429a-9bba-91e525ce832e" containerID="c47abe679242f57c108f2b5e1cf2545601e5506ed09a5c9ff47a7a9571fa968d" exitCode=0 Oct 14 10:49:13 crc kubenswrapper[4698]: I1014 10:49:13.505184 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-5l8db" Oct 14 10:49:13 crc kubenswrapper[4698]: I1014 10:49:13.505249 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5l8db" event={"ID":"669899ab-8487-429a-9bba-91e525ce832e","Type":"ContainerDied","Data":"c47abe679242f57c108f2b5e1cf2545601e5506ed09a5c9ff47a7a9571fa968d"} Oct 14 10:49:13 crc kubenswrapper[4698]: I1014 10:49:13.505606 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5l8db" event={"ID":"669899ab-8487-429a-9bba-91e525ce832e","Type":"ContainerDied","Data":"b4d8741645d369dbe148359d57f8919df92cab9b0fb3bfeb8b5ea5b37226929b"} Oct 14 10:49:13 crc kubenswrapper[4698]: I1014 10:49:13.505627 4698 scope.go:117] "RemoveContainer" containerID="c47abe679242f57c108f2b5e1cf2545601e5506ed09a5c9ff47a7a9571fa968d" Oct 14 10:49:13 crc kubenswrapper[4698]: I1014 10:49:13.560226 4698 scope.go:117] "RemoveContainer" containerID="e38fef4f512e655a68fff1e6a13fc222e8c1d913a066b7b24eabc77321b0463a" Oct 14 10:49:13 crc kubenswrapper[4698]: I1014 
10:49:13.560444 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-5l8db"] Oct 14 10:49:13 crc kubenswrapper[4698]: I1014 10:49:13.569744 4698 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-5l8db"] Oct 14 10:49:13 crc kubenswrapper[4698]: I1014 10:49:13.575050 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-947pv\" (UniqueName: \"kubernetes.io/projected/669899ab-8487-429a-9bba-91e525ce832e-kube-api-access-947pv\") on node \"crc\" DevicePath \"\"" Oct 14 10:49:13 crc kubenswrapper[4698]: I1014 10:49:13.575081 4698 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/669899ab-8487-429a-9bba-91e525ce832e-utilities\") on node \"crc\" DevicePath \"\"" Oct 14 10:49:13 crc kubenswrapper[4698]: I1014 10:49:13.575096 4698 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/669899ab-8487-429a-9bba-91e525ce832e-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 14 10:49:13 crc kubenswrapper[4698]: I1014 10:49:13.583293 4698 scope.go:117] "RemoveContainer" containerID="396bc060c071320a7b58dd2e16a40c9ba14ed9ec2f14cd274098d82f7399c103" Oct 14 10:49:13 crc kubenswrapper[4698]: I1014 10:49:13.629722 4698 scope.go:117] "RemoveContainer" containerID="c47abe679242f57c108f2b5e1cf2545601e5506ed09a5c9ff47a7a9571fa968d" Oct 14 10:49:13 crc kubenswrapper[4698]: E1014 10:49:13.631160 4698 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c47abe679242f57c108f2b5e1cf2545601e5506ed09a5c9ff47a7a9571fa968d\": container with ID starting with c47abe679242f57c108f2b5e1cf2545601e5506ed09a5c9ff47a7a9571fa968d not found: ID does not exist" containerID="c47abe679242f57c108f2b5e1cf2545601e5506ed09a5c9ff47a7a9571fa968d" Oct 14 10:49:13 crc kubenswrapper[4698]: I1014 10:49:13.631201 4698 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c47abe679242f57c108f2b5e1cf2545601e5506ed09a5c9ff47a7a9571fa968d"} err="failed to get container status \"c47abe679242f57c108f2b5e1cf2545601e5506ed09a5c9ff47a7a9571fa968d\": rpc error: code = NotFound desc = could not find container \"c47abe679242f57c108f2b5e1cf2545601e5506ed09a5c9ff47a7a9571fa968d\": container with ID starting with c47abe679242f57c108f2b5e1cf2545601e5506ed09a5c9ff47a7a9571fa968d not found: ID does not exist" Oct 14 10:49:13 crc kubenswrapper[4698]: I1014 10:49:13.631230 4698 scope.go:117] "RemoveContainer" containerID="e38fef4f512e655a68fff1e6a13fc222e8c1d913a066b7b24eabc77321b0463a" Oct 14 10:49:13 crc kubenswrapper[4698]: E1014 10:49:13.631569 4698 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e38fef4f512e655a68fff1e6a13fc222e8c1d913a066b7b24eabc77321b0463a\": container with ID starting with e38fef4f512e655a68fff1e6a13fc222e8c1d913a066b7b24eabc77321b0463a not found: ID does not exist" containerID="e38fef4f512e655a68fff1e6a13fc222e8c1d913a066b7b24eabc77321b0463a" Oct 14 10:49:13 crc kubenswrapper[4698]: I1014 10:49:13.631597 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e38fef4f512e655a68fff1e6a13fc222e8c1d913a066b7b24eabc77321b0463a"} err="failed to get container status \"e38fef4f512e655a68fff1e6a13fc222e8c1d913a066b7b24eabc77321b0463a\": rpc error: code = NotFound desc = could not find container \"e38fef4f512e655a68fff1e6a13fc222e8c1d913a066b7b24eabc77321b0463a\": container with ID starting with e38fef4f512e655a68fff1e6a13fc222e8c1d913a066b7b24eabc77321b0463a not found: ID does not exist" Oct 14 10:49:13 crc kubenswrapper[4698]: I1014 10:49:13.631613 4698 scope.go:117] "RemoveContainer" containerID="396bc060c071320a7b58dd2e16a40c9ba14ed9ec2f14cd274098d82f7399c103" Oct 14 10:49:13 crc kubenswrapper[4698]: E1014 
10:49:13.631886 4698 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"396bc060c071320a7b58dd2e16a40c9ba14ed9ec2f14cd274098d82f7399c103\": container with ID starting with 396bc060c071320a7b58dd2e16a40c9ba14ed9ec2f14cd274098d82f7399c103 not found: ID does not exist" containerID="396bc060c071320a7b58dd2e16a40c9ba14ed9ec2f14cd274098d82f7399c103" Oct 14 10:49:13 crc kubenswrapper[4698]: I1014 10:49:13.631908 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"396bc060c071320a7b58dd2e16a40c9ba14ed9ec2f14cd274098d82f7399c103"} err="failed to get container status \"396bc060c071320a7b58dd2e16a40c9ba14ed9ec2f14cd274098d82f7399c103\": rpc error: code = NotFound desc = could not find container \"396bc060c071320a7b58dd2e16a40c9ba14ed9ec2f14cd274098d82f7399c103\": container with ID starting with 396bc060c071320a7b58dd2e16a40c9ba14ed9ec2f14cd274098d82f7399c103 not found: ID does not exist" Oct 14 10:49:15 crc kubenswrapper[4698]: I1014 10:49:15.028205 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="669899ab-8487-429a-9bba-91e525ce832e" path="/var/lib/kubelet/pods/669899ab-8487-429a-9bba-91e525ce832e/volumes" Oct 14 10:49:21 crc kubenswrapper[4698]: I1014 10:49:21.017643 4698 scope.go:117] "RemoveContainer" containerID="8a09d38c6e7b7c3a7641f91ee186188e4b178c8f5f54d6a3e46b0f3211abaf7c" Oct 14 10:49:21 crc kubenswrapper[4698]: E1014 10:49:21.018595 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lp4sk_openshift-machine-config-operator(c359a8fc-1e2f-49af-8da2-719d52bd969a)\"" pod="openshift-machine-config-operator/machine-config-daemon-lp4sk" podUID="c359a8fc-1e2f-49af-8da2-719d52bd969a" Oct 14 10:49:34 crc kubenswrapper[4698]: I1014 10:49:34.017128 
4698 scope.go:117] "RemoveContainer" containerID="8a09d38c6e7b7c3a7641f91ee186188e4b178c8f5f54d6a3e46b0f3211abaf7c" Oct 14 10:49:34 crc kubenswrapper[4698]: E1014 10:49:34.018090 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lp4sk_openshift-machine-config-operator(c359a8fc-1e2f-49af-8da2-719d52bd969a)\"" pod="openshift-machine-config-operator/machine-config-daemon-lp4sk" podUID="c359a8fc-1e2f-49af-8da2-719d52bd969a" Oct 14 10:49:47 crc kubenswrapper[4698]: I1014 10:49:47.017623 4698 scope.go:117] "RemoveContainer" containerID="8a09d38c6e7b7c3a7641f91ee186188e4b178c8f5f54d6a3e46b0f3211abaf7c" Oct 14 10:49:47 crc kubenswrapper[4698]: E1014 10:49:47.018644 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lp4sk_openshift-machine-config-operator(c359a8fc-1e2f-49af-8da2-719d52bd969a)\"" pod="openshift-machine-config-operator/machine-config-daemon-lp4sk" podUID="c359a8fc-1e2f-49af-8da2-719d52bd969a" Oct 14 10:50:00 crc kubenswrapper[4698]: I1014 10:50:00.017577 4698 scope.go:117] "RemoveContainer" containerID="8a09d38c6e7b7c3a7641f91ee186188e4b178c8f5f54d6a3e46b0f3211abaf7c" Oct 14 10:50:00 crc kubenswrapper[4698]: E1014 10:50:00.018753 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lp4sk_openshift-machine-config-operator(c359a8fc-1e2f-49af-8da2-719d52bd969a)\"" pod="openshift-machine-config-operator/machine-config-daemon-lp4sk" podUID="c359a8fc-1e2f-49af-8da2-719d52bd969a" Oct 14 10:50:14 crc kubenswrapper[4698]: I1014 
10:50:14.017717 4698 scope.go:117] "RemoveContainer" containerID="8a09d38c6e7b7c3a7641f91ee186188e4b178c8f5f54d6a3e46b0f3211abaf7c" Oct 14 10:50:14 crc kubenswrapper[4698]: E1014 10:50:14.019112 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lp4sk_openshift-machine-config-operator(c359a8fc-1e2f-49af-8da2-719d52bd969a)\"" pod="openshift-machine-config-operator/machine-config-daemon-lp4sk" podUID="c359a8fc-1e2f-49af-8da2-719d52bd969a" Oct 14 10:50:28 crc kubenswrapper[4698]: I1014 10:50:28.017503 4698 scope.go:117] "RemoveContainer" containerID="8a09d38c6e7b7c3a7641f91ee186188e4b178c8f5f54d6a3e46b0f3211abaf7c" Oct 14 10:50:28 crc kubenswrapper[4698]: E1014 10:50:28.018229 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lp4sk_openshift-machine-config-operator(c359a8fc-1e2f-49af-8da2-719d52bd969a)\"" pod="openshift-machine-config-operator/machine-config-daemon-lp4sk" podUID="c359a8fc-1e2f-49af-8da2-719d52bd969a" Oct 14 10:50:41 crc kubenswrapper[4698]: I1014 10:50:41.017245 4698 scope.go:117] "RemoveContainer" containerID="8a09d38c6e7b7c3a7641f91ee186188e4b178c8f5f54d6a3e46b0f3211abaf7c" Oct 14 10:50:41 crc kubenswrapper[4698]: E1014 10:50:41.023663 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lp4sk_openshift-machine-config-operator(c359a8fc-1e2f-49af-8da2-719d52bd969a)\"" pod="openshift-machine-config-operator/machine-config-daemon-lp4sk" podUID="c359a8fc-1e2f-49af-8da2-719d52bd969a" Oct 14 10:50:55 crc 
kubenswrapper[4698]: I1014 10:50:55.017496 4698 scope.go:117] "RemoveContainer" containerID="8a09d38c6e7b7c3a7641f91ee186188e4b178c8f5f54d6a3e46b0f3211abaf7c" Oct 14 10:50:55 crc kubenswrapper[4698]: E1014 10:50:55.018491 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lp4sk_openshift-machine-config-operator(c359a8fc-1e2f-49af-8da2-719d52bd969a)\"" pod="openshift-machine-config-operator/machine-config-daemon-lp4sk" podUID="c359a8fc-1e2f-49af-8da2-719d52bd969a" Oct 14 10:51:08 crc kubenswrapper[4698]: I1014 10:51:08.017258 4698 scope.go:117] "RemoveContainer" containerID="8a09d38c6e7b7c3a7641f91ee186188e4b178c8f5f54d6a3e46b0f3211abaf7c" Oct 14 10:51:08 crc kubenswrapper[4698]: E1014 10:51:08.018070 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lp4sk_openshift-machine-config-operator(c359a8fc-1e2f-49af-8da2-719d52bd969a)\"" pod="openshift-machine-config-operator/machine-config-daemon-lp4sk" podUID="c359a8fc-1e2f-49af-8da2-719d52bd969a" Oct 14 10:51:23 crc kubenswrapper[4698]: I1014 10:51:23.019304 4698 scope.go:117] "RemoveContainer" containerID="8a09d38c6e7b7c3a7641f91ee186188e4b178c8f5f54d6a3e46b0f3211abaf7c" Oct 14 10:51:23 crc kubenswrapper[4698]: E1014 10:51:23.020213 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lp4sk_openshift-machine-config-operator(c359a8fc-1e2f-49af-8da2-719d52bd969a)\"" pod="openshift-machine-config-operator/machine-config-daemon-lp4sk" podUID="c359a8fc-1e2f-49af-8da2-719d52bd969a" Oct 
14 10:51:38 crc kubenswrapper[4698]: I1014 10:51:38.017504 4698 scope.go:117] "RemoveContainer" containerID="8a09d38c6e7b7c3a7641f91ee186188e4b178c8f5f54d6a3e46b0f3211abaf7c" Oct 14 10:51:38 crc kubenswrapper[4698]: E1014 10:51:38.018151 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lp4sk_openshift-machine-config-operator(c359a8fc-1e2f-49af-8da2-719d52bd969a)\"" pod="openshift-machine-config-operator/machine-config-daemon-lp4sk" podUID="c359a8fc-1e2f-49af-8da2-719d52bd969a" Oct 14 10:51:49 crc kubenswrapper[4698]: I1014 10:51:49.026522 4698 scope.go:117] "RemoveContainer" containerID="8a09d38c6e7b7c3a7641f91ee186188e4b178c8f5f54d6a3e46b0f3211abaf7c" Oct 14 10:51:49 crc kubenswrapper[4698]: E1014 10:51:49.027732 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lp4sk_openshift-machine-config-operator(c359a8fc-1e2f-49af-8da2-719d52bd969a)\"" pod="openshift-machine-config-operator/machine-config-daemon-lp4sk" podUID="c359a8fc-1e2f-49af-8da2-719d52bd969a" Oct 14 10:52:02 crc kubenswrapper[4698]: I1014 10:52:02.017049 4698 scope.go:117] "RemoveContainer" containerID="8a09d38c6e7b7c3a7641f91ee186188e4b178c8f5f54d6a3e46b0f3211abaf7c" Oct 14 10:52:02 crc kubenswrapper[4698]: E1014 10:52:02.018001 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lp4sk_openshift-machine-config-operator(c359a8fc-1e2f-49af-8da2-719d52bd969a)\"" pod="openshift-machine-config-operator/machine-config-daemon-lp4sk" 
podUID="c359a8fc-1e2f-49af-8da2-719d52bd969a" Oct 14 10:52:13 crc kubenswrapper[4698]: I1014 10:52:13.021662 4698 scope.go:117] "RemoveContainer" containerID="8a09d38c6e7b7c3a7641f91ee186188e4b178c8f5f54d6a3e46b0f3211abaf7c" Oct 14 10:52:13 crc kubenswrapper[4698]: E1014 10:52:13.022348 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lp4sk_openshift-machine-config-operator(c359a8fc-1e2f-49af-8da2-719d52bd969a)\"" pod="openshift-machine-config-operator/machine-config-daemon-lp4sk" podUID="c359a8fc-1e2f-49af-8da2-719d52bd969a" Oct 14 10:52:26 crc kubenswrapper[4698]: I1014 10:52:26.017313 4698 scope.go:117] "RemoveContainer" containerID="8a09d38c6e7b7c3a7641f91ee186188e4b178c8f5f54d6a3e46b0f3211abaf7c" Oct 14 10:52:26 crc kubenswrapper[4698]: E1014 10:52:26.018251 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lp4sk_openshift-machine-config-operator(c359a8fc-1e2f-49af-8da2-719d52bd969a)\"" pod="openshift-machine-config-operator/machine-config-daemon-lp4sk" podUID="c359a8fc-1e2f-49af-8da2-719d52bd969a" Oct 14 10:52:38 crc kubenswrapper[4698]: I1014 10:52:38.017416 4698 scope.go:117] "RemoveContainer" containerID="8a09d38c6e7b7c3a7641f91ee186188e4b178c8f5f54d6a3e46b0f3211abaf7c" Oct 14 10:52:38 crc kubenswrapper[4698]: E1014 10:52:38.018242 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lp4sk_openshift-machine-config-operator(c359a8fc-1e2f-49af-8da2-719d52bd969a)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-lp4sk" podUID="c359a8fc-1e2f-49af-8da2-719d52bd969a" Oct 14 10:52:51 crc kubenswrapper[4698]: I1014 10:52:51.018431 4698 scope.go:117] "RemoveContainer" containerID="8a09d38c6e7b7c3a7641f91ee186188e4b178c8f5f54d6a3e46b0f3211abaf7c" Oct 14 10:52:51 crc kubenswrapper[4698]: E1014 10:52:51.022425 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lp4sk_openshift-machine-config-operator(c359a8fc-1e2f-49af-8da2-719d52bd969a)\"" pod="openshift-machine-config-operator/machine-config-daemon-lp4sk" podUID="c359a8fc-1e2f-49af-8da2-719d52bd969a" Oct 14 10:53:03 crc kubenswrapper[4698]: I1014 10:53:03.016596 4698 scope.go:117] "RemoveContainer" containerID="8a09d38c6e7b7c3a7641f91ee186188e4b178c8f5f54d6a3e46b0f3211abaf7c" Oct 14 10:53:03 crc kubenswrapper[4698]: E1014 10:53:03.017598 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lp4sk_openshift-machine-config-operator(c359a8fc-1e2f-49af-8da2-719d52bd969a)\"" pod="openshift-machine-config-operator/machine-config-daemon-lp4sk" podUID="c359a8fc-1e2f-49af-8da2-719d52bd969a" Oct 14 10:53:16 crc kubenswrapper[4698]: I1014 10:53:16.018167 4698 scope.go:117] "RemoveContainer" containerID="8a09d38c6e7b7c3a7641f91ee186188e4b178c8f5f54d6a3e46b0f3211abaf7c" Oct 14 10:53:16 crc kubenswrapper[4698]: E1014 10:53:16.019295 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-lp4sk_openshift-machine-config-operator(c359a8fc-1e2f-49af-8da2-719d52bd969a)\"" pod="openshift-machine-config-operator/machine-config-daemon-lp4sk" podUID="c359a8fc-1e2f-49af-8da2-719d52bd969a" Oct 14 10:53:30 crc kubenswrapper[4698]: I1014 10:53:30.017522 4698 scope.go:117] "RemoveContainer" containerID="8a09d38c6e7b7c3a7641f91ee186188e4b178c8f5f54d6a3e46b0f3211abaf7c" Oct 14 10:53:30 crc kubenswrapper[4698]: E1014 10:53:30.018412 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lp4sk_openshift-machine-config-operator(c359a8fc-1e2f-49af-8da2-719d52bd969a)\"" pod="openshift-machine-config-operator/machine-config-daemon-lp4sk" podUID="c359a8fc-1e2f-49af-8da2-719d52bd969a" Oct 14 10:53:42 crc kubenswrapper[4698]: I1014 10:53:42.017101 4698 scope.go:117] "RemoveContainer" containerID="8a09d38c6e7b7c3a7641f91ee186188e4b178c8f5f54d6a3e46b0f3211abaf7c" Oct 14 10:53:42 crc kubenswrapper[4698]: E1014 10:53:42.017842 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lp4sk_openshift-machine-config-operator(c359a8fc-1e2f-49af-8da2-719d52bd969a)\"" pod="openshift-machine-config-operator/machine-config-daemon-lp4sk" podUID="c359a8fc-1e2f-49af-8da2-719d52bd969a" Oct 14 10:53:54 crc kubenswrapper[4698]: I1014 10:53:54.017368 4698 scope.go:117] "RemoveContainer" containerID="8a09d38c6e7b7c3a7641f91ee186188e4b178c8f5f54d6a3e46b0f3211abaf7c" Oct 14 10:53:55 crc kubenswrapper[4698]: I1014 10:53:55.204751 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-lp4sk" 
event={"ID":"c359a8fc-1e2f-49af-8da2-719d52bd969a","Type":"ContainerStarted","Data":"cdafb9b3a64aacdc8bc28b7e65cb92eddb2a3810b308ed08790b175d19fe2bf3"} Oct 14 10:55:02 crc kubenswrapper[4698]: I1014 10:55:02.405718 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-zklkl"] Oct 14 10:55:02 crc kubenswrapper[4698]: E1014 10:55:02.406531 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="669899ab-8487-429a-9bba-91e525ce832e" containerName="extract-content" Oct 14 10:55:02 crc kubenswrapper[4698]: I1014 10:55:02.406543 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="669899ab-8487-429a-9bba-91e525ce832e" containerName="extract-content" Oct 14 10:55:02 crc kubenswrapper[4698]: E1014 10:55:02.406564 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="669899ab-8487-429a-9bba-91e525ce832e" containerName="extract-utilities" Oct 14 10:55:02 crc kubenswrapper[4698]: I1014 10:55:02.406570 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="669899ab-8487-429a-9bba-91e525ce832e" containerName="extract-utilities" Oct 14 10:55:02 crc kubenswrapper[4698]: E1014 10:55:02.406587 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="669899ab-8487-429a-9bba-91e525ce832e" containerName="registry-server" Oct 14 10:55:02 crc kubenswrapper[4698]: I1014 10:55:02.406593 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="669899ab-8487-429a-9bba-91e525ce832e" containerName="registry-server" Oct 14 10:55:02 crc kubenswrapper[4698]: E1014 10:55:02.406616 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7dc522d4-aacd-436e-95f1-9985817ae097" containerName="extract-utilities" Oct 14 10:55:02 crc kubenswrapper[4698]: I1014 10:55:02.406623 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="7dc522d4-aacd-436e-95f1-9985817ae097" containerName="extract-utilities" Oct 14 10:55:02 crc kubenswrapper[4698]: E1014 10:55:02.406634 4698 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7dc522d4-aacd-436e-95f1-9985817ae097" containerName="registry-server" Oct 14 10:55:02 crc kubenswrapper[4698]: I1014 10:55:02.406641 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="7dc522d4-aacd-436e-95f1-9985817ae097" containerName="registry-server" Oct 14 10:55:02 crc kubenswrapper[4698]: E1014 10:55:02.406650 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7dc522d4-aacd-436e-95f1-9985817ae097" containerName="extract-content" Oct 14 10:55:02 crc kubenswrapper[4698]: I1014 10:55:02.406655 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="7dc522d4-aacd-436e-95f1-9985817ae097" containerName="extract-content" Oct 14 10:55:02 crc kubenswrapper[4698]: I1014 10:55:02.406863 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="669899ab-8487-429a-9bba-91e525ce832e" containerName="registry-server" Oct 14 10:55:02 crc kubenswrapper[4698]: I1014 10:55:02.406876 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="7dc522d4-aacd-436e-95f1-9985817ae097" containerName="registry-server" Oct 14 10:55:02 crc kubenswrapper[4698]: I1014 10:55:02.408219 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-zklkl" Oct 14 10:55:02 crc kubenswrapper[4698]: I1014 10:55:02.424922 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-zklkl"] Oct 14 10:55:02 crc kubenswrapper[4698]: I1014 10:55:02.548470 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5a2b4b7d-ad05-465b-8c2e-38e9f4b289d0-utilities\") pod \"certified-operators-zklkl\" (UID: \"5a2b4b7d-ad05-465b-8c2e-38e9f4b289d0\") " pod="openshift-marketplace/certified-operators-zklkl" Oct 14 10:55:02 crc kubenswrapper[4698]: I1014 10:55:02.548679 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-st9j2\" (UniqueName: \"kubernetes.io/projected/5a2b4b7d-ad05-465b-8c2e-38e9f4b289d0-kube-api-access-st9j2\") pod \"certified-operators-zklkl\" (UID: \"5a2b4b7d-ad05-465b-8c2e-38e9f4b289d0\") " pod="openshift-marketplace/certified-operators-zklkl" Oct 14 10:55:02 crc kubenswrapper[4698]: I1014 10:55:02.548972 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5a2b4b7d-ad05-465b-8c2e-38e9f4b289d0-catalog-content\") pod \"certified-operators-zklkl\" (UID: \"5a2b4b7d-ad05-465b-8c2e-38e9f4b289d0\") " pod="openshift-marketplace/certified-operators-zklkl" Oct 14 10:55:02 crc kubenswrapper[4698]: I1014 10:55:02.650702 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5a2b4b7d-ad05-465b-8c2e-38e9f4b289d0-catalog-content\") pod \"certified-operators-zklkl\" (UID: \"5a2b4b7d-ad05-465b-8c2e-38e9f4b289d0\") " pod="openshift-marketplace/certified-operators-zklkl" Oct 14 10:55:02 crc kubenswrapper[4698]: I1014 10:55:02.651114 4698 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5a2b4b7d-ad05-465b-8c2e-38e9f4b289d0-utilities\") pod \"certified-operators-zklkl\" (UID: \"5a2b4b7d-ad05-465b-8c2e-38e9f4b289d0\") " pod="openshift-marketplace/certified-operators-zklkl" Oct 14 10:55:02 crc kubenswrapper[4698]: I1014 10:55:02.651219 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-st9j2\" (UniqueName: \"kubernetes.io/projected/5a2b4b7d-ad05-465b-8c2e-38e9f4b289d0-kube-api-access-st9j2\") pod \"certified-operators-zklkl\" (UID: \"5a2b4b7d-ad05-465b-8c2e-38e9f4b289d0\") " pod="openshift-marketplace/certified-operators-zklkl" Oct 14 10:55:02 crc kubenswrapper[4698]: I1014 10:55:02.651267 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5a2b4b7d-ad05-465b-8c2e-38e9f4b289d0-catalog-content\") pod \"certified-operators-zklkl\" (UID: \"5a2b4b7d-ad05-465b-8c2e-38e9f4b289d0\") " pod="openshift-marketplace/certified-operators-zklkl" Oct 14 10:55:02 crc kubenswrapper[4698]: I1014 10:55:02.651465 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5a2b4b7d-ad05-465b-8c2e-38e9f4b289d0-utilities\") pod \"certified-operators-zklkl\" (UID: \"5a2b4b7d-ad05-465b-8c2e-38e9f4b289d0\") " pod="openshift-marketplace/certified-operators-zklkl" Oct 14 10:55:02 crc kubenswrapper[4698]: I1014 10:55:02.674272 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-st9j2\" (UniqueName: \"kubernetes.io/projected/5a2b4b7d-ad05-465b-8c2e-38e9f4b289d0-kube-api-access-st9j2\") pod \"certified-operators-zklkl\" (UID: \"5a2b4b7d-ad05-465b-8c2e-38e9f4b289d0\") " pod="openshift-marketplace/certified-operators-zklkl" Oct 14 10:55:02 crc kubenswrapper[4698]: I1014 10:55:02.747202 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-zklkl" Oct 14 10:55:03 crc kubenswrapper[4698]: I1014 10:55:03.381743 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-zklkl"] Oct 14 10:55:03 crc kubenswrapper[4698]: I1014 10:55:03.837641 4698 generic.go:334] "Generic (PLEG): container finished" podID="5a2b4b7d-ad05-465b-8c2e-38e9f4b289d0" containerID="a3ba77975e63773a598a02a554b20d8bee117f5c09323febb8c015995ebdc0b8" exitCode=0 Oct 14 10:55:03 crc kubenswrapper[4698]: I1014 10:55:03.838910 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zklkl" event={"ID":"5a2b4b7d-ad05-465b-8c2e-38e9f4b289d0","Type":"ContainerDied","Data":"a3ba77975e63773a598a02a554b20d8bee117f5c09323febb8c015995ebdc0b8"} Oct 14 10:55:03 crc kubenswrapper[4698]: I1014 10:55:03.839199 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zklkl" event={"ID":"5a2b4b7d-ad05-465b-8c2e-38e9f4b289d0","Type":"ContainerStarted","Data":"24eaea97b7824b6f3577f5d86442ad7ff5b5c1576de4458befe6df945c94b69e"} Oct 14 10:55:03 crc kubenswrapper[4698]: I1014 10:55:03.841828 4698 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Oct 14 10:55:05 crc kubenswrapper[4698]: I1014 10:55:05.856940 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zklkl" event={"ID":"5a2b4b7d-ad05-465b-8c2e-38e9f4b289d0","Type":"ContainerStarted","Data":"4df497b2f69d7cbe369c1fff83def7dbde453050f9a1253abeb205a3f5efa457"} Oct 14 10:55:06 crc kubenswrapper[4698]: I1014 10:55:06.869288 4698 generic.go:334] "Generic (PLEG): container finished" podID="5a2b4b7d-ad05-465b-8c2e-38e9f4b289d0" containerID="4df497b2f69d7cbe369c1fff83def7dbde453050f9a1253abeb205a3f5efa457" exitCode=0 Oct 14 10:55:06 crc kubenswrapper[4698]: I1014 10:55:06.870110 4698 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-marketplace/certified-operators-zklkl" event={"ID":"5a2b4b7d-ad05-465b-8c2e-38e9f4b289d0","Type":"ContainerDied","Data":"4df497b2f69d7cbe369c1fff83def7dbde453050f9a1253abeb205a3f5efa457"} Oct 14 10:55:07 crc kubenswrapper[4698]: I1014 10:55:07.881934 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zklkl" event={"ID":"5a2b4b7d-ad05-465b-8c2e-38e9f4b289d0","Type":"ContainerStarted","Data":"3a88681da2719c641ac5f8f868d5965d59a31c08fbeb5f307a36b0fb04d9d1ac"} Oct 14 10:55:07 crc kubenswrapper[4698]: I1014 10:55:07.905059 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-zklkl" podStartSLOduration=2.411979496 podStartE2EDuration="5.905031956s" podCreationTimestamp="2025-10-14 10:55:02 +0000 UTC" firstStartedPulling="2025-10-14 10:55:03.841328694 +0000 UTC m=+3485.538628120" lastFinishedPulling="2025-10-14 10:55:07.334381164 +0000 UTC m=+3489.031680580" observedRunningTime="2025-10-14 10:55:07.897971627 +0000 UTC m=+3489.595271053" watchObservedRunningTime="2025-10-14 10:55:07.905031956 +0000 UTC m=+3489.602331372" Oct 14 10:55:12 crc kubenswrapper[4698]: I1014 10:55:12.748133 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-zklkl" Oct 14 10:55:12 crc kubenswrapper[4698]: I1014 10:55:12.748950 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-zklkl" Oct 14 10:55:12 crc kubenswrapper[4698]: I1014 10:55:12.811334 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-zklkl" Oct 14 10:55:12 crc kubenswrapper[4698]: I1014 10:55:12.977944 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-zklkl" Oct 14 10:55:13 crc kubenswrapper[4698]: I1014 
10:55:13.047315 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-zklkl"] Oct 14 10:55:14 crc kubenswrapper[4698]: I1014 10:55:14.949501 4698 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-zklkl" podUID="5a2b4b7d-ad05-465b-8c2e-38e9f4b289d0" containerName="registry-server" containerID="cri-o://3a88681da2719c641ac5f8f868d5965d59a31c08fbeb5f307a36b0fb04d9d1ac" gracePeriod=2 Oct 14 10:55:15 crc kubenswrapper[4698]: I1014 10:55:15.729736 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-zklkl" Oct 14 10:55:15 crc kubenswrapper[4698]: I1014 10:55:15.840057 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5a2b4b7d-ad05-465b-8c2e-38e9f4b289d0-catalog-content\") pod \"5a2b4b7d-ad05-465b-8c2e-38e9f4b289d0\" (UID: \"5a2b4b7d-ad05-465b-8c2e-38e9f4b289d0\") " Oct 14 10:55:15 crc kubenswrapper[4698]: I1014 10:55:15.840415 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-st9j2\" (UniqueName: \"kubernetes.io/projected/5a2b4b7d-ad05-465b-8c2e-38e9f4b289d0-kube-api-access-st9j2\") pod \"5a2b4b7d-ad05-465b-8c2e-38e9f4b289d0\" (UID: \"5a2b4b7d-ad05-465b-8c2e-38e9f4b289d0\") " Oct 14 10:55:15 crc kubenswrapper[4698]: I1014 10:55:15.840506 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5a2b4b7d-ad05-465b-8c2e-38e9f4b289d0-utilities\") pod \"5a2b4b7d-ad05-465b-8c2e-38e9f4b289d0\" (UID: \"5a2b4b7d-ad05-465b-8c2e-38e9f4b289d0\") " Oct 14 10:55:15 crc kubenswrapper[4698]: I1014 10:55:15.841037 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5a2b4b7d-ad05-465b-8c2e-38e9f4b289d0-utilities" (OuterVolumeSpecName: 
"utilities") pod "5a2b4b7d-ad05-465b-8c2e-38e9f4b289d0" (UID: "5a2b4b7d-ad05-465b-8c2e-38e9f4b289d0"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 10:55:15 crc kubenswrapper[4698]: I1014 10:55:15.852069 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5a2b4b7d-ad05-465b-8c2e-38e9f4b289d0-kube-api-access-st9j2" (OuterVolumeSpecName: "kube-api-access-st9j2") pod "5a2b4b7d-ad05-465b-8c2e-38e9f4b289d0" (UID: "5a2b4b7d-ad05-465b-8c2e-38e9f4b289d0"). InnerVolumeSpecName "kube-api-access-st9j2". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 10:55:15 crc kubenswrapper[4698]: I1014 10:55:15.888986 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5a2b4b7d-ad05-465b-8c2e-38e9f4b289d0-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5a2b4b7d-ad05-465b-8c2e-38e9f4b289d0" (UID: "5a2b4b7d-ad05-465b-8c2e-38e9f4b289d0"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 10:55:15 crc kubenswrapper[4698]: I1014 10:55:15.943323 4698 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5a2b4b7d-ad05-465b-8c2e-38e9f4b289d0-utilities\") on node \"crc\" DevicePath \"\"" Oct 14 10:55:15 crc kubenswrapper[4698]: I1014 10:55:15.943380 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-st9j2\" (UniqueName: \"kubernetes.io/projected/5a2b4b7d-ad05-465b-8c2e-38e9f4b289d0-kube-api-access-st9j2\") on node \"crc\" DevicePath \"\"" Oct 14 10:55:15 crc kubenswrapper[4698]: I1014 10:55:15.943399 4698 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5a2b4b7d-ad05-465b-8c2e-38e9f4b289d0-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 14 10:55:15 crc kubenswrapper[4698]: I1014 10:55:15.960252 4698 generic.go:334] "Generic (PLEG): container finished" podID="5a2b4b7d-ad05-465b-8c2e-38e9f4b289d0" containerID="3a88681da2719c641ac5f8f868d5965d59a31c08fbeb5f307a36b0fb04d9d1ac" exitCode=0 Oct 14 10:55:15 crc kubenswrapper[4698]: I1014 10:55:15.960296 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zklkl" event={"ID":"5a2b4b7d-ad05-465b-8c2e-38e9f4b289d0","Type":"ContainerDied","Data":"3a88681da2719c641ac5f8f868d5965d59a31c08fbeb5f307a36b0fb04d9d1ac"} Oct 14 10:55:15 crc kubenswrapper[4698]: I1014 10:55:15.960326 4698 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-zklkl" Oct 14 10:55:15 crc kubenswrapper[4698]: I1014 10:55:15.960349 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zklkl" event={"ID":"5a2b4b7d-ad05-465b-8c2e-38e9f4b289d0","Type":"ContainerDied","Data":"24eaea97b7824b6f3577f5d86442ad7ff5b5c1576de4458befe6df945c94b69e"} Oct 14 10:55:15 crc kubenswrapper[4698]: I1014 10:55:15.960369 4698 scope.go:117] "RemoveContainer" containerID="3a88681da2719c641ac5f8f868d5965d59a31c08fbeb5f307a36b0fb04d9d1ac" Oct 14 10:55:15 crc kubenswrapper[4698]: I1014 10:55:15.991385 4698 scope.go:117] "RemoveContainer" containerID="4df497b2f69d7cbe369c1fff83def7dbde453050f9a1253abeb205a3f5efa457" Oct 14 10:55:15 crc kubenswrapper[4698]: I1014 10:55:15.999378 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-zklkl"] Oct 14 10:55:16 crc kubenswrapper[4698]: I1014 10:55:16.007491 4698 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-zklkl"] Oct 14 10:55:16 crc kubenswrapper[4698]: I1014 10:55:16.016719 4698 scope.go:117] "RemoveContainer" containerID="a3ba77975e63773a598a02a554b20d8bee117f5c09323febb8c015995ebdc0b8" Oct 14 10:55:16 crc kubenswrapper[4698]: I1014 10:55:16.067701 4698 scope.go:117] "RemoveContainer" containerID="3a88681da2719c641ac5f8f868d5965d59a31c08fbeb5f307a36b0fb04d9d1ac" Oct 14 10:55:16 crc kubenswrapper[4698]: E1014 10:55:16.069262 4698 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3a88681da2719c641ac5f8f868d5965d59a31c08fbeb5f307a36b0fb04d9d1ac\": container with ID starting with 3a88681da2719c641ac5f8f868d5965d59a31c08fbeb5f307a36b0fb04d9d1ac not found: ID does not exist" containerID="3a88681da2719c641ac5f8f868d5965d59a31c08fbeb5f307a36b0fb04d9d1ac" Oct 14 10:55:16 crc kubenswrapper[4698]: I1014 10:55:16.069322 4698 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3a88681da2719c641ac5f8f868d5965d59a31c08fbeb5f307a36b0fb04d9d1ac"} err="failed to get container status \"3a88681da2719c641ac5f8f868d5965d59a31c08fbeb5f307a36b0fb04d9d1ac\": rpc error: code = NotFound desc = could not find container \"3a88681da2719c641ac5f8f868d5965d59a31c08fbeb5f307a36b0fb04d9d1ac\": container with ID starting with 3a88681da2719c641ac5f8f868d5965d59a31c08fbeb5f307a36b0fb04d9d1ac not found: ID does not exist" Oct 14 10:55:16 crc kubenswrapper[4698]: I1014 10:55:16.069347 4698 scope.go:117] "RemoveContainer" containerID="4df497b2f69d7cbe369c1fff83def7dbde453050f9a1253abeb205a3f5efa457" Oct 14 10:55:16 crc kubenswrapper[4698]: E1014 10:55:16.069938 4698 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4df497b2f69d7cbe369c1fff83def7dbde453050f9a1253abeb205a3f5efa457\": container with ID starting with 4df497b2f69d7cbe369c1fff83def7dbde453050f9a1253abeb205a3f5efa457 not found: ID does not exist" containerID="4df497b2f69d7cbe369c1fff83def7dbde453050f9a1253abeb205a3f5efa457" Oct 14 10:55:16 crc kubenswrapper[4698]: I1014 10:55:16.070097 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4df497b2f69d7cbe369c1fff83def7dbde453050f9a1253abeb205a3f5efa457"} err="failed to get container status \"4df497b2f69d7cbe369c1fff83def7dbde453050f9a1253abeb205a3f5efa457\": rpc error: code = NotFound desc = could not find container \"4df497b2f69d7cbe369c1fff83def7dbde453050f9a1253abeb205a3f5efa457\": container with ID starting with 4df497b2f69d7cbe369c1fff83def7dbde453050f9a1253abeb205a3f5efa457 not found: ID does not exist" Oct 14 10:55:16 crc kubenswrapper[4698]: I1014 10:55:16.070242 4698 scope.go:117] "RemoveContainer" containerID="a3ba77975e63773a598a02a554b20d8bee117f5c09323febb8c015995ebdc0b8" Oct 14 10:55:16 crc kubenswrapper[4698]: E1014 
10:55:16.071081 4698 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a3ba77975e63773a598a02a554b20d8bee117f5c09323febb8c015995ebdc0b8\": container with ID starting with a3ba77975e63773a598a02a554b20d8bee117f5c09323febb8c015995ebdc0b8 not found: ID does not exist" containerID="a3ba77975e63773a598a02a554b20d8bee117f5c09323febb8c015995ebdc0b8" Oct 14 10:55:16 crc kubenswrapper[4698]: I1014 10:55:16.071114 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a3ba77975e63773a598a02a554b20d8bee117f5c09323febb8c015995ebdc0b8"} err="failed to get container status \"a3ba77975e63773a598a02a554b20d8bee117f5c09323febb8c015995ebdc0b8\": rpc error: code = NotFound desc = could not find container \"a3ba77975e63773a598a02a554b20d8bee117f5c09323febb8c015995ebdc0b8\": container with ID starting with a3ba77975e63773a598a02a554b20d8bee117f5c09323febb8c015995ebdc0b8 not found: ID does not exist" Oct 14 10:55:17 crc kubenswrapper[4698]: I1014 10:55:17.044641 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5a2b4b7d-ad05-465b-8c2e-38e9f4b289d0" path="/var/lib/kubelet/pods/5a2b4b7d-ad05-465b-8c2e-38e9f4b289d0/volumes" Oct 14 10:56:22 crc kubenswrapper[4698]: I1014 10:56:22.796041 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-9qw45"] Oct 14 10:56:22 crc kubenswrapper[4698]: E1014 10:56:22.796902 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5a2b4b7d-ad05-465b-8c2e-38e9f4b289d0" containerName="registry-server" Oct 14 10:56:22 crc kubenswrapper[4698]: I1014 10:56:22.796915 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="5a2b4b7d-ad05-465b-8c2e-38e9f4b289d0" containerName="registry-server" Oct 14 10:56:22 crc kubenswrapper[4698]: E1014 10:56:22.796950 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5a2b4b7d-ad05-465b-8c2e-38e9f4b289d0" 
containerName="extract-utilities" Oct 14 10:56:22 crc kubenswrapper[4698]: I1014 10:56:22.796956 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="5a2b4b7d-ad05-465b-8c2e-38e9f4b289d0" containerName="extract-utilities" Oct 14 10:56:22 crc kubenswrapper[4698]: E1014 10:56:22.796978 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5a2b4b7d-ad05-465b-8c2e-38e9f4b289d0" containerName="extract-content" Oct 14 10:56:22 crc kubenswrapper[4698]: I1014 10:56:22.796984 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="5a2b4b7d-ad05-465b-8c2e-38e9f4b289d0" containerName="extract-content" Oct 14 10:56:22 crc kubenswrapper[4698]: I1014 10:56:22.797168 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="5a2b4b7d-ad05-465b-8c2e-38e9f4b289d0" containerName="registry-server" Oct 14 10:56:22 crc kubenswrapper[4698]: I1014 10:56:22.798624 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-9qw45" Oct 14 10:56:22 crc kubenswrapper[4698]: I1014 10:56:22.823300 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-9qw45"] Oct 14 10:56:22 crc kubenswrapper[4698]: I1014 10:56:22.965099 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e3b8846d-38a4-4a2f-946c-0be9f78aa920-catalog-content\") pod \"redhat-operators-9qw45\" (UID: \"e3b8846d-38a4-4a2f-946c-0be9f78aa920\") " pod="openshift-marketplace/redhat-operators-9qw45" Oct 14 10:56:22 crc kubenswrapper[4698]: I1014 10:56:22.965198 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sp8kv\" (UniqueName: \"kubernetes.io/projected/e3b8846d-38a4-4a2f-946c-0be9f78aa920-kube-api-access-sp8kv\") pod \"redhat-operators-9qw45\" (UID: \"e3b8846d-38a4-4a2f-946c-0be9f78aa920\") " 
pod="openshift-marketplace/redhat-operators-9qw45" Oct 14 10:56:22 crc kubenswrapper[4698]: I1014 10:56:22.965352 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e3b8846d-38a4-4a2f-946c-0be9f78aa920-utilities\") pod \"redhat-operators-9qw45\" (UID: \"e3b8846d-38a4-4a2f-946c-0be9f78aa920\") " pod="openshift-marketplace/redhat-operators-9qw45" Oct 14 10:56:23 crc kubenswrapper[4698]: I1014 10:56:23.067327 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e3b8846d-38a4-4a2f-946c-0be9f78aa920-catalog-content\") pod \"redhat-operators-9qw45\" (UID: \"e3b8846d-38a4-4a2f-946c-0be9f78aa920\") " pod="openshift-marketplace/redhat-operators-9qw45" Oct 14 10:56:23 crc kubenswrapper[4698]: I1014 10:56:23.067404 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sp8kv\" (UniqueName: \"kubernetes.io/projected/e3b8846d-38a4-4a2f-946c-0be9f78aa920-kube-api-access-sp8kv\") pod \"redhat-operators-9qw45\" (UID: \"e3b8846d-38a4-4a2f-946c-0be9f78aa920\") " pod="openshift-marketplace/redhat-operators-9qw45" Oct 14 10:56:23 crc kubenswrapper[4698]: I1014 10:56:23.067493 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e3b8846d-38a4-4a2f-946c-0be9f78aa920-utilities\") pod \"redhat-operators-9qw45\" (UID: \"e3b8846d-38a4-4a2f-946c-0be9f78aa920\") " pod="openshift-marketplace/redhat-operators-9qw45" Oct 14 10:56:23 crc kubenswrapper[4698]: I1014 10:56:23.067931 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e3b8846d-38a4-4a2f-946c-0be9f78aa920-catalog-content\") pod \"redhat-operators-9qw45\" (UID: \"e3b8846d-38a4-4a2f-946c-0be9f78aa920\") " 
pod="openshift-marketplace/redhat-operators-9qw45" Oct 14 10:56:23 crc kubenswrapper[4698]: I1014 10:56:23.068019 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e3b8846d-38a4-4a2f-946c-0be9f78aa920-utilities\") pod \"redhat-operators-9qw45\" (UID: \"e3b8846d-38a4-4a2f-946c-0be9f78aa920\") " pod="openshift-marketplace/redhat-operators-9qw45" Oct 14 10:56:23 crc kubenswrapper[4698]: I1014 10:56:23.096218 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sp8kv\" (UniqueName: \"kubernetes.io/projected/e3b8846d-38a4-4a2f-946c-0be9f78aa920-kube-api-access-sp8kv\") pod \"redhat-operators-9qw45\" (UID: \"e3b8846d-38a4-4a2f-946c-0be9f78aa920\") " pod="openshift-marketplace/redhat-operators-9qw45" Oct 14 10:56:23 crc kubenswrapper[4698]: I1014 10:56:23.126086 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-9qw45" Oct 14 10:56:23 crc kubenswrapper[4698]: I1014 10:56:23.694232 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-9qw45"] Oct 14 10:56:23 crc kubenswrapper[4698]: I1014 10:56:23.908366 4698 patch_prober.go:28] interesting pod/machine-config-daemon-lp4sk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 14 10:56:23 crc kubenswrapper[4698]: I1014 10:56:23.908758 4698 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-lp4sk" podUID="c359a8fc-1e2f-49af-8da2-719d52bd969a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 14 10:56:24 crc kubenswrapper[4698]: I1014 10:56:24.605954 4698 
generic.go:334] "Generic (PLEG): container finished" podID="e3b8846d-38a4-4a2f-946c-0be9f78aa920" containerID="2f151da2f78279a99a6008261566cce3c7a7c03b37db43163a8374adeee6ac7d" exitCode=0 Oct 14 10:56:24 crc kubenswrapper[4698]: I1014 10:56:24.605995 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9qw45" event={"ID":"e3b8846d-38a4-4a2f-946c-0be9f78aa920","Type":"ContainerDied","Data":"2f151da2f78279a99a6008261566cce3c7a7c03b37db43163a8374adeee6ac7d"} Oct 14 10:56:24 crc kubenswrapper[4698]: I1014 10:56:24.606253 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9qw45" event={"ID":"e3b8846d-38a4-4a2f-946c-0be9f78aa920","Type":"ContainerStarted","Data":"7182717a2b58a0f9801d56dfc5f3b9682d960fb637b9a11ed9661e7d78e33a3d"} Oct 14 10:56:25 crc kubenswrapper[4698]: I1014 10:56:25.616103 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9qw45" event={"ID":"e3b8846d-38a4-4a2f-946c-0be9f78aa920","Type":"ContainerStarted","Data":"d5902625ab3f2adc9d1532f43288af13e1f62a7de578ca765b39b44c2bb33e3a"} Oct 14 10:56:29 crc kubenswrapper[4698]: I1014 10:56:29.650983 4698 generic.go:334] "Generic (PLEG): container finished" podID="e3b8846d-38a4-4a2f-946c-0be9f78aa920" containerID="d5902625ab3f2adc9d1532f43288af13e1f62a7de578ca765b39b44c2bb33e3a" exitCode=0 Oct 14 10:56:29 crc kubenswrapper[4698]: I1014 10:56:29.651150 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9qw45" event={"ID":"e3b8846d-38a4-4a2f-946c-0be9f78aa920","Type":"ContainerDied","Data":"d5902625ab3f2adc9d1532f43288af13e1f62a7de578ca765b39b44c2bb33e3a"} Oct 14 10:56:31 crc kubenswrapper[4698]: I1014 10:56:31.672076 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9qw45" 
event={"ID":"e3b8846d-38a4-4a2f-946c-0be9f78aa920","Type":"ContainerStarted","Data":"05fa27de31d11b10c7e3e006c8d835959d0b3a3e64831c41500f508cf440c8cf"} Oct 14 10:56:31 crc kubenswrapper[4698]: I1014 10:56:31.697953 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-9qw45" podStartSLOduration=3.637294093 podStartE2EDuration="9.69792755s" podCreationTimestamp="2025-10-14 10:56:22 +0000 UTC" firstStartedPulling="2025-10-14 10:56:24.608947551 +0000 UTC m=+3566.306246967" lastFinishedPulling="2025-10-14 10:56:30.669580978 +0000 UTC m=+3572.366880424" observedRunningTime="2025-10-14 10:56:31.694334848 +0000 UTC m=+3573.391634264" watchObservedRunningTime="2025-10-14 10:56:31.69792755 +0000 UTC m=+3573.395226966" Oct 14 10:56:33 crc kubenswrapper[4698]: I1014 10:56:33.126512 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-9qw45" Oct 14 10:56:33 crc kubenswrapper[4698]: I1014 10:56:33.126883 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-9qw45" Oct 14 10:56:34 crc kubenswrapper[4698]: I1014 10:56:34.181022 4698 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-9qw45" podUID="e3b8846d-38a4-4a2f-946c-0be9f78aa920" containerName="registry-server" probeResult="failure" output=< Oct 14 10:56:34 crc kubenswrapper[4698]: timeout: failed to connect service ":50051" within 1s Oct 14 10:56:34 crc kubenswrapper[4698]: > Oct 14 10:56:43 crc kubenswrapper[4698]: I1014 10:56:43.187202 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-9qw45" Oct 14 10:56:43 crc kubenswrapper[4698]: I1014 10:56:43.254447 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-9qw45" Oct 14 10:56:43 crc kubenswrapper[4698]: I1014 
10:56:43.430122 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-9qw45"] Oct 14 10:56:44 crc kubenswrapper[4698]: I1014 10:56:44.795023 4698 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-9qw45" podUID="e3b8846d-38a4-4a2f-946c-0be9f78aa920" containerName="registry-server" containerID="cri-o://05fa27de31d11b10c7e3e006c8d835959d0b3a3e64831c41500f508cf440c8cf" gracePeriod=2 Oct 14 10:56:45 crc kubenswrapper[4698]: I1014 10:56:45.659955 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-9qw45" Oct 14 10:56:45 crc kubenswrapper[4698]: I1014 10:56:45.804418 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e3b8846d-38a4-4a2f-946c-0be9f78aa920-utilities\") pod \"e3b8846d-38a4-4a2f-946c-0be9f78aa920\" (UID: \"e3b8846d-38a4-4a2f-946c-0be9f78aa920\") " Oct 14 10:56:45 crc kubenswrapper[4698]: I1014 10:56:45.805019 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e3b8846d-38a4-4a2f-946c-0be9f78aa920-catalog-content\") pod \"e3b8846d-38a4-4a2f-946c-0be9f78aa920\" (UID: \"e3b8846d-38a4-4a2f-946c-0be9f78aa920\") " Oct 14 10:56:45 crc kubenswrapper[4698]: I1014 10:56:45.805155 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sp8kv\" (UniqueName: \"kubernetes.io/projected/e3b8846d-38a4-4a2f-946c-0be9f78aa920-kube-api-access-sp8kv\") pod \"e3b8846d-38a4-4a2f-946c-0be9f78aa920\" (UID: \"e3b8846d-38a4-4a2f-946c-0be9f78aa920\") " Oct 14 10:56:45 crc kubenswrapper[4698]: I1014 10:56:45.814200 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e3b8846d-38a4-4a2f-946c-0be9f78aa920-utilities" (OuterVolumeSpecName: 
"utilities") pod "e3b8846d-38a4-4a2f-946c-0be9f78aa920" (UID: "e3b8846d-38a4-4a2f-946c-0be9f78aa920"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 10:56:45 crc kubenswrapper[4698]: I1014 10:56:45.818183 4698 generic.go:334] "Generic (PLEG): container finished" podID="e3b8846d-38a4-4a2f-946c-0be9f78aa920" containerID="05fa27de31d11b10c7e3e006c8d835959d0b3a3e64831c41500f508cf440c8cf" exitCode=0 Oct 14 10:56:45 crc kubenswrapper[4698]: I1014 10:56:45.818246 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9qw45" event={"ID":"e3b8846d-38a4-4a2f-946c-0be9f78aa920","Type":"ContainerDied","Data":"05fa27de31d11b10c7e3e006c8d835959d0b3a3e64831c41500f508cf440c8cf"} Oct 14 10:56:45 crc kubenswrapper[4698]: I1014 10:56:45.818280 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9qw45" event={"ID":"e3b8846d-38a4-4a2f-946c-0be9f78aa920","Type":"ContainerDied","Data":"7182717a2b58a0f9801d56dfc5f3b9682d960fb637b9a11ed9661e7d78e33a3d"} Oct 14 10:56:45 crc kubenswrapper[4698]: I1014 10:56:45.818318 4698 scope.go:117] "RemoveContainer" containerID="05fa27de31d11b10c7e3e006c8d835959d0b3a3e64831c41500f508cf440c8cf" Oct 14 10:56:45 crc kubenswrapper[4698]: I1014 10:56:45.818570 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-9qw45" Oct 14 10:56:45 crc kubenswrapper[4698]: I1014 10:56:45.833480 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e3b8846d-38a4-4a2f-946c-0be9f78aa920-kube-api-access-sp8kv" (OuterVolumeSpecName: "kube-api-access-sp8kv") pod "e3b8846d-38a4-4a2f-946c-0be9f78aa920" (UID: "e3b8846d-38a4-4a2f-946c-0be9f78aa920"). InnerVolumeSpecName "kube-api-access-sp8kv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 10:56:45 crc kubenswrapper[4698]: I1014 10:56:45.883913 4698 scope.go:117] "RemoveContainer" containerID="d5902625ab3f2adc9d1532f43288af13e1f62a7de578ca765b39b44c2bb33e3a" Oct 14 10:56:45 crc kubenswrapper[4698]: I1014 10:56:45.925347 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sp8kv\" (UniqueName: \"kubernetes.io/projected/e3b8846d-38a4-4a2f-946c-0be9f78aa920-kube-api-access-sp8kv\") on node \"crc\" DevicePath \"\"" Oct 14 10:56:45 crc kubenswrapper[4698]: I1014 10:56:45.925405 4698 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e3b8846d-38a4-4a2f-946c-0be9f78aa920-utilities\") on node \"crc\" DevicePath \"\"" Oct 14 10:56:45 crc kubenswrapper[4698]: I1014 10:56:45.956693 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e3b8846d-38a4-4a2f-946c-0be9f78aa920-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e3b8846d-38a4-4a2f-946c-0be9f78aa920" (UID: "e3b8846d-38a4-4a2f-946c-0be9f78aa920"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 10:56:45 crc kubenswrapper[4698]: I1014 10:56:45.958669 4698 scope.go:117] "RemoveContainer" containerID="2f151da2f78279a99a6008261566cce3c7a7c03b37db43163a8374adeee6ac7d" Oct 14 10:56:45 crc kubenswrapper[4698]: I1014 10:56:45.998491 4698 scope.go:117] "RemoveContainer" containerID="05fa27de31d11b10c7e3e006c8d835959d0b3a3e64831c41500f508cf440c8cf" Oct 14 10:56:45 crc kubenswrapper[4698]: E1014 10:56:45.999127 4698 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"05fa27de31d11b10c7e3e006c8d835959d0b3a3e64831c41500f508cf440c8cf\": container with ID starting with 05fa27de31d11b10c7e3e006c8d835959d0b3a3e64831c41500f508cf440c8cf not found: ID does not exist" containerID="05fa27de31d11b10c7e3e006c8d835959d0b3a3e64831c41500f508cf440c8cf" Oct 14 10:56:45 crc kubenswrapper[4698]: I1014 10:56:45.999173 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"05fa27de31d11b10c7e3e006c8d835959d0b3a3e64831c41500f508cf440c8cf"} err="failed to get container status \"05fa27de31d11b10c7e3e006c8d835959d0b3a3e64831c41500f508cf440c8cf\": rpc error: code = NotFound desc = could not find container \"05fa27de31d11b10c7e3e006c8d835959d0b3a3e64831c41500f508cf440c8cf\": container with ID starting with 05fa27de31d11b10c7e3e006c8d835959d0b3a3e64831c41500f508cf440c8cf not found: ID does not exist" Oct 14 10:56:45 crc kubenswrapper[4698]: I1014 10:56:45.999203 4698 scope.go:117] "RemoveContainer" containerID="d5902625ab3f2adc9d1532f43288af13e1f62a7de578ca765b39b44c2bb33e3a" Oct 14 10:56:45 crc kubenswrapper[4698]: E1014 10:56:45.999577 4698 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d5902625ab3f2adc9d1532f43288af13e1f62a7de578ca765b39b44c2bb33e3a\": container with ID starting with 
d5902625ab3f2adc9d1532f43288af13e1f62a7de578ca765b39b44c2bb33e3a not found: ID does not exist" containerID="d5902625ab3f2adc9d1532f43288af13e1f62a7de578ca765b39b44c2bb33e3a" Oct 14 10:56:45 crc kubenswrapper[4698]: I1014 10:56:45.999598 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d5902625ab3f2adc9d1532f43288af13e1f62a7de578ca765b39b44c2bb33e3a"} err="failed to get container status \"d5902625ab3f2adc9d1532f43288af13e1f62a7de578ca765b39b44c2bb33e3a\": rpc error: code = NotFound desc = could not find container \"d5902625ab3f2adc9d1532f43288af13e1f62a7de578ca765b39b44c2bb33e3a\": container with ID starting with d5902625ab3f2adc9d1532f43288af13e1f62a7de578ca765b39b44c2bb33e3a not found: ID does not exist" Oct 14 10:56:45 crc kubenswrapper[4698]: I1014 10:56:45.999611 4698 scope.go:117] "RemoveContainer" containerID="2f151da2f78279a99a6008261566cce3c7a7c03b37db43163a8374adeee6ac7d" Oct 14 10:56:46 crc kubenswrapper[4698]: E1014 10:56:46.001139 4698 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2f151da2f78279a99a6008261566cce3c7a7c03b37db43163a8374adeee6ac7d\": container with ID starting with 2f151da2f78279a99a6008261566cce3c7a7c03b37db43163a8374adeee6ac7d not found: ID does not exist" containerID="2f151da2f78279a99a6008261566cce3c7a7c03b37db43163a8374adeee6ac7d" Oct 14 10:56:46 crc kubenswrapper[4698]: I1014 10:56:46.001161 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2f151da2f78279a99a6008261566cce3c7a7c03b37db43163a8374adeee6ac7d"} err="failed to get container status \"2f151da2f78279a99a6008261566cce3c7a7c03b37db43163a8374adeee6ac7d\": rpc error: code = NotFound desc = could not find container \"2f151da2f78279a99a6008261566cce3c7a7c03b37db43163a8374adeee6ac7d\": container with ID starting with 2f151da2f78279a99a6008261566cce3c7a7c03b37db43163a8374adeee6ac7d not found: ID does not 
exist" Oct 14 10:56:46 crc kubenswrapper[4698]: I1014 10:56:46.027287 4698 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e3b8846d-38a4-4a2f-946c-0be9f78aa920-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 14 10:56:46 crc kubenswrapper[4698]: I1014 10:56:46.151563 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-9qw45"] Oct 14 10:56:46 crc kubenswrapper[4698]: I1014 10:56:46.160377 4698 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-9qw45"] Oct 14 10:56:47 crc kubenswrapper[4698]: I1014 10:56:47.027634 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e3b8846d-38a4-4a2f-946c-0be9f78aa920" path="/var/lib/kubelet/pods/e3b8846d-38a4-4a2f-946c-0be9f78aa920/volumes" Oct 14 10:56:53 crc kubenswrapper[4698]: I1014 10:56:53.908144 4698 patch_prober.go:28] interesting pod/machine-config-daemon-lp4sk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 14 10:56:53 crc kubenswrapper[4698]: I1014 10:56:53.908899 4698 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-lp4sk" podUID="c359a8fc-1e2f-49af-8da2-719d52bd969a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 14 10:57:23 crc kubenswrapper[4698]: I1014 10:57:23.908303 4698 patch_prober.go:28] interesting pod/machine-config-daemon-lp4sk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 14 10:57:23 crc 
kubenswrapper[4698]: I1014 10:57:23.908941 4698 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-lp4sk" podUID="c359a8fc-1e2f-49af-8da2-719d52bd969a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 14 10:57:23 crc kubenswrapper[4698]: I1014 10:57:23.908992 4698 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-lp4sk" Oct 14 10:57:23 crc kubenswrapper[4698]: I1014 10:57:23.909850 4698 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"cdafb9b3a64aacdc8bc28b7e65cb92eddb2a3810b308ed08790b175d19fe2bf3"} pod="openshift-machine-config-operator/machine-config-daemon-lp4sk" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Oct 14 10:57:23 crc kubenswrapper[4698]: I1014 10:57:23.909914 4698 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-lp4sk" podUID="c359a8fc-1e2f-49af-8da2-719d52bd969a" containerName="machine-config-daemon" containerID="cri-o://cdafb9b3a64aacdc8bc28b7e65cb92eddb2a3810b308ed08790b175d19fe2bf3" gracePeriod=600 Oct 14 10:57:24 crc kubenswrapper[4698]: I1014 10:57:24.162061 4698 generic.go:334] "Generic (PLEG): container finished" podID="c359a8fc-1e2f-49af-8da2-719d52bd969a" containerID="cdafb9b3a64aacdc8bc28b7e65cb92eddb2a3810b308ed08790b175d19fe2bf3" exitCode=0 Oct 14 10:57:24 crc kubenswrapper[4698]: I1014 10:57:24.162177 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-lp4sk" event={"ID":"c359a8fc-1e2f-49af-8da2-719d52bd969a","Type":"ContainerDied","Data":"cdafb9b3a64aacdc8bc28b7e65cb92eddb2a3810b308ed08790b175d19fe2bf3"} 
Oct 14 10:57:24 crc kubenswrapper[4698]: I1014 10:57:24.162429 4698 scope.go:117] "RemoveContainer" containerID="8a09d38c6e7b7c3a7641f91ee186188e4b178c8f5f54d6a3e46b0f3211abaf7c" Oct 14 10:57:25 crc kubenswrapper[4698]: I1014 10:57:25.178521 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-lp4sk" event={"ID":"c359a8fc-1e2f-49af-8da2-719d52bd969a","Type":"ContainerStarted","Data":"e13df7156182f4e35e76d2c742f1a6fc6d98de7a6237876ea61a27e7c7751402"} Oct 14 10:59:17 crc kubenswrapper[4698]: I1014 10:59:17.277986 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-msh59"] Oct 14 10:59:17 crc kubenswrapper[4698]: E1014 10:59:17.279191 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e3b8846d-38a4-4a2f-946c-0be9f78aa920" containerName="extract-content" Oct 14 10:59:17 crc kubenswrapper[4698]: I1014 10:59:17.279212 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="e3b8846d-38a4-4a2f-946c-0be9f78aa920" containerName="extract-content" Oct 14 10:59:17 crc kubenswrapper[4698]: E1014 10:59:17.279225 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e3b8846d-38a4-4a2f-946c-0be9f78aa920" containerName="registry-server" Oct 14 10:59:17 crc kubenswrapper[4698]: I1014 10:59:17.279232 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="e3b8846d-38a4-4a2f-946c-0be9f78aa920" containerName="registry-server" Oct 14 10:59:17 crc kubenswrapper[4698]: E1014 10:59:17.279271 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e3b8846d-38a4-4a2f-946c-0be9f78aa920" containerName="extract-utilities" Oct 14 10:59:17 crc kubenswrapper[4698]: I1014 10:59:17.279280 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="e3b8846d-38a4-4a2f-946c-0be9f78aa920" containerName="extract-utilities" Oct 14 10:59:17 crc kubenswrapper[4698]: I1014 10:59:17.279557 4698 memory_manager.go:354] "RemoveStaleState removing 
state" podUID="e3b8846d-38a4-4a2f-946c-0be9f78aa920" containerName="registry-server" Oct 14 10:59:17 crc kubenswrapper[4698]: I1014 10:59:17.281526 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-msh59" Oct 14 10:59:17 crc kubenswrapper[4698]: I1014 10:59:17.297938 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-msh59"] Oct 14 10:59:17 crc kubenswrapper[4698]: I1014 10:59:17.390748 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nnqt7\" (UniqueName: \"kubernetes.io/projected/a26b0b3d-11ad-4b4d-936f-04ced6adb2fa-kube-api-access-nnqt7\") pod \"community-operators-msh59\" (UID: \"a26b0b3d-11ad-4b4d-936f-04ced6adb2fa\") " pod="openshift-marketplace/community-operators-msh59" Oct 14 10:59:17 crc kubenswrapper[4698]: I1014 10:59:17.390922 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a26b0b3d-11ad-4b4d-936f-04ced6adb2fa-catalog-content\") pod \"community-operators-msh59\" (UID: \"a26b0b3d-11ad-4b4d-936f-04ced6adb2fa\") " pod="openshift-marketplace/community-operators-msh59" Oct 14 10:59:17 crc kubenswrapper[4698]: I1014 10:59:17.390952 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a26b0b3d-11ad-4b4d-936f-04ced6adb2fa-utilities\") pod \"community-operators-msh59\" (UID: \"a26b0b3d-11ad-4b4d-936f-04ced6adb2fa\") " pod="openshift-marketplace/community-operators-msh59" Oct 14 10:59:17 crc kubenswrapper[4698]: I1014 10:59:17.492515 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a26b0b3d-11ad-4b4d-936f-04ced6adb2fa-catalog-content\") pod \"community-operators-msh59\" (UID: 
\"a26b0b3d-11ad-4b4d-936f-04ced6adb2fa\") " pod="openshift-marketplace/community-operators-msh59" Oct 14 10:59:17 crc kubenswrapper[4698]: I1014 10:59:17.492569 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a26b0b3d-11ad-4b4d-936f-04ced6adb2fa-utilities\") pod \"community-operators-msh59\" (UID: \"a26b0b3d-11ad-4b4d-936f-04ced6adb2fa\") " pod="openshift-marketplace/community-operators-msh59" Oct 14 10:59:17 crc kubenswrapper[4698]: I1014 10:59:17.492707 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nnqt7\" (UniqueName: \"kubernetes.io/projected/a26b0b3d-11ad-4b4d-936f-04ced6adb2fa-kube-api-access-nnqt7\") pod \"community-operators-msh59\" (UID: \"a26b0b3d-11ad-4b4d-936f-04ced6adb2fa\") " pod="openshift-marketplace/community-operators-msh59" Oct 14 10:59:17 crc kubenswrapper[4698]: I1014 10:59:17.493202 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a26b0b3d-11ad-4b4d-936f-04ced6adb2fa-catalog-content\") pod \"community-operators-msh59\" (UID: \"a26b0b3d-11ad-4b4d-936f-04ced6adb2fa\") " pod="openshift-marketplace/community-operators-msh59" Oct 14 10:59:17 crc kubenswrapper[4698]: I1014 10:59:17.493465 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a26b0b3d-11ad-4b4d-936f-04ced6adb2fa-utilities\") pod \"community-operators-msh59\" (UID: \"a26b0b3d-11ad-4b4d-936f-04ced6adb2fa\") " pod="openshift-marketplace/community-operators-msh59" Oct 14 10:59:17 crc kubenswrapper[4698]: I1014 10:59:17.514835 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nnqt7\" (UniqueName: \"kubernetes.io/projected/a26b0b3d-11ad-4b4d-936f-04ced6adb2fa-kube-api-access-nnqt7\") pod \"community-operators-msh59\" (UID: 
\"a26b0b3d-11ad-4b4d-936f-04ced6adb2fa\") " pod="openshift-marketplace/community-operators-msh59" Oct 14 10:59:17 crc kubenswrapper[4698]: I1014 10:59:17.614118 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-msh59" Oct 14 10:59:19 crc kubenswrapper[4698]: I1014 10:59:19.043754 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-msh59"] Oct 14 10:59:19 crc kubenswrapper[4698]: W1014 10:59:19.053089 4698 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda26b0b3d_11ad_4b4d_936f_04ced6adb2fa.slice/crio-3eb2dd9ced02850e6603dffb978ca52a250538517435a83214dc65e89380e0b6 WatchSource:0}: Error finding container 3eb2dd9ced02850e6603dffb978ca52a250538517435a83214dc65e89380e0b6: Status 404 returned error can't find the container with id 3eb2dd9ced02850e6603dffb978ca52a250538517435a83214dc65e89380e0b6 Oct 14 10:59:19 crc kubenswrapper[4698]: I1014 10:59:19.194983 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-msh59" event={"ID":"a26b0b3d-11ad-4b4d-936f-04ced6adb2fa","Type":"ContainerStarted","Data":"3eb2dd9ced02850e6603dffb978ca52a250538517435a83214dc65e89380e0b6"} Oct 14 10:59:20 crc kubenswrapper[4698]: I1014 10:59:20.219561 4698 generic.go:334] "Generic (PLEG): container finished" podID="a26b0b3d-11ad-4b4d-936f-04ced6adb2fa" containerID="7a510c3d6581d7d61e1a2e62c432475bc24a34683567dc33de218463bfeb03f7" exitCode=0 Oct 14 10:59:20 crc kubenswrapper[4698]: I1014 10:59:20.219626 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-msh59" event={"ID":"a26b0b3d-11ad-4b4d-936f-04ced6adb2fa","Type":"ContainerDied","Data":"7a510c3d6581d7d61e1a2e62c432475bc24a34683567dc33de218463bfeb03f7"} Oct 14 10:59:21 crc kubenswrapper[4698]: I1014 10:59:21.657420 4698 kubelet.go:2421] "SyncLoop 
ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-zt7tt"] Oct 14 10:59:21 crc kubenswrapper[4698]: I1014 10:59:21.659939 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-zt7tt" Oct 14 10:59:21 crc kubenswrapper[4698]: I1014 10:59:21.677427 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-zt7tt"] Oct 14 10:59:21 crc kubenswrapper[4698]: I1014 10:59:21.679827 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b5e6d909-fd9c-46f1-b222-f13bbc803f01-catalog-content\") pod \"redhat-marketplace-zt7tt\" (UID: \"b5e6d909-fd9c-46f1-b222-f13bbc803f01\") " pod="openshift-marketplace/redhat-marketplace-zt7tt" Oct 14 10:59:21 crc kubenswrapper[4698]: I1014 10:59:21.679890 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mwt84\" (UniqueName: \"kubernetes.io/projected/b5e6d909-fd9c-46f1-b222-f13bbc803f01-kube-api-access-mwt84\") pod \"redhat-marketplace-zt7tt\" (UID: \"b5e6d909-fd9c-46f1-b222-f13bbc803f01\") " pod="openshift-marketplace/redhat-marketplace-zt7tt" Oct 14 10:59:21 crc kubenswrapper[4698]: I1014 10:59:21.680035 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b5e6d909-fd9c-46f1-b222-f13bbc803f01-utilities\") pod \"redhat-marketplace-zt7tt\" (UID: \"b5e6d909-fd9c-46f1-b222-f13bbc803f01\") " pod="openshift-marketplace/redhat-marketplace-zt7tt" Oct 14 10:59:21 crc kubenswrapper[4698]: I1014 10:59:21.782283 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b5e6d909-fd9c-46f1-b222-f13bbc803f01-catalog-content\") pod \"redhat-marketplace-zt7tt\" (UID: 
\"b5e6d909-fd9c-46f1-b222-f13bbc803f01\") " pod="openshift-marketplace/redhat-marketplace-zt7tt" Oct 14 10:59:21 crc kubenswrapper[4698]: I1014 10:59:21.782609 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mwt84\" (UniqueName: \"kubernetes.io/projected/b5e6d909-fd9c-46f1-b222-f13bbc803f01-kube-api-access-mwt84\") pod \"redhat-marketplace-zt7tt\" (UID: \"b5e6d909-fd9c-46f1-b222-f13bbc803f01\") " pod="openshift-marketplace/redhat-marketplace-zt7tt" Oct 14 10:59:21 crc kubenswrapper[4698]: I1014 10:59:21.782753 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b5e6d909-fd9c-46f1-b222-f13bbc803f01-catalog-content\") pod \"redhat-marketplace-zt7tt\" (UID: \"b5e6d909-fd9c-46f1-b222-f13bbc803f01\") " pod="openshift-marketplace/redhat-marketplace-zt7tt" Oct 14 10:59:21 crc kubenswrapper[4698]: I1014 10:59:21.782787 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b5e6d909-fd9c-46f1-b222-f13bbc803f01-utilities\") pod \"redhat-marketplace-zt7tt\" (UID: \"b5e6d909-fd9c-46f1-b222-f13bbc803f01\") " pod="openshift-marketplace/redhat-marketplace-zt7tt" Oct 14 10:59:21 crc kubenswrapper[4698]: I1014 10:59:21.783374 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b5e6d909-fd9c-46f1-b222-f13bbc803f01-utilities\") pod \"redhat-marketplace-zt7tt\" (UID: \"b5e6d909-fd9c-46f1-b222-f13bbc803f01\") " pod="openshift-marketplace/redhat-marketplace-zt7tt" Oct 14 10:59:21 crc kubenswrapper[4698]: I1014 10:59:21.806517 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mwt84\" (UniqueName: \"kubernetes.io/projected/b5e6d909-fd9c-46f1-b222-f13bbc803f01-kube-api-access-mwt84\") pod \"redhat-marketplace-zt7tt\" (UID: \"b5e6d909-fd9c-46f1-b222-f13bbc803f01\") " 
pod="openshift-marketplace/redhat-marketplace-zt7tt" Oct 14 10:59:21 crc kubenswrapper[4698]: I1014 10:59:21.979812 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-zt7tt" Oct 14 10:59:22 crc kubenswrapper[4698]: I1014 10:59:22.242956 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-msh59" event={"ID":"a26b0b3d-11ad-4b4d-936f-04ced6adb2fa","Type":"ContainerStarted","Data":"cde56d50cd2e74608be4cfa3670ab1ab8af3a8a779bcc49ffbbf0dfc5bafe1e3"} Oct 14 10:59:22 crc kubenswrapper[4698]: I1014 10:59:22.532985 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-zt7tt"] Oct 14 10:59:22 crc kubenswrapper[4698]: W1014 10:59:22.545060 4698 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb5e6d909_fd9c_46f1_b222_f13bbc803f01.slice/crio-9fc8314266b5d7ad2c58a018c23a2d9320bd3aa9d9af97c7e1a40404f09c9711 WatchSource:0}: Error finding container 9fc8314266b5d7ad2c58a018c23a2d9320bd3aa9d9af97c7e1a40404f09c9711: Status 404 returned error can't find the container with id 9fc8314266b5d7ad2c58a018c23a2d9320bd3aa9d9af97c7e1a40404f09c9711 Oct 14 10:59:23 crc kubenswrapper[4698]: I1014 10:59:23.253740 4698 generic.go:334] "Generic (PLEG): container finished" podID="b5e6d909-fd9c-46f1-b222-f13bbc803f01" containerID="24c03458821a219c0c964808b348cc68fcc5743b1f22760e1923a100467bf3f4" exitCode=0 Oct 14 10:59:23 crc kubenswrapper[4698]: I1014 10:59:23.253852 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zt7tt" event={"ID":"b5e6d909-fd9c-46f1-b222-f13bbc803f01","Type":"ContainerDied","Data":"24c03458821a219c0c964808b348cc68fcc5743b1f22760e1923a100467bf3f4"} Oct 14 10:59:23 crc kubenswrapper[4698]: I1014 10:59:23.255816 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-marketplace-zt7tt" event={"ID":"b5e6d909-fd9c-46f1-b222-f13bbc803f01","Type":"ContainerStarted","Data":"9fc8314266b5d7ad2c58a018c23a2d9320bd3aa9d9af97c7e1a40404f09c9711"} Oct 14 10:59:25 crc kubenswrapper[4698]: I1014 10:59:25.275916 4698 generic.go:334] "Generic (PLEG): container finished" podID="a26b0b3d-11ad-4b4d-936f-04ced6adb2fa" containerID="cde56d50cd2e74608be4cfa3670ab1ab8af3a8a779bcc49ffbbf0dfc5bafe1e3" exitCode=0 Oct 14 10:59:25 crc kubenswrapper[4698]: I1014 10:59:25.275983 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-msh59" event={"ID":"a26b0b3d-11ad-4b4d-936f-04ced6adb2fa","Type":"ContainerDied","Data":"cde56d50cd2e74608be4cfa3670ab1ab8af3a8a779bcc49ffbbf0dfc5bafe1e3"} Oct 14 10:59:25 crc kubenswrapper[4698]: I1014 10:59:25.279556 4698 generic.go:334] "Generic (PLEG): container finished" podID="b5e6d909-fd9c-46f1-b222-f13bbc803f01" containerID="8e0087d263a99d603e770e2ac7d94eadd35a26386333b9ed906d9ff96ad8e192" exitCode=0 Oct 14 10:59:25 crc kubenswrapper[4698]: I1014 10:59:25.279597 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zt7tt" event={"ID":"b5e6d909-fd9c-46f1-b222-f13bbc803f01","Type":"ContainerDied","Data":"8e0087d263a99d603e770e2ac7d94eadd35a26386333b9ed906d9ff96ad8e192"} Oct 14 10:59:26 crc kubenswrapper[4698]: I1014 10:59:26.305494 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-msh59" event={"ID":"a26b0b3d-11ad-4b4d-936f-04ced6adb2fa","Type":"ContainerStarted","Data":"34ea094e38f58d65086f09b98b16fbb6cd4e8e7c55cdcd0b775ed02610146103"} Oct 14 10:59:27 crc kubenswrapper[4698]: I1014 10:59:27.317087 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zt7tt" 
event={"ID":"b5e6d909-fd9c-46f1-b222-f13bbc803f01","Type":"ContainerStarted","Data":"666bdd7c2bad662072cd9038fc613a4cb8f48c2b3a91c267fbd1c1fa276d6a03"} Oct 14 10:59:27 crc kubenswrapper[4698]: I1014 10:59:27.344668 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-zt7tt" podStartSLOduration=3.434418082 podStartE2EDuration="6.344647157s" podCreationTimestamp="2025-10-14 10:59:21 +0000 UTC" firstStartedPulling="2025-10-14 10:59:23.257193154 +0000 UTC m=+3744.954492570" lastFinishedPulling="2025-10-14 10:59:26.167422229 +0000 UTC m=+3747.864721645" observedRunningTime="2025-10-14 10:59:27.344345448 +0000 UTC m=+3749.041644894" watchObservedRunningTime="2025-10-14 10:59:27.344647157 +0000 UTC m=+3749.041946573" Oct 14 10:59:27 crc kubenswrapper[4698]: I1014 10:59:27.347059 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-msh59" podStartSLOduration=4.828449695 podStartE2EDuration="10.347049645s" podCreationTimestamp="2025-10-14 10:59:17 +0000 UTC" firstStartedPulling="2025-10-14 10:59:20.223143064 +0000 UTC m=+3741.920442480" lastFinishedPulling="2025-10-14 10:59:25.741743004 +0000 UTC m=+3747.439042430" observedRunningTime="2025-10-14 10:59:26.329956962 +0000 UTC m=+3748.027256398" watchObservedRunningTime="2025-10-14 10:59:27.347049645 +0000 UTC m=+3749.044349061" Oct 14 10:59:27 crc kubenswrapper[4698]: I1014 10:59:27.614732 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-msh59" Oct 14 10:59:27 crc kubenswrapper[4698]: I1014 10:59:27.615142 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-msh59" Oct 14 10:59:28 crc kubenswrapper[4698]: I1014 10:59:28.685091 4698 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-msh59" 
podUID="a26b0b3d-11ad-4b4d-936f-04ced6adb2fa" containerName="registry-server" probeResult="failure" output=< Oct 14 10:59:28 crc kubenswrapper[4698]: timeout: failed to connect service ":50051" within 1s Oct 14 10:59:28 crc kubenswrapper[4698]: > Oct 14 10:59:31 crc kubenswrapper[4698]: I1014 10:59:31.981014 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-zt7tt" Oct 14 10:59:31 crc kubenswrapper[4698]: I1014 10:59:31.981533 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-zt7tt" Oct 14 10:59:33 crc kubenswrapper[4698]: I1014 10:59:33.041678 4698 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-zt7tt" podUID="b5e6d909-fd9c-46f1-b222-f13bbc803f01" containerName="registry-server" probeResult="failure" output=< Oct 14 10:59:33 crc kubenswrapper[4698]: timeout: failed to connect service ":50051" within 1s Oct 14 10:59:33 crc kubenswrapper[4698]: > Oct 14 10:59:38 crc kubenswrapper[4698]: I1014 10:59:38.662355 4698 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-msh59" podUID="a26b0b3d-11ad-4b4d-936f-04ced6adb2fa" containerName="registry-server" probeResult="failure" output=< Oct 14 10:59:38 crc kubenswrapper[4698]: timeout: failed to connect service ":50051" within 1s Oct 14 10:59:38 crc kubenswrapper[4698]: > Oct 14 10:59:42 crc kubenswrapper[4698]: I1014 10:59:42.222183 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-zt7tt" Oct 14 10:59:42 crc kubenswrapper[4698]: I1014 10:59:42.281697 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-zt7tt" Oct 14 10:59:42 crc kubenswrapper[4698]: I1014 10:59:42.471310 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/redhat-marketplace-zt7tt"] Oct 14 10:59:43 crc kubenswrapper[4698]: I1014 10:59:43.470749 4698 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-zt7tt" podUID="b5e6d909-fd9c-46f1-b222-f13bbc803f01" containerName="registry-server" containerID="cri-o://666bdd7c2bad662072cd9038fc613a4cb8f48c2b3a91c267fbd1c1fa276d6a03" gracePeriod=2 Oct 14 10:59:44 crc kubenswrapper[4698]: I1014 10:59:44.399553 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-zt7tt" Oct 14 10:59:44 crc kubenswrapper[4698]: I1014 10:59:44.482618 4698 generic.go:334] "Generic (PLEG): container finished" podID="b5e6d909-fd9c-46f1-b222-f13bbc803f01" containerID="666bdd7c2bad662072cd9038fc613a4cb8f48c2b3a91c267fbd1c1fa276d6a03" exitCode=0 Oct 14 10:59:44 crc kubenswrapper[4698]: I1014 10:59:44.482746 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-zt7tt" Oct 14 10:59:44 crc kubenswrapper[4698]: I1014 10:59:44.482745 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zt7tt" event={"ID":"b5e6d909-fd9c-46f1-b222-f13bbc803f01","Type":"ContainerDied","Data":"666bdd7c2bad662072cd9038fc613a4cb8f48c2b3a91c267fbd1c1fa276d6a03"} Oct 14 10:59:44 crc kubenswrapper[4698]: I1014 10:59:44.483080 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zt7tt" event={"ID":"b5e6d909-fd9c-46f1-b222-f13bbc803f01","Type":"ContainerDied","Data":"9fc8314266b5d7ad2c58a018c23a2d9320bd3aa9d9af97c7e1a40404f09c9711"} Oct 14 10:59:44 crc kubenswrapper[4698]: I1014 10:59:44.483128 4698 scope.go:117] "RemoveContainer" containerID="666bdd7c2bad662072cd9038fc613a4cb8f48c2b3a91c267fbd1c1fa276d6a03" Oct 14 10:59:44 crc kubenswrapper[4698]: I1014 10:59:44.512969 4698 scope.go:117] "RemoveContainer" 
containerID="8e0087d263a99d603e770e2ac7d94eadd35a26386333b9ed906d9ff96ad8e192" Oct 14 10:59:44 crc kubenswrapper[4698]: I1014 10:59:44.536292 4698 scope.go:117] "RemoveContainer" containerID="24c03458821a219c0c964808b348cc68fcc5743b1f22760e1923a100467bf3f4" Oct 14 10:59:44 crc kubenswrapper[4698]: I1014 10:59:44.548878 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mwt84\" (UniqueName: \"kubernetes.io/projected/b5e6d909-fd9c-46f1-b222-f13bbc803f01-kube-api-access-mwt84\") pod \"b5e6d909-fd9c-46f1-b222-f13bbc803f01\" (UID: \"b5e6d909-fd9c-46f1-b222-f13bbc803f01\") " Oct 14 10:59:44 crc kubenswrapper[4698]: I1014 10:59:44.549173 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b5e6d909-fd9c-46f1-b222-f13bbc803f01-catalog-content\") pod \"b5e6d909-fd9c-46f1-b222-f13bbc803f01\" (UID: \"b5e6d909-fd9c-46f1-b222-f13bbc803f01\") " Oct 14 10:59:44 crc kubenswrapper[4698]: I1014 10:59:44.549203 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b5e6d909-fd9c-46f1-b222-f13bbc803f01-utilities\") pod \"b5e6d909-fd9c-46f1-b222-f13bbc803f01\" (UID: \"b5e6d909-fd9c-46f1-b222-f13bbc803f01\") " Oct 14 10:59:44 crc kubenswrapper[4698]: I1014 10:59:44.550479 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b5e6d909-fd9c-46f1-b222-f13bbc803f01-utilities" (OuterVolumeSpecName: "utilities") pod "b5e6d909-fd9c-46f1-b222-f13bbc803f01" (UID: "b5e6d909-fd9c-46f1-b222-f13bbc803f01"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 10:59:44 crc kubenswrapper[4698]: I1014 10:59:44.571370 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b5e6d909-fd9c-46f1-b222-f13bbc803f01-kube-api-access-mwt84" (OuterVolumeSpecName: "kube-api-access-mwt84") pod "b5e6d909-fd9c-46f1-b222-f13bbc803f01" (UID: "b5e6d909-fd9c-46f1-b222-f13bbc803f01"). InnerVolumeSpecName "kube-api-access-mwt84". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 10:59:44 crc kubenswrapper[4698]: I1014 10:59:44.571712 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b5e6d909-fd9c-46f1-b222-f13bbc803f01-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b5e6d909-fd9c-46f1-b222-f13bbc803f01" (UID: "b5e6d909-fd9c-46f1-b222-f13bbc803f01"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 10:59:44 crc kubenswrapper[4698]: I1014 10:59:44.583889 4698 scope.go:117] "RemoveContainer" containerID="666bdd7c2bad662072cd9038fc613a4cb8f48c2b3a91c267fbd1c1fa276d6a03" Oct 14 10:59:44 crc kubenswrapper[4698]: E1014 10:59:44.584280 4698 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"666bdd7c2bad662072cd9038fc613a4cb8f48c2b3a91c267fbd1c1fa276d6a03\": container with ID starting with 666bdd7c2bad662072cd9038fc613a4cb8f48c2b3a91c267fbd1c1fa276d6a03 not found: ID does not exist" containerID="666bdd7c2bad662072cd9038fc613a4cb8f48c2b3a91c267fbd1c1fa276d6a03" Oct 14 10:59:44 crc kubenswrapper[4698]: I1014 10:59:44.584314 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"666bdd7c2bad662072cd9038fc613a4cb8f48c2b3a91c267fbd1c1fa276d6a03"} err="failed to get container status \"666bdd7c2bad662072cd9038fc613a4cb8f48c2b3a91c267fbd1c1fa276d6a03\": rpc error: code = NotFound desc = could not find 
container \"666bdd7c2bad662072cd9038fc613a4cb8f48c2b3a91c267fbd1c1fa276d6a03\": container with ID starting with 666bdd7c2bad662072cd9038fc613a4cb8f48c2b3a91c267fbd1c1fa276d6a03 not found: ID does not exist" Oct 14 10:59:44 crc kubenswrapper[4698]: I1014 10:59:44.584440 4698 scope.go:117] "RemoveContainer" containerID="8e0087d263a99d603e770e2ac7d94eadd35a26386333b9ed906d9ff96ad8e192" Oct 14 10:59:44 crc kubenswrapper[4698]: E1014 10:59:44.585022 4698 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8e0087d263a99d603e770e2ac7d94eadd35a26386333b9ed906d9ff96ad8e192\": container with ID starting with 8e0087d263a99d603e770e2ac7d94eadd35a26386333b9ed906d9ff96ad8e192 not found: ID does not exist" containerID="8e0087d263a99d603e770e2ac7d94eadd35a26386333b9ed906d9ff96ad8e192" Oct 14 10:59:44 crc kubenswrapper[4698]: I1014 10:59:44.585079 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8e0087d263a99d603e770e2ac7d94eadd35a26386333b9ed906d9ff96ad8e192"} err="failed to get container status \"8e0087d263a99d603e770e2ac7d94eadd35a26386333b9ed906d9ff96ad8e192\": rpc error: code = NotFound desc = could not find container \"8e0087d263a99d603e770e2ac7d94eadd35a26386333b9ed906d9ff96ad8e192\": container with ID starting with 8e0087d263a99d603e770e2ac7d94eadd35a26386333b9ed906d9ff96ad8e192 not found: ID does not exist" Oct 14 10:59:44 crc kubenswrapper[4698]: I1014 10:59:44.585123 4698 scope.go:117] "RemoveContainer" containerID="24c03458821a219c0c964808b348cc68fcc5743b1f22760e1923a100467bf3f4" Oct 14 10:59:44 crc kubenswrapper[4698]: E1014 10:59:44.585403 4698 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"24c03458821a219c0c964808b348cc68fcc5743b1f22760e1923a100467bf3f4\": container with ID starting with 24c03458821a219c0c964808b348cc68fcc5743b1f22760e1923a100467bf3f4 not found: ID does 
not exist" containerID="24c03458821a219c0c964808b348cc68fcc5743b1f22760e1923a100467bf3f4" Oct 14 10:59:44 crc kubenswrapper[4698]: I1014 10:59:44.585434 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"24c03458821a219c0c964808b348cc68fcc5743b1f22760e1923a100467bf3f4"} err="failed to get container status \"24c03458821a219c0c964808b348cc68fcc5743b1f22760e1923a100467bf3f4\": rpc error: code = NotFound desc = could not find container \"24c03458821a219c0c964808b348cc68fcc5743b1f22760e1923a100467bf3f4\": container with ID starting with 24c03458821a219c0c964808b348cc68fcc5743b1f22760e1923a100467bf3f4 not found: ID does not exist" Oct 14 10:59:44 crc kubenswrapper[4698]: I1014 10:59:44.651521 4698 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b5e6d909-fd9c-46f1-b222-f13bbc803f01-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 14 10:59:44 crc kubenswrapper[4698]: I1014 10:59:44.651566 4698 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b5e6d909-fd9c-46f1-b222-f13bbc803f01-utilities\") on node \"crc\" DevicePath \"\"" Oct 14 10:59:44 crc kubenswrapper[4698]: I1014 10:59:44.651580 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mwt84\" (UniqueName: \"kubernetes.io/projected/b5e6d909-fd9c-46f1-b222-f13bbc803f01-kube-api-access-mwt84\") on node \"crc\" DevicePath \"\"" Oct 14 10:59:44 crc kubenswrapper[4698]: I1014 10:59:44.823660 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-zt7tt"] Oct 14 10:59:44 crc kubenswrapper[4698]: I1014 10:59:44.831469 4698 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-zt7tt"] Oct 14 10:59:45 crc kubenswrapper[4698]: I1014 10:59:45.036448 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="b5e6d909-fd9c-46f1-b222-f13bbc803f01" path="/var/lib/kubelet/pods/b5e6d909-fd9c-46f1-b222-f13bbc803f01/volumes" Oct 14 10:59:48 crc kubenswrapper[4698]: I1014 10:59:48.667641 4698 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-msh59" podUID="a26b0b3d-11ad-4b4d-936f-04ced6adb2fa" containerName="registry-server" probeResult="failure" output=< Oct 14 10:59:48 crc kubenswrapper[4698]: timeout: failed to connect service ":50051" within 1s Oct 14 10:59:48 crc kubenswrapper[4698]: > Oct 14 10:59:53 crc kubenswrapper[4698]: I1014 10:59:53.907949 4698 patch_prober.go:28] interesting pod/machine-config-daemon-lp4sk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 14 10:59:53 crc kubenswrapper[4698]: I1014 10:59:53.908344 4698 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-lp4sk" podUID="c359a8fc-1e2f-49af-8da2-719d52bd969a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 14 10:59:57 crc kubenswrapper[4698]: I1014 10:59:57.669642 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-msh59" Oct 14 10:59:57 crc kubenswrapper[4698]: I1014 10:59:57.728918 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-msh59" Oct 14 10:59:57 crc kubenswrapper[4698]: I1014 10:59:57.910611 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-msh59"] Oct 14 10:59:59 crc kubenswrapper[4698]: I1014 10:59:59.641431 4698 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-marketplace/community-operators-msh59" podUID="a26b0b3d-11ad-4b4d-936f-04ced6adb2fa" containerName="registry-server" containerID="cri-o://34ea094e38f58d65086f09b98b16fbb6cd4e8e7c55cdcd0b775ed02610146103" gracePeriod=2 Oct 14 11:00:00 crc kubenswrapper[4698]: I1014 11:00:00.202251 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29340660-7ckch"] Oct 14 11:00:00 crc kubenswrapper[4698]: E1014 11:00:00.203791 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b5e6d909-fd9c-46f1-b222-f13bbc803f01" containerName="extract-content" Oct 14 11:00:00 crc kubenswrapper[4698]: I1014 11:00:00.203823 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="b5e6d909-fd9c-46f1-b222-f13bbc803f01" containerName="extract-content" Oct 14 11:00:00 crc kubenswrapper[4698]: E1014 11:00:00.203874 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b5e6d909-fd9c-46f1-b222-f13bbc803f01" containerName="extract-utilities" Oct 14 11:00:00 crc kubenswrapper[4698]: I1014 11:00:00.203885 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="b5e6d909-fd9c-46f1-b222-f13bbc803f01" containerName="extract-utilities" Oct 14 11:00:00 crc kubenswrapper[4698]: E1014 11:00:00.203924 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b5e6d909-fd9c-46f1-b222-f13bbc803f01" containerName="registry-server" Oct 14 11:00:00 crc kubenswrapper[4698]: I1014 11:00:00.203936 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="b5e6d909-fd9c-46f1-b222-f13bbc803f01" containerName="registry-server" Oct 14 11:00:00 crc kubenswrapper[4698]: I1014 11:00:00.204945 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="b5e6d909-fd9c-46f1-b222-f13bbc803f01" containerName="registry-server" Oct 14 11:00:00 crc kubenswrapper[4698]: I1014 11:00:00.206185 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29340660-7ckch" Oct 14 11:00:00 crc kubenswrapper[4698]: I1014 11:00:00.212805 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Oct 14 11:00:00 crc kubenswrapper[4698]: I1014 11:00:00.213949 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Oct 14 11:00:00 crc kubenswrapper[4698]: I1014 11:00:00.238021 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29340660-7ckch"] Oct 14 11:00:00 crc kubenswrapper[4698]: I1014 11:00:00.384017 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-msh59" Oct 14 11:00:00 crc kubenswrapper[4698]: I1014 11:00:00.403959 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9c3b1e42-66bf-4c84-9be4-42e353c0a204-secret-volume\") pod \"collect-profiles-29340660-7ckch\" (UID: \"9c3b1e42-66bf-4c84-9be4-42e353c0a204\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29340660-7ckch" Oct 14 11:00:00 crc kubenswrapper[4698]: I1014 11:00:00.404093 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ww56h\" (UniqueName: \"kubernetes.io/projected/9c3b1e42-66bf-4c84-9be4-42e353c0a204-kube-api-access-ww56h\") pod \"collect-profiles-29340660-7ckch\" (UID: \"9c3b1e42-66bf-4c84-9be4-42e353c0a204\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29340660-7ckch" Oct 14 11:00:00 crc kubenswrapper[4698]: I1014 11:00:00.404177 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: 
\"kubernetes.io/configmap/9c3b1e42-66bf-4c84-9be4-42e353c0a204-config-volume\") pod \"collect-profiles-29340660-7ckch\" (UID: \"9c3b1e42-66bf-4c84-9be4-42e353c0a204\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29340660-7ckch" Oct 14 11:00:00 crc kubenswrapper[4698]: I1014 11:00:00.505387 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a26b0b3d-11ad-4b4d-936f-04ced6adb2fa-catalog-content\") pod \"a26b0b3d-11ad-4b4d-936f-04ced6adb2fa\" (UID: \"a26b0b3d-11ad-4b4d-936f-04ced6adb2fa\") " Oct 14 11:00:00 crc kubenswrapper[4698]: I1014 11:00:00.510072 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a26b0b3d-11ad-4b4d-936f-04ced6adb2fa-utilities\") pod \"a26b0b3d-11ad-4b4d-936f-04ced6adb2fa\" (UID: \"a26b0b3d-11ad-4b4d-936f-04ced6adb2fa\") " Oct 14 11:00:00 crc kubenswrapper[4698]: I1014 11:00:00.510217 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nnqt7\" (UniqueName: \"kubernetes.io/projected/a26b0b3d-11ad-4b4d-936f-04ced6adb2fa-kube-api-access-nnqt7\") pod \"a26b0b3d-11ad-4b4d-936f-04ced6adb2fa\" (UID: \"a26b0b3d-11ad-4b4d-936f-04ced6adb2fa\") " Oct 14 11:00:00 crc kubenswrapper[4698]: I1014 11:00:00.510667 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9c3b1e42-66bf-4c84-9be4-42e353c0a204-secret-volume\") pod \"collect-profiles-29340660-7ckch\" (UID: \"9c3b1e42-66bf-4c84-9be4-42e353c0a204\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29340660-7ckch" Oct 14 11:00:00 crc kubenswrapper[4698]: I1014 11:00:00.510813 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ww56h\" (UniqueName: 
\"kubernetes.io/projected/9c3b1e42-66bf-4c84-9be4-42e353c0a204-kube-api-access-ww56h\") pod \"collect-profiles-29340660-7ckch\" (UID: \"9c3b1e42-66bf-4c84-9be4-42e353c0a204\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29340660-7ckch" Oct 14 11:00:00 crc kubenswrapper[4698]: I1014 11:00:00.510937 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9c3b1e42-66bf-4c84-9be4-42e353c0a204-config-volume\") pod \"collect-profiles-29340660-7ckch\" (UID: \"9c3b1e42-66bf-4c84-9be4-42e353c0a204\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29340660-7ckch" Oct 14 11:00:00 crc kubenswrapper[4698]: I1014 11:00:00.510971 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a26b0b3d-11ad-4b4d-936f-04ced6adb2fa-utilities" (OuterVolumeSpecName: "utilities") pod "a26b0b3d-11ad-4b4d-936f-04ced6adb2fa" (UID: "a26b0b3d-11ad-4b4d-936f-04ced6adb2fa"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 11:00:00 crc kubenswrapper[4698]: I1014 11:00:00.511199 4698 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a26b0b3d-11ad-4b4d-936f-04ced6adb2fa-utilities\") on node \"crc\" DevicePath \"\"" Oct 14 11:00:00 crc kubenswrapper[4698]: I1014 11:00:00.511786 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9c3b1e42-66bf-4c84-9be4-42e353c0a204-config-volume\") pod \"collect-profiles-29340660-7ckch\" (UID: \"9c3b1e42-66bf-4c84-9be4-42e353c0a204\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29340660-7ckch" Oct 14 11:00:00 crc kubenswrapper[4698]: I1014 11:00:00.517011 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a26b0b3d-11ad-4b4d-936f-04ced6adb2fa-kube-api-access-nnqt7" (OuterVolumeSpecName: "kube-api-access-nnqt7") pod "a26b0b3d-11ad-4b4d-936f-04ced6adb2fa" (UID: "a26b0b3d-11ad-4b4d-936f-04ced6adb2fa"). InnerVolumeSpecName "kube-api-access-nnqt7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 11:00:00 crc kubenswrapper[4698]: I1014 11:00:00.523754 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9c3b1e42-66bf-4c84-9be4-42e353c0a204-secret-volume\") pod \"collect-profiles-29340660-7ckch\" (UID: \"9c3b1e42-66bf-4c84-9be4-42e353c0a204\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29340660-7ckch" Oct 14 11:00:00 crc kubenswrapper[4698]: I1014 11:00:00.528383 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ww56h\" (UniqueName: \"kubernetes.io/projected/9c3b1e42-66bf-4c84-9be4-42e353c0a204-kube-api-access-ww56h\") pod \"collect-profiles-29340660-7ckch\" (UID: \"9c3b1e42-66bf-4c84-9be4-42e353c0a204\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29340660-7ckch" Oct 14 11:00:00 crc kubenswrapper[4698]: I1014 11:00:00.538483 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29340660-7ckch" Oct 14 11:00:00 crc kubenswrapper[4698]: I1014 11:00:00.567853 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a26b0b3d-11ad-4b4d-936f-04ced6adb2fa-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a26b0b3d-11ad-4b4d-936f-04ced6adb2fa" (UID: "a26b0b3d-11ad-4b4d-936f-04ced6adb2fa"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 11:00:00 crc kubenswrapper[4698]: I1014 11:00:00.613376 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nnqt7\" (UniqueName: \"kubernetes.io/projected/a26b0b3d-11ad-4b4d-936f-04ced6adb2fa-kube-api-access-nnqt7\") on node \"crc\" DevicePath \"\"" Oct 14 11:00:00 crc kubenswrapper[4698]: I1014 11:00:00.613971 4698 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a26b0b3d-11ad-4b4d-936f-04ced6adb2fa-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 14 11:00:00 crc kubenswrapper[4698]: I1014 11:00:00.657859 4698 generic.go:334] "Generic (PLEG): container finished" podID="a26b0b3d-11ad-4b4d-936f-04ced6adb2fa" containerID="34ea094e38f58d65086f09b98b16fbb6cd4e8e7c55cdcd0b775ed02610146103" exitCode=0 Oct 14 11:00:00 crc kubenswrapper[4698]: I1014 11:00:00.657922 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-msh59" event={"ID":"a26b0b3d-11ad-4b4d-936f-04ced6adb2fa","Type":"ContainerDied","Data":"34ea094e38f58d65086f09b98b16fbb6cd4e8e7c55cdcd0b775ed02610146103"} Oct 14 11:00:00 crc kubenswrapper[4698]: I1014 11:00:00.657957 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-msh59" event={"ID":"a26b0b3d-11ad-4b4d-936f-04ced6adb2fa","Type":"ContainerDied","Data":"3eb2dd9ced02850e6603dffb978ca52a250538517435a83214dc65e89380e0b6"} Oct 14 11:00:00 crc kubenswrapper[4698]: I1014 11:00:00.657978 4698 scope.go:117] "RemoveContainer" containerID="34ea094e38f58d65086f09b98b16fbb6cd4e8e7c55cdcd0b775ed02610146103" Oct 14 11:00:00 crc kubenswrapper[4698]: I1014 11:00:00.658194 4698 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-msh59" Oct 14 11:00:00 crc kubenswrapper[4698]: I1014 11:00:00.695716 4698 scope.go:117] "RemoveContainer" containerID="cde56d50cd2e74608be4cfa3670ab1ab8af3a8a779bcc49ffbbf0dfc5bafe1e3" Oct 14 11:00:00 crc kubenswrapper[4698]: I1014 11:00:00.697695 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-msh59"] Oct 14 11:00:00 crc kubenswrapper[4698]: I1014 11:00:00.707461 4698 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-msh59"] Oct 14 11:00:00 crc kubenswrapper[4698]: I1014 11:00:00.739372 4698 scope.go:117] "RemoveContainer" containerID="7a510c3d6581d7d61e1a2e62c432475bc24a34683567dc33de218463bfeb03f7" Oct 14 11:00:00 crc kubenswrapper[4698]: I1014 11:00:00.775230 4698 scope.go:117] "RemoveContainer" containerID="34ea094e38f58d65086f09b98b16fbb6cd4e8e7c55cdcd0b775ed02610146103" Oct 14 11:00:00 crc kubenswrapper[4698]: E1014 11:00:00.779234 4698 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"34ea094e38f58d65086f09b98b16fbb6cd4e8e7c55cdcd0b775ed02610146103\": container with ID starting with 34ea094e38f58d65086f09b98b16fbb6cd4e8e7c55cdcd0b775ed02610146103 not found: ID does not exist" containerID="34ea094e38f58d65086f09b98b16fbb6cd4e8e7c55cdcd0b775ed02610146103" Oct 14 11:00:00 crc kubenswrapper[4698]: I1014 11:00:00.779274 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"34ea094e38f58d65086f09b98b16fbb6cd4e8e7c55cdcd0b775ed02610146103"} err="failed to get container status \"34ea094e38f58d65086f09b98b16fbb6cd4e8e7c55cdcd0b775ed02610146103\": rpc error: code = NotFound desc = could not find container \"34ea094e38f58d65086f09b98b16fbb6cd4e8e7c55cdcd0b775ed02610146103\": container with ID starting with 34ea094e38f58d65086f09b98b16fbb6cd4e8e7c55cdcd0b775ed02610146103 not 
found: ID does not exist" Oct 14 11:00:00 crc kubenswrapper[4698]: I1014 11:00:00.779305 4698 scope.go:117] "RemoveContainer" containerID="cde56d50cd2e74608be4cfa3670ab1ab8af3a8a779bcc49ffbbf0dfc5bafe1e3" Oct 14 11:00:00 crc kubenswrapper[4698]: E1014 11:00:00.779537 4698 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cde56d50cd2e74608be4cfa3670ab1ab8af3a8a779bcc49ffbbf0dfc5bafe1e3\": container with ID starting with cde56d50cd2e74608be4cfa3670ab1ab8af3a8a779bcc49ffbbf0dfc5bafe1e3 not found: ID does not exist" containerID="cde56d50cd2e74608be4cfa3670ab1ab8af3a8a779bcc49ffbbf0dfc5bafe1e3" Oct 14 11:00:00 crc kubenswrapper[4698]: I1014 11:00:00.779556 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cde56d50cd2e74608be4cfa3670ab1ab8af3a8a779bcc49ffbbf0dfc5bafe1e3"} err="failed to get container status \"cde56d50cd2e74608be4cfa3670ab1ab8af3a8a779bcc49ffbbf0dfc5bafe1e3\": rpc error: code = NotFound desc = could not find container \"cde56d50cd2e74608be4cfa3670ab1ab8af3a8a779bcc49ffbbf0dfc5bafe1e3\": container with ID starting with cde56d50cd2e74608be4cfa3670ab1ab8af3a8a779bcc49ffbbf0dfc5bafe1e3 not found: ID does not exist" Oct 14 11:00:00 crc kubenswrapper[4698]: I1014 11:00:00.779568 4698 scope.go:117] "RemoveContainer" containerID="7a510c3d6581d7d61e1a2e62c432475bc24a34683567dc33de218463bfeb03f7" Oct 14 11:00:00 crc kubenswrapper[4698]: E1014 11:00:00.781715 4698 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7a510c3d6581d7d61e1a2e62c432475bc24a34683567dc33de218463bfeb03f7\": container with ID starting with 7a510c3d6581d7d61e1a2e62c432475bc24a34683567dc33de218463bfeb03f7 not found: ID does not exist" containerID="7a510c3d6581d7d61e1a2e62c432475bc24a34683567dc33de218463bfeb03f7" Oct 14 11:00:00 crc kubenswrapper[4698]: I1014 11:00:00.781746 4698 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7a510c3d6581d7d61e1a2e62c432475bc24a34683567dc33de218463bfeb03f7"} err="failed to get container status \"7a510c3d6581d7d61e1a2e62c432475bc24a34683567dc33de218463bfeb03f7\": rpc error: code = NotFound desc = could not find container \"7a510c3d6581d7d61e1a2e62c432475bc24a34683567dc33de218463bfeb03f7\": container with ID starting with 7a510c3d6581d7d61e1a2e62c432475bc24a34683567dc33de218463bfeb03f7 not found: ID does not exist" Oct 14 11:00:01 crc kubenswrapper[4698]: I1014 11:00:01.032159 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a26b0b3d-11ad-4b4d-936f-04ced6adb2fa" path="/var/lib/kubelet/pods/a26b0b3d-11ad-4b4d-936f-04ced6adb2fa/volumes" Oct 14 11:00:01 crc kubenswrapper[4698]: I1014 11:00:01.033577 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29340660-7ckch"] Oct 14 11:00:01 crc kubenswrapper[4698]: W1014 11:00:01.038150 4698 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9c3b1e42_66bf_4c84_9be4_42e353c0a204.slice/crio-c5955dc8999994ecb7869ddf0489e52327fc97441afd41ba672be21d373c048e WatchSource:0}: Error finding container c5955dc8999994ecb7869ddf0489e52327fc97441afd41ba672be21d373c048e: Status 404 returned error can't find the container with id c5955dc8999994ecb7869ddf0489e52327fc97441afd41ba672be21d373c048e Oct 14 11:00:01 crc kubenswrapper[4698]: I1014 11:00:01.670301 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29340660-7ckch" event={"ID":"9c3b1e42-66bf-4c84-9be4-42e353c0a204","Type":"ContainerStarted","Data":"b56b8c67adaa3ee5119ddb35fcb8d803ad746ccdf0c687c8449e9f96139bb953"} Oct 14 11:00:01 crc kubenswrapper[4698]: I1014 11:00:01.670655 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-operator-lifecycle-manager/collect-profiles-29340660-7ckch" event={"ID":"9c3b1e42-66bf-4c84-9be4-42e353c0a204","Type":"ContainerStarted","Data":"c5955dc8999994ecb7869ddf0489e52327fc97441afd41ba672be21d373c048e"} Oct 14 11:00:01 crc kubenswrapper[4698]: I1014 11:00:01.695917 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29340660-7ckch" podStartSLOduration=1.695890136 podStartE2EDuration="1.695890136s" podCreationTimestamp="2025-10-14 11:00:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 11:00:01.686554751 +0000 UTC m=+3783.383854177" watchObservedRunningTime="2025-10-14 11:00:01.695890136 +0000 UTC m=+3783.393189552" Oct 14 11:00:02 crc kubenswrapper[4698]: I1014 11:00:02.680321 4698 generic.go:334] "Generic (PLEG): container finished" podID="9c3b1e42-66bf-4c84-9be4-42e353c0a204" containerID="b56b8c67adaa3ee5119ddb35fcb8d803ad746ccdf0c687c8449e9f96139bb953" exitCode=0 Oct 14 11:00:02 crc kubenswrapper[4698]: I1014 11:00:02.680422 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29340660-7ckch" event={"ID":"9c3b1e42-66bf-4c84-9be4-42e353c0a204","Type":"ContainerDied","Data":"b56b8c67adaa3ee5119ddb35fcb8d803ad746ccdf0c687c8449e9f96139bb953"} Oct 14 11:00:04 crc kubenswrapper[4698]: I1014 11:00:04.164322 4698 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29340660-7ckch" Oct 14 11:00:04 crc kubenswrapper[4698]: I1014 11:00:04.289445 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9c3b1e42-66bf-4c84-9be4-42e353c0a204-secret-volume\") pod \"9c3b1e42-66bf-4c84-9be4-42e353c0a204\" (UID: \"9c3b1e42-66bf-4c84-9be4-42e353c0a204\") " Oct 14 11:00:04 crc kubenswrapper[4698]: I1014 11:00:04.289695 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9c3b1e42-66bf-4c84-9be4-42e353c0a204-config-volume\") pod \"9c3b1e42-66bf-4c84-9be4-42e353c0a204\" (UID: \"9c3b1e42-66bf-4c84-9be4-42e353c0a204\") " Oct 14 11:00:04 crc kubenswrapper[4698]: I1014 11:00:04.289744 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ww56h\" (UniqueName: \"kubernetes.io/projected/9c3b1e42-66bf-4c84-9be4-42e353c0a204-kube-api-access-ww56h\") pod \"9c3b1e42-66bf-4c84-9be4-42e353c0a204\" (UID: \"9c3b1e42-66bf-4c84-9be4-42e353c0a204\") " Oct 14 11:00:04 crc kubenswrapper[4698]: I1014 11:00:04.291184 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9c3b1e42-66bf-4c84-9be4-42e353c0a204-config-volume" (OuterVolumeSpecName: "config-volume") pod "9c3b1e42-66bf-4c84-9be4-42e353c0a204" (UID: "9c3b1e42-66bf-4c84-9be4-42e353c0a204"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 11:00:04 crc kubenswrapper[4698]: I1014 11:00:04.296868 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9c3b1e42-66bf-4c84-9be4-42e353c0a204-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "9c3b1e42-66bf-4c84-9be4-42e353c0a204" (UID: "9c3b1e42-66bf-4c84-9be4-42e353c0a204"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 11:00:04 crc kubenswrapper[4698]: I1014 11:00:04.296932 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9c3b1e42-66bf-4c84-9be4-42e353c0a204-kube-api-access-ww56h" (OuterVolumeSpecName: "kube-api-access-ww56h") pod "9c3b1e42-66bf-4c84-9be4-42e353c0a204" (UID: "9c3b1e42-66bf-4c84-9be4-42e353c0a204"). InnerVolumeSpecName "kube-api-access-ww56h". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 11:00:04 crc kubenswrapper[4698]: I1014 11:00:04.392274 4698 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9c3b1e42-66bf-4c84-9be4-42e353c0a204-config-volume\") on node \"crc\" DevicePath \"\"" Oct 14 11:00:04 crc kubenswrapper[4698]: I1014 11:00:04.392318 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ww56h\" (UniqueName: \"kubernetes.io/projected/9c3b1e42-66bf-4c84-9be4-42e353c0a204-kube-api-access-ww56h\") on node \"crc\" DevicePath \"\"" Oct 14 11:00:04 crc kubenswrapper[4698]: I1014 11:00:04.392333 4698 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9c3b1e42-66bf-4c84-9be4-42e353c0a204-secret-volume\") on node \"crc\" DevicePath \"\"" Oct 14 11:00:04 crc kubenswrapper[4698]: I1014 11:00:04.698506 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29340660-7ckch" event={"ID":"9c3b1e42-66bf-4c84-9be4-42e353c0a204","Type":"ContainerDied","Data":"c5955dc8999994ecb7869ddf0489e52327fc97441afd41ba672be21d373c048e"} Oct 14 11:00:04 crc kubenswrapper[4698]: I1014 11:00:04.698554 4698 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c5955dc8999994ecb7869ddf0489e52327fc97441afd41ba672be21d373c048e" Oct 14 11:00:04 crc kubenswrapper[4698]: I1014 11:00:04.698584 4698 util.go:48] "No ready sandbox for pod can be 
found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29340660-7ckch" Oct 14 11:00:04 crc kubenswrapper[4698]: I1014 11:00:04.767825 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29340615-82wst"] Oct 14 11:00:04 crc kubenswrapper[4698]: I1014 11:00:04.777428 4698 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29340615-82wst"] Oct 14 11:00:05 crc kubenswrapper[4698]: I1014 11:00:05.057691 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dacf27c8-3dc7-4f98-ac16-80138c8dbbac" path="/var/lib/kubelet/pods/dacf27c8-3dc7-4f98-ac16-80138c8dbbac/volumes" Oct 14 11:00:12 crc kubenswrapper[4698]: I1014 11:00:12.016409 4698 scope.go:117] "RemoveContainer" containerID="3d7fccb445739e35282b73006542b2a768ec71ad150eafa560f6d34eed0aa4f6" Oct 14 11:00:23 crc kubenswrapper[4698]: I1014 11:00:23.907822 4698 patch_prober.go:28] interesting pod/machine-config-daemon-lp4sk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 14 11:00:23 crc kubenswrapper[4698]: I1014 11:00:23.908284 4698 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-lp4sk" podUID="c359a8fc-1e2f-49af-8da2-719d52bd969a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 14 11:00:53 crc kubenswrapper[4698]: I1014 11:00:53.908550 4698 patch_prober.go:28] interesting pod/machine-config-daemon-lp4sk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: 
connection refused" start-of-body= Oct 14 11:00:53 crc kubenswrapper[4698]: I1014 11:00:53.909065 4698 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-lp4sk" podUID="c359a8fc-1e2f-49af-8da2-719d52bd969a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 14 11:00:53 crc kubenswrapper[4698]: I1014 11:00:53.909108 4698 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-lp4sk" Oct 14 11:00:53 crc kubenswrapper[4698]: I1014 11:00:53.909870 4698 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"e13df7156182f4e35e76d2c742f1a6fc6d98de7a6237876ea61a27e7c7751402"} pod="openshift-machine-config-operator/machine-config-daemon-lp4sk" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Oct 14 11:00:53 crc kubenswrapper[4698]: I1014 11:00:53.909914 4698 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-lp4sk" podUID="c359a8fc-1e2f-49af-8da2-719d52bd969a" containerName="machine-config-daemon" containerID="cri-o://e13df7156182f4e35e76d2c742f1a6fc6d98de7a6237876ea61a27e7c7751402" gracePeriod=600 Oct 14 11:00:54 crc kubenswrapper[4698]: E1014 11:00:54.035401 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lp4sk_openshift-machine-config-operator(c359a8fc-1e2f-49af-8da2-719d52bd969a)\"" pod="openshift-machine-config-operator/machine-config-daemon-lp4sk" podUID="c359a8fc-1e2f-49af-8da2-719d52bd969a" Oct 14 11:00:54 crc kubenswrapper[4698]: 
I1014 11:00:54.176865 4698 generic.go:334] "Generic (PLEG): container finished" podID="c359a8fc-1e2f-49af-8da2-719d52bd969a" containerID="e13df7156182f4e35e76d2c742f1a6fc6d98de7a6237876ea61a27e7c7751402" exitCode=0 Oct 14 11:00:54 crc kubenswrapper[4698]: I1014 11:00:54.176950 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-lp4sk" event={"ID":"c359a8fc-1e2f-49af-8da2-719d52bd969a","Type":"ContainerDied","Data":"e13df7156182f4e35e76d2c742f1a6fc6d98de7a6237876ea61a27e7c7751402"} Oct 14 11:00:54 crc kubenswrapper[4698]: I1014 11:00:54.177276 4698 scope.go:117] "RemoveContainer" containerID="cdafb9b3a64aacdc8bc28b7e65cb92eddb2a3810b308ed08790b175d19fe2bf3" Oct 14 11:00:54 crc kubenswrapper[4698]: I1014 11:00:54.178516 4698 scope.go:117] "RemoveContainer" containerID="e13df7156182f4e35e76d2c742f1a6fc6d98de7a6237876ea61a27e7c7751402" Oct 14 11:00:54 crc kubenswrapper[4698]: E1014 11:00:54.179305 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lp4sk_openshift-machine-config-operator(c359a8fc-1e2f-49af-8da2-719d52bd969a)\"" pod="openshift-machine-config-operator/machine-config-daemon-lp4sk" podUID="c359a8fc-1e2f-49af-8da2-719d52bd969a" Oct 14 11:01:00 crc kubenswrapper[4698]: I1014 11:01:00.149682 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-cron-29340661-dn2xz"] Oct 14 11:01:00 crc kubenswrapper[4698]: E1014 11:01:00.150580 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a26b0b3d-11ad-4b4d-936f-04ced6adb2fa" containerName="extract-content" Oct 14 11:01:00 crc kubenswrapper[4698]: I1014 11:01:00.150593 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="a26b0b3d-11ad-4b4d-936f-04ced6adb2fa" containerName="extract-content" Oct 14 11:01:00 crc kubenswrapper[4698]: 
E1014 11:01:00.150616 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a26b0b3d-11ad-4b4d-936f-04ced6adb2fa" containerName="extract-utilities" Oct 14 11:01:00 crc kubenswrapper[4698]: I1014 11:01:00.150622 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="a26b0b3d-11ad-4b4d-936f-04ced6adb2fa" containerName="extract-utilities" Oct 14 11:01:00 crc kubenswrapper[4698]: E1014 11:01:00.150649 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a26b0b3d-11ad-4b4d-936f-04ced6adb2fa" containerName="registry-server" Oct 14 11:01:00 crc kubenswrapper[4698]: I1014 11:01:00.150655 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="a26b0b3d-11ad-4b4d-936f-04ced6adb2fa" containerName="registry-server" Oct 14 11:01:00 crc kubenswrapper[4698]: E1014 11:01:00.150672 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9c3b1e42-66bf-4c84-9be4-42e353c0a204" containerName="collect-profiles" Oct 14 11:01:00 crc kubenswrapper[4698]: I1014 11:01:00.150677 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="9c3b1e42-66bf-4c84-9be4-42e353c0a204" containerName="collect-profiles" Oct 14 11:01:00 crc kubenswrapper[4698]: I1014 11:01:00.150880 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="a26b0b3d-11ad-4b4d-936f-04ced6adb2fa" containerName="registry-server" Oct 14 11:01:00 crc kubenswrapper[4698]: I1014 11:01:00.150890 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="9c3b1e42-66bf-4c84-9be4-42e353c0a204" containerName="collect-profiles" Oct 14 11:01:00 crc kubenswrapper[4698]: I1014 11:01:00.151534 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29340661-dn2xz" Oct 14 11:01:00 crc kubenswrapper[4698]: I1014 11:01:00.168999 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29340661-dn2xz"] Oct 14 11:01:00 crc kubenswrapper[4698]: I1014 11:01:00.281602 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9f7aded7-281a-4d4b-ab0d-7e52eda65441-config-data\") pod \"keystone-cron-29340661-dn2xz\" (UID: \"9f7aded7-281a-4d4b-ab0d-7e52eda65441\") " pod="openstack/keystone-cron-29340661-dn2xz" Oct 14 11:01:00 crc kubenswrapper[4698]: I1014 11:01:00.281695 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9f7aded7-281a-4d4b-ab0d-7e52eda65441-combined-ca-bundle\") pod \"keystone-cron-29340661-dn2xz\" (UID: \"9f7aded7-281a-4d4b-ab0d-7e52eda65441\") " pod="openstack/keystone-cron-29340661-dn2xz" Oct 14 11:01:00 crc kubenswrapper[4698]: I1014 11:01:00.281720 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pb9zg\" (UniqueName: \"kubernetes.io/projected/9f7aded7-281a-4d4b-ab0d-7e52eda65441-kube-api-access-pb9zg\") pod \"keystone-cron-29340661-dn2xz\" (UID: \"9f7aded7-281a-4d4b-ab0d-7e52eda65441\") " pod="openstack/keystone-cron-29340661-dn2xz" Oct 14 11:01:00 crc kubenswrapper[4698]: I1014 11:01:00.282213 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/9f7aded7-281a-4d4b-ab0d-7e52eda65441-fernet-keys\") pod \"keystone-cron-29340661-dn2xz\" (UID: \"9f7aded7-281a-4d4b-ab0d-7e52eda65441\") " pod="openstack/keystone-cron-29340661-dn2xz" Oct 14 11:01:00 crc kubenswrapper[4698]: I1014 11:01:00.384448 4698 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/9f7aded7-281a-4d4b-ab0d-7e52eda65441-fernet-keys\") pod \"keystone-cron-29340661-dn2xz\" (UID: \"9f7aded7-281a-4d4b-ab0d-7e52eda65441\") " pod="openstack/keystone-cron-29340661-dn2xz" Oct 14 11:01:00 crc kubenswrapper[4698]: I1014 11:01:00.385387 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9f7aded7-281a-4d4b-ab0d-7e52eda65441-config-data\") pod \"keystone-cron-29340661-dn2xz\" (UID: \"9f7aded7-281a-4d4b-ab0d-7e52eda65441\") " pod="openstack/keystone-cron-29340661-dn2xz" Oct 14 11:01:00 crc kubenswrapper[4698]: I1014 11:01:00.385465 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9f7aded7-281a-4d4b-ab0d-7e52eda65441-combined-ca-bundle\") pod \"keystone-cron-29340661-dn2xz\" (UID: \"9f7aded7-281a-4d4b-ab0d-7e52eda65441\") " pod="openstack/keystone-cron-29340661-dn2xz" Oct 14 11:01:00 crc kubenswrapper[4698]: I1014 11:01:00.385492 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pb9zg\" (UniqueName: \"kubernetes.io/projected/9f7aded7-281a-4d4b-ab0d-7e52eda65441-kube-api-access-pb9zg\") pod \"keystone-cron-29340661-dn2xz\" (UID: \"9f7aded7-281a-4d4b-ab0d-7e52eda65441\") " pod="openstack/keystone-cron-29340661-dn2xz" Oct 14 11:01:00 crc kubenswrapper[4698]: I1014 11:01:00.390938 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/9f7aded7-281a-4d4b-ab0d-7e52eda65441-fernet-keys\") pod \"keystone-cron-29340661-dn2xz\" (UID: \"9f7aded7-281a-4d4b-ab0d-7e52eda65441\") " pod="openstack/keystone-cron-29340661-dn2xz" Oct 14 11:01:00 crc kubenswrapper[4698]: I1014 11:01:00.393818 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/9f7aded7-281a-4d4b-ab0d-7e52eda65441-config-data\") pod \"keystone-cron-29340661-dn2xz\" (UID: \"9f7aded7-281a-4d4b-ab0d-7e52eda65441\") " pod="openstack/keystone-cron-29340661-dn2xz" Oct 14 11:01:00 crc kubenswrapper[4698]: I1014 11:01:00.400063 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9f7aded7-281a-4d4b-ab0d-7e52eda65441-combined-ca-bundle\") pod \"keystone-cron-29340661-dn2xz\" (UID: \"9f7aded7-281a-4d4b-ab0d-7e52eda65441\") " pod="openstack/keystone-cron-29340661-dn2xz" Oct 14 11:01:00 crc kubenswrapper[4698]: I1014 11:01:00.401287 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pb9zg\" (UniqueName: \"kubernetes.io/projected/9f7aded7-281a-4d4b-ab0d-7e52eda65441-kube-api-access-pb9zg\") pod \"keystone-cron-29340661-dn2xz\" (UID: \"9f7aded7-281a-4d4b-ab0d-7e52eda65441\") " pod="openstack/keystone-cron-29340661-dn2xz" Oct 14 11:01:00 crc kubenswrapper[4698]: I1014 11:01:00.529321 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29340661-dn2xz" Oct 14 11:01:01 crc kubenswrapper[4698]: I1014 11:01:01.057673 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29340661-dn2xz"] Oct 14 11:01:01 crc kubenswrapper[4698]: I1014 11:01:01.252917 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29340661-dn2xz" event={"ID":"9f7aded7-281a-4d4b-ab0d-7e52eda65441","Type":"ContainerStarted","Data":"01daf020ef798a72041b0c5d77deaddd4dd5ab544911c979c0259f7b31ded44e"} Oct 14 11:01:02 crc kubenswrapper[4698]: I1014 11:01:02.263022 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29340661-dn2xz" event={"ID":"9f7aded7-281a-4d4b-ab0d-7e52eda65441","Type":"ContainerStarted","Data":"c9cf7b60d2ac3e97782ce2801624f5828930f1e7c73b83e3ac592fef0eae184a"} Oct 14 11:01:02 crc kubenswrapper[4698]: I1014 11:01:02.279056 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-cron-29340661-dn2xz" podStartSLOduration=2.279038486 podStartE2EDuration="2.279038486s" podCreationTimestamp="2025-10-14 11:01:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 11:01:02.27742375 +0000 UTC m=+3843.974723196" watchObservedRunningTime="2025-10-14 11:01:02.279038486 +0000 UTC m=+3843.976337902" Oct 14 11:01:06 crc kubenswrapper[4698]: I1014 11:01:06.309423 4698 generic.go:334] "Generic (PLEG): container finished" podID="9f7aded7-281a-4d4b-ab0d-7e52eda65441" containerID="c9cf7b60d2ac3e97782ce2801624f5828930f1e7c73b83e3ac592fef0eae184a" exitCode=0 Oct 14 11:01:06 crc kubenswrapper[4698]: I1014 11:01:06.309660 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29340661-dn2xz" 
event={"ID":"9f7aded7-281a-4d4b-ab0d-7e52eda65441","Type":"ContainerDied","Data":"c9cf7b60d2ac3e97782ce2801624f5828930f1e7c73b83e3ac592fef0eae184a"} Oct 14 11:01:07 crc kubenswrapper[4698]: I1014 11:01:07.906605 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29340661-dn2xz" Oct 14 11:01:08 crc kubenswrapper[4698]: I1014 11:01:08.034746 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9f7aded7-281a-4d4b-ab0d-7e52eda65441-combined-ca-bundle\") pod \"9f7aded7-281a-4d4b-ab0d-7e52eda65441\" (UID: \"9f7aded7-281a-4d4b-ab0d-7e52eda65441\") " Oct 14 11:01:08 crc kubenswrapper[4698]: I1014 11:01:08.034928 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/9f7aded7-281a-4d4b-ab0d-7e52eda65441-fernet-keys\") pod \"9f7aded7-281a-4d4b-ab0d-7e52eda65441\" (UID: \"9f7aded7-281a-4d4b-ab0d-7e52eda65441\") " Oct 14 11:01:08 crc kubenswrapper[4698]: I1014 11:01:08.035000 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9f7aded7-281a-4d4b-ab0d-7e52eda65441-config-data\") pod \"9f7aded7-281a-4d4b-ab0d-7e52eda65441\" (UID: \"9f7aded7-281a-4d4b-ab0d-7e52eda65441\") " Oct 14 11:01:08 crc kubenswrapper[4698]: I1014 11:01:08.035037 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pb9zg\" (UniqueName: \"kubernetes.io/projected/9f7aded7-281a-4d4b-ab0d-7e52eda65441-kube-api-access-pb9zg\") pod \"9f7aded7-281a-4d4b-ab0d-7e52eda65441\" (UID: \"9f7aded7-281a-4d4b-ab0d-7e52eda65441\") " Oct 14 11:01:08 crc kubenswrapper[4698]: I1014 11:01:08.045503 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9f7aded7-281a-4d4b-ab0d-7e52eda65441-fernet-keys" (OuterVolumeSpecName: 
"fernet-keys") pod "9f7aded7-281a-4d4b-ab0d-7e52eda65441" (UID: "9f7aded7-281a-4d4b-ab0d-7e52eda65441"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 11:01:08 crc kubenswrapper[4698]: I1014 11:01:08.054339 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9f7aded7-281a-4d4b-ab0d-7e52eda65441-kube-api-access-pb9zg" (OuterVolumeSpecName: "kube-api-access-pb9zg") pod "9f7aded7-281a-4d4b-ab0d-7e52eda65441" (UID: "9f7aded7-281a-4d4b-ab0d-7e52eda65441"). InnerVolumeSpecName "kube-api-access-pb9zg". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 11:01:08 crc kubenswrapper[4698]: I1014 11:01:08.104977 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9f7aded7-281a-4d4b-ab0d-7e52eda65441-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9f7aded7-281a-4d4b-ab0d-7e52eda65441" (UID: "9f7aded7-281a-4d4b-ab0d-7e52eda65441"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 11:01:08 crc kubenswrapper[4698]: I1014 11:01:08.138526 4698 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/9f7aded7-281a-4d4b-ab0d-7e52eda65441-fernet-keys\") on node \"crc\" DevicePath \"\"" Oct 14 11:01:08 crc kubenswrapper[4698]: I1014 11:01:08.138604 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pb9zg\" (UniqueName: \"kubernetes.io/projected/9f7aded7-281a-4d4b-ab0d-7e52eda65441-kube-api-access-pb9zg\") on node \"crc\" DevicePath \"\"" Oct 14 11:01:08 crc kubenswrapper[4698]: I1014 11:01:08.138618 4698 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9f7aded7-281a-4d4b-ab0d-7e52eda65441-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 14 11:01:08 crc kubenswrapper[4698]: I1014 11:01:08.143800 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9f7aded7-281a-4d4b-ab0d-7e52eda65441-config-data" (OuterVolumeSpecName: "config-data") pod "9f7aded7-281a-4d4b-ab0d-7e52eda65441" (UID: "9f7aded7-281a-4d4b-ab0d-7e52eda65441"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 11:01:08 crc kubenswrapper[4698]: I1014 11:01:08.240844 4698 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9f7aded7-281a-4d4b-ab0d-7e52eda65441-config-data\") on node \"crc\" DevicePath \"\"" Oct 14 11:01:08 crc kubenswrapper[4698]: I1014 11:01:08.331316 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29340661-dn2xz" event={"ID":"9f7aded7-281a-4d4b-ab0d-7e52eda65441","Type":"ContainerDied","Data":"01daf020ef798a72041b0c5d77deaddd4dd5ab544911c979c0259f7b31ded44e"} Oct 14 11:01:08 crc kubenswrapper[4698]: I1014 11:01:08.331373 4698 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="01daf020ef798a72041b0c5d77deaddd4dd5ab544911c979c0259f7b31ded44e" Oct 14 11:01:08 crc kubenswrapper[4698]: I1014 11:01:08.331383 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29340661-dn2xz" Oct 14 11:01:10 crc kubenswrapper[4698]: I1014 11:01:10.018315 4698 scope.go:117] "RemoveContainer" containerID="e13df7156182f4e35e76d2c742f1a6fc6d98de7a6237876ea61a27e7c7751402" Oct 14 11:01:10 crc kubenswrapper[4698]: E1014 11:01:10.019110 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lp4sk_openshift-machine-config-operator(c359a8fc-1e2f-49af-8da2-719d52bd969a)\"" pod="openshift-machine-config-operator/machine-config-daemon-lp4sk" podUID="c359a8fc-1e2f-49af-8da2-719d52bd969a" Oct 14 11:01:21 crc kubenswrapper[4698]: I1014 11:01:21.018178 4698 scope.go:117] "RemoveContainer" containerID="e13df7156182f4e35e76d2c742f1a6fc6d98de7a6237876ea61a27e7c7751402" Oct 14 11:01:21 crc kubenswrapper[4698]: E1014 11:01:21.019141 4698 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lp4sk_openshift-machine-config-operator(c359a8fc-1e2f-49af-8da2-719d52bd969a)\"" pod="openshift-machine-config-operator/machine-config-daemon-lp4sk" podUID="c359a8fc-1e2f-49af-8da2-719d52bd969a" Oct 14 11:01:33 crc kubenswrapper[4698]: I1014 11:01:33.022318 4698 scope.go:117] "RemoveContainer" containerID="e13df7156182f4e35e76d2c742f1a6fc6d98de7a6237876ea61a27e7c7751402" Oct 14 11:01:33 crc kubenswrapper[4698]: E1014 11:01:33.024009 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lp4sk_openshift-machine-config-operator(c359a8fc-1e2f-49af-8da2-719d52bd969a)\"" pod="openshift-machine-config-operator/machine-config-daemon-lp4sk" podUID="c359a8fc-1e2f-49af-8da2-719d52bd969a" Oct 14 11:01:46 crc kubenswrapper[4698]: I1014 11:01:46.017931 4698 scope.go:117] "RemoveContainer" containerID="e13df7156182f4e35e76d2c742f1a6fc6d98de7a6237876ea61a27e7c7751402" Oct 14 11:01:46 crc kubenswrapper[4698]: E1014 11:01:46.019072 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lp4sk_openshift-machine-config-operator(c359a8fc-1e2f-49af-8da2-719d52bd969a)\"" pod="openshift-machine-config-operator/machine-config-daemon-lp4sk" podUID="c359a8fc-1e2f-49af-8da2-719d52bd969a" Oct 14 11:01:58 crc kubenswrapper[4698]: I1014 11:01:58.017532 4698 scope.go:117] "RemoveContainer" containerID="e13df7156182f4e35e76d2c742f1a6fc6d98de7a6237876ea61a27e7c7751402" Oct 14 11:01:58 crc kubenswrapper[4698]: E1014 11:01:58.020505 4698 pod_workers.go:1301] 
"Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lp4sk_openshift-machine-config-operator(c359a8fc-1e2f-49af-8da2-719d52bd969a)\"" pod="openshift-machine-config-operator/machine-config-daemon-lp4sk" podUID="c359a8fc-1e2f-49af-8da2-719d52bd969a" Oct 14 11:02:10 crc kubenswrapper[4698]: I1014 11:02:10.016866 4698 scope.go:117] "RemoveContainer" containerID="e13df7156182f4e35e76d2c742f1a6fc6d98de7a6237876ea61a27e7c7751402" Oct 14 11:02:10 crc kubenswrapper[4698]: E1014 11:02:10.017623 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lp4sk_openshift-machine-config-operator(c359a8fc-1e2f-49af-8da2-719d52bd969a)\"" pod="openshift-machine-config-operator/machine-config-daemon-lp4sk" podUID="c359a8fc-1e2f-49af-8da2-719d52bd969a" Oct 14 11:02:23 crc kubenswrapper[4698]: I1014 11:02:23.017546 4698 scope.go:117] "RemoveContainer" containerID="e13df7156182f4e35e76d2c742f1a6fc6d98de7a6237876ea61a27e7c7751402" Oct 14 11:02:23 crc kubenswrapper[4698]: E1014 11:02:23.018544 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lp4sk_openshift-machine-config-operator(c359a8fc-1e2f-49af-8da2-719d52bd969a)\"" pod="openshift-machine-config-operator/machine-config-daemon-lp4sk" podUID="c359a8fc-1e2f-49af-8da2-719d52bd969a" Oct 14 11:02:38 crc kubenswrapper[4698]: I1014 11:02:38.018197 4698 scope.go:117] "RemoveContainer" containerID="e13df7156182f4e35e76d2c742f1a6fc6d98de7a6237876ea61a27e7c7751402" Oct 14 11:02:38 crc kubenswrapper[4698]: E1014 11:02:38.019120 4698 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lp4sk_openshift-machine-config-operator(c359a8fc-1e2f-49af-8da2-719d52bd969a)\"" pod="openshift-machine-config-operator/machine-config-daemon-lp4sk" podUID="c359a8fc-1e2f-49af-8da2-719d52bd969a" Oct 14 11:02:50 crc kubenswrapper[4698]: I1014 11:02:50.017592 4698 scope.go:117] "RemoveContainer" containerID="e13df7156182f4e35e76d2c742f1a6fc6d98de7a6237876ea61a27e7c7751402" Oct 14 11:02:50 crc kubenswrapper[4698]: E1014 11:02:50.020129 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lp4sk_openshift-machine-config-operator(c359a8fc-1e2f-49af-8da2-719d52bd969a)\"" pod="openshift-machine-config-operator/machine-config-daemon-lp4sk" podUID="c359a8fc-1e2f-49af-8da2-719d52bd969a" Oct 14 11:03:02 crc kubenswrapper[4698]: I1014 11:03:02.017675 4698 scope.go:117] "RemoveContainer" containerID="e13df7156182f4e35e76d2c742f1a6fc6d98de7a6237876ea61a27e7c7751402" Oct 14 11:03:02 crc kubenswrapper[4698]: E1014 11:03:02.018556 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lp4sk_openshift-machine-config-operator(c359a8fc-1e2f-49af-8da2-719d52bd969a)\"" pod="openshift-machine-config-operator/machine-config-daemon-lp4sk" podUID="c359a8fc-1e2f-49af-8da2-719d52bd969a" Oct 14 11:03:14 crc kubenswrapper[4698]: I1014 11:03:14.017088 4698 scope.go:117] "RemoveContainer" containerID="e13df7156182f4e35e76d2c742f1a6fc6d98de7a6237876ea61a27e7c7751402" Oct 14 11:03:14 crc kubenswrapper[4698]: E1014 
11:03:14.017813 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lp4sk_openshift-machine-config-operator(c359a8fc-1e2f-49af-8da2-719d52bd969a)\"" pod="openshift-machine-config-operator/machine-config-daemon-lp4sk" podUID="c359a8fc-1e2f-49af-8da2-719d52bd969a" Oct 14 11:03:26 crc kubenswrapper[4698]: I1014 11:03:26.017759 4698 scope.go:117] "RemoveContainer" containerID="e13df7156182f4e35e76d2c742f1a6fc6d98de7a6237876ea61a27e7c7751402" Oct 14 11:03:26 crc kubenswrapper[4698]: E1014 11:03:26.018515 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lp4sk_openshift-machine-config-operator(c359a8fc-1e2f-49af-8da2-719d52bd969a)\"" pod="openshift-machine-config-operator/machine-config-daemon-lp4sk" podUID="c359a8fc-1e2f-49af-8da2-719d52bd969a" Oct 14 11:03:41 crc kubenswrapper[4698]: I1014 11:03:41.017136 4698 scope.go:117] "RemoveContainer" containerID="e13df7156182f4e35e76d2c742f1a6fc6d98de7a6237876ea61a27e7c7751402" Oct 14 11:03:41 crc kubenswrapper[4698]: E1014 11:03:41.017898 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lp4sk_openshift-machine-config-operator(c359a8fc-1e2f-49af-8da2-719d52bd969a)\"" pod="openshift-machine-config-operator/machine-config-daemon-lp4sk" podUID="c359a8fc-1e2f-49af-8da2-719d52bd969a" Oct 14 11:03:55 crc kubenswrapper[4698]: I1014 11:03:55.017587 4698 scope.go:117] "RemoveContainer" containerID="e13df7156182f4e35e76d2c742f1a6fc6d98de7a6237876ea61a27e7c7751402" Oct 14 11:03:55 crc 
kubenswrapper[4698]: E1014 11:03:55.018637 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lp4sk_openshift-machine-config-operator(c359a8fc-1e2f-49af-8da2-719d52bd969a)\"" pod="openshift-machine-config-operator/machine-config-daemon-lp4sk" podUID="c359a8fc-1e2f-49af-8da2-719d52bd969a" Oct 14 11:04:06 crc kubenswrapper[4698]: I1014 11:04:06.017481 4698 scope.go:117] "RemoveContainer" containerID="e13df7156182f4e35e76d2c742f1a6fc6d98de7a6237876ea61a27e7c7751402" Oct 14 11:04:06 crc kubenswrapper[4698]: E1014 11:04:06.018349 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lp4sk_openshift-machine-config-operator(c359a8fc-1e2f-49af-8da2-719d52bd969a)\"" pod="openshift-machine-config-operator/machine-config-daemon-lp4sk" podUID="c359a8fc-1e2f-49af-8da2-719d52bd969a" Oct 14 11:04:17 crc kubenswrapper[4698]: I1014 11:04:17.018043 4698 scope.go:117] "RemoveContainer" containerID="e13df7156182f4e35e76d2c742f1a6fc6d98de7a6237876ea61a27e7c7751402" Oct 14 11:04:17 crc kubenswrapper[4698]: E1014 11:04:17.019086 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lp4sk_openshift-machine-config-operator(c359a8fc-1e2f-49af-8da2-719d52bd969a)\"" pod="openshift-machine-config-operator/machine-config-daemon-lp4sk" podUID="c359a8fc-1e2f-49af-8da2-719d52bd969a" Oct 14 11:04:29 crc kubenswrapper[4698]: I1014 11:04:29.040632 4698 scope.go:117] "RemoveContainer" containerID="e13df7156182f4e35e76d2c742f1a6fc6d98de7a6237876ea61a27e7c7751402" Oct 
14 11:04:29 crc kubenswrapper[4698]: E1014 11:04:29.042060 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lp4sk_openshift-machine-config-operator(c359a8fc-1e2f-49af-8da2-719d52bd969a)\"" pod="openshift-machine-config-operator/machine-config-daemon-lp4sk" podUID="c359a8fc-1e2f-49af-8da2-719d52bd969a" Oct 14 11:04:43 crc kubenswrapper[4698]: I1014 11:04:43.017301 4698 scope.go:117] "RemoveContainer" containerID="e13df7156182f4e35e76d2c742f1a6fc6d98de7a6237876ea61a27e7c7751402" Oct 14 11:04:43 crc kubenswrapper[4698]: E1014 11:04:43.017935 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lp4sk_openshift-machine-config-operator(c359a8fc-1e2f-49af-8da2-719d52bd969a)\"" pod="openshift-machine-config-operator/machine-config-daemon-lp4sk" podUID="c359a8fc-1e2f-49af-8da2-719d52bd969a" Oct 14 11:04:58 crc kubenswrapper[4698]: I1014 11:04:58.020241 4698 scope.go:117] "RemoveContainer" containerID="e13df7156182f4e35e76d2c742f1a6fc6d98de7a6237876ea61a27e7c7751402" Oct 14 11:04:58 crc kubenswrapper[4698]: E1014 11:04:58.022694 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lp4sk_openshift-machine-config-operator(c359a8fc-1e2f-49af-8da2-719d52bd969a)\"" pod="openshift-machine-config-operator/machine-config-daemon-lp4sk" podUID="c359a8fc-1e2f-49af-8da2-719d52bd969a" Oct 14 11:05:10 crc kubenswrapper[4698]: I1014 11:05:10.017535 4698 scope.go:117] "RemoveContainer" 
containerID="e13df7156182f4e35e76d2c742f1a6fc6d98de7a6237876ea61a27e7c7751402" Oct 14 11:05:10 crc kubenswrapper[4698]: E1014 11:05:10.018467 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lp4sk_openshift-machine-config-operator(c359a8fc-1e2f-49af-8da2-719d52bd969a)\"" pod="openshift-machine-config-operator/machine-config-daemon-lp4sk" podUID="c359a8fc-1e2f-49af-8da2-719d52bd969a" Oct 14 11:05:21 crc kubenswrapper[4698]: I1014 11:05:21.017556 4698 scope.go:117] "RemoveContainer" containerID="e13df7156182f4e35e76d2c742f1a6fc6d98de7a6237876ea61a27e7c7751402" Oct 14 11:05:21 crc kubenswrapper[4698]: E1014 11:05:21.018499 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lp4sk_openshift-machine-config-operator(c359a8fc-1e2f-49af-8da2-719d52bd969a)\"" pod="openshift-machine-config-operator/machine-config-daemon-lp4sk" podUID="c359a8fc-1e2f-49af-8da2-719d52bd969a" Oct 14 11:05:25 crc kubenswrapper[4698]: I1014 11:05:25.555352 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-8lhfm"] Oct 14 11:05:25 crc kubenswrapper[4698]: E1014 11:05:25.556288 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9f7aded7-281a-4d4b-ab0d-7e52eda65441" containerName="keystone-cron" Oct 14 11:05:25 crc kubenswrapper[4698]: I1014 11:05:25.556301 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="9f7aded7-281a-4d4b-ab0d-7e52eda65441" containerName="keystone-cron" Oct 14 11:05:25 crc kubenswrapper[4698]: I1014 11:05:25.556488 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="9f7aded7-281a-4d4b-ab0d-7e52eda65441" 
containerName="keystone-cron" Oct 14 11:05:25 crc kubenswrapper[4698]: I1014 11:05:25.558093 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-8lhfm" Oct 14 11:05:25 crc kubenswrapper[4698]: I1014 11:05:25.650367 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fb6xl\" (UniqueName: \"kubernetes.io/projected/40166da1-a934-4d82-a348-6410685a91f6-kube-api-access-fb6xl\") pod \"certified-operators-8lhfm\" (UID: \"40166da1-a934-4d82-a348-6410685a91f6\") " pod="openshift-marketplace/certified-operators-8lhfm" Oct 14 11:05:25 crc kubenswrapper[4698]: I1014 11:05:25.650497 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/40166da1-a934-4d82-a348-6410685a91f6-catalog-content\") pod \"certified-operators-8lhfm\" (UID: \"40166da1-a934-4d82-a348-6410685a91f6\") " pod="openshift-marketplace/certified-operators-8lhfm" Oct 14 11:05:25 crc kubenswrapper[4698]: I1014 11:05:25.650614 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/40166da1-a934-4d82-a348-6410685a91f6-utilities\") pod \"certified-operators-8lhfm\" (UID: \"40166da1-a934-4d82-a348-6410685a91f6\") " pod="openshift-marketplace/certified-operators-8lhfm" Oct 14 11:05:25 crc kubenswrapper[4698]: I1014 11:05:25.662591 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-8lhfm"] Oct 14 11:05:25 crc kubenswrapper[4698]: I1014 11:05:25.753202 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/40166da1-a934-4d82-a348-6410685a91f6-utilities\") pod \"certified-operators-8lhfm\" (UID: \"40166da1-a934-4d82-a348-6410685a91f6\") " 
pod="openshift-marketplace/certified-operators-8lhfm" Oct 14 11:05:25 crc kubenswrapper[4698]: I1014 11:05:25.753381 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fb6xl\" (UniqueName: \"kubernetes.io/projected/40166da1-a934-4d82-a348-6410685a91f6-kube-api-access-fb6xl\") pod \"certified-operators-8lhfm\" (UID: \"40166da1-a934-4d82-a348-6410685a91f6\") " pod="openshift-marketplace/certified-operators-8lhfm" Oct 14 11:05:25 crc kubenswrapper[4698]: I1014 11:05:25.753460 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/40166da1-a934-4d82-a348-6410685a91f6-catalog-content\") pod \"certified-operators-8lhfm\" (UID: \"40166da1-a934-4d82-a348-6410685a91f6\") " pod="openshift-marketplace/certified-operators-8lhfm" Oct 14 11:05:25 crc kubenswrapper[4698]: I1014 11:05:25.754056 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/40166da1-a934-4d82-a348-6410685a91f6-utilities\") pod \"certified-operators-8lhfm\" (UID: \"40166da1-a934-4d82-a348-6410685a91f6\") " pod="openshift-marketplace/certified-operators-8lhfm" Oct 14 11:05:25 crc kubenswrapper[4698]: I1014 11:05:25.754539 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/40166da1-a934-4d82-a348-6410685a91f6-catalog-content\") pod \"certified-operators-8lhfm\" (UID: \"40166da1-a934-4d82-a348-6410685a91f6\") " pod="openshift-marketplace/certified-operators-8lhfm" Oct 14 11:05:25 crc kubenswrapper[4698]: I1014 11:05:25.775856 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fb6xl\" (UniqueName: \"kubernetes.io/projected/40166da1-a934-4d82-a348-6410685a91f6-kube-api-access-fb6xl\") pod \"certified-operators-8lhfm\" (UID: \"40166da1-a934-4d82-a348-6410685a91f6\") " 
pod="openshift-marketplace/certified-operators-8lhfm" Oct 14 11:05:25 crc kubenswrapper[4698]: I1014 11:05:25.880692 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-8lhfm" Oct 14 11:05:26 crc kubenswrapper[4698]: I1014 11:05:26.793397 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-8lhfm"] Oct 14 11:05:26 crc kubenswrapper[4698]: I1014 11:05:26.842135 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8lhfm" event={"ID":"40166da1-a934-4d82-a348-6410685a91f6","Type":"ContainerStarted","Data":"8122cea847bed6e1d8a4a3f312c5a6ac15c123209ce7ca4a12ea628b4efbdaa5"} Oct 14 11:05:27 crc kubenswrapper[4698]: I1014 11:05:27.852450 4698 generic.go:334] "Generic (PLEG): container finished" podID="40166da1-a934-4d82-a348-6410685a91f6" containerID="3071be081860693b73ae4ba6b478b141a4859d558c032c12faba313b2b3d1620" exitCode=0 Oct 14 11:05:27 crc kubenswrapper[4698]: I1014 11:05:27.852548 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8lhfm" event={"ID":"40166da1-a934-4d82-a348-6410685a91f6","Type":"ContainerDied","Data":"3071be081860693b73ae4ba6b478b141a4859d558c032c12faba313b2b3d1620"} Oct 14 11:05:27 crc kubenswrapper[4698]: I1014 11:05:27.854879 4698 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Oct 14 11:05:29 crc kubenswrapper[4698]: I1014 11:05:29.878093 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8lhfm" event={"ID":"40166da1-a934-4d82-a348-6410685a91f6","Type":"ContainerStarted","Data":"239127f3d20b9fbafbac757369c7b17f83d3132be07e2107a24892e421a2f5f8"} Oct 14 11:05:30 crc kubenswrapper[4698]: I1014 11:05:30.888141 4698 generic.go:334] "Generic (PLEG): container finished" podID="40166da1-a934-4d82-a348-6410685a91f6" 
containerID="239127f3d20b9fbafbac757369c7b17f83d3132be07e2107a24892e421a2f5f8" exitCode=0 Oct 14 11:05:30 crc kubenswrapper[4698]: I1014 11:05:30.888242 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8lhfm" event={"ID":"40166da1-a934-4d82-a348-6410685a91f6","Type":"ContainerDied","Data":"239127f3d20b9fbafbac757369c7b17f83d3132be07e2107a24892e421a2f5f8"} Oct 14 11:05:33 crc kubenswrapper[4698]: I1014 11:05:33.919065 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8lhfm" event={"ID":"40166da1-a934-4d82-a348-6410685a91f6","Type":"ContainerStarted","Data":"6a6d1b3bf0fe25537e13bb28c4ece6611f9fc12cc6753a8a633e4b3ed76879be"} Oct 14 11:05:33 crc kubenswrapper[4698]: I1014 11:05:33.945400 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-8lhfm" podStartSLOduration=3.558909784 podStartE2EDuration="8.945371686s" podCreationTimestamp="2025-10-14 11:05:25 +0000 UTC" firstStartedPulling="2025-10-14 11:05:27.854629303 +0000 UTC m=+4109.551928719" lastFinishedPulling="2025-10-14 11:05:33.241091205 +0000 UTC m=+4114.938390621" observedRunningTime="2025-10-14 11:05:33.934620526 +0000 UTC m=+4115.631919962" watchObservedRunningTime="2025-10-14 11:05:33.945371686 +0000 UTC m=+4115.642671102" Oct 14 11:05:34 crc kubenswrapper[4698]: I1014 11:05:34.018310 4698 scope.go:117] "RemoveContainer" containerID="e13df7156182f4e35e76d2c742f1a6fc6d98de7a6237876ea61a27e7c7751402" Oct 14 11:05:34 crc kubenswrapper[4698]: E1014 11:05:34.018584 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lp4sk_openshift-machine-config-operator(c359a8fc-1e2f-49af-8da2-719d52bd969a)\"" pod="openshift-machine-config-operator/machine-config-daemon-lp4sk" 
podUID="c359a8fc-1e2f-49af-8da2-719d52bd969a" Oct 14 11:05:35 crc kubenswrapper[4698]: I1014 11:05:35.881474 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-8lhfm" Oct 14 11:05:35 crc kubenswrapper[4698]: I1014 11:05:35.882297 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-8lhfm" Oct 14 11:05:35 crc kubenswrapper[4698]: I1014 11:05:35.942688 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-8lhfm" Oct 14 11:05:45 crc kubenswrapper[4698]: I1014 11:05:45.018096 4698 scope.go:117] "RemoveContainer" containerID="e13df7156182f4e35e76d2c742f1a6fc6d98de7a6237876ea61a27e7c7751402" Oct 14 11:05:45 crc kubenswrapper[4698]: E1014 11:05:45.018939 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lp4sk_openshift-machine-config-operator(c359a8fc-1e2f-49af-8da2-719d52bd969a)\"" pod="openshift-machine-config-operator/machine-config-daemon-lp4sk" podUID="c359a8fc-1e2f-49af-8da2-719d52bd969a" Oct 14 11:05:45 crc kubenswrapper[4698]: I1014 11:05:45.940831 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-8lhfm" Oct 14 11:05:45 crc kubenswrapper[4698]: I1014 11:05:45.994278 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-8lhfm"] Oct 14 11:05:46 crc kubenswrapper[4698]: I1014 11:05:46.023477 4698 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-8lhfm" podUID="40166da1-a934-4d82-a348-6410685a91f6" containerName="registry-server" 
containerID="cri-o://6a6d1b3bf0fe25537e13bb28c4ece6611f9fc12cc6753a8a633e4b3ed76879be" gracePeriod=2 Oct 14 11:05:46 crc kubenswrapper[4698]: I1014 11:05:46.729337 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-8lhfm" Oct 14 11:05:46 crc kubenswrapper[4698]: I1014 11:05:46.797478 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/40166da1-a934-4d82-a348-6410685a91f6-catalog-content\") pod \"40166da1-a934-4d82-a348-6410685a91f6\" (UID: \"40166da1-a934-4d82-a348-6410685a91f6\") " Oct 14 11:05:46 crc kubenswrapper[4698]: I1014 11:05:46.797580 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fb6xl\" (UniqueName: \"kubernetes.io/projected/40166da1-a934-4d82-a348-6410685a91f6-kube-api-access-fb6xl\") pod \"40166da1-a934-4d82-a348-6410685a91f6\" (UID: \"40166da1-a934-4d82-a348-6410685a91f6\") " Oct 14 11:05:46 crc kubenswrapper[4698]: I1014 11:05:46.797633 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/40166da1-a934-4d82-a348-6410685a91f6-utilities\") pod \"40166da1-a934-4d82-a348-6410685a91f6\" (UID: \"40166da1-a934-4d82-a348-6410685a91f6\") " Oct 14 11:05:46 crc kubenswrapper[4698]: I1014 11:05:46.798936 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/40166da1-a934-4d82-a348-6410685a91f6-utilities" (OuterVolumeSpecName: "utilities") pod "40166da1-a934-4d82-a348-6410685a91f6" (UID: "40166da1-a934-4d82-a348-6410685a91f6"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 11:05:46 crc kubenswrapper[4698]: I1014 11:05:46.805335 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/40166da1-a934-4d82-a348-6410685a91f6-kube-api-access-fb6xl" (OuterVolumeSpecName: "kube-api-access-fb6xl") pod "40166da1-a934-4d82-a348-6410685a91f6" (UID: "40166da1-a934-4d82-a348-6410685a91f6"). InnerVolumeSpecName "kube-api-access-fb6xl". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 11:05:46 crc kubenswrapper[4698]: I1014 11:05:46.841886 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/40166da1-a934-4d82-a348-6410685a91f6-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "40166da1-a934-4d82-a348-6410685a91f6" (UID: "40166da1-a934-4d82-a348-6410685a91f6"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 11:05:46 crc kubenswrapper[4698]: I1014 11:05:46.900556 4698 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/40166da1-a934-4d82-a348-6410685a91f6-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 14 11:05:46 crc kubenswrapper[4698]: I1014 11:05:46.900593 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fb6xl\" (UniqueName: \"kubernetes.io/projected/40166da1-a934-4d82-a348-6410685a91f6-kube-api-access-fb6xl\") on node \"crc\" DevicePath \"\"" Oct 14 11:05:46 crc kubenswrapper[4698]: I1014 11:05:46.900605 4698 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/40166da1-a934-4d82-a348-6410685a91f6-utilities\") on node \"crc\" DevicePath \"\"" Oct 14 11:05:47 crc kubenswrapper[4698]: I1014 11:05:47.038408 4698 generic.go:334] "Generic (PLEG): container finished" podID="40166da1-a934-4d82-a348-6410685a91f6" 
containerID="6a6d1b3bf0fe25537e13bb28c4ece6611f9fc12cc6753a8a633e4b3ed76879be" exitCode=0 Oct 14 11:05:47 crc kubenswrapper[4698]: I1014 11:05:47.038453 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8lhfm" event={"ID":"40166da1-a934-4d82-a348-6410685a91f6","Type":"ContainerDied","Data":"6a6d1b3bf0fe25537e13bb28c4ece6611f9fc12cc6753a8a633e4b3ed76879be"} Oct 14 11:05:47 crc kubenswrapper[4698]: I1014 11:05:47.038502 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8lhfm" event={"ID":"40166da1-a934-4d82-a348-6410685a91f6","Type":"ContainerDied","Data":"8122cea847bed6e1d8a4a3f312c5a6ac15c123209ce7ca4a12ea628b4efbdaa5"} Oct 14 11:05:47 crc kubenswrapper[4698]: I1014 11:05:47.038526 4698 scope.go:117] "RemoveContainer" containerID="6a6d1b3bf0fe25537e13bb28c4ece6611f9fc12cc6753a8a633e4b3ed76879be" Oct 14 11:05:47 crc kubenswrapper[4698]: I1014 11:05:47.038454 4698 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-8lhfm" Oct 14 11:05:47 crc kubenswrapper[4698]: I1014 11:05:47.080143 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-8lhfm"] Oct 14 11:05:47 crc kubenswrapper[4698]: I1014 11:05:47.092632 4698 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-8lhfm"] Oct 14 11:05:47 crc kubenswrapper[4698]: I1014 11:05:47.093024 4698 scope.go:117] "RemoveContainer" containerID="239127f3d20b9fbafbac757369c7b17f83d3132be07e2107a24892e421a2f5f8" Oct 14 11:05:47 crc kubenswrapper[4698]: I1014 11:05:47.126813 4698 scope.go:117] "RemoveContainer" containerID="3071be081860693b73ae4ba6b478b141a4859d558c032c12faba313b2b3d1620" Oct 14 11:05:47 crc kubenswrapper[4698]: I1014 11:05:47.181398 4698 scope.go:117] "RemoveContainer" containerID="6a6d1b3bf0fe25537e13bb28c4ece6611f9fc12cc6753a8a633e4b3ed76879be" Oct 14 11:05:47 crc kubenswrapper[4698]: E1014 11:05:47.182066 4698 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6a6d1b3bf0fe25537e13bb28c4ece6611f9fc12cc6753a8a633e4b3ed76879be\": container with ID starting with 6a6d1b3bf0fe25537e13bb28c4ece6611f9fc12cc6753a8a633e4b3ed76879be not found: ID does not exist" containerID="6a6d1b3bf0fe25537e13bb28c4ece6611f9fc12cc6753a8a633e4b3ed76879be" Oct 14 11:05:47 crc kubenswrapper[4698]: I1014 11:05:47.182118 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6a6d1b3bf0fe25537e13bb28c4ece6611f9fc12cc6753a8a633e4b3ed76879be"} err="failed to get container status \"6a6d1b3bf0fe25537e13bb28c4ece6611f9fc12cc6753a8a633e4b3ed76879be\": rpc error: code = NotFound desc = could not find container \"6a6d1b3bf0fe25537e13bb28c4ece6611f9fc12cc6753a8a633e4b3ed76879be\": container with ID starting with 6a6d1b3bf0fe25537e13bb28c4ece6611f9fc12cc6753a8a633e4b3ed76879be not 
found: ID does not exist" Oct 14 11:05:47 crc kubenswrapper[4698]: I1014 11:05:47.182154 4698 scope.go:117] "RemoveContainer" containerID="239127f3d20b9fbafbac757369c7b17f83d3132be07e2107a24892e421a2f5f8" Oct 14 11:05:47 crc kubenswrapper[4698]: E1014 11:05:47.182514 4698 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"239127f3d20b9fbafbac757369c7b17f83d3132be07e2107a24892e421a2f5f8\": container with ID starting with 239127f3d20b9fbafbac757369c7b17f83d3132be07e2107a24892e421a2f5f8 not found: ID does not exist" containerID="239127f3d20b9fbafbac757369c7b17f83d3132be07e2107a24892e421a2f5f8" Oct 14 11:05:47 crc kubenswrapper[4698]: I1014 11:05:47.182549 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"239127f3d20b9fbafbac757369c7b17f83d3132be07e2107a24892e421a2f5f8"} err="failed to get container status \"239127f3d20b9fbafbac757369c7b17f83d3132be07e2107a24892e421a2f5f8\": rpc error: code = NotFound desc = could not find container \"239127f3d20b9fbafbac757369c7b17f83d3132be07e2107a24892e421a2f5f8\": container with ID starting with 239127f3d20b9fbafbac757369c7b17f83d3132be07e2107a24892e421a2f5f8 not found: ID does not exist" Oct 14 11:05:47 crc kubenswrapper[4698]: I1014 11:05:47.182576 4698 scope.go:117] "RemoveContainer" containerID="3071be081860693b73ae4ba6b478b141a4859d558c032c12faba313b2b3d1620" Oct 14 11:05:47 crc kubenswrapper[4698]: E1014 11:05:47.182857 4698 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3071be081860693b73ae4ba6b478b141a4859d558c032c12faba313b2b3d1620\": container with ID starting with 3071be081860693b73ae4ba6b478b141a4859d558c032c12faba313b2b3d1620 not found: ID does not exist" containerID="3071be081860693b73ae4ba6b478b141a4859d558c032c12faba313b2b3d1620" Oct 14 11:05:47 crc kubenswrapper[4698]: I1014 11:05:47.182875 4698 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3071be081860693b73ae4ba6b478b141a4859d558c032c12faba313b2b3d1620"} err="failed to get container status \"3071be081860693b73ae4ba6b478b141a4859d558c032c12faba313b2b3d1620\": rpc error: code = NotFound desc = could not find container \"3071be081860693b73ae4ba6b478b141a4859d558c032c12faba313b2b3d1620\": container with ID starting with 3071be081860693b73ae4ba6b478b141a4859d558c032c12faba313b2b3d1620 not found: ID does not exist" Oct 14 11:05:49 crc kubenswrapper[4698]: I1014 11:05:49.027889 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="40166da1-a934-4d82-a348-6410685a91f6" path="/var/lib/kubelet/pods/40166da1-a934-4d82-a348-6410685a91f6/volumes" Oct 14 11:05:57 crc kubenswrapper[4698]: I1014 11:05:57.017563 4698 scope.go:117] "RemoveContainer" containerID="e13df7156182f4e35e76d2c742f1a6fc6d98de7a6237876ea61a27e7c7751402" Oct 14 11:05:58 crc kubenswrapper[4698]: I1014 11:05:58.143432 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-lp4sk" event={"ID":"c359a8fc-1e2f-49af-8da2-719d52bd969a","Type":"ContainerStarted","Data":"72d65949d3211e933cf524150a27b6612dd1670f9583d0f73908a0123e29a04d"} Oct 14 11:08:23 crc kubenswrapper[4698]: I1014 11:08:23.908384 4698 patch_prober.go:28] interesting pod/machine-config-daemon-lp4sk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 14 11:08:23 crc kubenswrapper[4698]: I1014 11:08:23.909021 4698 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-lp4sk" podUID="c359a8fc-1e2f-49af-8da2-719d52bd969a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 
127.0.0.1:8798: connect: connection refused" Oct 14 11:08:53 crc kubenswrapper[4698]: I1014 11:08:53.908441 4698 patch_prober.go:28] interesting pod/machine-config-daemon-lp4sk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 14 11:08:53 crc kubenswrapper[4698]: I1014 11:08:53.909149 4698 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-lp4sk" podUID="c359a8fc-1e2f-49af-8da2-719d52bd969a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 14 11:09:23 crc kubenswrapper[4698]: I1014 11:09:23.908383 4698 patch_prober.go:28] interesting pod/machine-config-daemon-lp4sk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 14 11:09:23 crc kubenswrapper[4698]: I1014 11:09:23.909106 4698 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-lp4sk" podUID="c359a8fc-1e2f-49af-8da2-719d52bd969a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 14 11:09:23 crc kubenswrapper[4698]: I1014 11:09:23.909171 4698 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-lp4sk" Oct 14 11:09:23 crc kubenswrapper[4698]: I1014 11:09:23.910408 4698 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" 
containerStatusID={"Type":"cri-o","ID":"72d65949d3211e933cf524150a27b6612dd1670f9583d0f73908a0123e29a04d"} pod="openshift-machine-config-operator/machine-config-daemon-lp4sk" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Oct 14 11:09:23 crc kubenswrapper[4698]: I1014 11:09:23.910482 4698 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-lp4sk" podUID="c359a8fc-1e2f-49af-8da2-719d52bd969a" containerName="machine-config-daemon" containerID="cri-o://72d65949d3211e933cf524150a27b6612dd1670f9583d0f73908a0123e29a04d" gracePeriod=600 Oct 14 11:09:25 crc kubenswrapper[4698]: I1014 11:09:25.048375 4698 generic.go:334] "Generic (PLEG): container finished" podID="c359a8fc-1e2f-49af-8da2-719d52bd969a" containerID="72d65949d3211e933cf524150a27b6612dd1670f9583d0f73908a0123e29a04d" exitCode=0 Oct 14 11:09:25 crc kubenswrapper[4698]: I1014 11:09:25.048420 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-lp4sk" event={"ID":"c359a8fc-1e2f-49af-8da2-719d52bd969a","Type":"ContainerDied","Data":"72d65949d3211e933cf524150a27b6612dd1670f9583d0f73908a0123e29a04d"} Oct 14 11:09:25 crc kubenswrapper[4698]: I1014 11:09:25.048818 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-lp4sk" event={"ID":"c359a8fc-1e2f-49af-8da2-719d52bd969a","Type":"ContainerStarted","Data":"6bf915c0eb83523a7704ee4a405fd8ffeef326ab1d420e2bdc7a7ecf61f24f91"} Oct 14 11:09:25 crc kubenswrapper[4698]: I1014 11:09:25.048846 4698 scope.go:117] "RemoveContainer" containerID="e13df7156182f4e35e76d2c742f1a6fc6d98de7a6237876ea61a27e7c7751402" Oct 14 11:11:16 crc kubenswrapper[4698]: I1014 11:11:16.669201 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-g6pgh"] Oct 14 11:11:16 crc kubenswrapper[4698]: E1014 
11:11:16.670209 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="40166da1-a934-4d82-a348-6410685a91f6" containerName="registry-server" Oct 14 11:11:16 crc kubenswrapper[4698]: I1014 11:11:16.670223 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="40166da1-a934-4d82-a348-6410685a91f6" containerName="registry-server" Oct 14 11:11:16 crc kubenswrapper[4698]: E1014 11:11:16.670258 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="40166da1-a934-4d82-a348-6410685a91f6" containerName="extract-content" Oct 14 11:11:16 crc kubenswrapper[4698]: I1014 11:11:16.670264 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="40166da1-a934-4d82-a348-6410685a91f6" containerName="extract-content" Oct 14 11:11:16 crc kubenswrapper[4698]: E1014 11:11:16.670277 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="40166da1-a934-4d82-a348-6410685a91f6" containerName="extract-utilities" Oct 14 11:11:16 crc kubenswrapper[4698]: I1014 11:11:16.670283 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="40166da1-a934-4d82-a348-6410685a91f6" containerName="extract-utilities" Oct 14 11:11:16 crc kubenswrapper[4698]: I1014 11:11:16.670483 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="40166da1-a934-4d82-a348-6410685a91f6" containerName="registry-server" Oct 14 11:11:16 crc kubenswrapper[4698]: I1014 11:11:16.677979 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-g6pgh" Oct 14 11:11:16 crc kubenswrapper[4698]: I1014 11:11:16.683112 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-g6pgh"] Oct 14 11:11:16 crc kubenswrapper[4698]: I1014 11:11:16.739575 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qk92c\" (UniqueName: \"kubernetes.io/projected/aba2b381-c807-4433-ae86-3f67ec2ad329-kube-api-access-qk92c\") pod \"community-operators-g6pgh\" (UID: \"aba2b381-c807-4433-ae86-3f67ec2ad329\") " pod="openshift-marketplace/community-operators-g6pgh" Oct 14 11:11:16 crc kubenswrapper[4698]: I1014 11:11:16.739654 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aba2b381-c807-4433-ae86-3f67ec2ad329-utilities\") pod \"community-operators-g6pgh\" (UID: \"aba2b381-c807-4433-ae86-3f67ec2ad329\") " pod="openshift-marketplace/community-operators-g6pgh" Oct 14 11:11:16 crc kubenswrapper[4698]: I1014 11:11:16.739675 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aba2b381-c807-4433-ae86-3f67ec2ad329-catalog-content\") pod \"community-operators-g6pgh\" (UID: \"aba2b381-c807-4433-ae86-3f67ec2ad329\") " pod="openshift-marketplace/community-operators-g6pgh" Oct 14 11:11:16 crc kubenswrapper[4698]: I1014 11:11:16.841070 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qk92c\" (UniqueName: \"kubernetes.io/projected/aba2b381-c807-4433-ae86-3f67ec2ad329-kube-api-access-qk92c\") pod \"community-operators-g6pgh\" (UID: \"aba2b381-c807-4433-ae86-3f67ec2ad329\") " pod="openshift-marketplace/community-operators-g6pgh" Oct 14 11:11:16 crc kubenswrapper[4698]: I1014 11:11:16.841148 4698 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aba2b381-c807-4433-ae86-3f67ec2ad329-utilities\") pod \"community-operators-g6pgh\" (UID: \"aba2b381-c807-4433-ae86-3f67ec2ad329\") " pod="openshift-marketplace/community-operators-g6pgh" Oct 14 11:11:16 crc kubenswrapper[4698]: I1014 11:11:16.841212 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aba2b381-c807-4433-ae86-3f67ec2ad329-catalog-content\") pod \"community-operators-g6pgh\" (UID: \"aba2b381-c807-4433-ae86-3f67ec2ad329\") " pod="openshift-marketplace/community-operators-g6pgh" Oct 14 11:11:16 crc kubenswrapper[4698]: I1014 11:11:16.841808 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aba2b381-c807-4433-ae86-3f67ec2ad329-catalog-content\") pod \"community-operators-g6pgh\" (UID: \"aba2b381-c807-4433-ae86-3f67ec2ad329\") " pod="openshift-marketplace/community-operators-g6pgh" Oct 14 11:11:16 crc kubenswrapper[4698]: I1014 11:11:16.842049 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aba2b381-c807-4433-ae86-3f67ec2ad329-utilities\") pod \"community-operators-g6pgh\" (UID: \"aba2b381-c807-4433-ae86-3f67ec2ad329\") " pod="openshift-marketplace/community-operators-g6pgh" Oct 14 11:11:16 crc kubenswrapper[4698]: I1014 11:11:16.868617 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qk92c\" (UniqueName: \"kubernetes.io/projected/aba2b381-c807-4433-ae86-3f67ec2ad329-kube-api-access-qk92c\") pod \"community-operators-g6pgh\" (UID: \"aba2b381-c807-4433-ae86-3f67ec2ad329\") " pod="openshift-marketplace/community-operators-g6pgh" Oct 14 11:11:17 crc kubenswrapper[4698]: I1014 11:11:17.005295 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-g6pgh" Oct 14 11:11:17 crc kubenswrapper[4698]: I1014 11:11:17.669041 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-g6pgh"] Oct 14 11:11:18 crc kubenswrapper[4698]: I1014 11:11:18.092884 4698 generic.go:334] "Generic (PLEG): container finished" podID="aba2b381-c807-4433-ae86-3f67ec2ad329" containerID="cbea9e02736e37862ffaf44fd512814409a537d3d0329c7f46bba4a0938465b1" exitCode=0 Oct 14 11:11:18 crc kubenswrapper[4698]: I1014 11:11:18.092996 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-g6pgh" event={"ID":"aba2b381-c807-4433-ae86-3f67ec2ad329","Type":"ContainerDied","Data":"cbea9e02736e37862ffaf44fd512814409a537d3d0329c7f46bba4a0938465b1"} Oct 14 11:11:18 crc kubenswrapper[4698]: I1014 11:11:18.093206 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-g6pgh" event={"ID":"aba2b381-c807-4433-ae86-3f67ec2ad329","Type":"ContainerStarted","Data":"089dbf01b833e400f050a34edd32cb2a3e1a94a2d8a6b61b1e7a2b62de37861f"} Oct 14 11:11:18 crc kubenswrapper[4698]: I1014 11:11:18.096974 4698 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Oct 14 11:11:20 crc kubenswrapper[4698]: I1014 11:11:20.121291 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-g6pgh" event={"ID":"aba2b381-c807-4433-ae86-3f67ec2ad329","Type":"ContainerStarted","Data":"2f5f6de97c7b95f38bd1287e144d9e53d15b6fad2dc3c8ea1e59599abf17d40f"} Oct 14 11:11:23 crc kubenswrapper[4698]: I1014 11:11:23.152110 4698 generic.go:334] "Generic (PLEG): container finished" podID="aba2b381-c807-4433-ae86-3f67ec2ad329" containerID="2f5f6de97c7b95f38bd1287e144d9e53d15b6fad2dc3c8ea1e59599abf17d40f" exitCode=0 Oct 14 11:11:23 crc kubenswrapper[4698]: I1014 11:11:23.152314 4698 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-marketplace/community-operators-g6pgh" event={"ID":"aba2b381-c807-4433-ae86-3f67ec2ad329","Type":"ContainerDied","Data":"2f5f6de97c7b95f38bd1287e144d9e53d15b6fad2dc3c8ea1e59599abf17d40f"} Oct 14 11:11:25 crc kubenswrapper[4698]: I1014 11:11:25.180491 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-g6pgh" event={"ID":"aba2b381-c807-4433-ae86-3f67ec2ad329","Type":"ContainerStarted","Data":"c08cbaaec145036ca01833cc7eb2e7b345c33eca7c1b3b64fc7c0a7e57c00c3a"} Oct 14 11:11:25 crc kubenswrapper[4698]: I1014 11:11:25.216426 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-g6pgh" podStartSLOduration=3.309256597 podStartE2EDuration="9.216363518s" podCreationTimestamp="2025-10-14 11:11:16 +0000 UTC" firstStartedPulling="2025-10-14 11:11:18.09624408 +0000 UTC m=+4459.793543536" lastFinishedPulling="2025-10-14 11:11:24.003351031 +0000 UTC m=+4465.700650457" observedRunningTime="2025-10-14 11:11:25.203804735 +0000 UTC m=+4466.901104171" watchObservedRunningTime="2025-10-14 11:11:25.216363518 +0000 UTC m=+4466.913662944" Oct 14 11:11:27 crc kubenswrapper[4698]: I1014 11:11:27.005679 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-g6pgh" Oct 14 11:11:27 crc kubenswrapper[4698]: I1014 11:11:27.005978 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-g6pgh" Oct 14 11:11:27 crc kubenswrapper[4698]: I1014 11:11:27.051939 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-g6pgh" Oct 14 11:11:37 crc kubenswrapper[4698]: I1014 11:11:37.061090 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-g6pgh" Oct 14 11:11:37 crc kubenswrapper[4698]: I1014 11:11:37.111076 
4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-g6pgh"] Oct 14 11:11:37 crc kubenswrapper[4698]: I1014 11:11:37.283085 4698 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-g6pgh" podUID="aba2b381-c807-4433-ae86-3f67ec2ad329" containerName="registry-server" containerID="cri-o://c08cbaaec145036ca01833cc7eb2e7b345c33eca7c1b3b64fc7c0a7e57c00c3a" gracePeriod=2 Oct 14 11:11:37 crc kubenswrapper[4698]: I1014 11:11:37.821622 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-g6pgh" Oct 14 11:11:37 crc kubenswrapper[4698]: I1014 11:11:37.885665 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aba2b381-c807-4433-ae86-3f67ec2ad329-catalog-content\") pod \"aba2b381-c807-4433-ae86-3f67ec2ad329\" (UID: \"aba2b381-c807-4433-ae86-3f67ec2ad329\") " Oct 14 11:11:37 crc kubenswrapper[4698]: I1014 11:11:37.885851 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aba2b381-c807-4433-ae86-3f67ec2ad329-utilities\") pod \"aba2b381-c807-4433-ae86-3f67ec2ad329\" (UID: \"aba2b381-c807-4433-ae86-3f67ec2ad329\") " Oct 14 11:11:37 crc kubenswrapper[4698]: I1014 11:11:37.885936 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qk92c\" (UniqueName: \"kubernetes.io/projected/aba2b381-c807-4433-ae86-3f67ec2ad329-kube-api-access-qk92c\") pod \"aba2b381-c807-4433-ae86-3f67ec2ad329\" (UID: \"aba2b381-c807-4433-ae86-3f67ec2ad329\") " Oct 14 11:11:37 crc kubenswrapper[4698]: I1014 11:11:37.886625 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/aba2b381-c807-4433-ae86-3f67ec2ad329-utilities" (OuterVolumeSpecName: "utilities") pod 
"aba2b381-c807-4433-ae86-3f67ec2ad329" (UID: "aba2b381-c807-4433-ae86-3f67ec2ad329"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 11:11:37 crc kubenswrapper[4698]: I1014 11:11:37.892439 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aba2b381-c807-4433-ae86-3f67ec2ad329-kube-api-access-qk92c" (OuterVolumeSpecName: "kube-api-access-qk92c") pod "aba2b381-c807-4433-ae86-3f67ec2ad329" (UID: "aba2b381-c807-4433-ae86-3f67ec2ad329"). InnerVolumeSpecName "kube-api-access-qk92c". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 11:11:37 crc kubenswrapper[4698]: I1014 11:11:37.945631 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/aba2b381-c807-4433-ae86-3f67ec2ad329-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "aba2b381-c807-4433-ae86-3f67ec2ad329" (UID: "aba2b381-c807-4433-ae86-3f67ec2ad329"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 11:11:37 crc kubenswrapper[4698]: I1014 11:11:37.987635 4698 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aba2b381-c807-4433-ae86-3f67ec2ad329-utilities\") on node \"crc\" DevicePath \"\"" Oct 14 11:11:37 crc kubenswrapper[4698]: I1014 11:11:37.987681 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qk92c\" (UniqueName: \"kubernetes.io/projected/aba2b381-c807-4433-ae86-3f67ec2ad329-kube-api-access-qk92c\") on node \"crc\" DevicePath \"\"" Oct 14 11:11:37 crc kubenswrapper[4698]: I1014 11:11:37.987702 4698 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aba2b381-c807-4433-ae86-3f67ec2ad329-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 14 11:11:38 crc kubenswrapper[4698]: I1014 11:11:38.310158 4698 generic.go:334] "Generic (PLEG): container finished" podID="aba2b381-c807-4433-ae86-3f67ec2ad329" containerID="c08cbaaec145036ca01833cc7eb2e7b345c33eca7c1b3b64fc7c0a7e57c00c3a" exitCode=0 Oct 14 11:11:38 crc kubenswrapper[4698]: I1014 11:11:38.310203 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-g6pgh" event={"ID":"aba2b381-c807-4433-ae86-3f67ec2ad329","Type":"ContainerDied","Data":"c08cbaaec145036ca01833cc7eb2e7b345c33eca7c1b3b64fc7c0a7e57c00c3a"} Oct 14 11:11:38 crc kubenswrapper[4698]: I1014 11:11:38.310224 4698 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-g6pgh" Oct 14 11:11:38 crc kubenswrapper[4698]: I1014 11:11:38.310237 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-g6pgh" event={"ID":"aba2b381-c807-4433-ae86-3f67ec2ad329","Type":"ContainerDied","Data":"089dbf01b833e400f050a34edd32cb2a3e1a94a2d8a6b61b1e7a2b62de37861f"} Oct 14 11:11:38 crc kubenswrapper[4698]: I1014 11:11:38.310254 4698 scope.go:117] "RemoveContainer" containerID="c08cbaaec145036ca01833cc7eb2e7b345c33eca7c1b3b64fc7c0a7e57c00c3a" Oct 14 11:11:38 crc kubenswrapper[4698]: I1014 11:11:38.340535 4698 scope.go:117] "RemoveContainer" containerID="2f5f6de97c7b95f38bd1287e144d9e53d15b6fad2dc3c8ea1e59599abf17d40f" Oct 14 11:11:38 crc kubenswrapper[4698]: I1014 11:11:38.346128 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-g6pgh"] Oct 14 11:11:38 crc kubenswrapper[4698]: I1014 11:11:38.362037 4698 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-g6pgh"] Oct 14 11:11:38 crc kubenswrapper[4698]: I1014 11:11:38.375934 4698 scope.go:117] "RemoveContainer" containerID="cbea9e02736e37862ffaf44fd512814409a537d3d0329c7f46bba4a0938465b1" Oct 14 11:11:38 crc kubenswrapper[4698]: I1014 11:11:38.422351 4698 scope.go:117] "RemoveContainer" containerID="c08cbaaec145036ca01833cc7eb2e7b345c33eca7c1b3b64fc7c0a7e57c00c3a" Oct 14 11:11:38 crc kubenswrapper[4698]: E1014 11:11:38.422851 4698 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c08cbaaec145036ca01833cc7eb2e7b345c33eca7c1b3b64fc7c0a7e57c00c3a\": container with ID starting with c08cbaaec145036ca01833cc7eb2e7b345c33eca7c1b3b64fc7c0a7e57c00c3a not found: ID does not exist" containerID="c08cbaaec145036ca01833cc7eb2e7b345c33eca7c1b3b64fc7c0a7e57c00c3a" Oct 14 11:11:38 crc kubenswrapper[4698]: I1014 11:11:38.422927 4698 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c08cbaaec145036ca01833cc7eb2e7b345c33eca7c1b3b64fc7c0a7e57c00c3a"} err="failed to get container status \"c08cbaaec145036ca01833cc7eb2e7b345c33eca7c1b3b64fc7c0a7e57c00c3a\": rpc error: code = NotFound desc = could not find container \"c08cbaaec145036ca01833cc7eb2e7b345c33eca7c1b3b64fc7c0a7e57c00c3a\": container with ID starting with c08cbaaec145036ca01833cc7eb2e7b345c33eca7c1b3b64fc7c0a7e57c00c3a not found: ID does not exist" Oct 14 11:11:38 crc kubenswrapper[4698]: I1014 11:11:38.422954 4698 scope.go:117] "RemoveContainer" containerID="2f5f6de97c7b95f38bd1287e144d9e53d15b6fad2dc3c8ea1e59599abf17d40f" Oct 14 11:11:38 crc kubenswrapper[4698]: E1014 11:11:38.424325 4698 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2f5f6de97c7b95f38bd1287e144d9e53d15b6fad2dc3c8ea1e59599abf17d40f\": container with ID starting with 2f5f6de97c7b95f38bd1287e144d9e53d15b6fad2dc3c8ea1e59599abf17d40f not found: ID does not exist" containerID="2f5f6de97c7b95f38bd1287e144d9e53d15b6fad2dc3c8ea1e59599abf17d40f" Oct 14 11:11:38 crc kubenswrapper[4698]: I1014 11:11:38.424378 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2f5f6de97c7b95f38bd1287e144d9e53d15b6fad2dc3c8ea1e59599abf17d40f"} err="failed to get container status \"2f5f6de97c7b95f38bd1287e144d9e53d15b6fad2dc3c8ea1e59599abf17d40f\": rpc error: code = NotFound desc = could not find container \"2f5f6de97c7b95f38bd1287e144d9e53d15b6fad2dc3c8ea1e59599abf17d40f\": container with ID starting with 2f5f6de97c7b95f38bd1287e144d9e53d15b6fad2dc3c8ea1e59599abf17d40f not found: ID does not exist" Oct 14 11:11:38 crc kubenswrapper[4698]: I1014 11:11:38.424395 4698 scope.go:117] "RemoveContainer" containerID="cbea9e02736e37862ffaf44fd512814409a537d3d0329c7f46bba4a0938465b1" Oct 14 11:11:38 crc kubenswrapper[4698]: E1014 
11:11:38.424838 4698 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cbea9e02736e37862ffaf44fd512814409a537d3d0329c7f46bba4a0938465b1\": container with ID starting with cbea9e02736e37862ffaf44fd512814409a537d3d0329c7f46bba4a0938465b1 not found: ID does not exist" containerID="cbea9e02736e37862ffaf44fd512814409a537d3d0329c7f46bba4a0938465b1" Oct 14 11:11:38 crc kubenswrapper[4698]: I1014 11:11:38.424865 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cbea9e02736e37862ffaf44fd512814409a537d3d0329c7f46bba4a0938465b1"} err="failed to get container status \"cbea9e02736e37862ffaf44fd512814409a537d3d0329c7f46bba4a0938465b1\": rpc error: code = NotFound desc = could not find container \"cbea9e02736e37862ffaf44fd512814409a537d3d0329c7f46bba4a0938465b1\": container with ID starting with cbea9e02736e37862ffaf44fd512814409a537d3d0329c7f46bba4a0938465b1 not found: ID does not exist" Oct 14 11:11:39 crc kubenswrapper[4698]: I1014 11:11:39.035698 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="aba2b381-c807-4433-ae86-3f67ec2ad329" path="/var/lib/kubelet/pods/aba2b381-c807-4433-ae86-3f67ec2ad329/volumes" Oct 14 11:11:53 crc kubenswrapper[4698]: I1014 11:11:53.908457 4698 patch_prober.go:28] interesting pod/machine-config-daemon-lp4sk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 14 11:11:53 crc kubenswrapper[4698]: I1014 11:11:53.909191 4698 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-lp4sk" podUID="c359a8fc-1e2f-49af-8da2-719d52bd969a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection 
refused" Oct 14 11:12:23 crc kubenswrapper[4698]: I1014 11:12:23.909567 4698 patch_prober.go:28] interesting pod/machine-config-daemon-lp4sk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 14 11:12:23 crc kubenswrapper[4698]: I1014 11:12:23.910127 4698 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-lp4sk" podUID="c359a8fc-1e2f-49af-8da2-719d52bd969a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 14 11:12:53 crc kubenswrapper[4698]: I1014 11:12:53.908528 4698 patch_prober.go:28] interesting pod/machine-config-daemon-lp4sk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 14 11:12:53 crc kubenswrapper[4698]: I1014 11:12:53.909171 4698 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-lp4sk" podUID="c359a8fc-1e2f-49af-8da2-719d52bd969a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 14 11:12:53 crc kubenswrapper[4698]: I1014 11:12:53.909236 4698 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-lp4sk" Oct 14 11:12:53 crc kubenswrapper[4698]: I1014 11:12:53.910430 4698 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"6bf915c0eb83523a7704ee4a405fd8ffeef326ab1d420e2bdc7a7ecf61f24f91"} 
pod="openshift-machine-config-operator/machine-config-daemon-lp4sk" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Oct 14 11:12:53 crc kubenswrapper[4698]: I1014 11:12:53.910525 4698 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-lp4sk" podUID="c359a8fc-1e2f-49af-8da2-719d52bd969a" containerName="machine-config-daemon" containerID="cri-o://6bf915c0eb83523a7704ee4a405fd8ffeef326ab1d420e2bdc7a7ecf61f24f91" gracePeriod=600 Oct 14 11:12:54 crc kubenswrapper[4698]: E1014 11:12:54.032495 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lp4sk_openshift-machine-config-operator(c359a8fc-1e2f-49af-8da2-719d52bd969a)\"" pod="openshift-machine-config-operator/machine-config-daemon-lp4sk" podUID="c359a8fc-1e2f-49af-8da2-719d52bd969a" Oct 14 11:12:54 crc kubenswrapper[4698]: I1014 11:12:54.103450 4698 generic.go:334] "Generic (PLEG): container finished" podID="e5a71af4-fdf3-4a49-9ada-2d4836409022" containerID="4e5bec56a9447a709e1ddb26fd106cba4e0c71a850fff4cee80848beff0c4b43" exitCode=0 Oct 14 11:12:54 crc kubenswrapper[4698]: I1014 11:12:54.103539 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"e5a71af4-fdf3-4a49-9ada-2d4836409022","Type":"ContainerDied","Data":"4e5bec56a9447a709e1ddb26fd106cba4e0c71a850fff4cee80848beff0c4b43"} Oct 14 11:12:54 crc kubenswrapper[4698]: I1014 11:12:54.107669 4698 generic.go:334] "Generic (PLEG): container finished" podID="c359a8fc-1e2f-49af-8da2-719d52bd969a" containerID="6bf915c0eb83523a7704ee4a405fd8ffeef326ab1d420e2bdc7a7ecf61f24f91" exitCode=0 Oct 14 11:12:54 crc kubenswrapper[4698]: I1014 11:12:54.107710 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-lp4sk" event={"ID":"c359a8fc-1e2f-49af-8da2-719d52bd969a","Type":"ContainerDied","Data":"6bf915c0eb83523a7704ee4a405fd8ffeef326ab1d420e2bdc7a7ecf61f24f91"} Oct 14 11:12:54 crc kubenswrapper[4698]: I1014 11:12:54.107780 4698 scope.go:117] "RemoveContainer" containerID="72d65949d3211e933cf524150a27b6612dd1670f9583d0f73908a0123e29a04d" Oct 14 11:12:54 crc kubenswrapper[4698]: I1014 11:12:54.108543 4698 scope.go:117] "RemoveContainer" containerID="6bf915c0eb83523a7704ee4a405fd8ffeef326ab1d420e2bdc7a7ecf61f24f91" Oct 14 11:12:54 crc kubenswrapper[4698]: E1014 11:12:54.108887 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lp4sk_openshift-machine-config-operator(c359a8fc-1e2f-49af-8da2-719d52bd969a)\"" pod="openshift-machine-config-operator/machine-config-daemon-lp4sk" podUID="c359a8fc-1e2f-49af-8da2-719d52bd969a" Oct 14 11:12:55 crc kubenswrapper[4698]: I1014 11:12:55.559033 4698 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/tempest-tests-tempest" Oct 14 11:12:55 crc kubenswrapper[4698]: I1014 11:12:55.658166 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/e5a71af4-fdf3-4a49-9ada-2d4836409022-test-operator-ephemeral-temporary\") pod \"e5a71af4-fdf3-4a49-9ada-2d4836409022\" (UID: \"e5a71af4-fdf3-4a49-9ada-2d4836409022\") " Oct 14 11:12:55 crc kubenswrapper[4698]: I1014 11:12:55.658297 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-logs\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"e5a71af4-fdf3-4a49-9ada-2d4836409022\" (UID: \"e5a71af4-fdf3-4a49-9ada-2d4836409022\") " Oct 14 11:12:55 crc kubenswrapper[4698]: I1014 11:12:55.658346 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8ml2f\" (UniqueName: \"kubernetes.io/projected/e5a71af4-fdf3-4a49-9ada-2d4836409022-kube-api-access-8ml2f\") pod \"e5a71af4-fdf3-4a49-9ada-2d4836409022\" (UID: \"e5a71af4-fdf3-4a49-9ada-2d4836409022\") " Oct 14 11:12:55 crc kubenswrapper[4698]: I1014 11:12:55.658430 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/e5a71af4-fdf3-4a49-9ada-2d4836409022-openstack-config-secret\") pod \"e5a71af4-fdf3-4a49-9ada-2d4836409022\" (UID: \"e5a71af4-fdf3-4a49-9ada-2d4836409022\") " Oct 14 11:12:55 crc kubenswrapper[4698]: I1014 11:12:55.658474 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/e5a71af4-fdf3-4a49-9ada-2d4836409022-config-data\") pod \"e5a71af4-fdf3-4a49-9ada-2d4836409022\" (UID: \"e5a71af4-fdf3-4a49-9ada-2d4836409022\") " Oct 14 11:12:55 crc kubenswrapper[4698]: I1014 11:12:55.658503 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/e5a71af4-fdf3-4a49-9ada-2d4836409022-test-operator-ephemeral-workdir\") pod \"e5a71af4-fdf3-4a49-9ada-2d4836409022\" (UID: \"e5a71af4-fdf3-4a49-9ada-2d4836409022\") " Oct 14 11:12:55 crc kubenswrapper[4698]: I1014 11:12:55.658991 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e5a71af4-fdf3-4a49-9ada-2d4836409022-test-operator-ephemeral-temporary" (OuterVolumeSpecName: "test-operator-ephemeral-temporary") pod "e5a71af4-fdf3-4a49-9ada-2d4836409022" (UID: "e5a71af4-fdf3-4a49-9ada-2d4836409022"). InnerVolumeSpecName "test-operator-ephemeral-temporary". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 11:12:55 crc kubenswrapper[4698]: I1014 11:12:55.664719 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e5a71af4-fdf3-4a49-9ada-2d4836409022-config-data" (OuterVolumeSpecName: "config-data") pod "e5a71af4-fdf3-4a49-9ada-2d4836409022" (UID: "e5a71af4-fdf3-4a49-9ada-2d4836409022"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 11:12:55 crc kubenswrapper[4698]: I1014 11:12:55.668113 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e5a71af4-fdf3-4a49-9ada-2d4836409022-test-operator-ephemeral-workdir" (OuterVolumeSpecName: "test-operator-ephemeral-workdir") pod "e5a71af4-fdf3-4a49-9ada-2d4836409022" (UID: "e5a71af4-fdf3-4a49-9ada-2d4836409022"). InnerVolumeSpecName "test-operator-ephemeral-workdir". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 11:12:55 crc kubenswrapper[4698]: I1014 11:12:55.668197 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/e5a71af4-fdf3-4a49-9ada-2d4836409022-ssh-key\") pod \"e5a71af4-fdf3-4a49-9ada-2d4836409022\" (UID: \"e5a71af4-fdf3-4a49-9ada-2d4836409022\") " Oct 14 11:12:55 crc kubenswrapper[4698]: I1014 11:12:55.668327 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/e5a71af4-fdf3-4a49-9ada-2d4836409022-ca-certs\") pod \"e5a71af4-fdf3-4a49-9ada-2d4836409022\" (UID: \"e5a71af4-fdf3-4a49-9ada-2d4836409022\") " Oct 14 11:12:55 crc kubenswrapper[4698]: I1014 11:12:55.669043 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/e5a71af4-fdf3-4a49-9ada-2d4836409022-openstack-config\") pod \"e5a71af4-fdf3-4a49-9ada-2d4836409022\" (UID: \"e5a71af4-fdf3-4a49-9ada-2d4836409022\") " Oct 14 11:12:55 crc kubenswrapper[4698]: I1014 11:12:55.670247 4698 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/e5a71af4-fdf3-4a49-9ada-2d4836409022-test-operator-ephemeral-temporary\") on node \"crc\" DevicePath \"\"" Oct 14 11:12:55 crc kubenswrapper[4698]: I1014 11:12:55.670275 4698 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/e5a71af4-fdf3-4a49-9ada-2d4836409022-config-data\") on node \"crc\" DevicePath \"\"" Oct 14 11:12:55 crc kubenswrapper[4698]: I1014 11:12:55.670288 4698 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/e5a71af4-fdf3-4a49-9ada-2d4836409022-test-operator-ephemeral-workdir\") on node \"crc\" DevicePath \"\"" Oct 14 11:12:55 crc 
kubenswrapper[4698]: I1014 11:12:55.671981 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage03-crc" (OuterVolumeSpecName: "test-operator-logs") pod "e5a71af4-fdf3-4a49-9ada-2d4836409022" (UID: "e5a71af4-fdf3-4a49-9ada-2d4836409022"). InnerVolumeSpecName "local-storage03-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Oct 14 11:12:55 crc kubenswrapper[4698]: I1014 11:12:55.672504 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e5a71af4-fdf3-4a49-9ada-2d4836409022-kube-api-access-8ml2f" (OuterVolumeSpecName: "kube-api-access-8ml2f") pod "e5a71af4-fdf3-4a49-9ada-2d4836409022" (UID: "e5a71af4-fdf3-4a49-9ada-2d4836409022"). InnerVolumeSpecName "kube-api-access-8ml2f". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 11:12:55 crc kubenswrapper[4698]: I1014 11:12:55.692595 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e5a71af4-fdf3-4a49-9ada-2d4836409022-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "e5a71af4-fdf3-4a49-9ada-2d4836409022" (UID: "e5a71af4-fdf3-4a49-9ada-2d4836409022"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 11:12:55 crc kubenswrapper[4698]: I1014 11:12:55.705336 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e5a71af4-fdf3-4a49-9ada-2d4836409022-ca-certs" (OuterVolumeSpecName: "ca-certs") pod "e5a71af4-fdf3-4a49-9ada-2d4836409022" (UID: "e5a71af4-fdf3-4a49-9ada-2d4836409022"). InnerVolumeSpecName "ca-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 11:12:55 crc kubenswrapper[4698]: I1014 11:12:55.730009 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e5a71af4-fdf3-4a49-9ada-2d4836409022-openstack-config" (OuterVolumeSpecName: "openstack-config") pod "e5a71af4-fdf3-4a49-9ada-2d4836409022" (UID: "e5a71af4-fdf3-4a49-9ada-2d4836409022"). InnerVolumeSpecName "openstack-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 11:12:55 crc kubenswrapper[4698]: I1014 11:12:55.732521 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e5a71af4-fdf3-4a49-9ada-2d4836409022-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "e5a71af4-fdf3-4a49-9ada-2d4836409022" (UID: "e5a71af4-fdf3-4a49-9ada-2d4836409022"). InnerVolumeSpecName "openstack-config-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 11:12:55 crc kubenswrapper[4698]: I1014 11:12:55.772774 4698 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/e5a71af4-fdf3-4a49-9ada-2d4836409022-openstack-config-secret\") on node \"crc\" DevicePath \"\"" Oct 14 11:12:55 crc kubenswrapper[4698]: I1014 11:12:55.772817 4698 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/e5a71af4-fdf3-4a49-9ada-2d4836409022-ssh-key\") on node \"crc\" DevicePath \"\"" Oct 14 11:12:55 crc kubenswrapper[4698]: I1014 11:12:55.772827 4698 reconciler_common.go:293] "Volume detached for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/e5a71af4-fdf3-4a49-9ada-2d4836409022-ca-certs\") on node \"crc\" DevicePath \"\"" Oct 14 11:12:55 crc kubenswrapper[4698]: I1014 11:12:55.772837 4698 reconciler_common.go:293] "Volume detached for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/e5a71af4-fdf3-4a49-9ada-2d4836409022-openstack-config\") on node 
\"crc\" DevicePath \"\"" Oct 14 11:12:55 crc kubenswrapper[4698]: I1014 11:12:55.772879 4698 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") on node \"crc\" " Oct 14 11:12:55 crc kubenswrapper[4698]: I1014 11:12:55.772890 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8ml2f\" (UniqueName: \"kubernetes.io/projected/e5a71af4-fdf3-4a49-9ada-2d4836409022-kube-api-access-8ml2f\") on node \"crc\" DevicePath \"\"" Oct 14 11:12:55 crc kubenswrapper[4698]: I1014 11:12:55.794887 4698 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage03-crc" (UniqueName: "kubernetes.io/local-volume/local-storage03-crc") on node "crc" Oct 14 11:12:55 crc kubenswrapper[4698]: I1014 11:12:55.875533 4698 reconciler_common.go:293] "Volume detached for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") on node \"crc\" DevicePath \"\"" Oct 14 11:12:56 crc kubenswrapper[4698]: I1014 11:12:56.134943 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"e5a71af4-fdf3-4a49-9ada-2d4836409022","Type":"ContainerDied","Data":"1a375bee406cfd253649624b52b7950e7a18097f02eabc645e7c689870e701fc"} Oct 14 11:12:56 crc kubenswrapper[4698]: I1014 11:12:56.135279 4698 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1a375bee406cfd253649624b52b7950e7a18097f02eabc645e7c689870e701fc" Oct 14 11:12:56 crc kubenswrapper[4698]: I1014 11:12:56.135004 4698 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/tempest-tests-tempest" Oct 14 11:13:03 crc kubenswrapper[4698]: I1014 11:13:03.481944 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Oct 14 11:13:03 crc kubenswrapper[4698]: E1014 11:13:03.482916 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aba2b381-c807-4433-ae86-3f67ec2ad329" containerName="registry-server" Oct 14 11:13:03 crc kubenswrapper[4698]: I1014 11:13:03.482931 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="aba2b381-c807-4433-ae86-3f67ec2ad329" containerName="registry-server" Oct 14 11:13:03 crc kubenswrapper[4698]: E1014 11:13:03.482949 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aba2b381-c807-4433-ae86-3f67ec2ad329" containerName="extract-content" Oct 14 11:13:03 crc kubenswrapper[4698]: I1014 11:13:03.482955 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="aba2b381-c807-4433-ae86-3f67ec2ad329" containerName="extract-content" Oct 14 11:13:03 crc kubenswrapper[4698]: E1014 11:13:03.482961 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aba2b381-c807-4433-ae86-3f67ec2ad329" containerName="extract-utilities" Oct 14 11:13:03 crc kubenswrapper[4698]: I1014 11:13:03.482967 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="aba2b381-c807-4433-ae86-3f67ec2ad329" containerName="extract-utilities" Oct 14 11:13:03 crc kubenswrapper[4698]: E1014 11:13:03.482982 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e5a71af4-fdf3-4a49-9ada-2d4836409022" containerName="tempest-tests-tempest-tests-runner" Oct 14 11:13:03 crc kubenswrapper[4698]: I1014 11:13:03.482990 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="e5a71af4-fdf3-4a49-9ada-2d4836409022" containerName="tempest-tests-tempest-tests-runner" Oct 14 11:13:03 crc kubenswrapper[4698]: I1014 11:13:03.483182 4698 memory_manager.go:354] "RemoveStaleState removing 
state" podUID="e5a71af4-fdf3-4a49-9ada-2d4836409022" containerName="tempest-tests-tempest-tests-runner" Oct 14 11:13:03 crc kubenswrapper[4698]: I1014 11:13:03.483212 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="aba2b381-c807-4433-ae86-3f67ec2ad329" containerName="registry-server" Oct 14 11:13:03 crc kubenswrapper[4698]: I1014 11:13:03.483863 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Oct 14 11:13:03 crc kubenswrapper[4698]: I1014 11:13:03.493986 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Oct 14 11:13:03 crc kubenswrapper[4698]: I1014 11:13:03.648886 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h6gjq\" (UniqueName: \"kubernetes.io/projected/89057adf-a70c-48dc-a8fc-65077d5c29d1-kube-api-access-h6gjq\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"89057adf-a70c-48dc-a8fc-65077d5c29d1\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Oct 14 11:13:03 crc kubenswrapper[4698]: I1014 11:13:03.649032 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"89057adf-a70c-48dc-a8fc-65077d5c29d1\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Oct 14 11:13:03 crc kubenswrapper[4698]: I1014 11:13:03.750808 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"89057adf-a70c-48dc-a8fc-65077d5c29d1\") " 
pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Oct 14 11:13:03 crc kubenswrapper[4698]: I1014 11:13:03.751016 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h6gjq\" (UniqueName: \"kubernetes.io/projected/89057adf-a70c-48dc-a8fc-65077d5c29d1-kube-api-access-h6gjq\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"89057adf-a70c-48dc-a8fc-65077d5c29d1\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Oct 14 11:13:03 crc kubenswrapper[4698]: I1014 11:13:03.751370 4698 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"89057adf-a70c-48dc-a8fc-65077d5c29d1\") device mount path \"/mnt/openstack/pv03\"" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Oct 14 11:13:03 crc kubenswrapper[4698]: I1014 11:13:03.775557 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h6gjq\" (UniqueName: \"kubernetes.io/projected/89057adf-a70c-48dc-a8fc-65077d5c29d1-kube-api-access-h6gjq\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"89057adf-a70c-48dc-a8fc-65077d5c29d1\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Oct 14 11:13:03 crc kubenswrapper[4698]: I1014 11:13:03.781022 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"89057adf-a70c-48dc-a8fc-65077d5c29d1\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Oct 14 11:13:03 crc kubenswrapper[4698]: I1014 11:13:03.803557 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Oct 14 11:13:04 crc kubenswrapper[4698]: I1014 11:13:04.367722 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Oct 14 11:13:05 crc kubenswrapper[4698]: I1014 11:13:05.232124 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" event={"ID":"89057adf-a70c-48dc-a8fc-65077d5c29d1","Type":"ContainerStarted","Data":"baaf129f7e5610348c13b98cba472e0efa08581489adb59b1584a41f17a9e949"} Oct 14 11:13:06 crc kubenswrapper[4698]: I1014 11:13:06.017814 4698 scope.go:117] "RemoveContainer" containerID="6bf915c0eb83523a7704ee4a405fd8ffeef326ab1d420e2bdc7a7ecf61f24f91" Oct 14 11:13:06 crc kubenswrapper[4698]: E1014 11:13:06.018203 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lp4sk_openshift-machine-config-operator(c359a8fc-1e2f-49af-8da2-719d52bd969a)\"" pod="openshift-machine-config-operator/machine-config-daemon-lp4sk" podUID="c359a8fc-1e2f-49af-8da2-719d52bd969a" Oct 14 11:13:06 crc kubenswrapper[4698]: I1014 11:13:06.243946 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" event={"ID":"89057adf-a70c-48dc-a8fc-65077d5c29d1","Type":"ContainerStarted","Data":"1294ffa570a3a6a8608bfefb45ca713addb315fb6618d018c121fda88e8d89d6"} Oct 14 11:13:06 crc kubenswrapper[4698]: I1014 11:13:06.260581 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" podStartSLOduration=2.357488767 podStartE2EDuration="3.26055552s" podCreationTimestamp="2025-10-14 11:13:03 +0000 UTC" firstStartedPulling="2025-10-14 
11:13:04.378400992 +0000 UTC m=+4566.075700408" lastFinishedPulling="2025-10-14 11:13:05.281467735 +0000 UTC m=+4566.978767161" observedRunningTime="2025-10-14 11:13:06.258193351 +0000 UTC m=+4567.955492767" watchObservedRunningTime="2025-10-14 11:13:06.26055552 +0000 UTC m=+4567.957854966" Oct 14 11:13:21 crc kubenswrapper[4698]: I1014 11:13:21.017405 4698 scope.go:117] "RemoveContainer" containerID="6bf915c0eb83523a7704ee4a405fd8ffeef326ab1d420e2bdc7a7ecf61f24f91" Oct 14 11:13:21 crc kubenswrapper[4698]: E1014 11:13:21.018273 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lp4sk_openshift-machine-config-operator(c359a8fc-1e2f-49af-8da2-719d52bd969a)\"" pod="openshift-machine-config-operator/machine-config-daemon-lp4sk" podUID="c359a8fc-1e2f-49af-8da2-719d52bd969a" Oct 14 11:13:24 crc kubenswrapper[4698]: I1014 11:13:24.295225 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-bd5xv/must-gather-g4wn2"] Oct 14 11:13:24 crc kubenswrapper[4698]: I1014 11:13:24.298447 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-bd5xv/must-gather-g4wn2" Oct 14 11:13:24 crc kubenswrapper[4698]: I1014 11:13:24.304135 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-bd5xv"/"kube-root-ca.crt" Oct 14 11:13:24 crc kubenswrapper[4698]: I1014 11:13:24.304208 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-bd5xv"/"default-dockercfg-9slxm" Oct 14 11:13:24 crc kubenswrapper[4698]: I1014 11:13:24.304303 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-bd5xv"/"openshift-service-ca.crt" Oct 14 11:13:24 crc kubenswrapper[4698]: I1014 11:13:24.321522 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-bd5xv/must-gather-g4wn2"] Oct 14 11:13:24 crc kubenswrapper[4698]: I1014 11:13:24.472141 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5q4cm\" (UniqueName: \"kubernetes.io/projected/bdc46bca-9ee2-4b01-8713-11880ff4360a-kube-api-access-5q4cm\") pod \"must-gather-g4wn2\" (UID: \"bdc46bca-9ee2-4b01-8713-11880ff4360a\") " pod="openshift-must-gather-bd5xv/must-gather-g4wn2" Oct 14 11:13:24 crc kubenswrapper[4698]: I1014 11:13:24.472860 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/bdc46bca-9ee2-4b01-8713-11880ff4360a-must-gather-output\") pod \"must-gather-g4wn2\" (UID: \"bdc46bca-9ee2-4b01-8713-11880ff4360a\") " pod="openshift-must-gather-bd5xv/must-gather-g4wn2" Oct 14 11:13:24 crc kubenswrapper[4698]: I1014 11:13:24.575457 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/bdc46bca-9ee2-4b01-8713-11880ff4360a-must-gather-output\") pod \"must-gather-g4wn2\" (UID: \"bdc46bca-9ee2-4b01-8713-11880ff4360a\") " 
pod="openshift-must-gather-bd5xv/must-gather-g4wn2" Oct 14 11:13:24 crc kubenswrapper[4698]: I1014 11:13:24.575571 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5q4cm\" (UniqueName: \"kubernetes.io/projected/bdc46bca-9ee2-4b01-8713-11880ff4360a-kube-api-access-5q4cm\") pod \"must-gather-g4wn2\" (UID: \"bdc46bca-9ee2-4b01-8713-11880ff4360a\") " pod="openshift-must-gather-bd5xv/must-gather-g4wn2" Oct 14 11:13:24 crc kubenswrapper[4698]: I1014 11:13:24.576043 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/bdc46bca-9ee2-4b01-8713-11880ff4360a-must-gather-output\") pod \"must-gather-g4wn2\" (UID: \"bdc46bca-9ee2-4b01-8713-11880ff4360a\") " pod="openshift-must-gather-bd5xv/must-gather-g4wn2" Oct 14 11:13:24 crc kubenswrapper[4698]: I1014 11:13:24.611915 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5q4cm\" (UniqueName: \"kubernetes.io/projected/bdc46bca-9ee2-4b01-8713-11880ff4360a-kube-api-access-5q4cm\") pod \"must-gather-g4wn2\" (UID: \"bdc46bca-9ee2-4b01-8713-11880ff4360a\") " pod="openshift-must-gather-bd5xv/must-gather-g4wn2" Oct 14 11:13:24 crc kubenswrapper[4698]: I1014 11:13:24.636922 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-bd5xv/must-gather-g4wn2" Oct 14 11:13:25 crc kubenswrapper[4698]: I1014 11:13:25.132888 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-bd5xv/must-gather-g4wn2"] Oct 14 11:13:25 crc kubenswrapper[4698]: I1014 11:13:25.487013 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-bd5xv/must-gather-g4wn2" event={"ID":"bdc46bca-9ee2-4b01-8713-11880ff4360a","Type":"ContainerStarted","Data":"edee55c0bd3f6b93cbac590d2af96ea0afaf99ee170a26f1a19d9a391ef6fe69"} Oct 14 11:13:31 crc kubenswrapper[4698]: I1014 11:13:31.544476 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-bd5xv/must-gather-g4wn2" event={"ID":"bdc46bca-9ee2-4b01-8713-11880ff4360a","Type":"ContainerStarted","Data":"3749c4b445dcb81b5381429a225023933d3e42ce65c84cebeb2aef94061438f4"} Oct 14 11:13:32 crc kubenswrapper[4698]: I1014 11:13:32.017718 4698 scope.go:117] "RemoveContainer" containerID="6bf915c0eb83523a7704ee4a405fd8ffeef326ab1d420e2bdc7a7ecf61f24f91" Oct 14 11:13:32 crc kubenswrapper[4698]: E1014 11:13:32.018247 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lp4sk_openshift-machine-config-operator(c359a8fc-1e2f-49af-8da2-719d52bd969a)\"" pod="openshift-machine-config-operator/machine-config-daemon-lp4sk" podUID="c359a8fc-1e2f-49af-8da2-719d52bd969a" Oct 14 11:13:32 crc kubenswrapper[4698]: I1014 11:13:32.562337 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-bd5xv/must-gather-g4wn2" event={"ID":"bdc46bca-9ee2-4b01-8713-11880ff4360a","Type":"ContainerStarted","Data":"2d227799eafa178bcb879fa6e6c766234aab6508dbee488db4a4c17cccb84e48"} Oct 14 11:13:32 crc kubenswrapper[4698]: I1014 11:13:32.592112 4698 pod_startup_latency_tracker.go:104] "Observed 
pod startup duration" pod="openshift-must-gather-bd5xv/must-gather-g4wn2" podStartSLOduration=2.924013183 podStartE2EDuration="8.592066747s" podCreationTimestamp="2025-10-14 11:13:24 +0000 UTC" firstStartedPulling="2025-10-14 11:13:25.136264823 +0000 UTC m=+4586.833564239" lastFinishedPulling="2025-10-14 11:13:30.804318387 +0000 UTC m=+4592.501617803" observedRunningTime="2025-10-14 11:13:32.582704015 +0000 UTC m=+4594.280003441" watchObservedRunningTime="2025-10-14 11:13:32.592066747 +0000 UTC m=+4594.289366173" Oct 14 11:13:36 crc kubenswrapper[4698]: E1014 11:13:36.608505 4698 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.102.83.188:54376->38.102.83.188:44569: write tcp 38.102.83.188:54376->38.102.83.188:44569: write: broken pipe Oct 14 11:13:37 crc kubenswrapper[4698]: I1014 11:13:37.482269 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-bd5xv/crc-debug-frcg6"] Oct 14 11:13:37 crc kubenswrapper[4698]: I1014 11:13:37.484183 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-bd5xv/crc-debug-frcg6" Oct 14 11:13:37 crc kubenswrapper[4698]: I1014 11:13:37.567259 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/fa9f73f1-2316-49ec-bf34-aafced101db2-host\") pod \"crc-debug-frcg6\" (UID: \"fa9f73f1-2316-49ec-bf34-aafced101db2\") " pod="openshift-must-gather-bd5xv/crc-debug-frcg6" Oct 14 11:13:37 crc kubenswrapper[4698]: I1014 11:13:37.567336 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2v54n\" (UniqueName: \"kubernetes.io/projected/fa9f73f1-2316-49ec-bf34-aafced101db2-kube-api-access-2v54n\") pod \"crc-debug-frcg6\" (UID: \"fa9f73f1-2316-49ec-bf34-aafced101db2\") " pod="openshift-must-gather-bd5xv/crc-debug-frcg6" Oct 14 11:13:37 crc kubenswrapper[4698]: I1014 11:13:37.669755 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/fa9f73f1-2316-49ec-bf34-aafced101db2-host\") pod \"crc-debug-frcg6\" (UID: \"fa9f73f1-2316-49ec-bf34-aafced101db2\") " pod="openshift-must-gather-bd5xv/crc-debug-frcg6" Oct 14 11:13:37 crc kubenswrapper[4698]: I1014 11:13:37.669845 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2v54n\" (UniqueName: \"kubernetes.io/projected/fa9f73f1-2316-49ec-bf34-aafced101db2-kube-api-access-2v54n\") pod \"crc-debug-frcg6\" (UID: \"fa9f73f1-2316-49ec-bf34-aafced101db2\") " pod="openshift-must-gather-bd5xv/crc-debug-frcg6" Oct 14 11:13:37 crc kubenswrapper[4698]: I1014 11:13:37.669947 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/fa9f73f1-2316-49ec-bf34-aafced101db2-host\") pod \"crc-debug-frcg6\" (UID: \"fa9f73f1-2316-49ec-bf34-aafced101db2\") " pod="openshift-must-gather-bd5xv/crc-debug-frcg6" Oct 14 11:13:37 crc 
kubenswrapper[4698]: I1014 11:13:37.697164 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2v54n\" (UniqueName: \"kubernetes.io/projected/fa9f73f1-2316-49ec-bf34-aafced101db2-kube-api-access-2v54n\") pod \"crc-debug-frcg6\" (UID: \"fa9f73f1-2316-49ec-bf34-aafced101db2\") " pod="openshift-must-gather-bd5xv/crc-debug-frcg6" Oct 14 11:13:37 crc kubenswrapper[4698]: I1014 11:13:37.813061 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-bd5xv/crc-debug-frcg6" Oct 14 11:13:38 crc kubenswrapper[4698]: I1014 11:13:38.622754 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-bd5xv/crc-debug-frcg6" event={"ID":"fa9f73f1-2316-49ec-bf34-aafced101db2","Type":"ContainerStarted","Data":"41363697cc0f8bebadcea6818d5a248ac64418af4f104f915d41e05ccf644b2b"} Oct 14 11:13:44 crc kubenswrapper[4698]: I1014 11:13:44.018316 4698 scope.go:117] "RemoveContainer" containerID="6bf915c0eb83523a7704ee4a405fd8ffeef326ab1d420e2bdc7a7ecf61f24f91" Oct 14 11:13:44 crc kubenswrapper[4698]: E1014 11:13:44.019093 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lp4sk_openshift-machine-config-operator(c359a8fc-1e2f-49af-8da2-719d52bd969a)\"" pod="openshift-machine-config-operator/machine-config-daemon-lp4sk" podUID="c359a8fc-1e2f-49af-8da2-719d52bd969a" Oct 14 11:13:49 crc kubenswrapper[4698]: I1014 11:13:49.730496 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-bd5xv/crc-debug-frcg6" event={"ID":"fa9f73f1-2316-49ec-bf34-aafced101db2","Type":"ContainerStarted","Data":"5a07c5fc7b8b686870392ba81a17c28ac2cbdf7158d1be798c1cdfc181b7330f"} Oct 14 11:13:49 crc kubenswrapper[4698]: I1014 11:13:49.755024 4698 pod_startup_latency_tracker.go:104] "Observed pod startup 
duration" pod="openshift-must-gather-bd5xv/crc-debug-frcg6" podStartSLOduration=2.165298645 podStartE2EDuration="12.754997189s" podCreationTimestamp="2025-10-14 11:13:37 +0000 UTC" firstStartedPulling="2025-10-14 11:13:37.871856091 +0000 UTC m=+4599.569155497" lastFinishedPulling="2025-10-14 11:13:48.461554625 +0000 UTC m=+4610.158854041" observedRunningTime="2025-10-14 11:13:49.744172144 +0000 UTC m=+4611.441471630" watchObservedRunningTime="2025-10-14 11:13:49.754997189 +0000 UTC m=+4611.452296605" Oct 14 11:13:56 crc kubenswrapper[4698]: I1014 11:13:56.017983 4698 scope.go:117] "RemoveContainer" containerID="6bf915c0eb83523a7704ee4a405fd8ffeef326ab1d420e2bdc7a7ecf61f24f91" Oct 14 11:13:56 crc kubenswrapper[4698]: E1014 11:13:56.018984 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lp4sk_openshift-machine-config-operator(c359a8fc-1e2f-49af-8da2-719d52bd969a)\"" pod="openshift-machine-config-operator/machine-config-daemon-lp4sk" podUID="c359a8fc-1e2f-49af-8da2-719d52bd969a" Oct 14 11:14:11 crc kubenswrapper[4698]: I1014 11:14:11.018192 4698 scope.go:117] "RemoveContainer" containerID="6bf915c0eb83523a7704ee4a405fd8ffeef326ab1d420e2bdc7a7ecf61f24f91" Oct 14 11:14:11 crc kubenswrapper[4698]: E1014 11:14:11.019248 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lp4sk_openshift-machine-config-operator(c359a8fc-1e2f-49af-8da2-719d52bd969a)\"" pod="openshift-machine-config-operator/machine-config-daemon-lp4sk" podUID="c359a8fc-1e2f-49af-8da2-719d52bd969a" Oct 14 11:14:25 crc kubenswrapper[4698]: I1014 11:14:25.016855 4698 scope.go:117] "RemoveContainer" 
containerID="6bf915c0eb83523a7704ee4a405fd8ffeef326ab1d420e2bdc7a7ecf61f24f91" Oct 14 11:14:25 crc kubenswrapper[4698]: E1014 11:14:25.017550 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lp4sk_openshift-machine-config-operator(c359a8fc-1e2f-49af-8da2-719d52bd969a)\"" pod="openshift-machine-config-operator/machine-config-daemon-lp4sk" podUID="c359a8fc-1e2f-49af-8da2-719d52bd969a" Oct 14 11:14:37 crc kubenswrapper[4698]: I1014 11:14:37.169169 4698 generic.go:334] "Generic (PLEG): container finished" podID="fa9f73f1-2316-49ec-bf34-aafced101db2" containerID="5a07c5fc7b8b686870392ba81a17c28ac2cbdf7158d1be798c1cdfc181b7330f" exitCode=0 Oct 14 11:14:37 crc kubenswrapper[4698]: I1014 11:14:37.169256 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-bd5xv/crc-debug-frcg6" event={"ID":"fa9f73f1-2316-49ec-bf34-aafced101db2","Type":"ContainerDied","Data":"5a07c5fc7b8b686870392ba81a17c28ac2cbdf7158d1be798c1cdfc181b7330f"} Oct 14 11:14:38 crc kubenswrapper[4698]: I1014 11:14:38.314829 4698 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-bd5xv/crc-debug-frcg6" Oct 14 11:14:38 crc kubenswrapper[4698]: I1014 11:14:38.345338 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-bd5xv/crc-debug-frcg6"] Oct 14 11:14:38 crc kubenswrapper[4698]: I1014 11:14:38.355220 4698 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-bd5xv/crc-debug-frcg6"] Oct 14 11:14:38 crc kubenswrapper[4698]: I1014 11:14:38.388137 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/fa9f73f1-2316-49ec-bf34-aafced101db2-host\") pod \"fa9f73f1-2316-49ec-bf34-aafced101db2\" (UID: \"fa9f73f1-2316-49ec-bf34-aafced101db2\") " Oct 14 11:14:38 crc kubenswrapper[4698]: I1014 11:14:38.388296 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fa9f73f1-2316-49ec-bf34-aafced101db2-host" (OuterVolumeSpecName: "host") pod "fa9f73f1-2316-49ec-bf34-aafced101db2" (UID: "fa9f73f1-2316-49ec-bf34-aafced101db2"). InnerVolumeSpecName "host". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 14 11:14:38 crc kubenswrapper[4698]: I1014 11:14:38.388880 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2v54n\" (UniqueName: \"kubernetes.io/projected/fa9f73f1-2316-49ec-bf34-aafced101db2-kube-api-access-2v54n\") pod \"fa9f73f1-2316-49ec-bf34-aafced101db2\" (UID: \"fa9f73f1-2316-49ec-bf34-aafced101db2\") " Oct 14 11:14:38 crc kubenswrapper[4698]: I1014 11:14:38.389528 4698 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/fa9f73f1-2316-49ec-bf34-aafced101db2-host\") on node \"crc\" DevicePath \"\"" Oct 14 11:14:38 crc kubenswrapper[4698]: I1014 11:14:38.405015 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fa9f73f1-2316-49ec-bf34-aafced101db2-kube-api-access-2v54n" (OuterVolumeSpecName: "kube-api-access-2v54n") pod "fa9f73f1-2316-49ec-bf34-aafced101db2" (UID: "fa9f73f1-2316-49ec-bf34-aafced101db2"). InnerVolumeSpecName "kube-api-access-2v54n". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 11:14:38 crc kubenswrapper[4698]: I1014 11:14:38.491001 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2v54n\" (UniqueName: \"kubernetes.io/projected/fa9f73f1-2316-49ec-bf34-aafced101db2-kube-api-access-2v54n\") on node \"crc\" DevicePath \"\"" Oct 14 11:14:39 crc kubenswrapper[4698]: I1014 11:14:39.028270 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fa9f73f1-2316-49ec-bf34-aafced101db2" path="/var/lib/kubelet/pods/fa9f73f1-2316-49ec-bf34-aafced101db2/volumes" Oct 14 11:14:39 crc kubenswrapper[4698]: I1014 11:14:39.190728 4698 scope.go:117] "RemoveContainer" containerID="5a07c5fc7b8b686870392ba81a17c28ac2cbdf7158d1be798c1cdfc181b7330f" Oct 14 11:14:39 crc kubenswrapper[4698]: I1014 11:14:39.190878 4698 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-bd5xv/crc-debug-frcg6" Oct 14 11:14:39 crc kubenswrapper[4698]: I1014 11:14:39.538855 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-bd5xv/crc-debug-5z4bv"] Oct 14 11:14:39 crc kubenswrapper[4698]: E1014 11:14:39.539334 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fa9f73f1-2316-49ec-bf34-aafced101db2" containerName="container-00" Oct 14 11:14:39 crc kubenswrapper[4698]: I1014 11:14:39.539351 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="fa9f73f1-2316-49ec-bf34-aafced101db2" containerName="container-00" Oct 14 11:14:39 crc kubenswrapper[4698]: I1014 11:14:39.539606 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="fa9f73f1-2316-49ec-bf34-aafced101db2" containerName="container-00" Oct 14 11:14:39 crc kubenswrapper[4698]: I1014 11:14:39.540397 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-bd5xv/crc-debug-5z4bv" Oct 14 11:14:39 crc kubenswrapper[4698]: I1014 11:14:39.614247 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/d9cbbc60-e715-4365-91a5-7797f75c9749-host\") pod \"crc-debug-5z4bv\" (UID: \"d9cbbc60-e715-4365-91a5-7797f75c9749\") " pod="openshift-must-gather-bd5xv/crc-debug-5z4bv" Oct 14 11:14:39 crc kubenswrapper[4698]: I1014 11:14:39.614303 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vkl6s\" (UniqueName: \"kubernetes.io/projected/d9cbbc60-e715-4365-91a5-7797f75c9749-kube-api-access-vkl6s\") pod \"crc-debug-5z4bv\" (UID: \"d9cbbc60-e715-4365-91a5-7797f75c9749\") " pod="openshift-must-gather-bd5xv/crc-debug-5z4bv" Oct 14 11:14:39 crc kubenswrapper[4698]: I1014 11:14:39.720145 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: 
\"kubernetes.io/host-path/d9cbbc60-e715-4365-91a5-7797f75c9749-host\") pod \"crc-debug-5z4bv\" (UID: \"d9cbbc60-e715-4365-91a5-7797f75c9749\") " pod="openshift-must-gather-bd5xv/crc-debug-5z4bv" Oct 14 11:14:39 crc kubenswrapper[4698]: I1014 11:14:39.720210 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vkl6s\" (UniqueName: \"kubernetes.io/projected/d9cbbc60-e715-4365-91a5-7797f75c9749-kube-api-access-vkl6s\") pod \"crc-debug-5z4bv\" (UID: \"d9cbbc60-e715-4365-91a5-7797f75c9749\") " pod="openshift-must-gather-bd5xv/crc-debug-5z4bv" Oct 14 11:14:39 crc kubenswrapper[4698]: I1014 11:14:39.720632 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/d9cbbc60-e715-4365-91a5-7797f75c9749-host\") pod \"crc-debug-5z4bv\" (UID: \"d9cbbc60-e715-4365-91a5-7797f75c9749\") " pod="openshift-must-gather-bd5xv/crc-debug-5z4bv" Oct 14 11:14:39 crc kubenswrapper[4698]: I1014 11:14:39.740950 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vkl6s\" (UniqueName: \"kubernetes.io/projected/d9cbbc60-e715-4365-91a5-7797f75c9749-kube-api-access-vkl6s\") pod \"crc-debug-5z4bv\" (UID: \"d9cbbc60-e715-4365-91a5-7797f75c9749\") " pod="openshift-must-gather-bd5xv/crc-debug-5z4bv" Oct 14 11:14:39 crc kubenswrapper[4698]: I1014 11:14:39.860038 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-bd5xv/crc-debug-5z4bv"
Oct 14 11:14:40 crc kubenswrapper[4698]: I1014 11:14:40.017040 4698 scope.go:117] "RemoveContainer" containerID="6bf915c0eb83523a7704ee4a405fd8ffeef326ab1d420e2bdc7a7ecf61f24f91"
Oct 14 11:14:40 crc kubenswrapper[4698]: E1014 11:14:40.017357 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lp4sk_openshift-machine-config-operator(c359a8fc-1e2f-49af-8da2-719d52bd969a)\"" pod="openshift-machine-config-operator/machine-config-daemon-lp4sk" podUID="c359a8fc-1e2f-49af-8da2-719d52bd969a"
Oct 14 11:14:40 crc kubenswrapper[4698]: I1014 11:14:40.203058 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-bd5xv/crc-debug-5z4bv" event={"ID":"d9cbbc60-e715-4365-91a5-7797f75c9749","Type":"ContainerStarted","Data":"0f39036a1471e443e6fcae9ce8994c792c0e5ddaf728620de57c70b2776f3d2f"}
Oct 14 11:14:40 crc kubenswrapper[4698]: I1014 11:14:40.203108 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-bd5xv/crc-debug-5z4bv" event={"ID":"d9cbbc60-e715-4365-91a5-7797f75c9749","Type":"ContainerStarted","Data":"8d9698ec22cae65b60cb54c1fc226ee96b22779ffe22381b34363cf77303dd1d"}
Oct 14 11:14:40 crc kubenswrapper[4698]: I1014 11:14:40.228638 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-bd5xv/crc-debug-5z4bv" podStartSLOduration=1.228613032 podStartE2EDuration="1.228613032s" podCreationTimestamp="2025-10-14 11:14:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 11:14:40.219067304 +0000 UTC m=+4661.916366720" watchObservedRunningTime="2025-10-14 11:14:40.228613032 +0000 UTC m=+4661.925912448"
Oct 14 11:14:41 crc kubenswrapper[4698]: I1014 11:14:41.218851 4698 generic.go:334] "Generic (PLEG): container finished" podID="d9cbbc60-e715-4365-91a5-7797f75c9749" containerID="0f39036a1471e443e6fcae9ce8994c792c0e5ddaf728620de57c70b2776f3d2f" exitCode=0
Oct 14 11:14:41 crc kubenswrapper[4698]: I1014 11:14:41.219097 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-bd5xv/crc-debug-5z4bv" event={"ID":"d9cbbc60-e715-4365-91a5-7797f75c9749","Type":"ContainerDied","Data":"0f39036a1471e443e6fcae9ce8994c792c0e5ddaf728620de57c70b2776f3d2f"}
Oct 14 11:14:42 crc kubenswrapper[4698]: I1014 11:14:42.351128 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-bd5xv/crc-debug-5z4bv"
Oct 14 11:14:42 crc kubenswrapper[4698]: I1014 11:14:42.471751 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/d9cbbc60-e715-4365-91a5-7797f75c9749-host\") pod \"d9cbbc60-e715-4365-91a5-7797f75c9749\" (UID: \"d9cbbc60-e715-4365-91a5-7797f75c9749\") "
Oct 14 11:14:42 crc kubenswrapper[4698]: I1014 11:14:42.471827 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d9cbbc60-e715-4365-91a5-7797f75c9749-host" (OuterVolumeSpecName: "host") pod "d9cbbc60-e715-4365-91a5-7797f75c9749" (UID: "d9cbbc60-e715-4365-91a5-7797f75c9749"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Oct 14 11:14:42 crc kubenswrapper[4698]: I1014 11:14:42.471850 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vkl6s\" (UniqueName: \"kubernetes.io/projected/d9cbbc60-e715-4365-91a5-7797f75c9749-kube-api-access-vkl6s\") pod \"d9cbbc60-e715-4365-91a5-7797f75c9749\" (UID: \"d9cbbc60-e715-4365-91a5-7797f75c9749\") "
Oct 14 11:14:42 crc kubenswrapper[4698]: I1014 11:14:42.472446 4698 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/d9cbbc60-e715-4365-91a5-7797f75c9749-host\") on node \"crc\" DevicePath \"\""
Oct 14 11:14:42 crc kubenswrapper[4698]: I1014 11:14:42.481656 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d9cbbc60-e715-4365-91a5-7797f75c9749-kube-api-access-vkl6s" (OuterVolumeSpecName: "kube-api-access-vkl6s") pod "d9cbbc60-e715-4365-91a5-7797f75c9749" (UID: "d9cbbc60-e715-4365-91a5-7797f75c9749"). InnerVolumeSpecName "kube-api-access-vkl6s". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 14 11:14:42 crc kubenswrapper[4698]: I1014 11:14:42.574886 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vkl6s\" (UniqueName: \"kubernetes.io/projected/d9cbbc60-e715-4365-91a5-7797f75c9749-kube-api-access-vkl6s\") on node \"crc\" DevicePath \"\""
Oct 14 11:14:43 crc kubenswrapper[4698]: I1014 11:14:43.221567 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-bd5xv/crc-debug-5z4bv"]
Oct 14 11:14:43 crc kubenswrapper[4698]: I1014 11:14:43.233184 4698 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-bd5xv/crc-debug-5z4bv"]
Oct 14 11:14:43 crc kubenswrapper[4698]: I1014 11:14:43.243287 4698 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8d9698ec22cae65b60cb54c1fc226ee96b22779ffe22381b34363cf77303dd1d"
Oct 14 11:14:43 crc kubenswrapper[4698]: I1014 11:14:43.243344 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-bd5xv/crc-debug-5z4bv"
Oct 14 11:14:44 crc kubenswrapper[4698]: I1014 11:14:44.613964 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-bd5xv/crc-debug-7jv48"]
Oct 14 11:14:44 crc kubenswrapper[4698]: E1014 11:14:44.614680 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d9cbbc60-e715-4365-91a5-7797f75c9749" containerName="container-00"
Oct 14 11:14:44 crc kubenswrapper[4698]: I1014 11:14:44.614692 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="d9cbbc60-e715-4365-91a5-7797f75c9749" containerName="container-00"
Oct 14 11:14:44 crc kubenswrapper[4698]: I1014 11:14:44.614942 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="d9cbbc60-e715-4365-91a5-7797f75c9749" containerName="container-00"
Oct 14 11:14:44 crc kubenswrapper[4698]: I1014 11:14:44.615634 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-bd5xv/crc-debug-7jv48"
Oct 14 11:14:44 crc kubenswrapper[4698]: I1014 11:14:44.660117 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-64gz4\" (UniqueName: \"kubernetes.io/projected/05999876-eefc-4745-8fa0-b83c6993486d-kube-api-access-64gz4\") pod \"crc-debug-7jv48\" (UID: \"05999876-eefc-4745-8fa0-b83c6993486d\") " pod="openshift-must-gather-bd5xv/crc-debug-7jv48"
Oct 14 11:14:44 crc kubenswrapper[4698]: I1014 11:14:44.660802 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/05999876-eefc-4745-8fa0-b83c6993486d-host\") pod \"crc-debug-7jv48\" (UID: \"05999876-eefc-4745-8fa0-b83c6993486d\") " pod="openshift-must-gather-bd5xv/crc-debug-7jv48"
Oct 14 11:14:44 crc kubenswrapper[4698]: I1014 11:14:44.762748 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-64gz4\" (UniqueName: \"kubernetes.io/projected/05999876-eefc-4745-8fa0-b83c6993486d-kube-api-access-64gz4\") pod \"crc-debug-7jv48\" (UID: \"05999876-eefc-4745-8fa0-b83c6993486d\") " pod="openshift-must-gather-bd5xv/crc-debug-7jv48"
Oct 14 11:14:44 crc kubenswrapper[4698]: I1014 11:14:44.763067 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/05999876-eefc-4745-8fa0-b83c6993486d-host\") pod \"crc-debug-7jv48\" (UID: \"05999876-eefc-4745-8fa0-b83c6993486d\") " pod="openshift-must-gather-bd5xv/crc-debug-7jv48"
Oct 14 11:14:44 crc kubenswrapper[4698]: I1014 11:14:44.763151 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/05999876-eefc-4745-8fa0-b83c6993486d-host\") pod \"crc-debug-7jv48\" (UID: \"05999876-eefc-4745-8fa0-b83c6993486d\") " pod="openshift-must-gather-bd5xv/crc-debug-7jv48"
Oct 14 11:14:44 crc kubenswrapper[4698]: I1014 11:14:44.786493 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-64gz4\" (UniqueName: \"kubernetes.io/projected/05999876-eefc-4745-8fa0-b83c6993486d-kube-api-access-64gz4\") pod \"crc-debug-7jv48\" (UID: \"05999876-eefc-4745-8fa0-b83c6993486d\") " pod="openshift-must-gather-bd5xv/crc-debug-7jv48"
Oct 14 11:14:44 crc kubenswrapper[4698]: I1014 11:14:44.941381 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-bd5xv/crc-debug-7jv48"
Oct 14 11:14:44 crc kubenswrapper[4698]: W1014 11:14:44.982030 4698 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod05999876_eefc_4745_8fa0_b83c6993486d.slice/crio-2d91ece02a62cd16664218096918c2d329694e934f54d9c60152fbb6c1bc924f WatchSource:0}: Error finding container 2d91ece02a62cd16664218096918c2d329694e934f54d9c60152fbb6c1bc924f: Status 404 returned error can't find the container with id 2d91ece02a62cd16664218096918c2d329694e934f54d9c60152fbb6c1bc924f
Oct 14 11:14:45 crc kubenswrapper[4698]: I1014 11:14:45.027802 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d9cbbc60-e715-4365-91a5-7797f75c9749" path="/var/lib/kubelet/pods/d9cbbc60-e715-4365-91a5-7797f75c9749/volumes"
Oct 14 11:14:45 crc kubenswrapper[4698]: I1014 11:14:45.262335 4698 generic.go:334] "Generic (PLEG): container finished" podID="05999876-eefc-4745-8fa0-b83c6993486d" containerID="49e5dd6ecbb07fd1fc1ac480c42bf7075f3da73c9a9652f452902ca5c55feede" exitCode=0
Oct 14 11:14:45 crc kubenswrapper[4698]: I1014 11:14:45.262381 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-bd5xv/crc-debug-7jv48" event={"ID":"05999876-eefc-4745-8fa0-b83c6993486d","Type":"ContainerDied","Data":"49e5dd6ecbb07fd1fc1ac480c42bf7075f3da73c9a9652f452902ca5c55feede"}
Oct 14 11:14:45 crc kubenswrapper[4698]: I1014 11:14:45.262415 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-bd5xv/crc-debug-7jv48" event={"ID":"05999876-eefc-4745-8fa0-b83c6993486d","Type":"ContainerStarted","Data":"2d91ece02a62cd16664218096918c2d329694e934f54d9c60152fbb6c1bc924f"}
Oct 14 11:14:45 crc kubenswrapper[4698]: I1014 11:14:45.299831 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-bd5xv/crc-debug-7jv48"]
Oct 14 11:14:45 crc kubenswrapper[4698]: I1014 11:14:45.307632 4698 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-bd5xv/crc-debug-7jv48"]
Oct 14 11:14:46 crc kubenswrapper[4698]: I1014 11:14:46.117717 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-66df6b94fb-sw6kf_35f476fe-d3af-4e73-bb7e-ff6a4919ccf7/barbican-api/0.log"
Oct 14 11:14:46 crc kubenswrapper[4698]: I1014 11:14:46.123377 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-66df6b94fb-sw6kf_35f476fe-d3af-4e73-bb7e-ff6a4919ccf7/barbican-api-log/0.log"
Oct 14 11:14:46 crc kubenswrapper[4698]: I1014 11:14:46.374658 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-77cb48f668-xz2r9_3a9f4039-e577-4dc0-b1f9-ecdf1e7a9606/barbican-keystone-listener/0.log"
Oct 14 11:14:46 crc kubenswrapper[4698]: I1014 11:14:46.413559 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-bd5xv/crc-debug-7jv48"
Oct 14 11:14:46 crc kubenswrapper[4698]: I1014 11:14:46.493096 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/05999876-eefc-4745-8fa0-b83c6993486d-host\") pod \"05999876-eefc-4745-8fa0-b83c6993486d\" (UID: \"05999876-eefc-4745-8fa0-b83c6993486d\") "
Oct 14 11:14:46 crc kubenswrapper[4698]: I1014 11:14:46.493401 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-64gz4\" (UniqueName: \"kubernetes.io/projected/05999876-eefc-4745-8fa0-b83c6993486d-kube-api-access-64gz4\") pod \"05999876-eefc-4745-8fa0-b83c6993486d\" (UID: \"05999876-eefc-4745-8fa0-b83c6993486d\") "
Oct 14 11:14:46 crc kubenswrapper[4698]: I1014 11:14:46.493800 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/05999876-eefc-4745-8fa0-b83c6993486d-host" (OuterVolumeSpecName: "host") pod "05999876-eefc-4745-8fa0-b83c6993486d" (UID: "05999876-eefc-4745-8fa0-b83c6993486d"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Oct 14 11:14:46 crc kubenswrapper[4698]: I1014 11:14:46.502280 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/05999876-eefc-4745-8fa0-b83c6993486d-kube-api-access-64gz4" (OuterVolumeSpecName: "kube-api-access-64gz4") pod "05999876-eefc-4745-8fa0-b83c6993486d" (UID: "05999876-eefc-4745-8fa0-b83c6993486d"). InnerVolumeSpecName "kube-api-access-64gz4". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 14 11:14:46 crc kubenswrapper[4698]: I1014 11:14:46.594673 4698 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/05999876-eefc-4745-8fa0-b83c6993486d-host\") on node \"crc\" DevicePath \"\""
Oct 14 11:14:46 crc kubenswrapper[4698]: I1014 11:14:46.594703 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-64gz4\" (UniqueName: \"kubernetes.io/projected/05999876-eefc-4745-8fa0-b83c6993486d-kube-api-access-64gz4\") on node \"crc\" DevicePath \"\""
Oct 14 11:14:46 crc kubenswrapper[4698]: I1014 11:14:46.609693 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-74bfd556cc-6z8fb_27f5b9bc-1a92-40a7-b615-7c8a726cd2e8/barbican-worker/0.log"
Oct 14 11:14:46 crc kubenswrapper[4698]: I1014 11:14:46.689249 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-74bfd556cc-6z8fb_27f5b9bc-1a92-40a7-b615-7c8a726cd2e8/barbican-worker-log/0.log"
Oct 14 11:14:47 crc kubenswrapper[4698]: I1014 11:14:47.005989 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-77cb48f668-xz2r9_3a9f4039-e577-4dc0-b1f9-ecdf1e7a9606/barbican-keystone-listener-log/0.log"
Oct 14 11:14:47 crc kubenswrapper[4698]: I1014 11:14:47.028677 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="05999876-eefc-4745-8fa0-b83c6993486d" path="/var/lib/kubelet/pods/05999876-eefc-4745-8fa0-b83c6993486d/volumes"
Oct 14 11:14:47 crc kubenswrapper[4698]: I1014 11:14:47.091035 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_bootstrap-edpm-deployment-openstack-edpm-ipam-lq4cr_d4559dff-03d5-4c1b-a8df-f8fc0ae935de/bootstrap-edpm-deployment-openstack-edpm-ipam/0.log"
Oct 14 11:14:47 crc kubenswrapper[4698]: I1014 11:14:47.258254 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_ea396a85-5a42-41f7-a75c-1aca7fc4dd37/ceilometer-notification-agent/0.log"
Oct 14 11:14:47 crc kubenswrapper[4698]: I1014 11:14:47.258992 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_ea396a85-5a42-41f7-a75c-1aca7fc4dd37/ceilometer-central-agent/0.log"
Oct 14 11:14:47 crc kubenswrapper[4698]: I1014 11:14:47.280483 4698 scope.go:117] "RemoveContainer" containerID="49e5dd6ecbb07fd1fc1ac480c42bf7075f3da73c9a9652f452902ca5c55feede"
Oct 14 11:14:47 crc kubenswrapper[4698]: I1014 11:14:47.280555 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-bd5xv/crc-debug-7jv48"
Oct 14 11:14:47 crc kubenswrapper[4698]: I1014 11:14:47.326436 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_ea396a85-5a42-41f7-a75c-1aca7fc4dd37/proxy-httpd/0.log"
Oct 14 11:14:47 crc kubenswrapper[4698]: I1014 11:14:47.382008 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_ea396a85-5a42-41f7-a75c-1aca7fc4dd37/sg-core/0.log"
Oct 14 11:14:47 crc kubenswrapper[4698]: I1014 11:14:47.679743 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceph_fab31a39-0774-45d5-a5cd-cc337066aa80/ceph/0.log"
Oct 14 11:14:47 crc kubenswrapper[4698]: I1014 11:14:47.832281 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_78a024b7-16f4-4177-8b52-0cecbc173247/cinder-api/0.log"
Oct 14 11:14:47 crc kubenswrapper[4698]: I1014 11:14:47.915438 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_78a024b7-16f4-4177-8b52-0cecbc173247/cinder-api-log/0.log"
Oct 14 11:14:48 crc kubenswrapper[4698]: I1014 11:14:48.645272 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-backup-0_a03e3bf1-857d-4f91-ad0e-254605774e3c/probe/0.log"
Oct 14 11:14:48 crc kubenswrapper[4698]: I1014 11:14:48.665363 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_ef857e49-6a95-4e1c-a170-a9b7cf5b095f/cinder-scheduler/0.log"
Oct 14 11:14:49 crc kubenswrapper[4698]: I1014 11:14:49.080363 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_ef857e49-6a95-4e1c-a170-a9b7cf5b095f/probe/0.log"
Oct 14 11:14:49 crc kubenswrapper[4698]: I1014 11:14:49.265621 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-volume-volume1-0_07b37a90-cc29-48f1-9da0-d2b0a9fc6d85/probe/0.log"
Oct 14 11:14:49 crc kubenswrapper[4698]: I1014 11:14:49.355685 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-network-edpm-deployment-openstack-edpm-ipam-sjtbb_43529126-1bd9-4a80-bf14-99b218ef939c/configure-network-edpm-deployment-openstack-edpm-ipam/0.log"
Oct 14 11:14:49 crc kubenswrapper[4698]: I1014 11:14:49.708577 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-os-edpm-deployment-openstack-edpm-ipam-tbpt4_601bc78a-d499-4391-ada7-44e34c35c547/configure-os-edpm-deployment-openstack-edpm-ipam/0.log"
Oct 14 11:14:50 crc kubenswrapper[4698]: I1014 11:14:50.262799 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-os-edpm-deployment-openstack-edpm-ipam-xhpjx_57bb4dc3-77b1-43e2-9360-c2f0d7354f4f/configure-os-edpm-deployment-openstack-edpm-ipam/0.log"
Oct 14 11:14:50 crc kubenswrapper[4698]: I1014 11:14:50.285795 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-5bb847fbb7-jzkll_d9761eef-5d4d-4aa8-90a8-c94412431e3c/init/0.log"
Oct 14 11:14:50 crc kubenswrapper[4698]: I1014 11:14:50.372905 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-backup-0_a03e3bf1-857d-4f91-ad0e-254605774e3c/cinder-backup/0.log"
Oct 14 11:14:50 crc kubenswrapper[4698]: I1014 11:14:50.512549 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-5bb847fbb7-jzkll_d9761eef-5d4d-4aa8-90a8-c94412431e3c/init/0.log"
Oct 14 11:14:50 crc kubenswrapper[4698]: I1014 11:14:50.856982 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_download-cache-edpm-deployment-openstack-edpm-ipam-zln6c_0e135199-5913-440f-a291-4252ae734b96/download-cache-edpm-deployment-openstack-edpm-ipam/0.log"
Oct 14 11:14:50 crc kubenswrapper[4698]: I1014 11:14:50.956595 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-5bb847fbb7-jzkll_d9761eef-5d4d-4aa8-90a8-c94412431e3c/dnsmasq-dns/0.log"
Oct 14 11:14:51 crc kubenswrapper[4698]: I1014 11:14:51.059914 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_6a46350f-38b2-4150-aef2-6c2a336a22f9/glance-httpd/0.log"
Oct 14 11:14:51 crc kubenswrapper[4698]: I1014 11:14:51.080823 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_6a46350f-38b2-4150-aef2-6c2a336a22f9/glance-log/0.log"
Oct 14 11:14:51 crc kubenswrapper[4698]: I1014 11:14:51.296357 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_5d72ae1c-cd0b-42d9-b438-c80428436dd3/glance-httpd/0.log"
Oct 14 11:14:51 crc kubenswrapper[4698]: I1014 11:14:51.354647 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_5d72ae1c-cd0b-42d9-b438-c80428436dd3/glance-log/0.log"
Oct 14 11:14:51 crc kubenswrapper[4698]: I1014 11:14:51.624965 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-certs-edpm-deployment-openstack-edpm-ipam-vc6lb_1c6a03cb-a418-4b77-b5d0-51fe6ef1ce53/install-certs-edpm-deployment-openstack-edpm-ipam/0.log"
Oct 14 11:14:51 crc kubenswrapper[4698]: I1014 11:14:51.660196 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-6cf95ddffb-6h2bm_746d0a6a-4df6-40b6-9600-63ec14336507/horizon/0.log"
Oct 14 11:14:51 crc kubenswrapper[4698]: I1014 11:14:51.737413 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-volume-volume1-0_07b37a90-cc29-48f1-9da0-d2b0a9fc6d85/cinder-volume/0.log"
Oct 14 11:14:51 crc kubenswrapper[4698]: I1014 11:14:51.923031 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-os-edpm-deployment-openstack-edpm-ipam-6hw2q_06e79464-f4ba-47d3-a98d-d75709932309/install-os-edpm-deployment-openstack-edpm-ipam/0.log"
Oct 14 11:14:52 crc kubenswrapper[4698]: I1014 11:14:52.111721 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-6cf95ddffb-6h2bm_746d0a6a-4df6-40b6-9600-63ec14336507/horizon-log/0.log"
Oct 14 11:14:52 crc kubenswrapper[4698]: I1014 11:14:52.152274 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-cron-29340661-dn2xz_9f7aded7-281a-4d4b-ab0d-7e52eda65441/keystone-cron/0.log"
Oct 14 11:14:52 crc kubenswrapper[4698]: I1014 11:14:52.383019 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_kube-state-metrics-0_e4f715a0-2f1f-4831-a8ce-a629264ac73f/kube-state-metrics/0.log"
Oct 14 11:14:52 crc kubenswrapper[4698]: I1014 11:14:52.427433 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_libvirt-edpm-deployment-openstack-edpm-ipam-kj2jl_141d36f8-e9f9-4959-8f0c-09c649350547/libvirt-edpm-deployment-openstack-edpm-ipam/0.log"
Oct 14 11:14:52 crc kubenswrapper[4698]: I1014 11:14:52.929918 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_manila-scheduler-0_b654944c-c016-4506-8ee0-2b23eeafcaca/probe/0.log"
Oct 14 11:14:53 crc kubenswrapper[4698]: I1014 11:14:53.075327 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_manila-api-0_99f5e356-0b01-4991-b2b2-3e0456eba2e7/manila-api/0.log"
Oct 14 11:14:53 crc kubenswrapper[4698]: I1014 11:14:53.123907 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_manila-scheduler-0_b654944c-c016-4506-8ee0-2b23eeafcaca/manila-scheduler/0.log"
Oct 14 11:14:53 crc kubenswrapper[4698]: I1014 11:14:53.377174 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_manila-share-share1-0_ba3b2198-b260-4fe9-ad12-dd23c9e9d4ce/probe/0.log"
Oct 14 11:14:53 crc kubenswrapper[4698]: I1014 11:14:53.796805 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_manila-api-0_99f5e356-0b01-4991-b2b2-3e0456eba2e7/manila-api-log/0.log"
Oct 14 11:14:53 crc kubenswrapper[4698]: I1014 11:14:53.808263 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_manila-share-share1-0_ba3b2198-b260-4fe9-ad12-dd23c9e9d4ce/manila-share/0.log"
Oct 14 11:14:54 crc kubenswrapper[4698]: I1014 11:14:54.495716 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-metadata-edpm-deployment-openstack-edpm-ipam-fvlb6_9f3eaa62-6c1e-406d-acec-135973addacf/neutron-metadata-edpm-deployment-openstack-edpm-ipam/0.log"
Oct 14 11:14:54 crc kubenswrapper[4698]: I1014 11:14:54.860187 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-5cf664b6c9-t6wfc_0082817f-4bcf-434b-8fb7-1e8ae2acf058/neutron-httpd/0.log"
Oct 14 11:14:55 crc kubenswrapper[4698]: I1014 11:14:55.016725 4698 scope.go:117] "RemoveContainer" containerID="6bf915c0eb83523a7704ee4a405fd8ffeef326ab1d420e2bdc7a7ecf61f24f91"
Oct 14 11:14:55 crc kubenswrapper[4698]: E1014 11:14:55.017096 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lp4sk_openshift-machine-config-operator(c359a8fc-1e2f-49af-8da2-719d52bd969a)\"" pod="openshift-machine-config-operator/machine-config-daemon-lp4sk" podUID="c359a8fc-1e2f-49af-8da2-719d52bd969a"
Oct 14 11:14:55 crc kubenswrapper[4698]: I1014 11:14:55.579393 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-5cf664b6c9-t6wfc_0082817f-4bcf-434b-8fb7-1e8ae2acf058/neutron-api/0.log"
Oct 14 11:14:56 crc kubenswrapper[4698]: I1014 11:14:56.174259 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-75d9cb9c4-g8g58_fddbac4f-ca34-45b0-913b-21e399aab117/keystone-api/0.log"
Oct 14 11:14:56 crc kubenswrapper[4698]: I1014 11:14:56.574941 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell0-conductor-0_636dfb32-7180-4af9-9de0-57745de8c7e7/nova-cell0-conductor-conductor/0.log"
Oct 14 11:14:56 crc kubenswrapper[4698]: I1014 11:14:56.795484 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-conductor-0_e4bcef82-1d46-45b0-b831-7c575c80b1f4/nova-cell1-conductor-conductor/0.log"
Oct 14 11:14:57 crc kubenswrapper[4698]: I1014 11:14:57.135639 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-novncproxy-0_f114cc4a-8234-441d-926f-83ac36f9ff5b/nova-cell1-novncproxy-novncproxy/0.log"
Oct 14 11:14:57 crc kubenswrapper[4698]: I1014 11:14:57.316615 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-edpm-deployment-openstack-edpm-ipam-pt8b6_b25db8a8-2e32-4634-b5e6-b21d7497c0ca/nova-edpm-deployment-openstack-edpm-ipam/0.log"
Oct 14 11:14:57 crc kubenswrapper[4698]: I1014 11:14:57.358921 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_066120b9-3158-4234-873d-178f6b65885c/nova-api-log/0.log"
Oct 14 11:14:57 crc kubenswrapper[4698]: I1014 11:14:57.627295 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_28c60a34-a183-4fd8-a84e-2963e6676914/nova-metadata-log/0.log"
Oct 14 11:14:57 crc kubenswrapper[4698]: I1014 11:14:57.861829 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_066120b9-3158-4234-873d-178f6b65885c/nova-api-api/0.log"
Oct 14 11:14:58 crc kubenswrapper[4698]: I1014 11:14:58.184248 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_9f3078d2-396d-4f2a-913f-b5c5555e568d/mysql-bootstrap/0.log"
Oct 14 11:14:58 crc kubenswrapper[4698]: I1014 11:14:58.349632 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_9f3078d2-396d-4f2a-913f-b5c5555e568d/mysql-bootstrap/0.log"
Oct 14 11:14:58 crc kubenswrapper[4698]: I1014 11:14:58.414475 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_9f3078d2-396d-4f2a-913f-b5c5555e568d/galera/0.log"
Oct 14 11:14:58 crc kubenswrapper[4698]: I1014 11:14:58.563646 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-scheduler-0_0ec9e017-c819-42ce-8a1f-73b89dfa0459/nova-scheduler-scheduler/0.log"
Oct 14 11:14:58 crc kubenswrapper[4698]: I1014 11:14:58.648910 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_90244b70-b4fa-4b40-a962-119168333566/mysql-bootstrap/0.log"
Oct 14 11:14:58 crc kubenswrapper[4698]: I1014 11:14:58.794925 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_90244b70-b4fa-4b40-a962-119168333566/mysql-bootstrap/0.log"
Oct 14 11:14:58 crc kubenswrapper[4698]: I1014 11:14:58.818154 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_90244b70-b4fa-4b40-a962-119168333566/galera/0.log"
Oct 14 11:14:58 crc kubenswrapper[4698]: I1014 11:14:58.937737 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-km97b"]
Oct 14 11:14:58 crc kubenswrapper[4698]: E1014 11:14:58.938348 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="05999876-eefc-4745-8fa0-b83c6993486d" containerName="container-00"
Oct 14 11:14:58 crc kubenswrapper[4698]: I1014 11:14:58.938362 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="05999876-eefc-4745-8fa0-b83c6993486d" containerName="container-00"
Oct 14 11:14:58 crc kubenswrapper[4698]: I1014 11:14:58.938554 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="05999876-eefc-4745-8fa0-b83c6993486d" containerName="container-00"
Oct 14 11:14:58 crc kubenswrapper[4698]: I1014 11:14:58.940147 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-km97b"
Oct 14 11:14:58 crc kubenswrapper[4698]: I1014 11:14:58.961545 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-km97b"]
Oct 14 11:14:59 crc kubenswrapper[4698]: I1014 11:14:59.051640 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstackclient_9b9ad197-b532-42c9-8ac2-c822cca96a52/openstackclient/0.log"
Oct 14 11:14:59 crc kubenswrapper[4698]: I1014 11:14:59.051829 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d21df7c0-007a-4fa7-bfd9-729c77df235f-catalog-content\") pod \"redhat-operators-km97b\" (UID: \"d21df7c0-007a-4fa7-bfd9-729c77df235f\") " pod="openshift-marketplace/redhat-operators-km97b"
Oct 14 11:14:59 crc kubenswrapper[4698]: I1014 11:14:59.051958 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l7kkx\" (UniqueName: \"kubernetes.io/projected/d21df7c0-007a-4fa7-bfd9-729c77df235f-kube-api-access-l7kkx\") pod \"redhat-operators-km97b\" (UID: \"d21df7c0-007a-4fa7-bfd9-729c77df235f\") " pod="openshift-marketplace/redhat-operators-km97b"
Oct 14 11:14:59 crc kubenswrapper[4698]: I1014 11:14:59.052002 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d21df7c0-007a-4fa7-bfd9-729c77df235f-utilities\") pod \"redhat-operators-km97b\" (UID: \"d21df7c0-007a-4fa7-bfd9-729c77df235f\") " pod="openshift-marketplace/redhat-operators-km97b"
Oct 14 11:14:59 crc kubenswrapper[4698]: I1014 11:14:59.153635 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d21df7c0-007a-4fa7-bfd9-729c77df235f-catalog-content\") pod \"redhat-operators-km97b\" (UID: \"d21df7c0-007a-4fa7-bfd9-729c77df235f\") " pod="openshift-marketplace/redhat-operators-km97b"
Oct 14 11:14:59 crc kubenswrapper[4698]: I1014 11:14:59.153853 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l7kkx\" (UniqueName: \"kubernetes.io/projected/d21df7c0-007a-4fa7-bfd9-729c77df235f-kube-api-access-l7kkx\") pod \"redhat-operators-km97b\" (UID: \"d21df7c0-007a-4fa7-bfd9-729c77df235f\") " pod="openshift-marketplace/redhat-operators-km97b"
Oct 14 11:14:59 crc kubenswrapper[4698]: I1014 11:14:59.153918 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d21df7c0-007a-4fa7-bfd9-729c77df235f-utilities\") pod \"redhat-operators-km97b\" (UID: \"d21df7c0-007a-4fa7-bfd9-729c77df235f\") " pod="openshift-marketplace/redhat-operators-km97b"
Oct 14 11:14:59 crc kubenswrapper[4698]: I1014 11:14:59.156348 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d21df7c0-007a-4fa7-bfd9-729c77df235f-catalog-content\") pod \"redhat-operators-km97b\" (UID: \"d21df7c0-007a-4fa7-bfd9-729c77df235f\") " pod="openshift-marketplace/redhat-operators-km97b"
Oct 14 11:14:59 crc kubenswrapper[4698]: I1014 11:14:59.156900 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d21df7c0-007a-4fa7-bfd9-729c77df235f-utilities\") pod \"redhat-operators-km97b\" (UID: \"d21df7c0-007a-4fa7-bfd9-729c77df235f\") " pod="openshift-marketplace/redhat-operators-km97b"
Oct 14 11:14:59 crc kubenswrapper[4698]: I1014 11:14:59.175268 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l7kkx\" (UniqueName: \"kubernetes.io/projected/d21df7c0-007a-4fa7-bfd9-729c77df235f-kube-api-access-l7kkx\") pod \"redhat-operators-km97b\" (UID: \"d21df7c0-007a-4fa7-bfd9-729c77df235f\") " pod="openshift-marketplace/redhat-operators-km97b"
Oct 14 11:14:59 crc kubenswrapper[4698]: I1014 11:14:59.225583 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-24vqt_b64163c4-e040-4bec-a585-c55f9d05e948/ovn-controller/0.log"
Oct 14 11:14:59 crc kubenswrapper[4698]: I1014 11:14:59.266564 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-km97b"
Oct 14 11:14:59 crc kubenswrapper[4698]: I1014 11:14:59.419962 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-m8cgb_47ef312e-a1ef-4635-a052-31f0b3a7e742/openstack-network-exporter/0.log"
Oct 14 11:14:59 crc kubenswrapper[4698]: I1014 11:14:59.672911 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-2cb6b_62b0f0f2-3cb3-4ba7-8387-a3bd2a38debe/ovsdb-server-init/0.log"
Oct 14 11:14:59 crc kubenswrapper[4698]: I1014 11:14:59.703095 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_28c60a34-a183-4fd8-a84e-2963e6676914/nova-metadata-metadata/0.log"
Oct 14 11:14:59 crc kubenswrapper[4698]: I1014 11:14:59.836048 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-km97b"]
Oct 14 11:15:00 crc kubenswrapper[4698]: I1014 11:15:00.166975 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29340675-l8tws"]
Oct 14 11:15:00 crc kubenswrapper[4698]: I1014 11:15:00.168570 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29340675-l8tws"
Oct 14 11:15:00 crc kubenswrapper[4698]: I1014 11:15:00.172857 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t"
Oct 14 11:15:00 crc kubenswrapper[4698]: I1014 11:15:00.173116 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config"
Oct 14 11:15:00 crc kubenswrapper[4698]: I1014 11:15:00.208195 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29340675-l8tws"]
Oct 14 11:15:00 crc kubenswrapper[4698]: I1014 11:15:00.272953 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e2b430f2-7d0c-4c2a-a868-ac266dc5ab2b-config-volume\") pod \"collect-profiles-29340675-l8tws\" (UID: \"e2b430f2-7d0c-4c2a-a868-ac266dc5ab2b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29340675-l8tws"
Oct 14 11:15:00 crc kubenswrapper[4698]: I1014 11:15:00.273034 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wbt4b\" (UniqueName: \"kubernetes.io/projected/e2b430f2-7d0c-4c2a-a868-ac266dc5ab2b-kube-api-access-wbt4b\") pod \"collect-profiles-29340675-l8tws\" (UID: \"e2b430f2-7d0c-4c2a-a868-ac266dc5ab2b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29340675-l8tws"
Oct 14 11:15:00 crc kubenswrapper[4698]: I1014 11:15:00.273237 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e2b430f2-7d0c-4c2a-a868-ac266dc5ab2b-secret-volume\") pod \"collect-profiles-29340675-l8tws\" (UID: \"e2b430f2-7d0c-4c2a-a868-ac266dc5ab2b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29340675-l8tws"
Oct 14 11:15:00 crc kubenswrapper[4698]: I1014 11:15:00.374571 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e2b430f2-7d0c-4c2a-a868-ac266dc5ab2b-config-volume\") pod \"collect-profiles-29340675-l8tws\" (UID: \"e2b430f2-7d0c-4c2a-a868-ac266dc5ab2b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29340675-l8tws"
Oct 14 11:15:00 crc kubenswrapper[4698]: I1014 11:15:00.374638 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wbt4b\" (UniqueName: \"kubernetes.io/projected/e2b430f2-7d0c-4c2a-a868-ac266dc5ab2b-kube-api-access-wbt4b\") pod \"collect-profiles-29340675-l8tws\" (UID: \"e2b430f2-7d0c-4c2a-a868-ac266dc5ab2b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29340675-l8tws"
Oct 14 11:15:00 crc kubenswrapper[4698]: I1014 11:15:00.374778 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e2b430f2-7d0c-4c2a-a868-ac266dc5ab2b-secret-volume\") pod \"collect-profiles-29340675-l8tws\" (UID: \"e2b430f2-7d0c-4c2a-a868-ac266dc5ab2b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29340675-l8tws"
Oct 14 11:15:00 crc kubenswrapper[4698]: I1014 11:15:00.380139 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e2b430f2-7d0c-4c2a-a868-ac266dc5ab2b-config-volume\") pod \"collect-profiles-29340675-l8tws\" (UID: \"e2b430f2-7d0c-4c2a-a868-ac266dc5ab2b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29340675-l8tws"
Oct 14 11:15:00 crc kubenswrapper[4698]: I1014 11:15:00.393981 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e2b430f2-7d0c-4c2a-a868-ac266dc5ab2b-secret-volume\") pod \"collect-profiles-29340675-l8tws\" (UID: \"e2b430f2-7d0c-4c2a-a868-ac266dc5ab2b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29340675-l8tws"
Oct 14 11:15:00 crc kubenswrapper[4698]: I1014 11:15:00.401094 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wbt4b\" (UniqueName: \"kubernetes.io/projected/e2b430f2-7d0c-4c2a-a868-ac266dc5ab2b-kube-api-access-wbt4b\") pod \"collect-profiles-29340675-l8tws\" (UID: \"e2b430f2-7d0c-4c2a-a868-ac266dc5ab2b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29340675-l8tws"
Oct 14 11:15:00 crc kubenswrapper[4698]: I1014 11:15:00.436592 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-km97b" event={"ID":"d21df7c0-007a-4fa7-bfd9-729c77df235f","Type":"ContainerStarted","Data":"c96033c69f9011a17db53fdfd959674be73a938104a2a6d95b9ebf3736f84131"}
Oct 14 11:15:00 crc kubenswrapper[4698]: I1014 11:15:00.478574 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-2cb6b_62b0f0f2-3cb3-4ba7-8387-a3bd2a38debe/ovs-vswitchd/0.log"
Oct 14 11:15:00 crc kubenswrapper[4698]: I1014 11:15:00.506914 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-2cb6b_62b0f0f2-3cb3-4ba7-8387-a3bd2a38debe/ovsdb-server/0.log"
Oct 14 11:15:00 crc kubenswrapper[4698]: I1014 11:15:00.599955 4698 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29340675-l8tws" Oct 14 11:15:00 crc kubenswrapper[4698]: I1014 11:15:00.623440 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-2cb6b_62b0f0f2-3cb3-4ba7-8387-a3bd2a38debe/ovsdb-server-init/0.log" Oct 14 11:15:00 crc kubenswrapper[4698]: I1014 11:15:00.842670 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-edpm-deployment-openstack-edpm-ipam-qwfqd_464b6c8a-27cc-4899-a7ed-5e2d022e91da/ovn-edpm-deployment-openstack-edpm-ipam/0.log" Oct 14 11:15:00 crc kubenswrapper[4698]: I1014 11:15:00.897662 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_b35b471e-f011-42c9-998a-d23ec21ad1a9/openstack-network-exporter/0.log" Oct 14 11:15:00 crc kubenswrapper[4698]: I1014 11:15:00.985577 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_b35b471e-f011-42c9-998a-d23ec21ad1a9/ovn-northd/0.log" Oct 14 11:15:01 crc kubenswrapper[4698]: I1014 11:15:01.078732 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_884f9a07-9f80-44ff-a1e5-805d6d5ef6fb/ovsdbserver-nb/0.log" Oct 14 11:15:01 crc kubenswrapper[4698]: I1014 11:15:01.126069 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29340675-l8tws"] Oct 14 11:15:01 crc kubenswrapper[4698]: W1014 11:15:01.128970 4698 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode2b430f2_7d0c_4c2a_a868_ac266dc5ab2b.slice/crio-898b2fc45e17a779ac02bf18fb3e5823aa805a97f78e674302ecc12e5a1c8d51 WatchSource:0}: Error finding container 898b2fc45e17a779ac02bf18fb3e5823aa805a97f78e674302ecc12e5a1c8d51: Status 404 returned error can't find the container with id 898b2fc45e17a779ac02bf18fb3e5823aa805a97f78e674302ecc12e5a1c8d51 Oct 14 11:15:01 crc 
kubenswrapper[4698]: I1014 11:15:01.145630 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_884f9a07-9f80-44ff-a1e5-805d6d5ef6fb/openstack-network-exporter/0.log" Oct 14 11:15:01 crc kubenswrapper[4698]: I1014 11:15:01.335578 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_468f15c4-08a4-4e2e-a65d-7a679b1d3a3f/openstack-network-exporter/0.log" Oct 14 11:15:01 crc kubenswrapper[4698]: I1014 11:15:01.408884 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_468f15c4-08a4-4e2e-a65d-7a679b1d3a3f/ovsdbserver-sb/0.log" Oct 14 11:15:01 crc kubenswrapper[4698]: I1014 11:15:01.456222 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29340675-l8tws" event={"ID":"e2b430f2-7d0c-4c2a-a868-ac266dc5ab2b","Type":"ContainerStarted","Data":"898b2fc45e17a779ac02bf18fb3e5823aa805a97f78e674302ecc12e5a1c8d51"} Oct 14 11:15:01 crc kubenswrapper[4698]: I1014 11:15:01.458577 4698 generic.go:334] "Generic (PLEG): container finished" podID="d21df7c0-007a-4fa7-bfd9-729c77df235f" containerID="03a1dd0b7629e14298d5dbb5e996ee1b8e947a1c6533f33a800faf5dc220bf63" exitCode=0 Oct 14 11:15:01 crc kubenswrapper[4698]: I1014 11:15:01.458620 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-km97b" event={"ID":"d21df7c0-007a-4fa7-bfd9-729c77df235f","Type":"ContainerDied","Data":"03a1dd0b7629e14298d5dbb5e996ee1b8e947a1c6533f33a800faf5dc220bf63"} Oct 14 11:15:01 crc kubenswrapper[4698]: I1014 11:15:01.543806 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-mgrtw"] Oct 14 11:15:01 crc kubenswrapper[4698]: I1014 11:15:01.546109 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-mgrtw" Oct 14 11:15:01 crc kubenswrapper[4698]: I1014 11:15:01.580448 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-mgrtw"] Oct 14 11:15:01 crc kubenswrapper[4698]: I1014 11:15:01.706106 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/92c17f31-3648-405d-9523-6426c6085614-catalog-content\") pod \"redhat-marketplace-mgrtw\" (UID: \"92c17f31-3648-405d-9523-6426c6085614\") " pod="openshift-marketplace/redhat-marketplace-mgrtw" Oct 14 11:15:01 crc kubenswrapper[4698]: I1014 11:15:01.706192 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dzwqz\" (UniqueName: \"kubernetes.io/projected/92c17f31-3648-405d-9523-6426c6085614-kube-api-access-dzwqz\") pod \"redhat-marketplace-mgrtw\" (UID: \"92c17f31-3648-405d-9523-6426c6085614\") " pod="openshift-marketplace/redhat-marketplace-mgrtw" Oct 14 11:15:01 crc kubenswrapper[4698]: I1014 11:15:01.706225 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/92c17f31-3648-405d-9523-6426c6085614-utilities\") pod \"redhat-marketplace-mgrtw\" (UID: \"92c17f31-3648-405d-9523-6426c6085614\") " pod="openshift-marketplace/redhat-marketplace-mgrtw" Oct 14 11:15:01 crc kubenswrapper[4698]: I1014 11:15:01.810591 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/92c17f31-3648-405d-9523-6426c6085614-utilities\") pod \"redhat-marketplace-mgrtw\" (UID: \"92c17f31-3648-405d-9523-6426c6085614\") " pod="openshift-marketplace/redhat-marketplace-mgrtw" Oct 14 11:15:01 crc kubenswrapper[4698]: I1014 11:15:01.810748 4698 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/92c17f31-3648-405d-9523-6426c6085614-catalog-content\") pod \"redhat-marketplace-mgrtw\" (UID: \"92c17f31-3648-405d-9523-6426c6085614\") " pod="openshift-marketplace/redhat-marketplace-mgrtw" Oct 14 11:15:01 crc kubenswrapper[4698]: I1014 11:15:01.810847 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dzwqz\" (UniqueName: \"kubernetes.io/projected/92c17f31-3648-405d-9523-6426c6085614-kube-api-access-dzwqz\") pod \"redhat-marketplace-mgrtw\" (UID: \"92c17f31-3648-405d-9523-6426c6085614\") " pod="openshift-marketplace/redhat-marketplace-mgrtw" Oct 14 11:15:01 crc kubenswrapper[4698]: I1014 11:15:01.811580 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/92c17f31-3648-405d-9523-6426c6085614-catalog-content\") pod \"redhat-marketplace-mgrtw\" (UID: \"92c17f31-3648-405d-9523-6426c6085614\") " pod="openshift-marketplace/redhat-marketplace-mgrtw" Oct 14 11:15:01 crc kubenswrapper[4698]: I1014 11:15:01.811616 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/92c17f31-3648-405d-9523-6426c6085614-utilities\") pod \"redhat-marketplace-mgrtw\" (UID: \"92c17f31-3648-405d-9523-6426c6085614\") " pod="openshift-marketplace/redhat-marketplace-mgrtw" Oct 14 11:15:01 crc kubenswrapper[4698]: I1014 11:15:01.835721 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dzwqz\" (UniqueName: \"kubernetes.io/projected/92c17f31-3648-405d-9523-6426c6085614-kube-api-access-dzwqz\") pod \"redhat-marketplace-mgrtw\" (UID: \"92c17f31-3648-405d-9523-6426c6085614\") " pod="openshift-marketplace/redhat-marketplace-mgrtw" Oct 14 11:15:01 crc kubenswrapper[4698]: I1014 11:15:01.896554 4698 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_rabbitmq-cell1-server-0_cebebf3c-b368-424c-a1bc-a3b9fc82ac3e/setup-container/0.log" Oct 14 11:15:01 crc kubenswrapper[4698]: I1014 11:15:01.906902 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-mgrtw" Oct 14 11:15:02 crc kubenswrapper[4698]: I1014 11:15:02.266263 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_cebebf3c-b368-424c-a1bc-a3b9fc82ac3e/setup-container/0.log" Oct 14 11:15:02 crc kubenswrapper[4698]: I1014 11:15:02.297759 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-5dd765df5b-xsd5h_25021023-544e-4b23-947b-66102dcf790e/placement-api/0.log" Oct 14 11:15:02 crc kubenswrapper[4698]: I1014 11:15:02.324403 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_cebebf3c-b368-424c-a1bc-a3b9fc82ac3e/rabbitmq/0.log" Oct 14 11:15:02 crc kubenswrapper[4698]: I1014 11:15:02.438146 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-mgrtw"] Oct 14 11:15:02 crc kubenswrapper[4698]: I1014 11:15:02.470825 4698 generic.go:334] "Generic (PLEG): container finished" podID="e2b430f2-7d0c-4c2a-a868-ac266dc5ab2b" containerID="ef70366a027022df3553a292b077e067f3f09c269d4c41c9bb1e41c8ed3420cc" exitCode=0 Oct 14 11:15:02 crc kubenswrapper[4698]: I1014 11:15:02.470875 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29340675-l8tws" event={"ID":"e2b430f2-7d0c-4c2a-a868-ac266dc5ab2b","Type":"ContainerDied","Data":"ef70366a027022df3553a292b077e067f3f09c269d4c41c9bb1e41c8ed3420cc"} Oct 14 11:15:02 crc kubenswrapper[4698]: I1014 11:15:02.482246 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-5dd765df5b-xsd5h_25021023-544e-4b23-947b-66102dcf790e/placement-log/0.log" Oct 14 11:15:02 crc kubenswrapper[4698]: 
I1014 11:15:02.564278 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_a14f78a2-c755-4288-bf05-45f4a540d301/setup-container/0.log" Oct 14 11:15:02 crc kubenswrapper[4698]: I1014 11:15:02.781137 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_a14f78a2-c755-4288-bf05-45f4a540d301/setup-container/0.log" Oct 14 11:15:02 crc kubenswrapper[4698]: I1014 11:15:02.838145 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_a14f78a2-c755-4288-bf05-45f4a540d301/rabbitmq/0.log" Oct 14 11:15:02 crc kubenswrapper[4698]: I1014 11:15:02.846844 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_reboot-os-edpm-deployment-openstack-edpm-ipam-spcw8_310c1648-fc92-4008-8e7c-ff410b890a2b/reboot-os-edpm-deployment-openstack-edpm-ipam/0.log" Oct 14 11:15:03 crc kubenswrapper[4698]: I1014 11:15:03.011537 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_redhat-edpm-deployment-openstack-edpm-ipam-jdls2_06b2a1a6-bc42-4191-9ab7-62c064090d6b/redhat-edpm-deployment-openstack-edpm-ipam/0.log" Oct 14 11:15:03 crc kubenswrapper[4698]: I1014 11:15:03.068954 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_repo-setup-edpm-deployment-openstack-edpm-ipam-8xbmw_3e24ecfd-2fed-4c41-be7f-89fe09f13724/repo-setup-edpm-deployment-openstack-edpm-ipam/0.log" Oct 14 11:15:03 crc kubenswrapper[4698]: I1014 11:15:03.224387 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_run-os-edpm-deployment-openstack-edpm-ipam-xc794_374455db-3111-424a-82eb-0960266ac879/run-os-edpm-deployment-openstack-edpm-ipam/0.log" Oct 14 11:15:03 crc kubenswrapper[4698]: I1014 11:15:03.365783 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ssh-known-hosts-edpm-deployment-49gfb_86317787-19aa-4ea7-a4ff-3e604d9c0497/ssh-known-hosts-edpm-deployment/0.log" Oct 14 11:15:03 crc kubenswrapper[4698]: 
I1014 11:15:03.480376 4698 generic.go:334] "Generic (PLEG): container finished" podID="92c17f31-3648-405d-9523-6426c6085614" containerID="e0aff07893cfd706ed3bb82f9b19a80360d6a2cbe44da42beb0e02200ded6c3f" exitCode=0 Oct 14 11:15:03 crc kubenswrapper[4698]: I1014 11:15:03.481660 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mgrtw" event={"ID":"92c17f31-3648-405d-9523-6426c6085614","Type":"ContainerDied","Data":"e0aff07893cfd706ed3bb82f9b19a80360d6a2cbe44da42beb0e02200ded6c3f"} Oct 14 11:15:03 crc kubenswrapper[4698]: I1014 11:15:03.481789 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mgrtw" event={"ID":"92c17f31-3648-405d-9523-6426c6085614","Type":"ContainerStarted","Data":"afb2f591701416b7085a0a4bea0fad1c3ff284237661ad352c70bfdb25382cac"} Oct 14 11:15:03 crc kubenswrapper[4698]: I1014 11:15:03.484912 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-km97b" event={"ID":"d21df7c0-007a-4fa7-bfd9-729c77df235f","Type":"ContainerStarted","Data":"aced57280f1fbd6feef02422dd733917432cd088d5f2b6acc1de9bbfb5cdba52"} Oct 14 11:15:03 crc kubenswrapper[4698]: I1014 11:15:03.619411 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-58759987c5-vr6vx_3a1278dc-c5df-49ed-8c8e-6284281cf240/proxy-server/0.log" Oct 14 11:15:03 crc kubenswrapper[4698]: I1014 11:15:03.823601 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-58759987c5-vr6vx_3a1278dc-c5df-49ed-8c8e-6284281cf240/proxy-httpd/0.log" Oct 14 11:15:03 crc kubenswrapper[4698]: I1014 11:15:03.852635 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-ring-rebalance-fmnnt_332a15eb-0ada-4f42-a34e-a7d2e9c46af2/swift-ring-rebalance/0.log" Oct 14 11:15:03 crc kubenswrapper[4698]: I1014 11:15:03.902810 4698 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29340675-l8tws" Oct 14 11:15:03 crc kubenswrapper[4698]: I1014 11:15:03.931588 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_0ca6729c-82ca-4f89-b732-7154ec9224bb/account-auditor/0.log" Oct 14 11:15:04 crc kubenswrapper[4698]: I1014 11:15:04.064360 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e2b430f2-7d0c-4c2a-a868-ac266dc5ab2b-secret-volume\") pod \"e2b430f2-7d0c-4c2a-a868-ac266dc5ab2b\" (UID: \"e2b430f2-7d0c-4c2a-a868-ac266dc5ab2b\") " Oct 14 11:15:04 crc kubenswrapper[4698]: I1014 11:15:04.064808 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wbt4b\" (UniqueName: \"kubernetes.io/projected/e2b430f2-7d0c-4c2a-a868-ac266dc5ab2b-kube-api-access-wbt4b\") pod \"e2b430f2-7d0c-4c2a-a868-ac266dc5ab2b\" (UID: \"e2b430f2-7d0c-4c2a-a868-ac266dc5ab2b\") " Oct 14 11:15:04 crc kubenswrapper[4698]: I1014 11:15:04.065700 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e2b430f2-7d0c-4c2a-a868-ac266dc5ab2b-config-volume\") pod \"e2b430f2-7d0c-4c2a-a868-ac266dc5ab2b\" (UID: \"e2b430f2-7d0c-4c2a-a868-ac266dc5ab2b\") " Oct 14 11:15:04 crc kubenswrapper[4698]: I1014 11:15:04.067071 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e2b430f2-7d0c-4c2a-a868-ac266dc5ab2b-config-volume" (OuterVolumeSpecName: "config-volume") pod "e2b430f2-7d0c-4c2a-a868-ac266dc5ab2b" (UID: "e2b430f2-7d0c-4c2a-a868-ac266dc5ab2b"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 11:15:04 crc kubenswrapper[4698]: I1014 11:15:04.075643 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e2b430f2-7d0c-4c2a-a868-ac266dc5ab2b-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "e2b430f2-7d0c-4c2a-a868-ac266dc5ab2b" (UID: "e2b430f2-7d0c-4c2a-a868-ac266dc5ab2b"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 11:15:04 crc kubenswrapper[4698]: I1014 11:15:04.094998 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e2b430f2-7d0c-4c2a-a868-ac266dc5ab2b-kube-api-access-wbt4b" (OuterVolumeSpecName: "kube-api-access-wbt4b") pod "e2b430f2-7d0c-4c2a-a868-ac266dc5ab2b" (UID: "e2b430f2-7d0c-4c2a-a868-ac266dc5ab2b"). InnerVolumeSpecName "kube-api-access-wbt4b". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 11:15:04 crc kubenswrapper[4698]: I1014 11:15:04.095810 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_0ca6729c-82ca-4f89-b732-7154ec9224bb/account-reaper/0.log" Oct 14 11:15:04 crc kubenswrapper[4698]: I1014 11:15:04.167796 4698 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e2b430f2-7d0c-4c2a-a868-ac266dc5ab2b-config-volume\") on node \"crc\" DevicePath \"\"" Oct 14 11:15:04 crc kubenswrapper[4698]: I1014 11:15:04.167828 4698 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e2b430f2-7d0c-4c2a-a868-ac266dc5ab2b-secret-volume\") on node \"crc\" DevicePath \"\"" Oct 14 11:15:04 crc kubenswrapper[4698]: I1014 11:15:04.167838 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wbt4b\" (UniqueName: \"kubernetes.io/projected/e2b430f2-7d0c-4c2a-a868-ac266dc5ab2b-kube-api-access-wbt4b\") on node \"crc\" DevicePath \"\"" Oct 14 11:15:04 crc 
kubenswrapper[4698]: I1014 11:15:04.194898 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_0ca6729c-82ca-4f89-b732-7154ec9224bb/account-replicator/0.log" Oct 14 11:15:04 crc kubenswrapper[4698]: I1014 11:15:04.248285 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_0ca6729c-82ca-4f89-b732-7154ec9224bb/container-auditor/0.log" Oct 14 11:15:04 crc kubenswrapper[4698]: I1014 11:15:04.252616 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_0ca6729c-82ca-4f89-b732-7154ec9224bb/account-server/0.log" Oct 14 11:15:04 crc kubenswrapper[4698]: I1014 11:15:04.476362 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_0ca6729c-82ca-4f89-b732-7154ec9224bb/container-replicator/0.log" Oct 14 11:15:04 crc kubenswrapper[4698]: I1014 11:15:04.531103 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29340675-l8tws" event={"ID":"e2b430f2-7d0c-4c2a-a868-ac266dc5ab2b","Type":"ContainerDied","Data":"898b2fc45e17a779ac02bf18fb3e5823aa805a97f78e674302ecc12e5a1c8d51"} Oct 14 11:15:04 crc kubenswrapper[4698]: I1014 11:15:04.531158 4698 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="898b2fc45e17a779ac02bf18fb3e5823aa805a97f78e674302ecc12e5a1c8d51" Oct 14 11:15:04 crc kubenswrapper[4698]: I1014 11:15:04.531114 4698 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29340675-l8tws" Oct 14 11:15:04 crc kubenswrapper[4698]: I1014 11:15:04.558236 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_0ca6729c-82ca-4f89-b732-7154ec9224bb/container-server/0.log" Oct 14 11:15:04 crc kubenswrapper[4698]: I1014 11:15:04.558997 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_0ca6729c-82ca-4f89-b732-7154ec9224bb/container-updater/0.log" Oct 14 11:15:04 crc kubenswrapper[4698]: I1014 11:15:04.578405 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_0ca6729c-82ca-4f89-b732-7154ec9224bb/object-auditor/0.log" Oct 14 11:15:04 crc kubenswrapper[4698]: I1014 11:15:04.708903 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_0ca6729c-82ca-4f89-b732-7154ec9224bb/object-expirer/0.log" Oct 14 11:15:04 crc kubenswrapper[4698]: I1014 11:15:04.786379 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_0ca6729c-82ca-4f89-b732-7154ec9224bb/object-replicator/0.log" Oct 14 11:15:04 crc kubenswrapper[4698]: I1014 11:15:04.792122 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_0ca6729c-82ca-4f89-b732-7154ec9224bb/object-updater/0.log" Oct 14 11:15:04 crc kubenswrapper[4698]: I1014 11:15:04.807084 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_0ca6729c-82ca-4f89-b732-7154ec9224bb/object-server/0.log" Oct 14 11:15:04 crc kubenswrapper[4698]: I1014 11:15:04.924064 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_0ca6729c-82ca-4f89-b732-7154ec9224bb/rsync/0.log" Oct 14 11:15:04 crc kubenswrapper[4698]: I1014 11:15:04.984970 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29340630-mmb2d"] Oct 14 
11:15:04 crc kubenswrapper[4698]: I1014 11:15:04.993555 4698 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29340630-mmb2d"] Oct 14 11:15:05 crc kubenswrapper[4698]: I1014 11:15:05.037994 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8ef3450d-a85d-4fed-a424-1a19143c8845" path="/var/lib/kubelet/pods/8ef3450d-a85d-4fed-a424-1a19143c8845/volumes" Oct 14 11:15:05 crc kubenswrapper[4698]: I1014 11:15:05.044491 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_0ca6729c-82ca-4f89-b732-7154ec9224bb/swift-recon-cron/0.log" Oct 14 11:15:05 crc kubenswrapper[4698]: I1014 11:15:05.072422 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_telemetry-edpm-deployment-openstack-edpm-ipam-pstgf_45519f65-bf50-47f3-a645-8d64d05ab523/telemetry-edpm-deployment-openstack-edpm-ipam/0.log" Oct 14 11:15:05 crc kubenswrapper[4698]: I1014 11:15:05.316451 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_test-operator-logs-pod-tempest-tempest-tests-tempest_89057adf-a70c-48dc-a8fc-65077d5c29d1/test-operator-logs-container/0.log" Oct 14 11:15:05 crc kubenswrapper[4698]: I1014 11:15:05.365707 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_tempest-tests-tempest_e5a71af4-fdf3-4a49-9ada-2d4836409022/tempest-tests-tempest-tests-runner/0.log" Oct 14 11:15:05 crc kubenswrapper[4698]: I1014 11:15:05.516363 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_validate-network-edpm-deployment-openstack-edpm-ipam-7ngns_fc38db3e-e819-4f43-a14a-c83162ceb5fa/validate-network-edpm-deployment-openstack-edpm-ipam/0.log" Oct 14 11:15:05 crc kubenswrapper[4698]: I1014 11:15:05.541288 4698 generic.go:334] "Generic (PLEG): container finished" podID="92c17f31-3648-405d-9523-6426c6085614" containerID="20aae114f469efeb1b29a11248687ab6ef80496ce0f13c29de1142152d944b77" exitCode=0 Oct 14 11:15:05 crc 
kubenswrapper[4698]: I1014 11:15:05.541356 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mgrtw" event={"ID":"92c17f31-3648-405d-9523-6426c6085614","Type":"ContainerDied","Data":"20aae114f469efeb1b29a11248687ab6ef80496ce0f13c29de1142152d944b77"} Oct 14 11:15:06 crc kubenswrapper[4698]: I1014 11:15:06.561642 4698 generic.go:334] "Generic (PLEG): container finished" podID="d21df7c0-007a-4fa7-bfd9-729c77df235f" containerID="aced57280f1fbd6feef02422dd733917432cd088d5f2b6acc1de9bbfb5cdba52" exitCode=0 Oct 14 11:15:06 crc kubenswrapper[4698]: I1014 11:15:06.562648 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-km97b" event={"ID":"d21df7c0-007a-4fa7-bfd9-729c77df235f","Type":"ContainerDied","Data":"aced57280f1fbd6feef02422dd733917432cd088d5f2b6acc1de9bbfb5cdba52"} Oct 14 11:15:07 crc kubenswrapper[4698]: I1014 11:15:07.584672 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mgrtw" event={"ID":"92c17f31-3648-405d-9523-6426c6085614","Type":"ContainerStarted","Data":"b9ff6bdc3be8f66beaf3772f9ab3e06af8ea499c8c52d12f7f433f4484fa5ee8"} Oct 14 11:15:07 crc kubenswrapper[4698]: I1014 11:15:07.589247 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-km97b" event={"ID":"d21df7c0-007a-4fa7-bfd9-729c77df235f","Type":"ContainerStarted","Data":"a542f2081fd2339ee4f7e548871f5ea5b481e9aba13f9a318f985452b10fa3c3"} Oct 14 11:15:07 crc kubenswrapper[4698]: I1014 11:15:07.618652 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-mgrtw" podStartSLOduration=3.6442220069999998 podStartE2EDuration="6.618633505s" podCreationTimestamp="2025-10-14 11:15:01 +0000 UTC" firstStartedPulling="2025-10-14 11:15:03.484095323 +0000 UTC m=+4685.181394739" lastFinishedPulling="2025-10-14 11:15:06.458506821 +0000 UTC m=+4688.155806237" 
observedRunningTime="2025-10-14 11:15:07.614037961 +0000 UTC m=+4689.311337377" watchObservedRunningTime="2025-10-14 11:15:07.618633505 +0000 UTC m=+4689.315932921" Oct 14 11:15:07 crc kubenswrapper[4698]: I1014 11:15:07.634590 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-km97b" podStartSLOduration=3.9049482319999997 podStartE2EDuration="9.634571629s" podCreationTimestamp="2025-10-14 11:14:58 +0000 UTC" firstStartedPulling="2025-10-14 11:15:01.461241289 +0000 UTC m=+4683.158540705" lastFinishedPulling="2025-10-14 11:15:07.190864686 +0000 UTC m=+4688.888164102" observedRunningTime="2025-10-14 11:15:07.63392505 +0000 UTC m=+4689.331224466" watchObservedRunningTime="2025-10-14 11:15:07.634571629 +0000 UTC m=+4689.331871045" Oct 14 11:15:09 crc kubenswrapper[4698]: I1014 11:15:09.268695 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-km97b" Oct 14 11:15:09 crc kubenswrapper[4698]: I1014 11:15:09.269034 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-km97b" Oct 14 11:15:10 crc kubenswrapper[4698]: I1014 11:15:10.017277 4698 scope.go:117] "RemoveContainer" containerID="6bf915c0eb83523a7704ee4a405fd8ffeef326ab1d420e2bdc7a7ecf61f24f91" Oct 14 11:15:10 crc kubenswrapper[4698]: E1014 11:15:10.017914 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lp4sk_openshift-machine-config-operator(c359a8fc-1e2f-49af-8da2-719d52bd969a)\"" pod="openshift-machine-config-operator/machine-config-daemon-lp4sk" podUID="c359a8fc-1e2f-49af-8da2-719d52bd969a" Oct 14 11:15:10 crc kubenswrapper[4698]: I1014 11:15:10.333147 4698 prober.go:107] "Probe failed" probeType="Startup" 
pod="openshift-marketplace/redhat-operators-km97b" podUID="d21df7c0-007a-4fa7-bfd9-729c77df235f" containerName="registry-server" probeResult="failure" output=< Oct 14 11:15:10 crc kubenswrapper[4698]: timeout: failed to connect service ":50051" within 1s Oct 14 11:15:10 crc kubenswrapper[4698]: > Oct 14 11:15:11 crc kubenswrapper[4698]: I1014 11:15:11.907351 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-mgrtw" Oct 14 11:15:11 crc kubenswrapper[4698]: I1014 11:15:11.907724 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-mgrtw" Oct 14 11:15:11 crc kubenswrapper[4698]: I1014 11:15:11.977547 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-mgrtw" Oct 14 11:15:12 crc kubenswrapper[4698]: I1014 11:15:12.277622 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_memcached-0_e3bc7f78-d69f-426c-9aeb-4837d25635ab/memcached/0.log" Oct 14 11:15:12 crc kubenswrapper[4698]: I1014 11:15:12.432609 4698 scope.go:117] "RemoveContainer" containerID="269d4c0281b54d453accf3b80550d63c37aa11acdb476b310d3cbdba921c5cff" Oct 14 11:15:12 crc kubenswrapper[4698]: I1014 11:15:12.682118 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-mgrtw" Oct 14 11:15:13 crc kubenswrapper[4698]: I1014 11:15:13.520583 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-mgrtw"] Oct 14 11:15:14 crc kubenswrapper[4698]: I1014 11:15:14.670348 4698 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-mgrtw" podUID="92c17f31-3648-405d-9523-6426c6085614" containerName="registry-server" containerID="cri-o://b9ff6bdc3be8f66beaf3772f9ab3e06af8ea499c8c52d12f7f433f4484fa5ee8" gracePeriod=2 Oct 14 11:15:15 crc 
kubenswrapper[4698]: I1014 11:15:15.214148 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-mgrtw" Oct 14 11:15:15 crc kubenswrapper[4698]: I1014 11:15:15.307627 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/92c17f31-3648-405d-9523-6426c6085614-catalog-content\") pod \"92c17f31-3648-405d-9523-6426c6085614\" (UID: \"92c17f31-3648-405d-9523-6426c6085614\") " Oct 14 11:15:15 crc kubenswrapper[4698]: I1014 11:15:15.308041 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/92c17f31-3648-405d-9523-6426c6085614-utilities\") pod \"92c17f31-3648-405d-9523-6426c6085614\" (UID: \"92c17f31-3648-405d-9523-6426c6085614\") " Oct 14 11:15:15 crc kubenswrapper[4698]: I1014 11:15:15.308332 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dzwqz\" (UniqueName: \"kubernetes.io/projected/92c17f31-3648-405d-9523-6426c6085614-kube-api-access-dzwqz\") pod \"92c17f31-3648-405d-9523-6426c6085614\" (UID: \"92c17f31-3648-405d-9523-6426c6085614\") " Oct 14 11:15:15 crc kubenswrapper[4698]: I1014 11:15:15.308905 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/92c17f31-3648-405d-9523-6426c6085614-utilities" (OuterVolumeSpecName: "utilities") pod "92c17f31-3648-405d-9523-6426c6085614" (UID: "92c17f31-3648-405d-9523-6426c6085614"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 11:15:15 crc kubenswrapper[4698]: I1014 11:15:15.309475 4698 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/92c17f31-3648-405d-9523-6426c6085614-utilities\") on node \"crc\" DevicePath \"\"" Oct 14 11:15:15 crc kubenswrapper[4698]: I1014 11:15:15.321002 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/92c17f31-3648-405d-9523-6426c6085614-kube-api-access-dzwqz" (OuterVolumeSpecName: "kube-api-access-dzwqz") pod "92c17f31-3648-405d-9523-6426c6085614" (UID: "92c17f31-3648-405d-9523-6426c6085614"). InnerVolumeSpecName "kube-api-access-dzwqz". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 11:15:15 crc kubenswrapper[4698]: I1014 11:15:15.335185 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/92c17f31-3648-405d-9523-6426c6085614-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "92c17f31-3648-405d-9523-6426c6085614" (UID: "92c17f31-3648-405d-9523-6426c6085614"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 11:15:15 crc kubenswrapper[4698]: I1014 11:15:15.411889 4698 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/92c17f31-3648-405d-9523-6426c6085614-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 14 11:15:15 crc kubenswrapper[4698]: I1014 11:15:15.411937 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dzwqz\" (UniqueName: \"kubernetes.io/projected/92c17f31-3648-405d-9523-6426c6085614-kube-api-access-dzwqz\") on node \"crc\" DevicePath \"\"" Oct 14 11:15:15 crc kubenswrapper[4698]: I1014 11:15:15.680749 4698 generic.go:334] "Generic (PLEG): container finished" podID="92c17f31-3648-405d-9523-6426c6085614" containerID="b9ff6bdc3be8f66beaf3772f9ab3e06af8ea499c8c52d12f7f433f4484fa5ee8" exitCode=0 Oct 14 11:15:15 crc kubenswrapper[4698]: I1014 11:15:15.680837 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-mgrtw" Oct 14 11:15:15 crc kubenswrapper[4698]: I1014 11:15:15.680837 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mgrtw" event={"ID":"92c17f31-3648-405d-9523-6426c6085614","Type":"ContainerDied","Data":"b9ff6bdc3be8f66beaf3772f9ab3e06af8ea499c8c52d12f7f433f4484fa5ee8"} Oct 14 11:15:15 crc kubenswrapper[4698]: I1014 11:15:15.680909 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mgrtw" event={"ID":"92c17f31-3648-405d-9523-6426c6085614","Type":"ContainerDied","Data":"afb2f591701416b7085a0a4bea0fad1c3ff284237661ad352c70bfdb25382cac"} Oct 14 11:15:15 crc kubenswrapper[4698]: I1014 11:15:15.680933 4698 scope.go:117] "RemoveContainer" containerID="b9ff6bdc3be8f66beaf3772f9ab3e06af8ea499c8c52d12f7f433f4484fa5ee8" Oct 14 11:15:15 crc kubenswrapper[4698]: I1014 11:15:15.703293 4698 scope.go:117] "RemoveContainer" 
containerID="20aae114f469efeb1b29a11248687ab6ef80496ce0f13c29de1142152d944b77" Oct 14 11:15:15 crc kubenswrapper[4698]: I1014 11:15:15.721502 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-mgrtw"] Oct 14 11:15:15 crc kubenswrapper[4698]: I1014 11:15:15.727795 4698 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-mgrtw"] Oct 14 11:15:15 crc kubenswrapper[4698]: I1014 11:15:15.736624 4698 scope.go:117] "RemoveContainer" containerID="e0aff07893cfd706ed3bb82f9b19a80360d6a2cbe44da42beb0e02200ded6c3f" Oct 14 11:15:15 crc kubenswrapper[4698]: E1014 11:15:15.768364 4698 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod92c17f31_3648_405d_9523_6426c6085614.slice/crio-afb2f591701416b7085a0a4bea0fad1c3ff284237661ad352c70bfdb25382cac\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod92c17f31_3648_405d_9523_6426c6085614.slice\": RecentStats: unable to find data in memory cache]" Oct 14 11:15:15 crc kubenswrapper[4698]: I1014 11:15:15.790714 4698 scope.go:117] "RemoveContainer" containerID="b9ff6bdc3be8f66beaf3772f9ab3e06af8ea499c8c52d12f7f433f4484fa5ee8" Oct 14 11:15:15 crc kubenswrapper[4698]: E1014 11:15:15.794958 4698 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b9ff6bdc3be8f66beaf3772f9ab3e06af8ea499c8c52d12f7f433f4484fa5ee8\": container with ID starting with b9ff6bdc3be8f66beaf3772f9ab3e06af8ea499c8c52d12f7f433f4484fa5ee8 not found: ID does not exist" containerID="b9ff6bdc3be8f66beaf3772f9ab3e06af8ea499c8c52d12f7f433f4484fa5ee8" Oct 14 11:15:15 crc kubenswrapper[4698]: I1014 11:15:15.795022 4698 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"b9ff6bdc3be8f66beaf3772f9ab3e06af8ea499c8c52d12f7f433f4484fa5ee8"} err="failed to get container status \"b9ff6bdc3be8f66beaf3772f9ab3e06af8ea499c8c52d12f7f433f4484fa5ee8\": rpc error: code = NotFound desc = could not find container \"b9ff6bdc3be8f66beaf3772f9ab3e06af8ea499c8c52d12f7f433f4484fa5ee8\": container with ID starting with b9ff6bdc3be8f66beaf3772f9ab3e06af8ea499c8c52d12f7f433f4484fa5ee8 not found: ID does not exist" Oct 14 11:15:15 crc kubenswrapper[4698]: I1014 11:15:15.795057 4698 scope.go:117] "RemoveContainer" containerID="20aae114f469efeb1b29a11248687ab6ef80496ce0f13c29de1142152d944b77" Oct 14 11:15:15 crc kubenswrapper[4698]: E1014 11:15:15.798934 4698 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"20aae114f469efeb1b29a11248687ab6ef80496ce0f13c29de1142152d944b77\": container with ID starting with 20aae114f469efeb1b29a11248687ab6ef80496ce0f13c29de1142152d944b77 not found: ID does not exist" containerID="20aae114f469efeb1b29a11248687ab6ef80496ce0f13c29de1142152d944b77" Oct 14 11:15:15 crc kubenswrapper[4698]: I1014 11:15:15.798983 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"20aae114f469efeb1b29a11248687ab6ef80496ce0f13c29de1142152d944b77"} err="failed to get container status \"20aae114f469efeb1b29a11248687ab6ef80496ce0f13c29de1142152d944b77\": rpc error: code = NotFound desc = could not find container \"20aae114f469efeb1b29a11248687ab6ef80496ce0f13c29de1142152d944b77\": container with ID starting with 20aae114f469efeb1b29a11248687ab6ef80496ce0f13c29de1142152d944b77 not found: ID does not exist" Oct 14 11:15:15 crc kubenswrapper[4698]: I1014 11:15:15.799015 4698 scope.go:117] "RemoveContainer" containerID="e0aff07893cfd706ed3bb82f9b19a80360d6a2cbe44da42beb0e02200ded6c3f" Oct 14 11:15:15 crc kubenswrapper[4698]: E1014 11:15:15.799867 4698 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"e0aff07893cfd706ed3bb82f9b19a80360d6a2cbe44da42beb0e02200ded6c3f\": container with ID starting with e0aff07893cfd706ed3bb82f9b19a80360d6a2cbe44da42beb0e02200ded6c3f not found: ID does not exist" containerID="e0aff07893cfd706ed3bb82f9b19a80360d6a2cbe44da42beb0e02200ded6c3f" Oct 14 11:15:15 crc kubenswrapper[4698]: I1014 11:15:15.799896 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e0aff07893cfd706ed3bb82f9b19a80360d6a2cbe44da42beb0e02200ded6c3f"} err="failed to get container status \"e0aff07893cfd706ed3bb82f9b19a80360d6a2cbe44da42beb0e02200ded6c3f\": rpc error: code = NotFound desc = could not find container \"e0aff07893cfd706ed3bb82f9b19a80360d6a2cbe44da42beb0e02200ded6c3f\": container with ID starting with e0aff07893cfd706ed3bb82f9b19a80360d6a2cbe44da42beb0e02200ded6c3f not found: ID does not exist" Oct 14 11:15:17 crc kubenswrapper[4698]: I1014 11:15:17.027857 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="92c17f31-3648-405d-9523-6426c6085614" path="/var/lib/kubelet/pods/92c17f31-3648-405d-9523-6426c6085614/volumes" Oct 14 11:15:19 crc kubenswrapper[4698]: I1014 11:15:19.335051 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-km97b" Oct 14 11:15:19 crc kubenswrapper[4698]: I1014 11:15:19.386199 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-km97b" Oct 14 11:15:19 crc kubenswrapper[4698]: I1014 11:15:19.569897 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-km97b"] Oct 14 11:15:20 crc kubenswrapper[4698]: I1014 11:15:20.731650 4698 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-km97b" podUID="d21df7c0-007a-4fa7-bfd9-729c77df235f" 
containerName="registry-server" containerID="cri-o://a542f2081fd2339ee4f7e548871f5ea5b481e9aba13f9a318f985452b10fa3c3" gracePeriod=2 Oct 14 11:15:21 crc kubenswrapper[4698]: I1014 11:15:21.233661 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-km97b" Oct 14 11:15:21 crc kubenswrapper[4698]: I1014 11:15:21.269311 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l7kkx\" (UniqueName: \"kubernetes.io/projected/d21df7c0-007a-4fa7-bfd9-729c77df235f-kube-api-access-l7kkx\") pod \"d21df7c0-007a-4fa7-bfd9-729c77df235f\" (UID: \"d21df7c0-007a-4fa7-bfd9-729c77df235f\") " Oct 14 11:15:21 crc kubenswrapper[4698]: I1014 11:15:21.277560 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d21df7c0-007a-4fa7-bfd9-729c77df235f-kube-api-access-l7kkx" (OuterVolumeSpecName: "kube-api-access-l7kkx") pod "d21df7c0-007a-4fa7-bfd9-729c77df235f" (UID: "d21df7c0-007a-4fa7-bfd9-729c77df235f"). InnerVolumeSpecName "kube-api-access-l7kkx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 11:15:21 crc kubenswrapper[4698]: I1014 11:15:21.370886 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d21df7c0-007a-4fa7-bfd9-729c77df235f-utilities\") pod \"d21df7c0-007a-4fa7-bfd9-729c77df235f\" (UID: \"d21df7c0-007a-4fa7-bfd9-729c77df235f\") " Oct 14 11:15:21 crc kubenswrapper[4698]: I1014 11:15:21.371323 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d21df7c0-007a-4fa7-bfd9-729c77df235f-catalog-content\") pod \"d21df7c0-007a-4fa7-bfd9-729c77df235f\" (UID: \"d21df7c0-007a-4fa7-bfd9-729c77df235f\") " Oct 14 11:15:21 crc kubenswrapper[4698]: I1014 11:15:21.371717 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d21df7c0-007a-4fa7-bfd9-729c77df235f-utilities" (OuterVolumeSpecName: "utilities") pod "d21df7c0-007a-4fa7-bfd9-729c77df235f" (UID: "d21df7c0-007a-4fa7-bfd9-729c77df235f"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 11:15:21 crc kubenswrapper[4698]: I1014 11:15:21.372189 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l7kkx\" (UniqueName: \"kubernetes.io/projected/d21df7c0-007a-4fa7-bfd9-729c77df235f-kube-api-access-l7kkx\") on node \"crc\" DevicePath \"\"" Oct 14 11:15:21 crc kubenswrapper[4698]: I1014 11:15:21.372278 4698 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d21df7c0-007a-4fa7-bfd9-729c77df235f-utilities\") on node \"crc\" DevicePath \"\"" Oct 14 11:15:21 crc kubenswrapper[4698]: I1014 11:15:21.459238 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d21df7c0-007a-4fa7-bfd9-729c77df235f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d21df7c0-007a-4fa7-bfd9-729c77df235f" (UID: "d21df7c0-007a-4fa7-bfd9-729c77df235f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 11:15:21 crc kubenswrapper[4698]: I1014 11:15:21.474882 4698 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d21df7c0-007a-4fa7-bfd9-729c77df235f-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 14 11:15:21 crc kubenswrapper[4698]: I1014 11:15:21.747512 4698 generic.go:334] "Generic (PLEG): container finished" podID="d21df7c0-007a-4fa7-bfd9-729c77df235f" containerID="a542f2081fd2339ee4f7e548871f5ea5b481e9aba13f9a318f985452b10fa3c3" exitCode=0 Oct 14 11:15:21 crc kubenswrapper[4698]: I1014 11:15:21.747596 4698 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-km97b" Oct 14 11:15:21 crc kubenswrapper[4698]: I1014 11:15:21.747647 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-km97b" event={"ID":"d21df7c0-007a-4fa7-bfd9-729c77df235f","Type":"ContainerDied","Data":"a542f2081fd2339ee4f7e548871f5ea5b481e9aba13f9a318f985452b10fa3c3"} Oct 14 11:15:21 crc kubenswrapper[4698]: I1014 11:15:21.748216 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-km97b" event={"ID":"d21df7c0-007a-4fa7-bfd9-729c77df235f","Type":"ContainerDied","Data":"c96033c69f9011a17db53fdfd959674be73a938104a2a6d95b9ebf3736f84131"} Oct 14 11:15:21 crc kubenswrapper[4698]: I1014 11:15:21.748261 4698 scope.go:117] "RemoveContainer" containerID="a542f2081fd2339ee4f7e548871f5ea5b481e9aba13f9a318f985452b10fa3c3" Oct 14 11:15:21 crc kubenswrapper[4698]: I1014 11:15:21.787599 4698 scope.go:117] "RemoveContainer" containerID="aced57280f1fbd6feef02422dd733917432cd088d5f2b6acc1de9bbfb5cdba52" Oct 14 11:15:21 crc kubenswrapper[4698]: I1014 11:15:21.801370 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-km97b"] Oct 14 11:15:21 crc kubenswrapper[4698]: I1014 11:15:21.817401 4698 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-km97b"] Oct 14 11:15:21 crc kubenswrapper[4698]: I1014 11:15:21.834284 4698 scope.go:117] "RemoveContainer" containerID="03a1dd0b7629e14298d5dbb5e996ee1b8e947a1c6533f33a800faf5dc220bf63" Oct 14 11:15:21 crc kubenswrapper[4698]: I1014 11:15:21.869747 4698 scope.go:117] "RemoveContainer" containerID="a542f2081fd2339ee4f7e548871f5ea5b481e9aba13f9a318f985452b10fa3c3" Oct 14 11:15:21 crc kubenswrapper[4698]: E1014 11:15:21.870110 4698 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"a542f2081fd2339ee4f7e548871f5ea5b481e9aba13f9a318f985452b10fa3c3\": container with ID starting with a542f2081fd2339ee4f7e548871f5ea5b481e9aba13f9a318f985452b10fa3c3 not found: ID does not exist" containerID="a542f2081fd2339ee4f7e548871f5ea5b481e9aba13f9a318f985452b10fa3c3" Oct 14 11:15:21 crc kubenswrapper[4698]: I1014 11:15:21.870158 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a542f2081fd2339ee4f7e548871f5ea5b481e9aba13f9a318f985452b10fa3c3"} err="failed to get container status \"a542f2081fd2339ee4f7e548871f5ea5b481e9aba13f9a318f985452b10fa3c3\": rpc error: code = NotFound desc = could not find container \"a542f2081fd2339ee4f7e548871f5ea5b481e9aba13f9a318f985452b10fa3c3\": container with ID starting with a542f2081fd2339ee4f7e548871f5ea5b481e9aba13f9a318f985452b10fa3c3 not found: ID does not exist" Oct 14 11:15:21 crc kubenswrapper[4698]: I1014 11:15:21.870188 4698 scope.go:117] "RemoveContainer" containerID="aced57280f1fbd6feef02422dd733917432cd088d5f2b6acc1de9bbfb5cdba52" Oct 14 11:15:21 crc kubenswrapper[4698]: E1014 11:15:21.871430 4698 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"aced57280f1fbd6feef02422dd733917432cd088d5f2b6acc1de9bbfb5cdba52\": container with ID starting with aced57280f1fbd6feef02422dd733917432cd088d5f2b6acc1de9bbfb5cdba52 not found: ID does not exist" containerID="aced57280f1fbd6feef02422dd733917432cd088d5f2b6acc1de9bbfb5cdba52" Oct 14 11:15:21 crc kubenswrapper[4698]: I1014 11:15:21.871482 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"aced57280f1fbd6feef02422dd733917432cd088d5f2b6acc1de9bbfb5cdba52"} err="failed to get container status \"aced57280f1fbd6feef02422dd733917432cd088d5f2b6acc1de9bbfb5cdba52\": rpc error: code = NotFound desc = could not find container \"aced57280f1fbd6feef02422dd733917432cd088d5f2b6acc1de9bbfb5cdba52\": container with ID 
starting with aced57280f1fbd6feef02422dd733917432cd088d5f2b6acc1de9bbfb5cdba52 not found: ID does not exist" Oct 14 11:15:21 crc kubenswrapper[4698]: I1014 11:15:21.871519 4698 scope.go:117] "RemoveContainer" containerID="03a1dd0b7629e14298d5dbb5e996ee1b8e947a1c6533f33a800faf5dc220bf63" Oct 14 11:15:21 crc kubenswrapper[4698]: E1014 11:15:21.874113 4698 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"03a1dd0b7629e14298d5dbb5e996ee1b8e947a1c6533f33a800faf5dc220bf63\": container with ID starting with 03a1dd0b7629e14298d5dbb5e996ee1b8e947a1c6533f33a800faf5dc220bf63 not found: ID does not exist" containerID="03a1dd0b7629e14298d5dbb5e996ee1b8e947a1c6533f33a800faf5dc220bf63" Oct 14 11:15:21 crc kubenswrapper[4698]: I1014 11:15:21.874142 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"03a1dd0b7629e14298d5dbb5e996ee1b8e947a1c6533f33a800faf5dc220bf63"} err="failed to get container status \"03a1dd0b7629e14298d5dbb5e996ee1b8e947a1c6533f33a800faf5dc220bf63\": rpc error: code = NotFound desc = could not find container \"03a1dd0b7629e14298d5dbb5e996ee1b8e947a1c6533f33a800faf5dc220bf63\": container with ID starting with 03a1dd0b7629e14298d5dbb5e996ee1b8e947a1c6533f33a800faf5dc220bf63 not found: ID does not exist" Oct 14 11:15:23 crc kubenswrapper[4698]: I1014 11:15:23.019012 4698 scope.go:117] "RemoveContainer" containerID="6bf915c0eb83523a7704ee4a405fd8ffeef326ab1d420e2bdc7a7ecf61f24f91" Oct 14 11:15:23 crc kubenswrapper[4698]: E1014 11:15:23.021269 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lp4sk_openshift-machine-config-operator(c359a8fc-1e2f-49af-8da2-719d52bd969a)\"" pod="openshift-machine-config-operator/machine-config-daemon-lp4sk" 
podUID="c359a8fc-1e2f-49af-8da2-719d52bd969a" Oct 14 11:15:23 crc kubenswrapper[4698]: I1014 11:15:23.039290 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d21df7c0-007a-4fa7-bfd9-729c77df235f" path="/var/lib/kubelet/pods/d21df7c0-007a-4fa7-bfd9-729c77df235f/volumes" Oct 14 11:15:31 crc kubenswrapper[4698]: I1014 11:15:31.112323 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_29dc144f0e8a966729881e968c4b77154efe1ade540adc10ce92055bf2z82rv_fc2cb835-38cd-43b4-bf06-15d3ccc7ed5a/util/0.log" Oct 14 11:15:31 crc kubenswrapper[4698]: I1014 11:15:31.292252 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_29dc144f0e8a966729881e968c4b77154efe1ade540adc10ce92055bf2z82rv_fc2cb835-38cd-43b4-bf06-15d3ccc7ed5a/pull/0.log" Oct 14 11:15:31 crc kubenswrapper[4698]: I1014 11:15:31.309536 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_29dc144f0e8a966729881e968c4b77154efe1ade540adc10ce92055bf2z82rv_fc2cb835-38cd-43b4-bf06-15d3ccc7ed5a/util/0.log" Oct 14 11:15:31 crc kubenswrapper[4698]: I1014 11:15:31.355778 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_29dc144f0e8a966729881e968c4b77154efe1ade540adc10ce92055bf2z82rv_fc2cb835-38cd-43b4-bf06-15d3ccc7ed5a/pull/0.log" Oct 14 11:15:31 crc kubenswrapper[4698]: I1014 11:15:31.477613 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_29dc144f0e8a966729881e968c4b77154efe1ade540adc10ce92055bf2z82rv_fc2cb835-38cd-43b4-bf06-15d3ccc7ed5a/util/0.log" Oct 14 11:15:31 crc kubenswrapper[4698]: I1014 11:15:31.488229 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_29dc144f0e8a966729881e968c4b77154efe1ade540adc10ce92055bf2z82rv_fc2cb835-38cd-43b4-bf06-15d3ccc7ed5a/pull/0.log" Oct 14 11:15:31 crc kubenswrapper[4698]: I1014 11:15:31.514427 4698 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_29dc144f0e8a966729881e968c4b77154efe1ade540adc10ce92055bf2z82rv_fc2cb835-38cd-43b4-bf06-15d3ccc7ed5a/extract/0.log" Oct 14 11:15:31 crc kubenswrapper[4698]: I1014 11:15:31.678941 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-64f84fcdbb-5wlw6_24d6e9c5-aad5-4856-a7b7-20e04553c864/kube-rbac-proxy/0.log" Oct 14 11:15:31 crc kubenswrapper[4698]: I1014 11:15:31.786472 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-64f84fcdbb-5wlw6_24d6e9c5-aad5-4856-a7b7-20e04553c864/manager/0.log" Oct 14 11:15:31 crc kubenswrapper[4698]: I1014 11:15:31.798679 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-59cdc64769-nh6c5_f91fec87-379e-4c52-9d03-b56841232184/kube-rbac-proxy/0.log" Oct 14 11:15:31 crc kubenswrapper[4698]: I1014 11:15:31.914930 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-59cdc64769-nh6c5_f91fec87-379e-4c52-9d03-b56841232184/manager/0.log" Oct 14 11:15:32 crc kubenswrapper[4698]: I1014 11:15:32.025481 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-687df44cdb-nh4nc_6d1a4e09-e83d-4634-ae32-b37666d65f61/kube-rbac-proxy/0.log" Oct 14 11:15:32 crc kubenswrapper[4698]: I1014 11:15:32.096490 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-687df44cdb-nh4nc_6d1a4e09-e83d-4634-ae32-b37666d65f61/manager/0.log" Oct 14 11:15:32 crc kubenswrapper[4698]: I1014 11:15:32.181744 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-7bb46cd7d-f6jr7_b310d6c3-527e-4a58-bc98-edcd7731b9e3/kube-rbac-proxy/0.log" Oct 14 11:15:32 crc kubenswrapper[4698]: 
I1014 11:15:32.368069 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-7bb46cd7d-f6jr7_b310d6c3-527e-4a58-bc98-edcd7731b9e3/manager/0.log" Oct 14 11:15:32 crc kubenswrapper[4698]: I1014 11:15:32.399983 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-6d9967f8dd-52h2t_004d5489-901d-4fd3-9fc3-ae0016255950/kube-rbac-proxy/0.log" Oct 14 11:15:32 crc kubenswrapper[4698]: I1014 11:15:32.405621 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-6d9967f8dd-52h2t_004d5489-901d-4fd3-9fc3-ae0016255950/manager/0.log" Oct 14 11:15:32 crc kubenswrapper[4698]: I1014 11:15:32.558904 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-6d74794d9b-nq8vb_3482400d-0e9f-4dc5-883f-36313dc33944/kube-rbac-proxy/0.log" Oct 14 11:15:32 crc kubenswrapper[4698]: I1014 11:15:32.639905 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-6d74794d9b-nq8vb_3482400d-0e9f-4dc5-883f-36313dc33944/manager/0.log" Oct 14 11:15:32 crc kubenswrapper[4698]: I1014 11:15:32.842628 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-74cb5cbc49-d9qkb_2547a997-b2ba-4300-92ed-09ccc57499c7/kube-rbac-proxy/0.log" Oct 14 11:15:32 crc kubenswrapper[4698]: I1014 11:15:32.890190 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-585fc5b659-2jvxv_4ea0ebfe-fbe9-428c-baf6-565e4dbb9044/kube-rbac-proxy/0.log" Oct 14 11:15:32 crc kubenswrapper[4698]: I1014 11:15:32.980440 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-jfzdq"] Oct 14 11:15:32 crc kubenswrapper[4698]: E1014 11:15:32.981318 4698 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="92c17f31-3648-405d-9523-6426c6085614" containerName="registry-server" Oct 14 11:15:32 crc kubenswrapper[4698]: I1014 11:15:32.981440 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="92c17f31-3648-405d-9523-6426c6085614" containerName="registry-server" Oct 14 11:15:32 crc kubenswrapper[4698]: E1014 11:15:32.981541 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d21df7c0-007a-4fa7-bfd9-729c77df235f" containerName="extract-utilities" Oct 14 11:15:32 crc kubenswrapper[4698]: I1014 11:15:32.981615 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="d21df7c0-007a-4fa7-bfd9-729c77df235f" containerName="extract-utilities" Oct 14 11:15:32 crc kubenswrapper[4698]: E1014 11:15:32.981700 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d21df7c0-007a-4fa7-bfd9-729c77df235f" containerName="registry-server" Oct 14 11:15:32 crc kubenswrapper[4698]: I1014 11:15:32.981788 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="d21df7c0-007a-4fa7-bfd9-729c77df235f" containerName="registry-server" Oct 14 11:15:32 crc kubenswrapper[4698]: E1014 11:15:32.981900 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e2b430f2-7d0c-4c2a-a868-ac266dc5ab2b" containerName="collect-profiles" Oct 14 11:15:32 crc kubenswrapper[4698]: I1014 11:15:32.981973 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="e2b430f2-7d0c-4c2a-a868-ac266dc5ab2b" containerName="collect-profiles" Oct 14 11:15:32 crc kubenswrapper[4698]: E1014 11:15:32.982059 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="92c17f31-3648-405d-9523-6426c6085614" containerName="extract-utilities" Oct 14 11:15:32 crc kubenswrapper[4698]: I1014 11:15:32.982138 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="92c17f31-3648-405d-9523-6426c6085614" containerName="extract-utilities" Oct 14 11:15:32 crc kubenswrapper[4698]: E1014 11:15:32.982215 4698 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d21df7c0-007a-4fa7-bfd9-729c77df235f" containerName="extract-content" Oct 14 11:15:32 crc kubenswrapper[4698]: I1014 11:15:32.982299 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="d21df7c0-007a-4fa7-bfd9-729c77df235f" containerName="extract-content" Oct 14 11:15:32 crc kubenswrapper[4698]: E1014 11:15:32.982384 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="92c17f31-3648-405d-9523-6426c6085614" containerName="extract-content" Oct 14 11:15:32 crc kubenswrapper[4698]: I1014 11:15:32.982457 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="92c17f31-3648-405d-9523-6426c6085614" containerName="extract-content" Oct 14 11:15:32 crc kubenswrapper[4698]: I1014 11:15:32.982820 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="92c17f31-3648-405d-9523-6426c6085614" containerName="registry-server" Oct 14 11:15:32 crc kubenswrapper[4698]: I1014 11:15:32.982930 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="d21df7c0-007a-4fa7-bfd9-729c77df235f" containerName="registry-server" Oct 14 11:15:32 crc kubenswrapper[4698]: I1014 11:15:32.983035 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="e2b430f2-7d0c-4c2a-a868-ac266dc5ab2b" containerName="collect-profiles" Oct 14 11:15:32 crc kubenswrapper[4698]: I1014 11:15:32.985007 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-jfzdq" Oct 14 11:15:32 crc kubenswrapper[4698]: I1014 11:15:32.992555 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-jfzdq"] Oct 14 11:15:33 crc kubenswrapper[4698]: I1014 11:15:33.003320 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-585fc5b659-2jvxv_4ea0ebfe-fbe9-428c-baf6-565e4dbb9044/manager/0.log" Oct 14 11:15:33 crc kubenswrapper[4698]: I1014 11:15:33.099830 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-74cb5cbc49-d9qkb_2547a997-b2ba-4300-92ed-09ccc57499c7/manager/0.log" Oct 14 11:15:33 crc kubenswrapper[4698]: I1014 11:15:33.105108 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/09cd3202-2758-4387-8689-1359ed419683-catalog-content\") pod \"certified-operators-jfzdq\" (UID: \"09cd3202-2758-4387-8689-1359ed419683\") " pod="openshift-marketplace/certified-operators-jfzdq" Oct 14 11:15:33 crc kubenswrapper[4698]: I1014 11:15:33.105213 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/09cd3202-2758-4387-8689-1359ed419683-utilities\") pod \"certified-operators-jfzdq\" (UID: \"09cd3202-2758-4387-8689-1359ed419683\") " pod="openshift-marketplace/certified-operators-jfzdq" Oct 14 11:15:33 crc kubenswrapper[4698]: I1014 11:15:33.105265 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7kb8b\" (UniqueName: \"kubernetes.io/projected/09cd3202-2758-4387-8689-1359ed419683-kube-api-access-7kb8b\") pod \"certified-operators-jfzdq\" (UID: \"09cd3202-2758-4387-8689-1359ed419683\") " 
pod="openshift-marketplace/certified-operators-jfzdq" Oct 14 11:15:33 crc kubenswrapper[4698]: I1014 11:15:33.187549 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-ddb98f99b-ks5tw_d6a101ad-e350-4964-a786-91072a6776e8/kube-rbac-proxy/0.log" Oct 14 11:15:33 crc kubenswrapper[4698]: I1014 11:15:33.207135 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7kb8b\" (UniqueName: \"kubernetes.io/projected/09cd3202-2758-4387-8689-1359ed419683-kube-api-access-7kb8b\") pod \"certified-operators-jfzdq\" (UID: \"09cd3202-2758-4387-8689-1359ed419683\") " pod="openshift-marketplace/certified-operators-jfzdq" Oct 14 11:15:33 crc kubenswrapper[4698]: I1014 11:15:33.207294 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/09cd3202-2758-4387-8689-1359ed419683-catalog-content\") pod \"certified-operators-jfzdq\" (UID: \"09cd3202-2758-4387-8689-1359ed419683\") " pod="openshift-marketplace/certified-operators-jfzdq" Oct 14 11:15:33 crc kubenswrapper[4698]: I1014 11:15:33.207366 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/09cd3202-2758-4387-8689-1359ed419683-utilities\") pod \"certified-operators-jfzdq\" (UID: \"09cd3202-2758-4387-8689-1359ed419683\") " pod="openshift-marketplace/certified-operators-jfzdq" Oct 14 11:15:33 crc kubenswrapper[4698]: I1014 11:15:33.208021 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/09cd3202-2758-4387-8689-1359ed419683-utilities\") pod \"certified-operators-jfzdq\" (UID: \"09cd3202-2758-4387-8689-1359ed419683\") " pod="openshift-marketplace/certified-operators-jfzdq" Oct 14 11:15:33 crc kubenswrapper[4698]: I1014 11:15:33.208149 4698 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/09cd3202-2758-4387-8689-1359ed419683-catalog-content\") pod \"certified-operators-jfzdq\" (UID: \"09cd3202-2758-4387-8689-1359ed419683\") " pod="openshift-marketplace/certified-operators-jfzdq" Oct 14 11:15:33 crc kubenswrapper[4698]: I1014 11:15:33.238613 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7kb8b\" (UniqueName: \"kubernetes.io/projected/09cd3202-2758-4387-8689-1359ed419683-kube-api-access-7kb8b\") pod \"certified-operators-jfzdq\" (UID: \"09cd3202-2758-4387-8689-1359ed419683\") " pod="openshift-marketplace/certified-operators-jfzdq" Oct 14 11:15:33 crc kubenswrapper[4698]: I1014 11:15:33.309363 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-jfzdq" Oct 14 11:15:33 crc kubenswrapper[4698]: I1014 11:15:33.367942 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-ddb98f99b-ks5tw_d6a101ad-e350-4964-a786-91072a6776e8/manager/0.log" Oct 14 11:15:33 crc kubenswrapper[4698]: I1014 11:15:33.382953 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-59578bc799-nb7fk_f55ae8f2-2a7c-4158-b125-2121c37fc874/kube-rbac-proxy/0.log" Oct 14 11:15:33 crc kubenswrapper[4698]: I1014 11:15:33.596923 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-59578bc799-nb7fk_f55ae8f2-2a7c-4158-b125-2121c37fc874/manager/0.log" Oct 14 11:15:33 crc kubenswrapper[4698]: I1014 11:15:33.895065 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-jfzdq"] Oct 14 11:15:33 crc kubenswrapper[4698]: I1014 11:15:33.962330 4698 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-5777b4f897-nds58_cca3d0fd-d9aa-428f-95f2-14238b7cf627/kube-rbac-proxy/0.log" Oct 14 11:15:34 crc kubenswrapper[4698]: I1014 11:15:34.008253 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-5777b4f897-nds58_cca3d0fd-d9aa-428f-95f2-14238b7cf627/manager/0.log" Oct 14 11:15:34 crc kubenswrapper[4698]: I1014 11:15:34.239848 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-797d478b46-kmzfd_0660342f-b230-41a7-a2f8-44cd75696095/manager/0.log" Oct 14 11:15:34 crc kubenswrapper[4698]: I1014 11:15:34.275819 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-797d478b46-kmzfd_0660342f-b230-41a7-a2f8-44cd75696095/kube-rbac-proxy/0.log" Oct 14 11:15:34 crc kubenswrapper[4698]: I1014 11:15:34.396453 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-57bb74c7bf-c49pm_84ef40a7-48ce-4d65-9e34-5ac4e4f0b0b7/kube-rbac-proxy/0.log" Oct 14 11:15:34 crc kubenswrapper[4698]: I1014 11:15:34.547615 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-6d7c7ddf95-zfmdv_442ecb91-0479-42a8-94ba-5be7d8cea79f/kube-rbac-proxy/0.log" Oct 14 11:15:34 crc kubenswrapper[4698]: I1014 11:15:34.631164 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-57bb74c7bf-c49pm_84ef40a7-48ce-4d65-9e34-5ac4e4f0b0b7/manager/0.log" Oct 14 11:15:34 crc kubenswrapper[4698]: I1014 11:15:34.639504 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-6d7c7ddf95-zfmdv_442ecb91-0479-42a8-94ba-5be7d8cea79f/manager/0.log" Oct 14 11:15:34 crc kubenswrapper[4698]: I1014 11:15:34.798212 
4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-6cc7fb757dblfbc_2486fbf6-b25f-4bc3-932d-5ade782da654/kube-rbac-proxy/0.log" Oct 14 11:15:34 crc kubenswrapper[4698]: I1014 11:15:34.841853 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-6cc7fb757dblfbc_2486fbf6-b25f-4bc3-932d-5ade782da654/manager/0.log" Oct 14 11:15:34 crc kubenswrapper[4698]: I1014 11:15:34.883212 4698 generic.go:334] "Generic (PLEG): container finished" podID="09cd3202-2758-4387-8689-1359ed419683" containerID="4e3b08bf681a32e7ff335b9e18e41600f81d5d2468c654195b825cad612c6824" exitCode=0 Oct 14 11:15:34 crc kubenswrapper[4698]: I1014 11:15:34.883265 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jfzdq" event={"ID":"09cd3202-2758-4387-8689-1359ed419683","Type":"ContainerDied","Data":"4e3b08bf681a32e7ff335b9e18e41600f81d5d2468c654195b825cad612c6824"} Oct 14 11:15:34 crc kubenswrapper[4698]: I1014 11:15:34.883297 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jfzdq" event={"ID":"09cd3202-2758-4387-8689-1359ed419683","Type":"ContainerStarted","Data":"3c2ca3bd5d9c7549fb4720e2ea7af1527915d50dca357d29d8fdd1e571c0cd9c"} Oct 14 11:15:34 crc kubenswrapper[4698]: I1014 11:15:34.972662 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-768cc76f8b-7jr79_ecf62cd7-15b2-4bcc-aadd-1c982c7149e7/kube-rbac-proxy/0.log" Oct 14 11:15:35 crc kubenswrapper[4698]: I1014 11:15:35.029989 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-operator-7fc68b75ff-gn564_8e9ffe4b-a420-4553-b9d1-90fbc4ed2fb1/kube-rbac-proxy/0.log" Oct 14 11:15:35 crc kubenswrapper[4698]: I1014 11:15:35.307658 4698 log.go:25] "Finished parsing log 
file" path="/var/log/pods/openstack-operators_openstack-operator-controller-operator-7fc68b75ff-gn564_8e9ffe4b-a420-4553-b9d1-90fbc4ed2fb1/operator/0.log" Oct 14 11:15:35 crc kubenswrapper[4698]: I1014 11:15:35.321292 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-2wch9_9e25898b-e095-4f25-be09-70befbd919b5/registry-server/0.log" Oct 14 11:15:35 crc kubenswrapper[4698]: I1014 11:15:35.505258 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-869cc7797f-xpv9w_3e3e37b3-e0ed-479a-9124-aa6c814a1030/kube-rbac-proxy/0.log" Oct 14 11:15:35 crc kubenswrapper[4698]: I1014 11:15:35.612180 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-664664cb68-wdrpw_4bdceb7a-7a1f-4c0b-a70d-787a610f1d3a/kube-rbac-proxy/0.log" Oct 14 11:15:35 crc kubenswrapper[4698]: I1014 11:15:35.638921 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-869cc7797f-xpv9w_3e3e37b3-e0ed-479a-9124-aa6c814a1030/manager/0.log" Oct 14 11:15:35 crc kubenswrapper[4698]: I1014 11:15:35.804798 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-664664cb68-wdrpw_4bdceb7a-7a1f-4c0b-a70d-787a610f1d3a/manager/0.log" Oct 14 11:15:35 crc kubenswrapper[4698]: I1014 11:15:35.966118 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-5f97d8c699-zctmt_052a38cb-bdfa-46de-ab53-e81b2f014b1d/operator/0.log" Oct 14 11:15:36 crc kubenswrapper[4698]: I1014 11:15:36.141950 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-5f4d5dfdc6-7p5xj_28b97988-e327-4c7a-aab5-5985bf4a675d/kube-rbac-proxy/0.log" Oct 14 11:15:36 crc kubenswrapper[4698]: I1014 11:15:36.207204 4698 
log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-5f4d5dfdc6-7p5xj_28b97988-e327-4c7a-aab5-5985bf4a675d/manager/0.log" Oct 14 11:15:36 crc kubenswrapper[4698]: I1014 11:15:36.277105 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-578874c84d-xbvhq_e93508a8-6ee5-4950-8cea-7c3599b7e1ec/kube-rbac-proxy/0.log" Oct 14 11:15:36 crc kubenswrapper[4698]: I1014 11:15:36.451176 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-768cc76f8b-7jr79_ecf62cd7-15b2-4bcc-aadd-1c982c7149e7/manager/0.log" Oct 14 11:15:36 crc kubenswrapper[4698]: I1014 11:15:36.471986 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-578874c84d-xbvhq_e93508a8-6ee5-4950-8cea-7c3599b7e1ec/manager/0.log" Oct 14 11:15:36 crc kubenswrapper[4698]: I1014 11:15:36.556701 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-ffcdd6c94-g5fmn_a969812c-8490-4e43-ab00-73c8254c5b21/manager/0.log" Oct 14 11:15:36 crc kubenswrapper[4698]: I1014 11:15:36.561846 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-ffcdd6c94-g5fmn_a969812c-8490-4e43-ab00-73c8254c5b21/kube-rbac-proxy/0.log" Oct 14 11:15:36 crc kubenswrapper[4698]: I1014 11:15:36.670738 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-646675d848-n4xkf_5b887a4b-1049-4b80-8613-89ef2f446df4/kube-rbac-proxy/0.log" Oct 14 11:15:36 crc kubenswrapper[4698]: I1014 11:15:36.714365 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-646675d848-n4xkf_5b887a4b-1049-4b80-8613-89ef2f446df4/manager/0.log" Oct 14 11:15:36 crc 
kubenswrapper[4698]: I1014 11:15:36.906354 4698 generic.go:334] "Generic (PLEG): container finished" podID="09cd3202-2758-4387-8689-1359ed419683" containerID="d9b78079791e335e7d89d32e75186ca3960cd6895ea49455e1e115c6e497dff0" exitCode=0 Oct 14 11:15:36 crc kubenswrapper[4698]: I1014 11:15:36.906432 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jfzdq" event={"ID":"09cd3202-2758-4387-8689-1359ed419683","Type":"ContainerDied","Data":"d9b78079791e335e7d89d32e75186ca3960cd6895ea49455e1e115c6e497dff0"} Oct 14 11:15:37 crc kubenswrapper[4698]: I1014 11:15:37.019214 4698 scope.go:117] "RemoveContainer" containerID="6bf915c0eb83523a7704ee4a405fd8ffeef326ab1d420e2bdc7a7ecf61f24f91" Oct 14 11:15:37 crc kubenswrapper[4698]: E1014 11:15:37.019471 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lp4sk_openshift-machine-config-operator(c359a8fc-1e2f-49af-8da2-719d52bd969a)\"" pod="openshift-machine-config-operator/machine-config-daemon-lp4sk" podUID="c359a8fc-1e2f-49af-8da2-719d52bd969a" Oct 14 11:15:38 crc kubenswrapper[4698]: I1014 11:15:38.927889 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jfzdq" event={"ID":"09cd3202-2758-4387-8689-1359ed419683","Type":"ContainerStarted","Data":"00c76adb1fe1d30095821f99573ff16df94550361d87cb60ccb8fef3cb1f179c"} Oct 14 11:15:38 crc kubenswrapper[4698]: I1014 11:15:38.957167 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-jfzdq" podStartSLOduration=4.085649435 podStartE2EDuration="6.957146357s" podCreationTimestamp="2025-10-14 11:15:32 +0000 UTC" firstStartedPulling="2025-10-14 11:15:34.885386292 +0000 UTC m=+4716.582685708" lastFinishedPulling="2025-10-14 11:15:37.756883214 +0000 
UTC m=+4719.454182630" observedRunningTime="2025-10-14 11:15:38.951157253 +0000 UTC m=+4720.648456679" watchObservedRunningTime="2025-10-14 11:15:38.957146357 +0000 UTC m=+4720.654445773" Oct 14 11:15:43 crc kubenswrapper[4698]: I1014 11:15:43.310072 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-jfzdq" Oct 14 11:15:43 crc kubenswrapper[4698]: I1014 11:15:43.310968 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-jfzdq" Oct 14 11:15:43 crc kubenswrapper[4698]: I1014 11:15:43.384900 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-jfzdq" Oct 14 11:15:44 crc kubenswrapper[4698]: I1014 11:15:44.659564 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-jfzdq" Oct 14 11:15:44 crc kubenswrapper[4698]: I1014 11:15:44.721346 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-jfzdq"] Oct 14 11:15:45 crc kubenswrapper[4698]: I1014 11:15:45.992991 4698 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-jfzdq" podUID="09cd3202-2758-4387-8689-1359ed419683" containerName="registry-server" containerID="cri-o://00c76adb1fe1d30095821f99573ff16df94550361d87cb60ccb8fef3cb1f179c" gracePeriod=2 Oct 14 11:15:47 crc kubenswrapper[4698]: I1014 11:15:47.003292 4698 generic.go:334] "Generic (PLEG): container finished" podID="09cd3202-2758-4387-8689-1359ed419683" containerID="00c76adb1fe1d30095821f99573ff16df94550361d87cb60ccb8fef3cb1f179c" exitCode=0 Oct 14 11:15:47 crc kubenswrapper[4698]: I1014 11:15:47.003388 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jfzdq" 
event={"ID":"09cd3202-2758-4387-8689-1359ed419683","Type":"ContainerDied","Data":"00c76adb1fe1d30095821f99573ff16df94550361d87cb60ccb8fef3cb1f179c"} Oct 14 11:15:47 crc kubenswrapper[4698]: I1014 11:15:47.289188 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-jfzdq" Oct 14 11:15:47 crc kubenswrapper[4698]: I1014 11:15:47.376889 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7kb8b\" (UniqueName: \"kubernetes.io/projected/09cd3202-2758-4387-8689-1359ed419683-kube-api-access-7kb8b\") pod \"09cd3202-2758-4387-8689-1359ed419683\" (UID: \"09cd3202-2758-4387-8689-1359ed419683\") " Oct 14 11:15:47 crc kubenswrapper[4698]: I1014 11:15:47.376929 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/09cd3202-2758-4387-8689-1359ed419683-utilities\") pod \"09cd3202-2758-4387-8689-1359ed419683\" (UID: \"09cd3202-2758-4387-8689-1359ed419683\") " Oct 14 11:15:47 crc kubenswrapper[4698]: I1014 11:15:47.376967 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/09cd3202-2758-4387-8689-1359ed419683-catalog-content\") pod \"09cd3202-2758-4387-8689-1359ed419683\" (UID: \"09cd3202-2758-4387-8689-1359ed419683\") " Oct 14 11:15:47 crc kubenswrapper[4698]: I1014 11:15:47.378377 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/09cd3202-2758-4387-8689-1359ed419683-utilities" (OuterVolumeSpecName: "utilities") pod "09cd3202-2758-4387-8689-1359ed419683" (UID: "09cd3202-2758-4387-8689-1359ed419683"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 11:15:47 crc kubenswrapper[4698]: I1014 11:15:47.396921 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09cd3202-2758-4387-8689-1359ed419683-kube-api-access-7kb8b" (OuterVolumeSpecName: "kube-api-access-7kb8b") pod "09cd3202-2758-4387-8689-1359ed419683" (UID: "09cd3202-2758-4387-8689-1359ed419683"). InnerVolumeSpecName "kube-api-access-7kb8b". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 11:15:47 crc kubenswrapper[4698]: I1014 11:15:47.465536 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/09cd3202-2758-4387-8689-1359ed419683-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "09cd3202-2758-4387-8689-1359ed419683" (UID: "09cd3202-2758-4387-8689-1359ed419683"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 11:15:47 crc kubenswrapper[4698]: I1014 11:15:47.478805 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7kb8b\" (UniqueName: \"kubernetes.io/projected/09cd3202-2758-4387-8689-1359ed419683-kube-api-access-7kb8b\") on node \"crc\" DevicePath \"\"" Oct 14 11:15:47 crc kubenswrapper[4698]: I1014 11:15:47.478835 4698 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/09cd3202-2758-4387-8689-1359ed419683-utilities\") on node \"crc\" DevicePath \"\"" Oct 14 11:15:47 crc kubenswrapper[4698]: I1014 11:15:47.478845 4698 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/09cd3202-2758-4387-8689-1359ed419683-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 14 11:15:48 crc kubenswrapper[4698]: I1014 11:15:48.017882 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jfzdq" 
event={"ID":"09cd3202-2758-4387-8689-1359ed419683","Type":"ContainerDied","Data":"3c2ca3bd5d9c7549fb4720e2ea7af1527915d50dca357d29d8fdd1e571c0cd9c"} Oct 14 11:15:48 crc kubenswrapper[4698]: I1014 11:15:48.018204 4698 scope.go:117] "RemoveContainer" containerID="00c76adb1fe1d30095821f99573ff16df94550361d87cb60ccb8fef3cb1f179c" Oct 14 11:15:48 crc kubenswrapper[4698]: I1014 11:15:48.018421 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-jfzdq" Oct 14 11:15:48 crc kubenswrapper[4698]: I1014 11:15:48.057787 4698 scope.go:117] "RemoveContainer" containerID="d9b78079791e335e7d89d32e75186ca3960cd6895ea49455e1e115c6e497dff0" Oct 14 11:15:48 crc kubenswrapper[4698]: I1014 11:15:48.075013 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-jfzdq"] Oct 14 11:15:48 crc kubenswrapper[4698]: I1014 11:15:48.096932 4698 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-jfzdq"] Oct 14 11:15:48 crc kubenswrapper[4698]: I1014 11:15:48.101448 4698 scope.go:117] "RemoveContainer" containerID="4e3b08bf681a32e7ff335b9e18e41600f81d5d2468c654195b825cad612c6824" Oct 14 11:15:49 crc kubenswrapper[4698]: I1014 11:15:49.038392 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09cd3202-2758-4387-8689-1359ed419683" path="/var/lib/kubelet/pods/09cd3202-2758-4387-8689-1359ed419683/volumes" Oct 14 11:15:50 crc kubenswrapper[4698]: I1014 11:15:50.018325 4698 scope.go:117] "RemoveContainer" containerID="6bf915c0eb83523a7704ee4a405fd8ffeef326ab1d420e2bdc7a7ecf61f24f91" Oct 14 11:15:50 crc kubenswrapper[4698]: E1014 11:15:50.019198 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-lp4sk_openshift-machine-config-operator(c359a8fc-1e2f-49af-8da2-719d52bd969a)\"" pod="openshift-machine-config-operator/machine-config-daemon-lp4sk" podUID="c359a8fc-1e2f-49af-8da2-719d52bd969a" Oct 14 11:15:55 crc kubenswrapper[4698]: I1014 11:15:55.658453 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-5k2p9_c1ec2959-9fc5-4b98-8f9c-c21fc57e14d7/control-plane-machine-set-operator/0.log" Oct 14 11:15:55 crc kubenswrapper[4698]: I1014 11:15:55.831605 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-wmxzf_77041b5d-f53d-425c-b824-a61833af677c/kube-rbac-proxy/0.log" Oct 14 11:15:55 crc kubenswrapper[4698]: I1014 11:15:55.883754 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-wmxzf_77041b5d-f53d-425c-b824-a61833af677c/machine-api-operator/0.log" Oct 14 11:16:01 crc kubenswrapper[4698]: I1014 11:16:01.017091 4698 scope.go:117] "RemoveContainer" containerID="6bf915c0eb83523a7704ee4a405fd8ffeef326ab1d420e2bdc7a7ecf61f24f91" Oct 14 11:16:01 crc kubenswrapper[4698]: E1014 11:16:01.017838 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lp4sk_openshift-machine-config-operator(c359a8fc-1e2f-49af-8da2-719d52bd969a)\"" pod="openshift-machine-config-operator/machine-config-daemon-lp4sk" podUID="c359a8fc-1e2f-49af-8da2-719d52bd969a" Oct 14 11:16:08 crc kubenswrapper[4698]: I1014 11:16:08.091847 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-5b446d88c5-6hh8m_4e2060ed-feb8-4937-a34d-58686e380b4b/cert-manager-controller/0.log" Oct 14 11:16:08 crc kubenswrapper[4698]: I1014 11:16:08.242432 4698 log.go:25] "Finished 
parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-7f985d654d-f44qv_c425683e-dad1-4ebb-8992-8a979383addb/cert-manager-cainjector/0.log" Oct 14 11:16:08 crc kubenswrapper[4698]: I1014 11:16:08.305572 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-5655c58dd6-92ws8_1e6c98fe-c2ad-4723-ab60-1af5e7e3e58c/cert-manager-webhook/0.log" Oct 14 11:16:16 crc kubenswrapper[4698]: I1014 11:16:16.017045 4698 scope.go:117] "RemoveContainer" containerID="6bf915c0eb83523a7704ee4a405fd8ffeef326ab1d420e2bdc7a7ecf61f24f91" Oct 14 11:16:16 crc kubenswrapper[4698]: E1014 11:16:16.017999 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lp4sk_openshift-machine-config-operator(c359a8fc-1e2f-49af-8da2-719d52bd969a)\"" pod="openshift-machine-config-operator/machine-config-daemon-lp4sk" podUID="c359a8fc-1e2f-49af-8da2-719d52bd969a" Oct 14 11:16:22 crc kubenswrapper[4698]: I1014 11:16:22.456076 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-6b874cbd85-5zqgv_afaa96d5-b448-47b4-ac36-b8d4d232441b/nmstate-console-plugin/0.log" Oct 14 11:16:22 crc kubenswrapper[4698]: I1014 11:16:22.651733 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-tb9f6_359df233-fb7a-4d84-888a-d6fa99ed8b55/nmstate-handler/0.log" Oct 14 11:16:22 crc kubenswrapper[4698]: I1014 11:16:22.735147 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-fdff9cb8d-nq56c_a1d21132-5dfd-4813-9c39-d4be39666a38/kube-rbac-proxy/0.log" Oct 14 11:16:22 crc kubenswrapper[4698]: I1014 11:16:22.771320 4698 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-nmstate_nmstate-metrics-fdff9cb8d-nq56c_a1d21132-5dfd-4813-9c39-d4be39666a38/nmstate-metrics/0.log" Oct 14 11:16:23 crc kubenswrapper[4698]: I1014 11:16:23.004555 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-858ddd8f98-669hf_fd11f615-dce1-42f4-8470-d1117fe3305b/nmstate-operator/0.log" Oct 14 11:16:23 crc kubenswrapper[4698]: I1014 11:16:23.011450 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-6cdbc54649-8vhjp_3da8a241-b3ad-480d-aff7-f571b43fb673/nmstate-webhook/0.log" Oct 14 11:16:30 crc kubenswrapper[4698]: I1014 11:16:30.017701 4698 scope.go:117] "RemoveContainer" containerID="6bf915c0eb83523a7704ee4a405fd8ffeef326ab1d420e2bdc7a7ecf61f24f91" Oct 14 11:16:30 crc kubenswrapper[4698]: E1014 11:16:30.018874 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lp4sk_openshift-machine-config-operator(c359a8fc-1e2f-49af-8da2-719d52bd969a)\"" pod="openshift-machine-config-operator/machine-config-daemon-lp4sk" podUID="c359a8fc-1e2f-49af-8da2-719d52bd969a" Oct 14 11:16:38 crc kubenswrapper[4698]: I1014 11:16:38.592160 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-68d546b9d8-4rhbc_e165bb03-3546-4ae5-8c3c-5605cae81371/kube-rbac-proxy/0.log" Oct 14 11:16:38 crc kubenswrapper[4698]: I1014 11:16:38.661650 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-68d546b9d8-4rhbc_e165bb03-3546-4ae5-8c3c-5605cae81371/controller/0.log" Oct 14 11:16:38 crc kubenswrapper[4698]: I1014 11:16:38.790860 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-64bf5d555-wqqdm_db7dd36b-e7d3-4eed-b55f-cc3316be8e85/frr-k8s-webhook-server/0.log" Oct 14 11:16:38 
crc kubenswrapper[4698]: I1014 11:16:38.860018 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-wpbs6_371eed8f-9f1c-4114-98c6-33c8abf3fa23/cp-frr-files/0.log" Oct 14 11:16:39 crc kubenswrapper[4698]: I1014 11:16:39.038849 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-wpbs6_371eed8f-9f1c-4114-98c6-33c8abf3fa23/cp-frr-files/0.log" Oct 14 11:16:39 crc kubenswrapper[4698]: I1014 11:16:39.058210 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-wpbs6_371eed8f-9f1c-4114-98c6-33c8abf3fa23/cp-reloader/0.log" Oct 14 11:16:39 crc kubenswrapper[4698]: I1014 11:16:39.075739 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-wpbs6_371eed8f-9f1c-4114-98c6-33c8abf3fa23/cp-metrics/0.log" Oct 14 11:16:39 crc kubenswrapper[4698]: I1014 11:16:39.083906 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-wpbs6_371eed8f-9f1c-4114-98c6-33c8abf3fa23/cp-reloader/0.log" Oct 14 11:16:39 crc kubenswrapper[4698]: I1014 11:16:39.312220 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-wpbs6_371eed8f-9f1c-4114-98c6-33c8abf3fa23/cp-frr-files/0.log" Oct 14 11:16:39 crc kubenswrapper[4698]: I1014 11:16:39.326183 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-wpbs6_371eed8f-9f1c-4114-98c6-33c8abf3fa23/cp-metrics/0.log" Oct 14 11:16:39 crc kubenswrapper[4698]: I1014 11:16:39.330000 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-wpbs6_371eed8f-9f1c-4114-98c6-33c8abf3fa23/cp-metrics/0.log" Oct 14 11:16:39 crc kubenswrapper[4698]: I1014 11:16:39.350868 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-wpbs6_371eed8f-9f1c-4114-98c6-33c8abf3fa23/cp-reloader/0.log" Oct 14 11:16:39 crc kubenswrapper[4698]: I1014 11:16:39.545342 4698 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_frr-k8s-wpbs6_371eed8f-9f1c-4114-98c6-33c8abf3fa23/cp-frr-files/0.log" Oct 14 11:16:39 crc kubenswrapper[4698]: I1014 11:16:39.552655 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-wpbs6_371eed8f-9f1c-4114-98c6-33c8abf3fa23/cp-reloader/0.log" Oct 14 11:16:39 crc kubenswrapper[4698]: I1014 11:16:39.559503 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-wpbs6_371eed8f-9f1c-4114-98c6-33c8abf3fa23/controller/0.log" Oct 14 11:16:39 crc kubenswrapper[4698]: I1014 11:16:39.592155 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-wpbs6_371eed8f-9f1c-4114-98c6-33c8abf3fa23/cp-metrics/0.log" Oct 14 11:16:39 crc kubenswrapper[4698]: I1014 11:16:39.729707 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-wpbs6_371eed8f-9f1c-4114-98c6-33c8abf3fa23/frr-metrics/0.log" Oct 14 11:16:39 crc kubenswrapper[4698]: I1014 11:16:39.753401 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-wpbs6_371eed8f-9f1c-4114-98c6-33c8abf3fa23/kube-rbac-proxy/0.log" Oct 14 11:16:39 crc kubenswrapper[4698]: I1014 11:16:39.773659 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-wpbs6_371eed8f-9f1c-4114-98c6-33c8abf3fa23/kube-rbac-proxy-frr/0.log" Oct 14 11:16:39 crc kubenswrapper[4698]: I1014 11:16:39.927180 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-wpbs6_371eed8f-9f1c-4114-98c6-33c8abf3fa23/reloader/0.log" Oct 14 11:16:40 crc kubenswrapper[4698]: I1014 11:16:40.029915 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-7556747f48-jxr6w_7f8afa35-0e83-439b-80cb-31f3da9293de/manager/0.log" Oct 14 11:16:40 crc kubenswrapper[4698]: I1014 11:16:40.341006 4698 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_metallb-operator-webhook-server-cd79cbbb8-dcbnr_2cc70ba0-d097-4987-b877-fc209e27f275/webhook-server/0.log" Oct 14 11:16:40 crc kubenswrapper[4698]: I1014 11:16:40.482786 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-847mc_7ab10af0-2cb8-4ff4-bb4c-a186a319ce37/kube-rbac-proxy/0.log" Oct 14 11:16:40 crc kubenswrapper[4698]: I1014 11:16:40.988216 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-847mc_7ab10af0-2cb8-4ff4-bb4c-a186a319ce37/speaker/0.log" Oct 14 11:16:41 crc kubenswrapper[4698]: I1014 11:16:41.016949 4698 scope.go:117] "RemoveContainer" containerID="6bf915c0eb83523a7704ee4a405fd8ffeef326ab1d420e2bdc7a7ecf61f24f91" Oct 14 11:16:41 crc kubenswrapper[4698]: E1014 11:16:41.017238 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lp4sk_openshift-machine-config-operator(c359a8fc-1e2f-49af-8da2-719d52bd969a)\"" pod="openshift-machine-config-operator/machine-config-daemon-lp4sk" podUID="c359a8fc-1e2f-49af-8da2-719d52bd969a" Oct 14 11:16:41 crc kubenswrapper[4698]: I1014 11:16:41.538438 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-wpbs6_371eed8f-9f1c-4114-98c6-33c8abf3fa23/frr/0.log" Oct 14 11:16:53 crc kubenswrapper[4698]: I1014 11:16:53.752675 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2ml8md_fec6695b-3ca9-4ae5-83f8-23cf2289cb14/util/0.log" Oct 14 11:16:53 crc kubenswrapper[4698]: I1014 11:16:53.991112 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2ml8md_fec6695b-3ca9-4ae5-83f8-23cf2289cb14/pull/0.log" Oct 14 11:16:53 crc 
kubenswrapper[4698]: I1014 11:16:53.994898 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2ml8md_fec6695b-3ca9-4ae5-83f8-23cf2289cb14/pull/0.log"
Oct 14 11:16:53 crc kubenswrapper[4698]: I1014 11:16:53.998332 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2ml8md_fec6695b-3ca9-4ae5-83f8-23cf2289cb14/util/0.log"
Oct 14 11:16:54 crc kubenswrapper[4698]: I1014 11:16:54.179995 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2ml8md_fec6695b-3ca9-4ae5-83f8-23cf2289cb14/util/0.log"
Oct 14 11:16:54 crc kubenswrapper[4698]: I1014 11:16:54.197252 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2ml8md_fec6695b-3ca9-4ae5-83f8-23cf2289cb14/pull/0.log"
Oct 14 11:16:54 crc kubenswrapper[4698]: I1014 11:16:54.224704 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2ml8md_fec6695b-3ca9-4ae5-83f8-23cf2289cb14/extract/0.log"
Oct 14 11:16:54 crc kubenswrapper[4698]: I1014 11:16:54.392383 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-wpc8b_1e06ea61-0f4c-4611-a4d8-dcf08a89c881/extract-utilities/0.log"
Oct 14 11:16:54 crc kubenswrapper[4698]: I1014 11:16:54.627579 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-wpc8b_1e06ea61-0f4c-4611-a4d8-dcf08a89c881/extract-content/0.log"
Oct 14 11:16:54 crc kubenswrapper[4698]: I1014 11:16:54.627853 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-wpc8b_1e06ea61-0f4c-4611-a4d8-dcf08a89c881/extract-content/0.log"
Oct 14 11:16:54 crc kubenswrapper[4698]: I1014 11:16:54.646913 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-wpc8b_1e06ea61-0f4c-4611-a4d8-dcf08a89c881/extract-utilities/0.log"
Oct 14 11:16:54 crc kubenswrapper[4698]: I1014 11:16:54.824218 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-wpc8b_1e06ea61-0f4c-4611-a4d8-dcf08a89c881/extract-utilities/0.log"
Oct 14 11:16:54 crc kubenswrapper[4698]: I1014 11:16:54.842866 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-wpc8b_1e06ea61-0f4c-4611-a4d8-dcf08a89c881/extract-content/0.log"
Oct 14 11:16:55 crc kubenswrapper[4698]: I1014 11:16:55.016668 4698 scope.go:117] "RemoveContainer" containerID="6bf915c0eb83523a7704ee4a405fd8ffeef326ab1d420e2bdc7a7ecf61f24f91"
Oct 14 11:16:55 crc kubenswrapper[4698]: E1014 11:16:55.017380 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lp4sk_openshift-machine-config-operator(c359a8fc-1e2f-49af-8da2-719d52bd969a)\"" pod="openshift-machine-config-operator/machine-config-daemon-lp4sk" podUID="c359a8fc-1e2f-49af-8da2-719d52bd969a"
Oct 14 11:16:55 crc kubenswrapper[4698]: I1014 11:16:55.036395 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-8qgcm_a05b8344-c3cc-41ea-88c6-e13f29ebedbb/extract-utilities/0.log"
Oct 14 11:16:55 crc kubenswrapper[4698]: I1014 11:16:55.285572 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-8qgcm_a05b8344-c3cc-41ea-88c6-e13f29ebedbb/extract-utilities/0.log"
Oct 14 11:16:55 crc kubenswrapper[4698]: I1014 11:16:55.392591 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-8qgcm_a05b8344-c3cc-41ea-88c6-e13f29ebedbb/extract-content/0.log"
Oct 14 11:16:55 crc kubenswrapper[4698]: I1014 11:16:55.396420 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-8qgcm_a05b8344-c3cc-41ea-88c6-e13f29ebedbb/extract-content/0.log"
Oct 14 11:16:55 crc kubenswrapper[4698]: I1014 11:16:55.575848 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-8qgcm_a05b8344-c3cc-41ea-88c6-e13f29ebedbb/extract-utilities/0.log"
Oct 14 11:16:55 crc kubenswrapper[4698]: I1014 11:16:55.584387 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-wpc8b_1e06ea61-0f4c-4611-a4d8-dcf08a89c881/registry-server/0.log"
Oct 14 11:16:55 crc kubenswrapper[4698]: I1014 11:16:55.697875 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-8qgcm_a05b8344-c3cc-41ea-88c6-e13f29ebedbb/extract-content/0.log"
Oct 14 11:16:55 crc kubenswrapper[4698]: I1014 11:16:55.871434 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835crcqqd_51f6a7d5-1c06-4f2b-9f66-322882e6db29/util/0.log"
Oct 14 11:16:55 crc kubenswrapper[4698]: I1014 11:16:55.985885 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-8qgcm_a05b8344-c3cc-41ea-88c6-e13f29ebedbb/registry-server/0.log"
Oct 14 11:16:56 crc kubenswrapper[4698]: I1014 11:16:56.129420 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835crcqqd_51f6a7d5-1c06-4f2b-9f66-322882e6db29/pull/0.log"
Oct 14 11:16:56 crc kubenswrapper[4698]: I1014 11:16:56.143526 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835crcqqd_51f6a7d5-1c06-4f2b-9f66-322882e6db29/util/0.log"
Oct 14 11:16:56 crc kubenswrapper[4698]: I1014 11:16:56.146065 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835crcqqd_51f6a7d5-1c06-4f2b-9f66-322882e6db29/pull/0.log"
Oct 14 11:16:56 crc kubenswrapper[4698]: I1014 11:16:56.345666 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835crcqqd_51f6a7d5-1c06-4f2b-9f66-322882e6db29/extract/0.log"
Oct 14 11:16:56 crc kubenswrapper[4698]: I1014 11:16:56.354547 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835crcqqd_51f6a7d5-1c06-4f2b-9f66-322882e6db29/pull/0.log"
Oct 14 11:16:56 crc kubenswrapper[4698]: I1014 11:16:56.360505 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835crcqqd_51f6a7d5-1c06-4f2b-9f66-322882e6db29/util/0.log"
Oct 14 11:16:56 crc kubenswrapper[4698]: I1014 11:16:56.578151 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-n6rkb_717ff5f8-f2f0-46ca-86e2-dba0533d1f69/marketplace-operator/0.log"
Oct 14 11:16:56 crc kubenswrapper[4698]: I1014 11:16:56.624814 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-lqffv_027d093d-8507-4449-9248-3c1da8a30e2e/extract-utilities/0.log"
Oct 14 11:16:56 crc kubenswrapper[4698]: I1014 11:16:56.854062 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-lqffv_027d093d-8507-4449-9248-3c1da8a30e2e/extract-content/0.log"
Oct 14 11:16:56 crc kubenswrapper[4698]: I1014 11:16:56.855662 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-lqffv_027d093d-8507-4449-9248-3c1da8a30e2e/extract-content/0.log"
Oct 14 11:16:56 crc kubenswrapper[4698]: I1014 11:16:56.856011 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-lqffv_027d093d-8507-4449-9248-3c1da8a30e2e/extract-utilities/0.log"
Oct 14 11:16:57 crc kubenswrapper[4698]: I1014 11:16:57.038980 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-lqffv_027d093d-8507-4449-9248-3c1da8a30e2e/extract-utilities/0.log"
Oct 14 11:16:57 crc kubenswrapper[4698]: I1014 11:16:57.063096 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-lqffv_027d093d-8507-4449-9248-3c1da8a30e2e/extract-content/0.log"
Oct 14 11:16:57 crc kubenswrapper[4698]: I1014 11:16:57.250874 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-4r85p_48bfadf8-08ed-4688-917d-818b9f91abcf/extract-utilities/0.log"
Oct 14 11:16:57 crc kubenswrapper[4698]: I1014 11:16:57.262998 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-lqffv_027d093d-8507-4449-9248-3c1da8a30e2e/registry-server/0.log"
Oct 14 11:16:57 crc kubenswrapper[4698]: I1014 11:16:57.421671 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-4r85p_48bfadf8-08ed-4688-917d-818b9f91abcf/extract-utilities/0.log"
Oct 14 11:16:57 crc kubenswrapper[4698]: I1014 11:16:57.443535 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-4r85p_48bfadf8-08ed-4688-917d-818b9f91abcf/extract-content/0.log"
Oct 14 11:16:57 crc kubenswrapper[4698]: I1014 11:16:57.449720 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-4r85p_48bfadf8-08ed-4688-917d-818b9f91abcf/extract-content/0.log"
Oct 14 11:16:57 crc kubenswrapper[4698]: I1014 11:16:57.612596 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-4r85p_48bfadf8-08ed-4688-917d-818b9f91abcf/extract-content/0.log"
Oct 14 11:16:57 crc kubenswrapper[4698]: I1014 11:16:57.625082 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-4r85p_48bfadf8-08ed-4688-917d-818b9f91abcf/extract-utilities/0.log"
Oct 14 11:16:58 crc kubenswrapper[4698]: I1014 11:16:58.171983 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-4r85p_48bfadf8-08ed-4688-917d-818b9f91abcf/registry-server/0.log"
Oct 14 11:17:07 crc kubenswrapper[4698]: I1014 11:17:07.017824 4698 scope.go:117] "RemoveContainer" containerID="6bf915c0eb83523a7704ee4a405fd8ffeef326ab1d420e2bdc7a7ecf61f24f91"
Oct 14 11:17:07 crc kubenswrapper[4698]: E1014 11:17:07.018786 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lp4sk_openshift-machine-config-operator(c359a8fc-1e2f-49af-8da2-719d52bd969a)\"" pod="openshift-machine-config-operator/machine-config-daemon-lp4sk" podUID="c359a8fc-1e2f-49af-8da2-719d52bd969a"
Oct 14 11:17:18 crc kubenswrapper[4698]: I1014 11:17:18.017804 4698 scope.go:117] "RemoveContainer" containerID="6bf915c0eb83523a7704ee4a405fd8ffeef326ab1d420e2bdc7a7ecf61f24f91"
Oct 14 11:17:18 crc kubenswrapper[4698]: E1014 11:17:18.018681 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lp4sk_openshift-machine-config-operator(c359a8fc-1e2f-49af-8da2-719d52bd969a)\"" pod="openshift-machine-config-operator/machine-config-daemon-lp4sk" podUID="c359a8fc-1e2f-49af-8da2-719d52bd969a"
Oct 14 11:17:33 crc kubenswrapper[4698]: I1014 11:17:33.019198 4698 scope.go:117] "RemoveContainer" containerID="6bf915c0eb83523a7704ee4a405fd8ffeef326ab1d420e2bdc7a7ecf61f24f91"
Oct 14 11:17:33 crc kubenswrapper[4698]: E1014 11:17:33.020034 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lp4sk_openshift-machine-config-operator(c359a8fc-1e2f-49af-8da2-719d52bd969a)\"" pod="openshift-machine-config-operator/machine-config-daemon-lp4sk" podUID="c359a8fc-1e2f-49af-8da2-719d52bd969a"
Oct 14 11:17:45 crc kubenswrapper[4698]: I1014 11:17:45.018517 4698 scope.go:117] "RemoveContainer" containerID="6bf915c0eb83523a7704ee4a405fd8ffeef326ab1d420e2bdc7a7ecf61f24f91"
Oct 14 11:17:45 crc kubenswrapper[4698]: E1014 11:17:45.021041 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lp4sk_openshift-machine-config-operator(c359a8fc-1e2f-49af-8da2-719d52bd969a)\"" pod="openshift-machine-config-operator/machine-config-daemon-lp4sk" podUID="c359a8fc-1e2f-49af-8da2-719d52bd969a"
Oct 14 11:18:00 crc kubenswrapper[4698]: I1014 11:18:00.017270 4698 scope.go:117] "RemoveContainer" containerID="6bf915c0eb83523a7704ee4a405fd8ffeef326ab1d420e2bdc7a7ecf61f24f91"
Oct 14 11:18:01 crc kubenswrapper[4698]: I1014 11:18:01.319917 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-lp4sk" event={"ID":"c359a8fc-1e2f-49af-8da2-719d52bd969a","Type":"ContainerStarted","Data":"f4b03268cb180ac97d7c15f1e56610d8fe5c9eb93165b852cd481e9e707e50a5"}
Oct 14 11:19:07 crc kubenswrapper[4698]: I1014 11:19:07.982535 4698 generic.go:334] "Generic (PLEG): container finished" podID="bdc46bca-9ee2-4b01-8713-11880ff4360a" containerID="3749c4b445dcb81b5381429a225023933d3e42ce65c84cebeb2aef94061438f4" exitCode=0
Oct 14 11:19:07 crc kubenswrapper[4698]: I1014 11:19:07.982635 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-bd5xv/must-gather-g4wn2" event={"ID":"bdc46bca-9ee2-4b01-8713-11880ff4360a","Type":"ContainerDied","Data":"3749c4b445dcb81b5381429a225023933d3e42ce65c84cebeb2aef94061438f4"}
Oct 14 11:19:07 crc kubenswrapper[4698]: I1014 11:19:07.983798 4698 scope.go:117] "RemoveContainer" containerID="3749c4b445dcb81b5381429a225023933d3e42ce65c84cebeb2aef94061438f4"
Oct 14 11:19:08 crc kubenswrapper[4698]: I1014 11:19:08.434488 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-bd5xv_must-gather-g4wn2_bdc46bca-9ee2-4b01-8713-11880ff4360a/gather/0.log"
Oct 14 11:19:10 crc kubenswrapper[4698]: E1014 11:19:10.850033 4698 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.102.83.188:42444->38.102.83.188:44569: write tcp 38.102.83.188:42444->38.102.83.188:44569: write: broken pipe
Oct 14 11:19:16 crc kubenswrapper[4698]: I1014 11:19:16.178510 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-bd5xv/must-gather-g4wn2"]
Oct 14 11:19:16 crc kubenswrapper[4698]: I1014 11:19:16.179368 4698 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-bd5xv/must-gather-g4wn2" podUID="bdc46bca-9ee2-4b01-8713-11880ff4360a" containerName="copy" containerID="cri-o://2d227799eafa178bcb879fa6e6c766234aab6508dbee488db4a4c17cccb84e48" gracePeriod=2
Oct 14 11:19:16 crc kubenswrapper[4698]: I1014 11:19:16.189120 4698 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-bd5xv/must-gather-g4wn2"]
Oct 14 11:19:16 crc kubenswrapper[4698]: I1014 11:19:16.729596 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-bd5xv_must-gather-g4wn2_bdc46bca-9ee2-4b01-8713-11880ff4360a/copy/0.log"
Oct 14 11:19:16 crc kubenswrapper[4698]: I1014 11:19:16.730628 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-bd5xv/must-gather-g4wn2"
Oct 14 11:19:16 crc kubenswrapper[4698]: I1014 11:19:16.877901 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/bdc46bca-9ee2-4b01-8713-11880ff4360a-must-gather-output\") pod \"bdc46bca-9ee2-4b01-8713-11880ff4360a\" (UID: \"bdc46bca-9ee2-4b01-8713-11880ff4360a\") "
Oct 14 11:19:16 crc kubenswrapper[4698]: I1014 11:19:16.878102 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5q4cm\" (UniqueName: \"kubernetes.io/projected/bdc46bca-9ee2-4b01-8713-11880ff4360a-kube-api-access-5q4cm\") pod \"bdc46bca-9ee2-4b01-8713-11880ff4360a\" (UID: \"bdc46bca-9ee2-4b01-8713-11880ff4360a\") "
Oct 14 11:19:16 crc kubenswrapper[4698]: I1014 11:19:16.883234 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bdc46bca-9ee2-4b01-8713-11880ff4360a-kube-api-access-5q4cm" (OuterVolumeSpecName: "kube-api-access-5q4cm") pod "bdc46bca-9ee2-4b01-8713-11880ff4360a" (UID: "bdc46bca-9ee2-4b01-8713-11880ff4360a"). InnerVolumeSpecName "kube-api-access-5q4cm". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 14 11:19:16 crc kubenswrapper[4698]: I1014 11:19:16.982232 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5q4cm\" (UniqueName: \"kubernetes.io/projected/bdc46bca-9ee2-4b01-8713-11880ff4360a-kube-api-access-5q4cm\") on node \"crc\" DevicePath \"\""
Oct 14 11:19:17 crc kubenswrapper[4698]: I1014 11:19:17.083054 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bdc46bca-9ee2-4b01-8713-11880ff4360a-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "bdc46bca-9ee2-4b01-8713-11880ff4360a" (UID: "bdc46bca-9ee2-4b01-8713-11880ff4360a"). InnerVolumeSpecName "must-gather-output". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Oct 14 11:19:17 crc kubenswrapper[4698]: I1014 11:19:17.083308 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/bdc46bca-9ee2-4b01-8713-11880ff4360a-must-gather-output\") pod \"bdc46bca-9ee2-4b01-8713-11880ff4360a\" (UID: \"bdc46bca-9ee2-4b01-8713-11880ff4360a\") "
Oct 14 11:19:17 crc kubenswrapper[4698]: W1014 11:19:17.084586 4698 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/bdc46bca-9ee2-4b01-8713-11880ff4360a/volumes/kubernetes.io~empty-dir/must-gather-output
Oct 14 11:19:17 crc kubenswrapper[4698]: I1014 11:19:17.084709 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bdc46bca-9ee2-4b01-8713-11880ff4360a-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "bdc46bca-9ee2-4b01-8713-11880ff4360a" (UID: "bdc46bca-9ee2-4b01-8713-11880ff4360a"). InnerVolumeSpecName "must-gather-output". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Oct 14 11:19:17 crc kubenswrapper[4698]: I1014 11:19:17.088697 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-bd5xv_must-gather-g4wn2_bdc46bca-9ee2-4b01-8713-11880ff4360a/copy/0.log"
Oct 14 11:19:17 crc kubenswrapper[4698]: I1014 11:19:17.089492 4698 generic.go:334] "Generic (PLEG): container finished" podID="bdc46bca-9ee2-4b01-8713-11880ff4360a" containerID="2d227799eafa178bcb879fa6e6c766234aab6508dbee488db4a4c17cccb84e48" exitCode=143
Oct 14 11:19:17 crc kubenswrapper[4698]: I1014 11:19:17.089547 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-bd5xv/must-gather-g4wn2"
Oct 14 11:19:17 crc kubenswrapper[4698]: I1014 11:19:17.089592 4698 scope.go:117] "RemoveContainer" containerID="2d227799eafa178bcb879fa6e6c766234aab6508dbee488db4a4c17cccb84e48"
Oct 14 11:19:17 crc kubenswrapper[4698]: I1014 11:19:17.110533 4698 scope.go:117] "RemoveContainer" containerID="3749c4b445dcb81b5381429a225023933d3e42ce65c84cebeb2aef94061438f4"
Oct 14 11:19:17 crc kubenswrapper[4698]: I1014 11:19:17.186722 4698 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/bdc46bca-9ee2-4b01-8713-11880ff4360a-must-gather-output\") on node \"crc\" DevicePath \"\""
Oct 14 11:19:17 crc kubenswrapper[4698]: I1014 11:19:17.215745 4698 scope.go:117] "RemoveContainer" containerID="2d227799eafa178bcb879fa6e6c766234aab6508dbee488db4a4c17cccb84e48"
Oct 14 11:19:17 crc kubenswrapper[4698]: E1014 11:19:17.216463 4698 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2d227799eafa178bcb879fa6e6c766234aab6508dbee488db4a4c17cccb84e48\": container with ID starting with 2d227799eafa178bcb879fa6e6c766234aab6508dbee488db4a4c17cccb84e48 not found: ID does not exist" containerID="2d227799eafa178bcb879fa6e6c766234aab6508dbee488db4a4c17cccb84e48"
Oct 14 11:19:17 crc kubenswrapper[4698]: I1014 11:19:17.216518 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2d227799eafa178bcb879fa6e6c766234aab6508dbee488db4a4c17cccb84e48"} err="failed to get container status \"2d227799eafa178bcb879fa6e6c766234aab6508dbee488db4a4c17cccb84e48\": rpc error: code = NotFound desc = could not find container \"2d227799eafa178bcb879fa6e6c766234aab6508dbee488db4a4c17cccb84e48\": container with ID starting with 2d227799eafa178bcb879fa6e6c766234aab6508dbee488db4a4c17cccb84e48 not found: ID does not exist"
Oct 14 11:19:17 crc kubenswrapper[4698]: I1014 11:19:17.216547 4698 scope.go:117] "RemoveContainer" containerID="3749c4b445dcb81b5381429a225023933d3e42ce65c84cebeb2aef94061438f4"
Oct 14 11:19:17 crc kubenswrapper[4698]: E1014 11:19:17.218060 4698 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3749c4b445dcb81b5381429a225023933d3e42ce65c84cebeb2aef94061438f4\": container with ID starting with 3749c4b445dcb81b5381429a225023933d3e42ce65c84cebeb2aef94061438f4 not found: ID does not exist" containerID="3749c4b445dcb81b5381429a225023933d3e42ce65c84cebeb2aef94061438f4"
Oct 14 11:19:17 crc kubenswrapper[4698]: I1014 11:19:17.218109 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3749c4b445dcb81b5381429a225023933d3e42ce65c84cebeb2aef94061438f4"} err="failed to get container status \"3749c4b445dcb81b5381429a225023933d3e42ce65c84cebeb2aef94061438f4\": rpc error: code = NotFound desc = could not find container \"3749c4b445dcb81b5381429a225023933d3e42ce65c84cebeb2aef94061438f4\": container with ID starting with 3749c4b445dcb81b5381429a225023933d3e42ce65c84cebeb2aef94061438f4 not found: ID does not exist"
Oct 14 11:19:19 crc kubenswrapper[4698]: I1014 11:19:19.028697 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bdc46bca-9ee2-4b01-8713-11880ff4360a" path="/var/lib/kubelet/pods/bdc46bca-9ee2-4b01-8713-11880ff4360a/volumes"
Oct 14 11:19:44 crc kubenswrapper[4698]: I1014 11:19:44.118876 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-khv2b/must-gather-cmb56"]
Oct 14 11:19:44 crc kubenswrapper[4698]: E1014 11:19:44.119891 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bdc46bca-9ee2-4b01-8713-11880ff4360a" containerName="gather"
Oct 14 11:19:44 crc kubenswrapper[4698]: I1014 11:19:44.119906 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="bdc46bca-9ee2-4b01-8713-11880ff4360a" containerName="gather"
Oct 14 11:19:44 crc kubenswrapper[4698]: E1014 11:19:44.119920 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="09cd3202-2758-4387-8689-1359ed419683" containerName="extract-utilities"
Oct 14 11:19:44 crc kubenswrapper[4698]: I1014 11:19:44.119926 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="09cd3202-2758-4387-8689-1359ed419683" containerName="extract-utilities"
Oct 14 11:19:44 crc kubenswrapper[4698]: E1014 11:19:44.119939 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="09cd3202-2758-4387-8689-1359ed419683" containerName="extract-content"
Oct 14 11:19:44 crc kubenswrapper[4698]: I1014 11:19:44.119945 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="09cd3202-2758-4387-8689-1359ed419683" containerName="extract-content"
Oct 14 11:19:44 crc kubenswrapper[4698]: E1014 11:19:44.119977 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="09cd3202-2758-4387-8689-1359ed419683" containerName="registry-server"
Oct 14 11:19:44 crc kubenswrapper[4698]: I1014 11:19:44.119982 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="09cd3202-2758-4387-8689-1359ed419683" containerName="registry-server"
Oct 14 11:19:44 crc kubenswrapper[4698]: E1014 11:19:44.120005 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bdc46bca-9ee2-4b01-8713-11880ff4360a" containerName="copy"
Oct 14 11:19:44 crc kubenswrapper[4698]: I1014 11:19:44.120010 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="bdc46bca-9ee2-4b01-8713-11880ff4360a" containerName="copy"
Oct 14 11:19:44 crc kubenswrapper[4698]: I1014 11:19:44.120191 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="bdc46bca-9ee2-4b01-8713-11880ff4360a" containerName="copy"
Oct 14 11:19:44 crc kubenswrapper[4698]: I1014 11:19:44.120205 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="09cd3202-2758-4387-8689-1359ed419683" containerName="registry-server"
Oct 14 11:19:44 crc kubenswrapper[4698]: I1014 11:19:44.120218 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="bdc46bca-9ee2-4b01-8713-11880ff4360a" containerName="gather"
Oct 14 11:19:44 crc kubenswrapper[4698]: I1014 11:19:44.121275 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-khv2b/must-gather-cmb56"
Oct 14 11:19:44 crc kubenswrapper[4698]: I1014 11:19:44.124966 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-khv2b"/"kube-root-ca.crt"
Oct 14 11:19:44 crc kubenswrapper[4698]: I1014 11:19:44.127391 4698 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-khv2b"/"default-dockercfg-qkz8x"
Oct 14 11:19:44 crc kubenswrapper[4698]: I1014 11:19:44.127911 4698 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-khv2b"/"openshift-service-ca.crt"
Oct 14 11:19:44 crc kubenswrapper[4698]: I1014 11:19:44.135488 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-khv2b/must-gather-cmb56"]
Oct 14 11:19:44 crc kubenswrapper[4698]: I1014 11:19:44.181195 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/706c5c2d-72d9-4322-8c88-32220831a907-must-gather-output\") pod \"must-gather-cmb56\" (UID: \"706c5c2d-72d9-4322-8c88-32220831a907\") " pod="openshift-must-gather-khv2b/must-gather-cmb56"
Oct 14 11:19:44 crc kubenswrapper[4698]: I1014 11:19:44.181301 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7jhdh\" (UniqueName: \"kubernetes.io/projected/706c5c2d-72d9-4322-8c88-32220831a907-kube-api-access-7jhdh\") pod \"must-gather-cmb56\" (UID: \"706c5c2d-72d9-4322-8c88-32220831a907\") " pod="openshift-must-gather-khv2b/must-gather-cmb56"
Oct 14 11:19:44 crc kubenswrapper[4698]: I1014 11:19:44.281845 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/706c5c2d-72d9-4322-8c88-32220831a907-must-gather-output\") pod \"must-gather-cmb56\" (UID: \"706c5c2d-72d9-4322-8c88-32220831a907\") " pod="openshift-must-gather-khv2b/must-gather-cmb56"
Oct 14 11:19:44 crc kubenswrapper[4698]: I1014 11:19:44.281911 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7jhdh\" (UniqueName: \"kubernetes.io/projected/706c5c2d-72d9-4322-8c88-32220831a907-kube-api-access-7jhdh\") pod \"must-gather-cmb56\" (UID: \"706c5c2d-72d9-4322-8c88-32220831a907\") " pod="openshift-must-gather-khv2b/must-gather-cmb56"
Oct 14 11:19:44 crc kubenswrapper[4698]: I1014 11:19:44.282426 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/706c5c2d-72d9-4322-8c88-32220831a907-must-gather-output\") pod \"must-gather-cmb56\" (UID: \"706c5c2d-72d9-4322-8c88-32220831a907\") " pod="openshift-must-gather-khv2b/must-gather-cmb56"
Oct 14 11:19:44 crc kubenswrapper[4698]: I1014 11:19:44.305143 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7jhdh\" (UniqueName: \"kubernetes.io/projected/706c5c2d-72d9-4322-8c88-32220831a907-kube-api-access-7jhdh\") pod \"must-gather-cmb56\" (UID: \"706c5c2d-72d9-4322-8c88-32220831a907\") " pod="openshift-must-gather-khv2b/must-gather-cmb56"
Oct 14 11:19:44 crc kubenswrapper[4698]: I1014 11:19:44.440790 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-khv2b/must-gather-cmb56"
Oct 14 11:19:45 crc kubenswrapper[4698]: I1014 11:19:45.104642 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-khv2b/must-gather-cmb56"]
Oct 14 11:19:45 crc kubenswrapper[4698]: I1014 11:19:45.365698 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-khv2b/must-gather-cmb56" event={"ID":"706c5c2d-72d9-4322-8c88-32220831a907","Type":"ContainerStarted","Data":"0ba04d1035d084427e7e7171604326eaa69795c95ee8b199ca7c17385e5d7f45"}
Oct 14 11:19:46 crc kubenswrapper[4698]: I1014 11:19:46.378554 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-khv2b/must-gather-cmb56" event={"ID":"706c5c2d-72d9-4322-8c88-32220831a907","Type":"ContainerStarted","Data":"90199de37cfc4c2debca19a68ecfc9666289d48e1dd3d34b3bb5a629c9e5b041"}
Oct 14 11:19:46 crc kubenswrapper[4698]: I1014 11:19:46.378904 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-khv2b/must-gather-cmb56" event={"ID":"706c5c2d-72d9-4322-8c88-32220831a907","Type":"ContainerStarted","Data":"a568975c21e3cfaf4770ca57d2dd08587c04d57bedb618333ad8ed302df228e1"}
Oct 14 11:19:46 crc kubenswrapper[4698]: I1014 11:19:46.396258 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-khv2b/must-gather-cmb56" podStartSLOduration=2.396238593 podStartE2EDuration="2.396238593s" podCreationTimestamp="2025-10-14 11:19:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 11:19:46.393886955 +0000 UTC m=+4968.091186391" watchObservedRunningTime="2025-10-14 11:19:46.396238593 +0000 UTC m=+4968.093538009"
Oct 14 11:19:50 crc kubenswrapper[4698]: I1014 11:19:50.322349 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-khv2b/crc-debug-447bn"]
Oct 14 11:19:50 crc kubenswrapper[4698]: I1014 11:19:50.324984 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-khv2b/crc-debug-447bn"
Oct 14 11:19:50 crc kubenswrapper[4698]: I1014 11:19:50.428296 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/bebd72ab-df47-4a42-8efd-ea2b044f1733-host\") pod \"crc-debug-447bn\" (UID: \"bebd72ab-df47-4a42-8efd-ea2b044f1733\") " pod="openshift-must-gather-khv2b/crc-debug-447bn"
Oct 14 11:19:50 crc kubenswrapper[4698]: I1014 11:19:50.428398 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rgk9w\" (UniqueName: \"kubernetes.io/projected/bebd72ab-df47-4a42-8efd-ea2b044f1733-kube-api-access-rgk9w\") pod \"crc-debug-447bn\" (UID: \"bebd72ab-df47-4a42-8efd-ea2b044f1733\") " pod="openshift-must-gather-khv2b/crc-debug-447bn"
Oct 14 11:19:50 crc kubenswrapper[4698]: I1014 11:19:50.530037 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rgk9w\" (UniqueName: \"kubernetes.io/projected/bebd72ab-df47-4a42-8efd-ea2b044f1733-kube-api-access-rgk9w\") pod \"crc-debug-447bn\" (UID: \"bebd72ab-df47-4a42-8efd-ea2b044f1733\") " pod="openshift-must-gather-khv2b/crc-debug-447bn"
Oct 14 11:19:50 crc kubenswrapper[4698]: I1014 11:19:50.530561 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/bebd72ab-df47-4a42-8efd-ea2b044f1733-host\") pod \"crc-debug-447bn\" (UID: \"bebd72ab-df47-4a42-8efd-ea2b044f1733\") " pod="openshift-must-gather-khv2b/crc-debug-447bn"
Oct 14 11:19:50 crc kubenswrapper[4698]: I1014 11:19:50.530659 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/bebd72ab-df47-4a42-8efd-ea2b044f1733-host\") pod \"crc-debug-447bn\" (UID: \"bebd72ab-df47-4a42-8efd-ea2b044f1733\") " pod="openshift-must-gather-khv2b/crc-debug-447bn"
Oct 14 11:19:50 crc kubenswrapper[4698]: I1014 11:19:50.552478 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rgk9w\" (UniqueName: \"kubernetes.io/projected/bebd72ab-df47-4a42-8efd-ea2b044f1733-kube-api-access-rgk9w\") pod \"crc-debug-447bn\" (UID: \"bebd72ab-df47-4a42-8efd-ea2b044f1733\") " pod="openshift-must-gather-khv2b/crc-debug-447bn"
Oct 14 11:19:50 crc kubenswrapper[4698]: I1014 11:19:50.649503 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-khv2b/crc-debug-447bn"
Oct 14 11:19:51 crc kubenswrapper[4698]: I1014 11:19:51.431344 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-khv2b/crc-debug-447bn" event={"ID":"bebd72ab-df47-4a42-8efd-ea2b044f1733","Type":"ContainerStarted","Data":"9580b797df9f1e093d88ceb4d5d3c4d736a91da9c05ad30ee371a2f0c272bc99"}
Oct 14 11:19:51 crc kubenswrapper[4698]: I1014 11:19:51.432160 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-khv2b/crc-debug-447bn" event={"ID":"bebd72ab-df47-4a42-8efd-ea2b044f1733","Type":"ContainerStarted","Data":"5132d8a784c78a00836e8fd00e61c527c85383eb4f9d062d33073094930916cd"}
Oct 14 11:19:51 crc kubenswrapper[4698]: I1014 11:19:51.458201 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-khv2b/crc-debug-447bn" podStartSLOduration=1.458175946 podStartE2EDuration="1.458175946s" podCreationTimestamp="2025-10-14 11:19:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 11:19:51.451950415 +0000 UTC m=+4973.149249851" watchObservedRunningTime="2025-10-14 11:19:51.458175946 +0000 UTC m=+4973.155475362"
Oct 14 11:20:23 crc kubenswrapper[4698]: I1014 11:20:23.908470 4698 patch_prober.go:28] interesting pod/machine-config-daemon-lp4sk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Oct 14 11:20:23 crc kubenswrapper[4698]: I1014 11:20:23.909956 4698 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-lp4sk" podUID="c359a8fc-1e2f-49af-8da2-719d52bd969a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Oct 14 11:20:29 crc kubenswrapper[4698]: I1014 11:20:29.844631 4698 generic.go:334] "Generic (PLEG): container finished" podID="bebd72ab-df47-4a42-8efd-ea2b044f1733" containerID="9580b797df9f1e093d88ceb4d5d3c4d736a91da9c05ad30ee371a2f0c272bc99" exitCode=0
Oct 14 11:20:29 crc kubenswrapper[4698]: I1014 11:20:29.844715 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-khv2b/crc-debug-447bn" event={"ID":"bebd72ab-df47-4a42-8efd-ea2b044f1733","Type":"ContainerDied","Data":"9580b797df9f1e093d88ceb4d5d3c4d736a91da9c05ad30ee371a2f0c272bc99"}
Oct 14 11:20:30 crc kubenswrapper[4698]: I1014 11:20:30.966269 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-khv2b/crc-debug-447bn"
Oct 14 11:20:31 crc kubenswrapper[4698]: I1014 11:20:31.005590 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-khv2b/crc-debug-447bn"]
Oct 14 11:20:31 crc kubenswrapper[4698]: I1014 11:20:31.015558 4698 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-khv2b/crc-debug-447bn"]
Oct 14 11:20:31 crc kubenswrapper[4698]: I1014 11:20:31.147118 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rgk9w\" (UniqueName: \"kubernetes.io/projected/bebd72ab-df47-4a42-8efd-ea2b044f1733-kube-api-access-rgk9w\") pod \"bebd72ab-df47-4a42-8efd-ea2b044f1733\" (UID: \"bebd72ab-df47-4a42-8efd-ea2b044f1733\") "
Oct 14 11:20:31 crc kubenswrapper[4698]: I1014 11:20:31.147362 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/bebd72ab-df47-4a42-8efd-ea2b044f1733-host\") pod \"bebd72ab-df47-4a42-8efd-ea2b044f1733\" (UID: \"bebd72ab-df47-4a42-8efd-ea2b044f1733\") "
Oct 14 11:20:31 crc kubenswrapper[4698]: I1014 11:20:31.147538 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bebd72ab-df47-4a42-8efd-ea2b044f1733-host" (OuterVolumeSpecName: "host") pod "bebd72ab-df47-4a42-8efd-ea2b044f1733" (UID: "bebd72ab-df47-4a42-8efd-ea2b044f1733"). InnerVolumeSpecName "host".
PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 14 11:20:31 crc kubenswrapper[4698]: I1014 11:20:31.148713 4698 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/bebd72ab-df47-4a42-8efd-ea2b044f1733-host\") on node \"crc\" DevicePath \"\"" Oct 14 11:20:31 crc kubenswrapper[4698]: I1014 11:20:31.157184 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bebd72ab-df47-4a42-8efd-ea2b044f1733-kube-api-access-rgk9w" (OuterVolumeSpecName: "kube-api-access-rgk9w") pod "bebd72ab-df47-4a42-8efd-ea2b044f1733" (UID: "bebd72ab-df47-4a42-8efd-ea2b044f1733"). InnerVolumeSpecName "kube-api-access-rgk9w". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 11:20:31 crc kubenswrapper[4698]: I1014 11:20:31.250269 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rgk9w\" (UniqueName: \"kubernetes.io/projected/bebd72ab-df47-4a42-8efd-ea2b044f1733-kube-api-access-rgk9w\") on node \"crc\" DevicePath \"\"" Oct 14 11:20:31 crc kubenswrapper[4698]: I1014 11:20:31.872206 4698 scope.go:117] "RemoveContainer" containerID="9580b797df9f1e093d88ceb4d5d3c4d736a91da9c05ad30ee371a2f0c272bc99" Oct 14 11:20:31 crc kubenswrapper[4698]: I1014 11:20:31.872290 4698 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-khv2b/crc-debug-447bn" Oct 14 11:20:32 crc kubenswrapper[4698]: I1014 11:20:32.219918 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-khv2b/crc-debug-c9456"] Oct 14 11:20:32 crc kubenswrapper[4698]: E1014 11:20:32.220346 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bebd72ab-df47-4a42-8efd-ea2b044f1733" containerName="container-00" Oct 14 11:20:32 crc kubenswrapper[4698]: I1014 11:20:32.220359 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="bebd72ab-df47-4a42-8efd-ea2b044f1733" containerName="container-00" Oct 14 11:20:32 crc kubenswrapper[4698]: I1014 11:20:32.220541 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="bebd72ab-df47-4a42-8efd-ea2b044f1733" containerName="container-00" Oct 14 11:20:32 crc kubenswrapper[4698]: I1014 11:20:32.221216 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-khv2b/crc-debug-c9456" Oct 14 11:20:32 crc kubenswrapper[4698]: I1014 11:20:32.373247 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/329866ba-bb7d-4d25-bf39-a1373c794a4a-host\") pod \"crc-debug-c9456\" (UID: \"329866ba-bb7d-4d25-bf39-a1373c794a4a\") " pod="openshift-must-gather-khv2b/crc-debug-c9456" Oct 14 11:20:32 crc kubenswrapper[4698]: I1014 11:20:32.373330 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6wvlm\" (UniqueName: \"kubernetes.io/projected/329866ba-bb7d-4d25-bf39-a1373c794a4a-kube-api-access-6wvlm\") pod \"crc-debug-c9456\" (UID: \"329866ba-bb7d-4d25-bf39-a1373c794a4a\") " pod="openshift-must-gather-khv2b/crc-debug-c9456" Oct 14 11:20:32 crc kubenswrapper[4698]: I1014 11:20:32.474842 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: 
\"kubernetes.io/host-path/329866ba-bb7d-4d25-bf39-a1373c794a4a-host\") pod \"crc-debug-c9456\" (UID: \"329866ba-bb7d-4d25-bf39-a1373c794a4a\") " pod="openshift-must-gather-khv2b/crc-debug-c9456" Oct 14 11:20:32 crc kubenswrapper[4698]: I1014 11:20:32.474929 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6wvlm\" (UniqueName: \"kubernetes.io/projected/329866ba-bb7d-4d25-bf39-a1373c794a4a-kube-api-access-6wvlm\") pod \"crc-debug-c9456\" (UID: \"329866ba-bb7d-4d25-bf39-a1373c794a4a\") " pod="openshift-must-gather-khv2b/crc-debug-c9456" Oct 14 11:20:32 crc kubenswrapper[4698]: I1014 11:20:32.474994 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/329866ba-bb7d-4d25-bf39-a1373c794a4a-host\") pod \"crc-debug-c9456\" (UID: \"329866ba-bb7d-4d25-bf39-a1373c794a4a\") " pod="openshift-must-gather-khv2b/crc-debug-c9456" Oct 14 11:20:32 crc kubenswrapper[4698]: I1014 11:20:32.494323 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6wvlm\" (UniqueName: \"kubernetes.io/projected/329866ba-bb7d-4d25-bf39-a1373c794a4a-kube-api-access-6wvlm\") pod \"crc-debug-c9456\" (UID: \"329866ba-bb7d-4d25-bf39-a1373c794a4a\") " pod="openshift-must-gather-khv2b/crc-debug-c9456" Oct 14 11:20:32 crc kubenswrapper[4698]: I1014 11:20:32.539819 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-khv2b/crc-debug-c9456" Oct 14 11:20:32 crc kubenswrapper[4698]: I1014 11:20:32.896604 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-khv2b/crc-debug-c9456" event={"ID":"329866ba-bb7d-4d25-bf39-a1373c794a4a","Type":"ContainerStarted","Data":"06ce81e924847de03201213473dcb5d77fb0a68761bb6df0f8ecf53421c96f84"} Oct 14 11:20:33 crc kubenswrapper[4698]: I1014 11:20:33.031312 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bebd72ab-df47-4a42-8efd-ea2b044f1733" path="/var/lib/kubelet/pods/bebd72ab-df47-4a42-8efd-ea2b044f1733/volumes" Oct 14 11:20:33 crc kubenswrapper[4698]: I1014 11:20:33.906074 4698 generic.go:334] "Generic (PLEG): container finished" podID="329866ba-bb7d-4d25-bf39-a1373c794a4a" containerID="64d2f086ca01369b63a1d88394c431f07248f778847950cbda4778b82a1966df" exitCode=0 Oct 14 11:20:33 crc kubenswrapper[4698]: I1014 11:20:33.906176 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-khv2b/crc-debug-c9456" event={"ID":"329866ba-bb7d-4d25-bf39-a1373c794a4a","Type":"ContainerDied","Data":"64d2f086ca01369b63a1d88394c431f07248f778847950cbda4778b82a1966df"} Oct 14 11:20:35 crc kubenswrapper[4698]: I1014 11:20:35.029337 4698 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-khv2b/crc-debug-c9456" Oct 14 11:20:35 crc kubenswrapper[4698]: I1014 11:20:35.128864 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/329866ba-bb7d-4d25-bf39-a1373c794a4a-host\") pod \"329866ba-bb7d-4d25-bf39-a1373c794a4a\" (UID: \"329866ba-bb7d-4d25-bf39-a1373c794a4a\") " Oct 14 11:20:35 crc kubenswrapper[4698]: I1014 11:20:35.129093 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6wvlm\" (UniqueName: \"kubernetes.io/projected/329866ba-bb7d-4d25-bf39-a1373c794a4a-kube-api-access-6wvlm\") pod \"329866ba-bb7d-4d25-bf39-a1373c794a4a\" (UID: \"329866ba-bb7d-4d25-bf39-a1373c794a4a\") " Oct 14 11:20:35 crc kubenswrapper[4698]: I1014 11:20:35.129206 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/329866ba-bb7d-4d25-bf39-a1373c794a4a-host" (OuterVolumeSpecName: "host") pod "329866ba-bb7d-4d25-bf39-a1373c794a4a" (UID: "329866ba-bb7d-4d25-bf39-a1373c794a4a"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 14 11:20:35 crc kubenswrapper[4698]: I1014 11:20:35.129603 4698 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/329866ba-bb7d-4d25-bf39-a1373c794a4a-host\") on node \"crc\" DevicePath \"\"" Oct 14 11:20:35 crc kubenswrapper[4698]: I1014 11:20:35.160221 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/329866ba-bb7d-4d25-bf39-a1373c794a4a-kube-api-access-6wvlm" (OuterVolumeSpecName: "kube-api-access-6wvlm") pod "329866ba-bb7d-4d25-bf39-a1373c794a4a" (UID: "329866ba-bb7d-4d25-bf39-a1373c794a4a"). InnerVolumeSpecName "kube-api-access-6wvlm". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 11:20:35 crc kubenswrapper[4698]: I1014 11:20:35.233066 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6wvlm\" (UniqueName: \"kubernetes.io/projected/329866ba-bb7d-4d25-bf39-a1373c794a4a-kube-api-access-6wvlm\") on node \"crc\" DevicePath \"\"" Oct 14 11:20:35 crc kubenswrapper[4698]: I1014 11:20:35.938871 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-khv2b/crc-debug-c9456" event={"ID":"329866ba-bb7d-4d25-bf39-a1373c794a4a","Type":"ContainerDied","Data":"06ce81e924847de03201213473dcb5d77fb0a68761bb6df0f8ecf53421c96f84"} Oct 14 11:20:35 crc kubenswrapper[4698]: I1014 11:20:35.938924 4698 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="06ce81e924847de03201213473dcb5d77fb0a68761bb6df0f8ecf53421c96f84" Oct 14 11:20:35 crc kubenswrapper[4698]: I1014 11:20:35.939001 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-khv2b/crc-debug-c9456" Oct 14 11:20:36 crc kubenswrapper[4698]: I1014 11:20:36.813524 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-khv2b/crc-debug-c9456"] Oct 14 11:20:36 crc kubenswrapper[4698]: I1014 11:20:36.823793 4698 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-khv2b/crc-debug-c9456"] Oct 14 11:20:37 crc kubenswrapper[4698]: I1014 11:20:37.028143 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="329866ba-bb7d-4d25-bf39-a1373c794a4a" path="/var/lib/kubelet/pods/329866ba-bb7d-4d25-bf39-a1373c794a4a/volumes" Oct 14 11:20:37 crc kubenswrapper[4698]: I1014 11:20:37.995748 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-khv2b/crc-debug-d2xjv"] Oct 14 11:20:37 crc kubenswrapper[4698]: E1014 11:20:37.996521 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="329866ba-bb7d-4d25-bf39-a1373c794a4a" 
containerName="container-00" Oct 14 11:20:37 crc kubenswrapper[4698]: I1014 11:20:37.996536 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="329866ba-bb7d-4d25-bf39-a1373c794a4a" containerName="container-00" Oct 14 11:20:37 crc kubenswrapper[4698]: I1014 11:20:37.996720 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="329866ba-bb7d-4d25-bf39-a1373c794a4a" containerName="container-00" Oct 14 11:20:37 crc kubenswrapper[4698]: I1014 11:20:37.997426 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-khv2b/crc-debug-d2xjv" Oct 14 11:20:38 crc kubenswrapper[4698]: I1014 11:20:38.100174 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/c2b0cfbb-37b1-471b-89e5-379b769ca0a0-host\") pod \"crc-debug-d2xjv\" (UID: \"c2b0cfbb-37b1-471b-89e5-379b769ca0a0\") " pod="openshift-must-gather-khv2b/crc-debug-d2xjv" Oct 14 11:20:38 crc kubenswrapper[4698]: I1014 11:20:38.100351 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h2jt4\" (UniqueName: \"kubernetes.io/projected/c2b0cfbb-37b1-471b-89e5-379b769ca0a0-kube-api-access-h2jt4\") pod \"crc-debug-d2xjv\" (UID: \"c2b0cfbb-37b1-471b-89e5-379b769ca0a0\") " pod="openshift-must-gather-khv2b/crc-debug-d2xjv" Oct 14 11:20:38 crc kubenswrapper[4698]: I1014 11:20:38.202092 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h2jt4\" (UniqueName: \"kubernetes.io/projected/c2b0cfbb-37b1-471b-89e5-379b769ca0a0-kube-api-access-h2jt4\") pod \"crc-debug-d2xjv\" (UID: \"c2b0cfbb-37b1-471b-89e5-379b769ca0a0\") " pod="openshift-must-gather-khv2b/crc-debug-d2xjv" Oct 14 11:20:38 crc kubenswrapper[4698]: I1014 11:20:38.202293 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: 
\"kubernetes.io/host-path/c2b0cfbb-37b1-471b-89e5-379b769ca0a0-host\") pod \"crc-debug-d2xjv\" (UID: \"c2b0cfbb-37b1-471b-89e5-379b769ca0a0\") " pod="openshift-must-gather-khv2b/crc-debug-d2xjv" Oct 14 11:20:38 crc kubenswrapper[4698]: I1014 11:20:38.202482 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/c2b0cfbb-37b1-471b-89e5-379b769ca0a0-host\") pod \"crc-debug-d2xjv\" (UID: \"c2b0cfbb-37b1-471b-89e5-379b769ca0a0\") " pod="openshift-must-gather-khv2b/crc-debug-d2xjv" Oct 14 11:20:38 crc kubenswrapper[4698]: I1014 11:20:38.225585 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h2jt4\" (UniqueName: \"kubernetes.io/projected/c2b0cfbb-37b1-471b-89e5-379b769ca0a0-kube-api-access-h2jt4\") pod \"crc-debug-d2xjv\" (UID: \"c2b0cfbb-37b1-471b-89e5-379b769ca0a0\") " pod="openshift-must-gather-khv2b/crc-debug-d2xjv" Oct 14 11:20:38 crc kubenswrapper[4698]: I1014 11:20:38.319918 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-khv2b/crc-debug-d2xjv" Oct 14 11:20:38 crc kubenswrapper[4698]: I1014 11:20:38.966908 4698 generic.go:334] "Generic (PLEG): container finished" podID="c2b0cfbb-37b1-471b-89e5-379b769ca0a0" containerID="beb07ba4d251af4a2d6dd76412c8d6e10cc7c16d4881f8fb4115c1512e9ba7f3" exitCode=0 Oct 14 11:20:38 crc kubenswrapper[4698]: I1014 11:20:38.967014 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-khv2b/crc-debug-d2xjv" event={"ID":"c2b0cfbb-37b1-471b-89e5-379b769ca0a0","Type":"ContainerDied","Data":"beb07ba4d251af4a2d6dd76412c8d6e10cc7c16d4881f8fb4115c1512e9ba7f3"} Oct 14 11:20:38 crc kubenswrapper[4698]: I1014 11:20:38.967272 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-khv2b/crc-debug-d2xjv" event={"ID":"c2b0cfbb-37b1-471b-89e5-379b769ca0a0","Type":"ContainerStarted","Data":"a550c23cfec0a37dcb7f8deefe53a718dcbfcbb548dfb4929807ec73d272b11e"} Oct 14 11:20:39 crc kubenswrapper[4698]: I1014 11:20:39.013007 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-khv2b/crc-debug-d2xjv"] Oct 14 11:20:39 crc kubenswrapper[4698]: I1014 11:20:39.031210 4698 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-khv2b/crc-debug-d2xjv"] Oct 14 11:20:40 crc kubenswrapper[4698]: I1014 11:20:40.380665 4698 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-khv2b/crc-debug-d2xjv" Oct 14 11:20:40 crc kubenswrapper[4698]: I1014 11:20:40.547918 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h2jt4\" (UniqueName: \"kubernetes.io/projected/c2b0cfbb-37b1-471b-89e5-379b769ca0a0-kube-api-access-h2jt4\") pod \"c2b0cfbb-37b1-471b-89e5-379b769ca0a0\" (UID: \"c2b0cfbb-37b1-471b-89e5-379b769ca0a0\") " Oct 14 11:20:40 crc kubenswrapper[4698]: I1014 11:20:40.548024 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/c2b0cfbb-37b1-471b-89e5-379b769ca0a0-host\") pod \"c2b0cfbb-37b1-471b-89e5-379b769ca0a0\" (UID: \"c2b0cfbb-37b1-471b-89e5-379b769ca0a0\") " Oct 14 11:20:40 crc kubenswrapper[4698]: I1014 11:20:40.548201 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c2b0cfbb-37b1-471b-89e5-379b769ca0a0-host" (OuterVolumeSpecName: "host") pod "c2b0cfbb-37b1-471b-89e5-379b769ca0a0" (UID: "c2b0cfbb-37b1-471b-89e5-379b769ca0a0"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 14 11:20:40 crc kubenswrapper[4698]: I1014 11:20:40.548630 4698 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/c2b0cfbb-37b1-471b-89e5-379b769ca0a0-host\") on node \"crc\" DevicePath \"\"" Oct 14 11:20:40 crc kubenswrapper[4698]: I1014 11:20:40.561522 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c2b0cfbb-37b1-471b-89e5-379b769ca0a0-kube-api-access-h2jt4" (OuterVolumeSpecName: "kube-api-access-h2jt4") pod "c2b0cfbb-37b1-471b-89e5-379b769ca0a0" (UID: "c2b0cfbb-37b1-471b-89e5-379b769ca0a0"). InnerVolumeSpecName "kube-api-access-h2jt4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 11:20:40 crc kubenswrapper[4698]: I1014 11:20:40.650866 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h2jt4\" (UniqueName: \"kubernetes.io/projected/c2b0cfbb-37b1-471b-89e5-379b769ca0a0-kube-api-access-h2jt4\") on node \"crc\" DevicePath \"\"" Oct 14 11:20:40 crc kubenswrapper[4698]: I1014 11:20:40.994240 4698 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a550c23cfec0a37dcb7f8deefe53a718dcbfcbb548dfb4929807ec73d272b11e" Oct 14 11:20:40 crc kubenswrapper[4698]: I1014 11:20:40.994350 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-khv2b/crc-debug-d2xjv" Oct 14 11:20:41 crc kubenswrapper[4698]: I1014 11:20:41.028993 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c2b0cfbb-37b1-471b-89e5-379b769ca0a0" path="/var/lib/kubelet/pods/c2b0cfbb-37b1-471b-89e5-379b769ca0a0/volumes" Oct 14 11:20:53 crc kubenswrapper[4698]: I1014 11:20:53.907680 4698 patch_prober.go:28] interesting pod/machine-config-daemon-lp4sk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 14 11:20:53 crc kubenswrapper[4698]: I1014 11:20:53.910143 4698 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-lp4sk" podUID="c359a8fc-1e2f-49af-8da2-719d52bd969a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 14 11:20:59 crc kubenswrapper[4698]: I1014 11:20:59.411221 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-66df6b94fb-sw6kf_35f476fe-d3af-4e73-bb7e-ff6a4919ccf7/barbican-api/0.log" Oct 14 11:20:59 crc 
kubenswrapper[4698]: I1014 11:20:59.413355 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-66df6b94fb-sw6kf_35f476fe-d3af-4e73-bb7e-ff6a4919ccf7/barbican-api-log/0.log" Oct 14 11:20:59 crc kubenswrapper[4698]: I1014 11:20:59.616287 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-77cb48f668-xz2r9_3a9f4039-e577-4dc0-b1f9-ecdf1e7a9606/barbican-keystone-listener/0.log" Oct 14 11:20:59 crc kubenswrapper[4698]: I1014 11:20:59.864569 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-74bfd556cc-6z8fb_27f5b9bc-1a92-40a7-b615-7c8a726cd2e8/barbican-worker/0.log" Oct 14 11:20:59 crc kubenswrapper[4698]: I1014 11:20:59.883577 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-74bfd556cc-6z8fb_27f5b9bc-1a92-40a7-b615-7c8a726cd2e8/barbican-worker-log/0.log" Oct 14 11:21:00 crc kubenswrapper[4698]: I1014 11:21:00.150506 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_bootstrap-edpm-deployment-openstack-edpm-ipam-lq4cr_d4559dff-03d5-4c1b-a8df-f8fc0ae935de/bootstrap-edpm-deployment-openstack-edpm-ipam/0.log" Oct 14 11:21:00 crc kubenswrapper[4698]: I1014 11:21:00.440893 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-77cb48f668-xz2r9_3a9f4039-e577-4dc0-b1f9-ecdf1e7a9606/barbican-keystone-listener-log/0.log" Oct 14 11:21:00 crc kubenswrapper[4698]: I1014 11:21:00.453155 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_ea396a85-5a42-41f7-a75c-1aca7fc4dd37/ceilometer-central-agent/0.log" Oct 14 11:21:00 crc kubenswrapper[4698]: I1014 11:21:00.456286 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_ea396a85-5a42-41f7-a75c-1aca7fc4dd37/proxy-httpd/0.log" Oct 14 11:21:00 crc kubenswrapper[4698]: I1014 11:21:00.484555 4698 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_ceilometer-0_ea396a85-5a42-41f7-a75c-1aca7fc4dd37/ceilometer-notification-agent/0.log" Oct 14 11:21:00 crc kubenswrapper[4698]: I1014 11:21:00.644431 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_ea396a85-5a42-41f7-a75c-1aca7fc4dd37/sg-core/0.log" Oct 14 11:21:00 crc kubenswrapper[4698]: I1014 11:21:00.772904 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceph_fab31a39-0774-45d5-a5cd-cc337066aa80/ceph/0.log" Oct 14 11:21:01 crc kubenswrapper[4698]: I1014 11:21:01.236012 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_78a024b7-16f4-4177-8b52-0cecbc173247/cinder-api-log/0.log" Oct 14 11:21:01 crc kubenswrapper[4698]: I1014 11:21:01.261607 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_78a024b7-16f4-4177-8b52-0cecbc173247/cinder-api/0.log" Oct 14 11:21:01 crc kubenswrapper[4698]: I1014 11:21:01.290832 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-backup-0_a03e3bf1-857d-4f91-ad0e-254605774e3c/probe/0.log" Oct 14 11:21:01 crc kubenswrapper[4698]: I1014 11:21:01.597275 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_ef857e49-6a95-4e1c-a170-a9b7cf5b095f/probe/0.log" Oct 14 11:21:01 crc kubenswrapper[4698]: I1014 11:21:01.657943 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_ef857e49-6a95-4e1c-a170-a9b7cf5b095f/cinder-scheduler/0.log" Oct 14 11:21:02 crc kubenswrapper[4698]: I1014 11:21:02.053728 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-volume-volume1-0_07b37a90-cc29-48f1-9da0-d2b0a9fc6d85/probe/0.log" Oct 14 11:21:02 crc kubenswrapper[4698]: I1014 11:21:02.321194 4698 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_configure-network-edpm-deployment-openstack-edpm-ipam-sjtbb_43529126-1bd9-4a80-bf14-99b218ef939c/configure-network-edpm-deployment-openstack-edpm-ipam/0.log" Oct 14 11:21:02 crc kubenswrapper[4698]: I1014 11:21:02.542463 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-os-edpm-deployment-openstack-edpm-ipam-tbpt4_601bc78a-d499-4391-ada7-44e34c35c547/configure-os-edpm-deployment-openstack-edpm-ipam/0.log" Oct 14 11:21:02 crc kubenswrapper[4698]: I1014 11:21:02.797409 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-os-edpm-deployment-openstack-edpm-ipam-xhpjx_57bb4dc3-77b1-43e2-9360-c2f0d7354f4f/configure-os-edpm-deployment-openstack-edpm-ipam/0.log" Oct 14 11:21:02 crc kubenswrapper[4698]: I1014 11:21:02.942183 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-5bb847fbb7-jzkll_d9761eef-5d4d-4aa8-90a8-c94412431e3c/init/0.log" Oct 14 11:21:03 crc kubenswrapper[4698]: I1014 11:21:03.179926 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-5bb847fbb7-jzkll_d9761eef-5d4d-4aa8-90a8-c94412431e3c/init/0.log" Oct 14 11:21:03 crc kubenswrapper[4698]: I1014 11:21:03.454227 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-5bb847fbb7-jzkll_d9761eef-5d4d-4aa8-90a8-c94412431e3c/dnsmasq-dns/0.log" Oct 14 11:21:03 crc kubenswrapper[4698]: I1014 11:21:03.551213 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_download-cache-edpm-deployment-openstack-edpm-ipam-zln6c_0e135199-5913-440f-a291-4252ae734b96/download-cache-edpm-deployment-openstack-edpm-ipam/0.log" Oct 14 11:21:03 crc kubenswrapper[4698]: I1014 11:21:03.713238 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_6a46350f-38b2-4150-aef2-6c2a336a22f9/glance-httpd/0.log" Oct 14 11:21:03 crc kubenswrapper[4698]: I1014 11:21:03.756101 4698 
log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_6a46350f-38b2-4150-aef2-6c2a336a22f9/glance-log/0.log" Oct 14 11:21:03 crc kubenswrapper[4698]: I1014 11:21:03.998124 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-backup-0_a03e3bf1-857d-4f91-ad0e-254605774e3c/cinder-backup/0.log" Oct 14 11:21:04 crc kubenswrapper[4698]: I1014 11:21:04.003992 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_5d72ae1c-cd0b-42d9-b438-c80428436dd3/glance-log/0.log" Oct 14 11:21:04 crc kubenswrapper[4698]: I1014 11:21:04.017551 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_5d72ae1c-cd0b-42d9-b438-c80428436dd3/glance-httpd/0.log" Oct 14 11:21:04 crc kubenswrapper[4698]: I1014 11:21:04.445382 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-6cf95ddffb-6h2bm_746d0a6a-4df6-40b6-9600-63ec14336507/horizon/0.log" Oct 14 11:21:04 crc kubenswrapper[4698]: I1014 11:21:04.491928 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-certs-edpm-deployment-openstack-edpm-ipam-vc6lb_1c6a03cb-a418-4b77-b5d0-51fe6ef1ce53/install-certs-edpm-deployment-openstack-edpm-ipam/0.log" Oct 14 11:21:04 crc kubenswrapper[4698]: I1014 11:21:04.702027 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-os-edpm-deployment-openstack-edpm-ipam-6hw2q_06e79464-f4ba-47d3-a98d-d75709932309/install-os-edpm-deployment-openstack-edpm-ipam/0.log" Oct 14 11:21:05 crc kubenswrapper[4698]: I1014 11:21:05.027904 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-cron-29340661-dn2xz_9f7aded7-281a-4d4b-ab0d-7e52eda65441/keystone-cron/0.log" Oct 14 11:21:05 crc kubenswrapper[4698]: I1014 11:21:05.109190 4698 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_horizon-6cf95ddffb-6h2bm_746d0a6a-4df6-40b6-9600-63ec14336507/horizon-log/0.log" Oct 14 11:21:05 crc kubenswrapper[4698]: I1014 11:21:05.246235 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_kube-state-metrics-0_e4f715a0-2f1f-4831-a8ce-a629264ac73f/kube-state-metrics/0.log" Oct 14 11:21:05 crc kubenswrapper[4698]: I1014 11:21:05.299978 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-volume-volume1-0_07b37a90-cc29-48f1-9da0-d2b0a9fc6d85/cinder-volume/0.log" Oct 14 11:21:05 crc kubenswrapper[4698]: I1014 11:21:05.484283 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_libvirt-edpm-deployment-openstack-edpm-ipam-kj2jl_141d36f8-e9f9-4959-8f0c-09c649350547/libvirt-edpm-deployment-openstack-edpm-ipam/0.log" Oct 14 11:21:05 crc kubenswrapper[4698]: I1014 11:21:05.953433 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_manila-scheduler-0_b654944c-c016-4506-8ee0-2b23eeafcaca/probe/0.log" Oct 14 11:21:06 crc kubenswrapper[4698]: I1014 11:21:06.134638 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_manila-scheduler-0_b654944c-c016-4506-8ee0-2b23eeafcaca/manila-scheduler/0.log" Oct 14 11:21:06 crc kubenswrapper[4698]: I1014 11:21:06.252116 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_manila-api-0_99f5e356-0b01-4991-b2b2-3e0456eba2e7/manila-api/0.log" Oct 14 11:21:06 crc kubenswrapper[4698]: I1014 11:21:06.484921 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_manila-share-share1-0_ba3b2198-b260-4fe9-ad12-dd23c9e9d4ce/probe/0.log" Oct 14 11:21:06 crc kubenswrapper[4698]: I1014 11:21:06.724529 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_manila-api-0_99f5e356-0b01-4991-b2b2-3e0456eba2e7/manila-api-log/0.log" Oct 14 11:21:06 crc kubenswrapper[4698]: I1014 11:21:06.761198 4698 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_manila-share-share1-0_ba3b2198-b260-4fe9-ad12-dd23c9e9d4ce/manila-share/0.log" Oct 14 11:21:07 crc kubenswrapper[4698]: I1014 11:21:07.288625 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-metadata-edpm-deployment-openstack-edpm-ipam-fvlb6_9f3eaa62-6c1e-406d-acec-135973addacf/neutron-metadata-edpm-deployment-openstack-edpm-ipam/0.log" Oct 14 11:21:07 crc kubenswrapper[4698]: I1014 11:21:07.671909 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-5cf664b6c9-t6wfc_0082817f-4bcf-434b-8fb7-1e8ae2acf058/neutron-httpd/0.log" Oct 14 11:21:08 crc kubenswrapper[4698]: I1014 11:21:08.346679 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-5cf664b6c9-t6wfc_0082817f-4bcf-434b-8fb7-1e8ae2acf058/neutron-api/0.log" Oct 14 11:21:08 crc kubenswrapper[4698]: I1014 11:21:08.909057 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-75d9cb9c4-g8g58_fddbac4f-ca34-45b0-913b-21e399aab117/keystone-api/0.log" Oct 14 11:21:09 crc kubenswrapper[4698]: I1014 11:21:09.622970 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell0-conductor-0_636dfb32-7180-4af9-9de0-57745de8c7e7/nova-cell0-conductor-conductor/0.log" Oct 14 11:21:09 crc kubenswrapper[4698]: I1014 11:21:09.860487 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-conductor-0_e4bcef82-1d46-45b0-b831-7c575c80b1f4/nova-cell1-conductor-conductor/0.log" Oct 14 11:21:10 crc kubenswrapper[4698]: I1014 11:21:10.329587 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-novncproxy-0_f114cc4a-8234-441d-926f-83ac36f9ff5b/nova-cell1-novncproxy-novncproxy/0.log" Oct 14 11:21:10 crc kubenswrapper[4698]: I1014 11:21:10.428069 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_066120b9-3158-4234-873d-178f6b65885c/nova-api-log/0.log" Oct 14 11:21:10 crc 
kubenswrapper[4698]: I1014 11:21:10.533757 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-edpm-deployment-openstack-edpm-ipam-pt8b6_b25db8a8-2e32-4634-b5e6-b21d7497c0ca/nova-edpm-deployment-openstack-edpm-ipam/0.log" Oct 14 11:21:10 crc kubenswrapper[4698]: I1014 11:21:10.731122 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_28c60a34-a183-4fd8-a84e-2963e6676914/nova-metadata-log/0.log" Oct 14 11:21:11 crc kubenswrapper[4698]: I1014 11:21:11.349059 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_066120b9-3158-4234-873d-178f6b65885c/nova-api-api/0.log" Oct 14 11:21:11 crc kubenswrapper[4698]: I1014 11:21:11.374777 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_9f3078d2-396d-4f2a-913f-b5c5555e568d/mysql-bootstrap/0.log" Oct 14 11:21:11 crc kubenswrapper[4698]: I1014 11:21:11.521293 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-scheduler-0_0ec9e017-c819-42ce-8a1f-73b89dfa0459/nova-scheduler-scheduler/0.log" Oct 14 11:21:11 crc kubenswrapper[4698]: I1014 11:21:11.583416 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_9f3078d2-396d-4f2a-913f-b5c5555e568d/mysql-bootstrap/0.log" Oct 14 11:21:11 crc kubenswrapper[4698]: I1014 11:21:11.610023 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_9f3078d2-396d-4f2a-913f-b5c5555e568d/galera/0.log" Oct 14 11:21:11 crc kubenswrapper[4698]: I1014 11:21:11.861710 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_90244b70-b4fa-4b40-a962-119168333566/mysql-bootstrap/0.log" Oct 14 11:21:12 crc kubenswrapper[4698]: I1014 11:21:12.000079 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_90244b70-b4fa-4b40-a962-119168333566/mysql-bootstrap/0.log" Oct 14 11:21:12 
crc kubenswrapper[4698]: I1014 11:21:12.098388 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_90244b70-b4fa-4b40-a962-119168333566/galera/0.log" Oct 14 11:21:12 crc kubenswrapper[4698]: I1014 11:21:12.228903 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstackclient_9b9ad197-b532-42c9-8ac2-c822cca96a52/openstackclient/0.log" Oct 14 11:21:12 crc kubenswrapper[4698]: I1014 11:21:12.407936 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-24vqt_b64163c4-e040-4bec-a585-c55f9d05e948/ovn-controller/0.log" Oct 14 11:21:12 crc kubenswrapper[4698]: I1014 11:21:12.619217 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-m8cgb_47ef312e-a1ef-4635-a052-31f0b3a7e742/openstack-network-exporter/0.log" Oct 14 11:21:12 crc kubenswrapper[4698]: I1014 11:21:12.751019 4698 scope.go:117] "RemoveContainer" containerID="0f39036a1471e443e6fcae9ce8994c792c0e5ddaf728620de57c70b2776f3d2f" Oct 14 11:21:12 crc kubenswrapper[4698]: I1014 11:21:12.773735 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-2cb6b_62b0f0f2-3cb3-4ba7-8387-a3bd2a38debe/ovsdb-server-init/0.log" Oct 14 11:21:12 crc kubenswrapper[4698]: I1014 11:21:12.946300 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_28c60a34-a183-4fd8-a84e-2963e6676914/nova-metadata-metadata/0.log" Oct 14 11:21:13 crc kubenswrapper[4698]: I1014 11:21:13.123070 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-2cb6b_62b0f0f2-3cb3-4ba7-8387-a3bd2a38debe/ovs-vswitchd/0.log" Oct 14 11:21:13 crc kubenswrapper[4698]: I1014 11:21:13.182069 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-2cb6b_62b0f0f2-3cb3-4ba7-8387-a3bd2a38debe/ovsdb-server-init/0.log" Oct 14 11:21:13 crc kubenswrapper[4698]: I1014 11:21:13.220725 4698 
log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-2cb6b_62b0f0f2-3cb3-4ba7-8387-a3bd2a38debe/ovsdb-server/0.log" Oct 14 11:21:13 crc kubenswrapper[4698]: I1014 11:21:13.410763 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-edpm-deployment-openstack-edpm-ipam-qwfqd_464b6c8a-27cc-4899-a7ed-5e2d022e91da/ovn-edpm-deployment-openstack-edpm-ipam/0.log" Oct 14 11:21:13 crc kubenswrapper[4698]: I1014 11:21:13.455147 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_b35b471e-f011-42c9-998a-d23ec21ad1a9/openstack-network-exporter/0.log" Oct 14 11:21:13 crc kubenswrapper[4698]: I1014 11:21:13.628113 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_b35b471e-f011-42c9-998a-d23ec21ad1a9/ovn-northd/0.log" Oct 14 11:21:13 crc kubenswrapper[4698]: I1014 11:21:13.638370 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_884f9a07-9f80-44ff-a1e5-805d6d5ef6fb/openstack-network-exporter/0.log" Oct 14 11:21:13 crc kubenswrapper[4698]: I1014 11:21:13.716379 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_884f9a07-9f80-44ff-a1e5-805d6d5ef6fb/ovsdbserver-nb/0.log" Oct 14 11:21:13 crc kubenswrapper[4698]: I1014 11:21:13.855090 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_468f15c4-08a4-4e2e-a65d-7a679b1d3a3f/openstack-network-exporter/0.log" Oct 14 11:21:14 crc kubenswrapper[4698]: I1014 11:21:14.048315 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_468f15c4-08a4-4e2e-a65d-7a679b1d3a3f/ovsdbserver-sb/0.log" Oct 14 11:21:14 crc kubenswrapper[4698]: I1014 11:21:14.344982 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_cebebf3c-b368-424c-a1bc-a3b9fc82ac3e/setup-container/0.log" Oct 14 11:21:14 crc kubenswrapper[4698]: I1014 
11:21:14.585794 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_cebebf3c-b368-424c-a1bc-a3b9fc82ac3e/setup-container/0.log" Oct 14 11:21:14 crc kubenswrapper[4698]: I1014 11:21:14.632630 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_cebebf3c-b368-424c-a1bc-a3b9fc82ac3e/rabbitmq/0.log" Oct 14 11:21:14 crc kubenswrapper[4698]: I1014 11:21:14.804535 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-5dd765df5b-xsd5h_25021023-544e-4b23-947b-66102dcf790e/placement-api/0.log" Oct 14 11:21:14 crc kubenswrapper[4698]: I1014 11:21:14.856932 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_a14f78a2-c755-4288-bf05-45f4a540d301/setup-container/0.log" Oct 14 11:21:14 crc kubenswrapper[4698]: I1014 11:21:14.910304 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-5dd765df5b-xsd5h_25021023-544e-4b23-947b-66102dcf790e/placement-log/0.log" Oct 14 11:21:15 crc kubenswrapper[4698]: I1014 11:21:15.766661 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_reboot-os-edpm-deployment-openstack-edpm-ipam-spcw8_310c1648-fc92-4008-8e7c-ff410b890a2b/reboot-os-edpm-deployment-openstack-edpm-ipam/0.log" Oct 14 11:21:15 crc kubenswrapper[4698]: I1014 11:21:15.770676 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_a14f78a2-c755-4288-bf05-45f4a540d301/rabbitmq/0.log" Oct 14 11:21:15 crc kubenswrapper[4698]: I1014 11:21:15.850605 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_a14f78a2-c755-4288-bf05-45f4a540d301/setup-container/0.log" Oct 14 11:21:16 crc kubenswrapper[4698]: I1014 11:21:16.011533 4698 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_redhat-edpm-deployment-openstack-edpm-ipam-jdls2_06b2a1a6-bc42-4191-9ab7-62c064090d6b/redhat-edpm-deployment-openstack-edpm-ipam/0.log" Oct 14 11:21:16 crc kubenswrapper[4698]: I1014 11:21:16.187910 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_repo-setup-edpm-deployment-openstack-edpm-ipam-8xbmw_3e24ecfd-2fed-4c41-be7f-89fe09f13724/repo-setup-edpm-deployment-openstack-edpm-ipam/0.log" Oct 14 11:21:16 crc kubenswrapper[4698]: I1014 11:21:16.343989 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_run-os-edpm-deployment-openstack-edpm-ipam-xc794_374455db-3111-424a-82eb-0960266ac879/run-os-edpm-deployment-openstack-edpm-ipam/0.log" Oct 14 11:21:16 crc kubenswrapper[4698]: I1014 11:21:16.492603 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ssh-known-hosts-edpm-deployment-49gfb_86317787-19aa-4ea7-a4ff-3e604d9c0497/ssh-known-hosts-edpm-deployment/0.log" Oct 14 11:21:16 crc kubenswrapper[4698]: I1014 11:21:16.924188 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-58759987c5-vr6vx_3a1278dc-c5df-49ed-8c8e-6284281cf240/proxy-server/0.log" Oct 14 11:21:17 crc kubenswrapper[4698]: I1014 11:21:17.024160 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-58759987c5-vr6vx_3a1278dc-c5df-49ed-8c8e-6284281cf240/proxy-httpd/0.log" Oct 14 11:21:17 crc kubenswrapper[4698]: I1014 11:21:17.371143 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_0ca6729c-82ca-4f89-b732-7154ec9224bb/account-reaper/0.log" Oct 14 11:21:17 crc kubenswrapper[4698]: I1014 11:21:17.383498 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_0ca6729c-82ca-4f89-b732-7154ec9224bb/account-auditor/0.log" Oct 14 11:21:17 crc kubenswrapper[4698]: I1014 11:21:17.461690 4698 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_swift-ring-rebalance-fmnnt_332a15eb-0ada-4f42-a34e-a7d2e9c46af2/swift-ring-rebalance/0.log" Oct 14 11:21:17 crc kubenswrapper[4698]: I1014 11:21:17.624705 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_0ca6729c-82ca-4f89-b732-7154ec9224bb/account-replicator/0.log" Oct 14 11:21:17 crc kubenswrapper[4698]: I1014 11:21:17.680115 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_0ca6729c-82ca-4f89-b732-7154ec9224bb/account-server/0.log" Oct 14 11:21:17 crc kubenswrapper[4698]: I1014 11:21:17.706733 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_0ca6729c-82ca-4f89-b732-7154ec9224bb/container-auditor/0.log" Oct 14 11:21:17 crc kubenswrapper[4698]: I1014 11:21:17.735725 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_0ca6729c-82ca-4f89-b732-7154ec9224bb/container-replicator/0.log" Oct 14 11:21:17 crc kubenswrapper[4698]: I1014 11:21:17.836241 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_0ca6729c-82ca-4f89-b732-7154ec9224bb/container-server/0.log" Oct 14 11:21:17 crc kubenswrapper[4698]: I1014 11:21:17.864341 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_0ca6729c-82ca-4f89-b732-7154ec9224bb/container-updater/0.log" Oct 14 11:21:17 crc kubenswrapper[4698]: I1014 11:21:17.992021 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_0ca6729c-82ca-4f89-b732-7154ec9224bb/object-auditor/0.log" Oct 14 11:21:18 crc kubenswrapper[4698]: I1014 11:21:18.000822 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_0ca6729c-82ca-4f89-b732-7154ec9224bb/object-expirer/0.log" Oct 14 11:21:18 crc kubenswrapper[4698]: I1014 11:21:18.085352 4698 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_swift-storage-0_0ca6729c-82ca-4f89-b732-7154ec9224bb/object-replicator/0.log" Oct 14 11:21:18 crc kubenswrapper[4698]: I1014 11:21:18.135069 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_0ca6729c-82ca-4f89-b732-7154ec9224bb/object-server/0.log" Oct 14 11:21:18 crc kubenswrapper[4698]: I1014 11:21:18.206157 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_0ca6729c-82ca-4f89-b732-7154ec9224bb/object-updater/0.log" Oct 14 11:21:18 crc kubenswrapper[4698]: I1014 11:21:18.313595 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_0ca6729c-82ca-4f89-b732-7154ec9224bb/rsync/0.log" Oct 14 11:21:18 crc kubenswrapper[4698]: I1014 11:21:18.370165 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_0ca6729c-82ca-4f89-b732-7154ec9224bb/swift-recon-cron/0.log" Oct 14 11:21:18 crc kubenswrapper[4698]: I1014 11:21:18.509671 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_telemetry-edpm-deployment-openstack-edpm-ipam-pstgf_45519f65-bf50-47f3-a645-8d64d05ab523/telemetry-edpm-deployment-openstack-edpm-ipam/0.log" Oct 14 11:21:18 crc kubenswrapper[4698]: I1014 11:21:18.729062 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_tempest-tests-tempest_e5a71af4-fdf3-4a49-9ada-2d4836409022/tempest-tests-tempest-tests-runner/0.log" Oct 14 11:21:18 crc kubenswrapper[4698]: I1014 11:21:18.773575 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_test-operator-logs-pod-tempest-tempest-tests-tempest_89057adf-a70c-48dc-a8fc-65077d5c29d1/test-operator-logs-container/0.log" Oct 14 11:21:18 crc kubenswrapper[4698]: I1014 11:21:18.925845 4698 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_validate-network-edpm-deployment-openstack-edpm-ipam-7ngns_fc38db3e-e819-4f43-a14a-c83162ceb5fa/validate-network-edpm-deployment-openstack-edpm-ipam/0.log" Oct 14 11:21:23 crc kubenswrapper[4698]: I1014 11:21:23.907632 4698 patch_prober.go:28] interesting pod/machine-config-daemon-lp4sk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 14 11:21:23 crc kubenswrapper[4698]: I1014 11:21:23.907987 4698 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-lp4sk" podUID="c359a8fc-1e2f-49af-8da2-719d52bd969a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 14 11:21:23 crc kubenswrapper[4698]: I1014 11:21:23.908038 4698 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-lp4sk" Oct 14 11:21:23 crc kubenswrapper[4698]: I1014 11:21:23.908808 4698 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"f4b03268cb180ac97d7c15f1e56610d8fe5c9eb93165b852cd481e9e707e50a5"} pod="openshift-machine-config-operator/machine-config-daemon-lp4sk" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Oct 14 11:21:23 crc kubenswrapper[4698]: I1014 11:21:23.908857 4698 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-lp4sk" podUID="c359a8fc-1e2f-49af-8da2-719d52bd969a" containerName="machine-config-daemon" containerID="cri-o://f4b03268cb180ac97d7c15f1e56610d8fe5c9eb93165b852cd481e9e707e50a5" gracePeriod=600 Oct 14 11:21:24 crc 
kubenswrapper[4698]: I1014 11:21:24.424845 4698 generic.go:334] "Generic (PLEG): container finished" podID="c359a8fc-1e2f-49af-8da2-719d52bd969a" containerID="f4b03268cb180ac97d7c15f1e56610d8fe5c9eb93165b852cd481e9e707e50a5" exitCode=0 Oct 14 11:21:24 crc kubenswrapper[4698]: I1014 11:21:24.424908 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-lp4sk" event={"ID":"c359a8fc-1e2f-49af-8da2-719d52bd969a","Type":"ContainerDied","Data":"f4b03268cb180ac97d7c15f1e56610d8fe5c9eb93165b852cd481e9e707e50a5"} Oct 14 11:21:24 crc kubenswrapper[4698]: I1014 11:21:24.425213 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-lp4sk" event={"ID":"c359a8fc-1e2f-49af-8da2-719d52bd969a","Type":"ContainerStarted","Data":"829b80e48d7614418a73fd61b6803927847ee048873cf9cd40e2478185b55345"} Oct 14 11:21:24 crc kubenswrapper[4698]: I1014 11:21:24.425237 4698 scope.go:117] "RemoveContainer" containerID="6bf915c0eb83523a7704ee4a405fd8ffeef326ab1d420e2bdc7a7ecf61f24f91" Oct 14 11:21:26 crc kubenswrapper[4698]: I1014 11:21:26.436838 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_memcached-0_e3bc7f78-d69f-426c-9aeb-4837d25635ab/memcached/0.log" Oct 14 11:21:46 crc kubenswrapper[4698]: I1014 11:21:46.675527 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_29dc144f0e8a966729881e968c4b77154efe1ade540adc10ce92055bf2z82rv_fc2cb835-38cd-43b4-bf06-15d3ccc7ed5a/util/0.log" Oct 14 11:21:46 crc kubenswrapper[4698]: I1014 11:21:46.824111 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_29dc144f0e8a966729881e968c4b77154efe1ade540adc10ce92055bf2z82rv_fc2cb835-38cd-43b4-bf06-15d3ccc7ed5a/util/0.log" Oct 14 11:21:46 crc kubenswrapper[4698]: I1014 11:21:46.865293 4698 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_29dc144f0e8a966729881e968c4b77154efe1ade540adc10ce92055bf2z82rv_fc2cb835-38cd-43b4-bf06-15d3ccc7ed5a/pull/0.log" Oct 14 11:21:46 crc kubenswrapper[4698]: I1014 11:21:46.869696 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_29dc144f0e8a966729881e968c4b77154efe1ade540adc10ce92055bf2z82rv_fc2cb835-38cd-43b4-bf06-15d3ccc7ed5a/pull/0.log" Oct 14 11:21:47 crc kubenswrapper[4698]: I1014 11:21:47.056013 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_29dc144f0e8a966729881e968c4b77154efe1ade540adc10ce92055bf2z82rv_fc2cb835-38cd-43b4-bf06-15d3ccc7ed5a/util/0.log" Oct 14 11:21:47 crc kubenswrapper[4698]: I1014 11:21:47.098702 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_29dc144f0e8a966729881e968c4b77154efe1ade540adc10ce92055bf2z82rv_fc2cb835-38cd-43b4-bf06-15d3ccc7ed5a/pull/0.log" Oct 14 11:21:47 crc kubenswrapper[4698]: I1014 11:21:47.108507 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_29dc144f0e8a966729881e968c4b77154efe1ade540adc10ce92055bf2z82rv_fc2cb835-38cd-43b4-bf06-15d3ccc7ed5a/extract/0.log" Oct 14 11:21:47 crc kubenswrapper[4698]: I1014 11:21:47.275149 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-64f84fcdbb-5wlw6_24d6e9c5-aad5-4856-a7b7-20e04553c864/kube-rbac-proxy/0.log" Oct 14 11:21:47 crc kubenswrapper[4698]: I1014 11:21:47.341027 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-59cdc64769-nh6c5_f91fec87-379e-4c52-9d03-b56841232184/kube-rbac-proxy/0.log" Oct 14 11:21:47 crc kubenswrapper[4698]: I1014 11:21:47.351949 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-64f84fcdbb-5wlw6_24d6e9c5-aad5-4856-a7b7-20e04553c864/manager/0.log" Oct 14 11:21:47 crc 
kubenswrapper[4698]: I1014 11:21:47.526780 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-59cdc64769-nh6c5_f91fec87-379e-4c52-9d03-b56841232184/manager/0.log" Oct 14 11:21:47 crc kubenswrapper[4698]: I1014 11:21:47.567965 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-687df44cdb-nh4nc_6d1a4e09-e83d-4634-ae32-b37666d65f61/kube-rbac-proxy/0.log" Oct 14 11:21:47 crc kubenswrapper[4698]: I1014 11:21:47.590600 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-687df44cdb-nh4nc_6d1a4e09-e83d-4634-ae32-b37666d65f61/manager/0.log" Oct 14 11:21:47 crc kubenswrapper[4698]: I1014 11:21:47.811876 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-7bb46cd7d-f6jr7_b310d6c3-527e-4a58-bc98-edcd7731b9e3/kube-rbac-proxy/0.log" Oct 14 11:21:47 crc kubenswrapper[4698]: I1014 11:21:47.899686 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-7bb46cd7d-f6jr7_b310d6c3-527e-4a58-bc98-edcd7731b9e3/manager/0.log" Oct 14 11:21:48 crc kubenswrapper[4698]: I1014 11:21:48.466977 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-6d9967f8dd-52h2t_004d5489-901d-4fd3-9fc3-ae0016255950/kube-rbac-proxy/0.log" Oct 14 11:21:48 crc kubenswrapper[4698]: I1014 11:21:48.535600 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-6d74794d9b-nq8vb_3482400d-0e9f-4dc5-883f-36313dc33944/kube-rbac-proxy/0.log" Oct 14 11:21:48 crc kubenswrapper[4698]: I1014 11:21:48.544439 4698 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_heat-operator-controller-manager-6d9967f8dd-52h2t_004d5489-901d-4fd3-9fc3-ae0016255950/manager/0.log" Oct 14 11:21:48 crc kubenswrapper[4698]: I1014 11:21:48.742976 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-585fc5b659-2jvxv_4ea0ebfe-fbe9-428c-baf6-565e4dbb9044/kube-rbac-proxy/0.log" Oct 14 11:21:48 crc kubenswrapper[4698]: I1014 11:21:48.744065 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-6d74794d9b-nq8vb_3482400d-0e9f-4dc5-883f-36313dc33944/manager/0.log" Oct 14 11:21:48 crc kubenswrapper[4698]: I1014 11:21:48.949549 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-585fc5b659-2jvxv_4ea0ebfe-fbe9-428c-baf6-565e4dbb9044/manager/0.log" Oct 14 11:21:48 crc kubenswrapper[4698]: I1014 11:21:48.979505 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-74cb5cbc49-d9qkb_2547a997-b2ba-4300-92ed-09ccc57499c7/kube-rbac-proxy/0.log" Oct 14 11:21:48 crc kubenswrapper[4698]: I1014 11:21:48.984197 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-74cb5cbc49-d9qkb_2547a997-b2ba-4300-92ed-09ccc57499c7/manager/0.log" Oct 14 11:21:49 crc kubenswrapper[4698]: I1014 11:21:49.174456 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-ddb98f99b-ks5tw_d6a101ad-e350-4964-a786-91072a6776e8/kube-rbac-proxy/0.log" Oct 14 11:21:49 crc kubenswrapper[4698]: I1014 11:21:49.309154 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-ddb98f99b-ks5tw_d6a101ad-e350-4964-a786-91072a6776e8/manager/0.log" Oct 14 11:21:49 crc kubenswrapper[4698]: I1014 11:21:49.434084 4698 
log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-59578bc799-nb7fk_f55ae8f2-2a7c-4158-b125-2121c37fc874/manager/0.log" Oct 14 11:21:49 crc kubenswrapper[4698]: I1014 11:21:49.461564 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-59578bc799-nb7fk_f55ae8f2-2a7c-4158-b125-2121c37fc874/kube-rbac-proxy/0.log" Oct 14 11:21:49 crc kubenswrapper[4698]: I1014 11:21:49.597228 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-5777b4f897-nds58_cca3d0fd-d9aa-428f-95f2-14238b7cf627/kube-rbac-proxy/0.log" Oct 14 11:21:49 crc kubenswrapper[4698]: I1014 11:21:49.684728 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-5777b4f897-nds58_cca3d0fd-d9aa-428f-95f2-14238b7cf627/manager/0.log" Oct 14 11:21:49 crc kubenswrapper[4698]: I1014 11:21:49.753835 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-797d478b46-kmzfd_0660342f-b230-41a7-a2f8-44cd75696095/kube-rbac-proxy/0.log" Oct 14 11:21:49 crc kubenswrapper[4698]: I1014 11:21:49.893544 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-797d478b46-kmzfd_0660342f-b230-41a7-a2f8-44cd75696095/manager/0.log" Oct 14 11:21:49 crc kubenswrapper[4698]: I1014 11:21:49.950073 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-57bb74c7bf-c49pm_84ef40a7-48ce-4d65-9e34-5ac4e4f0b0b7/kube-rbac-proxy/0.log" Oct 14 11:21:50 crc kubenswrapper[4698]: I1014 11:21:50.110981 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-57bb74c7bf-c49pm_84ef40a7-48ce-4d65-9e34-5ac4e4f0b0b7/manager/0.log" Oct 14 11:21:50 crc 
kubenswrapper[4698]: I1014 11:21:50.822892 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-6d7c7ddf95-zfmdv_442ecb91-0479-42a8-94ba-5be7d8cea79f/kube-rbac-proxy/0.log" Oct 14 11:21:50 crc kubenswrapper[4698]: I1014 11:21:50.838401 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-6cc7fb757dblfbc_2486fbf6-b25f-4bc3-932d-5ade782da654/kube-rbac-proxy/0.log" Oct 14 11:21:50 crc kubenswrapper[4698]: I1014 11:21:50.851333 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-6d7c7ddf95-zfmdv_442ecb91-0479-42a8-94ba-5be7d8cea79f/manager/0.log" Oct 14 11:21:51 crc kubenswrapper[4698]: I1014 11:21:51.046530 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-6cc7fb757dblfbc_2486fbf6-b25f-4bc3-932d-5ade782da654/manager/0.log" Oct 14 11:21:51 crc kubenswrapper[4698]: I1014 11:21:51.113564 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-768cc76f8b-7jr79_ecf62cd7-15b2-4bcc-aadd-1c982c7149e7/kube-rbac-proxy/0.log" Oct 14 11:21:51 crc kubenswrapper[4698]: I1014 11:21:51.296399 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-operator-7fc68b75ff-gn564_8e9ffe4b-a420-4553-b9d1-90fbc4ed2fb1/kube-rbac-proxy/0.log" Oct 14 11:21:51 crc kubenswrapper[4698]: I1014 11:21:51.459243 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-2wch9_9e25898b-e095-4f25-be09-70befbd919b5/registry-server/0.log" Oct 14 11:21:51 crc kubenswrapper[4698]: I1014 11:21:51.472012 4698 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_openstack-operator-controller-operator-7fc68b75ff-gn564_8e9ffe4b-a420-4553-b9d1-90fbc4ed2fb1/operator/0.log" Oct 14 11:21:51 crc kubenswrapper[4698]: I1014 11:21:51.617712 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-869cc7797f-xpv9w_3e3e37b3-e0ed-479a-9124-aa6c814a1030/kube-rbac-proxy/0.log" Oct 14 11:21:51 crc kubenswrapper[4698]: I1014 11:21:51.765474 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-869cc7797f-xpv9w_3e3e37b3-e0ed-479a-9124-aa6c814a1030/manager/0.log" Oct 14 11:21:51 crc kubenswrapper[4698]: I1014 11:21:51.800135 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-664664cb68-wdrpw_4bdceb7a-7a1f-4c0b-a70d-787a610f1d3a/kube-rbac-proxy/0.log" Oct 14 11:21:51 crc kubenswrapper[4698]: I1014 11:21:51.843324 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-664664cb68-wdrpw_4bdceb7a-7a1f-4c0b-a70d-787a610f1d3a/manager/0.log" Oct 14 11:21:52 crc kubenswrapper[4698]: I1014 11:21:52.086588 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-5f97d8c699-zctmt_052a38cb-bdfa-46de-ab53-e81b2f014b1d/operator/0.log" Oct 14 11:21:52 crc kubenswrapper[4698]: I1014 11:21:52.121010 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-5f4d5dfdc6-7p5xj_28b97988-e327-4c7a-aab5-5985bf4a675d/kube-rbac-proxy/0.log" Oct 14 11:21:52 crc kubenswrapper[4698]: I1014 11:21:52.260163 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-5f4d5dfdc6-7p5xj_28b97988-e327-4c7a-aab5-5985bf4a675d/manager/0.log" Oct 14 11:21:52 crc kubenswrapper[4698]: I1014 11:21:52.281986 4698 
log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-768cc76f8b-7jr79_ecf62cd7-15b2-4bcc-aadd-1c982c7149e7/manager/0.log" Oct 14 11:21:52 crc kubenswrapper[4698]: I1014 11:21:52.345977 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-578874c84d-xbvhq_e93508a8-6ee5-4950-8cea-7c3599b7e1ec/kube-rbac-proxy/0.log" Oct 14 11:21:52 crc kubenswrapper[4698]: I1014 11:21:52.395830 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-578874c84d-xbvhq_e93508a8-6ee5-4950-8cea-7c3599b7e1ec/manager/0.log" Oct 14 11:21:52 crc kubenswrapper[4698]: I1014 11:21:52.515287 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-ffcdd6c94-g5fmn_a969812c-8490-4e43-ab00-73c8254c5b21/kube-rbac-proxy/0.log" Oct 14 11:21:52 crc kubenswrapper[4698]: I1014 11:21:52.516773 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-ffcdd6c94-g5fmn_a969812c-8490-4e43-ab00-73c8254c5b21/manager/0.log" Oct 14 11:21:52 crc kubenswrapper[4698]: I1014 11:21:52.576928 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-646675d848-n4xkf_5b887a4b-1049-4b80-8613-89ef2f446df4/kube-rbac-proxy/0.log" Oct 14 11:21:52 crc kubenswrapper[4698]: I1014 11:21:52.665031 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-646675d848-n4xkf_5b887a4b-1049-4b80-8613-89ef2f446df4/manager/0.log" Oct 14 11:22:01 crc kubenswrapper[4698]: I1014 11:22:01.110136 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-bnjrm"] Oct 14 11:22:01 crc kubenswrapper[4698]: E1014 11:22:01.111050 4698 cpu_manager.go:410] "RemoveStaleState: 
removing container" podUID="c2b0cfbb-37b1-471b-89e5-379b769ca0a0" containerName="container-00" Oct 14 11:22:01 crc kubenswrapper[4698]: I1014 11:22:01.111064 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="c2b0cfbb-37b1-471b-89e5-379b769ca0a0" containerName="container-00" Oct 14 11:22:01 crc kubenswrapper[4698]: I1014 11:22:01.111292 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="c2b0cfbb-37b1-471b-89e5-379b769ca0a0" containerName="container-00" Oct 14 11:22:01 crc kubenswrapper[4698]: I1014 11:22:01.112639 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-bnjrm" Oct 14 11:22:01 crc kubenswrapper[4698]: I1014 11:22:01.131743 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-bnjrm"] Oct 14 11:22:01 crc kubenswrapper[4698]: I1014 11:22:01.280729 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jz77j\" (UniqueName: \"kubernetes.io/projected/88c0a579-b5b9-4f0e-a58a-ba981e69b3c6-kube-api-access-jz77j\") pod \"community-operators-bnjrm\" (UID: \"88c0a579-b5b9-4f0e-a58a-ba981e69b3c6\") " pod="openshift-marketplace/community-operators-bnjrm" Oct 14 11:22:01 crc kubenswrapper[4698]: I1014 11:22:01.280837 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/88c0a579-b5b9-4f0e-a58a-ba981e69b3c6-catalog-content\") pod \"community-operators-bnjrm\" (UID: \"88c0a579-b5b9-4f0e-a58a-ba981e69b3c6\") " pod="openshift-marketplace/community-operators-bnjrm" Oct 14 11:22:01 crc kubenswrapper[4698]: I1014 11:22:01.280874 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/88c0a579-b5b9-4f0e-a58a-ba981e69b3c6-utilities\") pod \"community-operators-bnjrm\" (UID: 
\"88c0a579-b5b9-4f0e-a58a-ba981e69b3c6\") " pod="openshift-marketplace/community-operators-bnjrm" Oct 14 11:22:01 crc kubenswrapper[4698]: I1014 11:22:01.389189 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jz77j\" (UniqueName: \"kubernetes.io/projected/88c0a579-b5b9-4f0e-a58a-ba981e69b3c6-kube-api-access-jz77j\") pod \"community-operators-bnjrm\" (UID: \"88c0a579-b5b9-4f0e-a58a-ba981e69b3c6\") " pod="openshift-marketplace/community-operators-bnjrm" Oct 14 11:22:01 crc kubenswrapper[4698]: I1014 11:22:01.389277 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/88c0a579-b5b9-4f0e-a58a-ba981e69b3c6-catalog-content\") pod \"community-operators-bnjrm\" (UID: \"88c0a579-b5b9-4f0e-a58a-ba981e69b3c6\") " pod="openshift-marketplace/community-operators-bnjrm" Oct 14 11:22:01 crc kubenswrapper[4698]: I1014 11:22:01.389314 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/88c0a579-b5b9-4f0e-a58a-ba981e69b3c6-utilities\") pod \"community-operators-bnjrm\" (UID: \"88c0a579-b5b9-4f0e-a58a-ba981e69b3c6\") " pod="openshift-marketplace/community-operators-bnjrm" Oct 14 11:22:01 crc kubenswrapper[4698]: I1014 11:22:01.389878 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/88c0a579-b5b9-4f0e-a58a-ba981e69b3c6-catalog-content\") pod \"community-operators-bnjrm\" (UID: \"88c0a579-b5b9-4f0e-a58a-ba981e69b3c6\") " pod="openshift-marketplace/community-operators-bnjrm" Oct 14 11:22:01 crc kubenswrapper[4698]: I1014 11:22:01.389923 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/88c0a579-b5b9-4f0e-a58a-ba981e69b3c6-utilities\") pod \"community-operators-bnjrm\" (UID: \"88c0a579-b5b9-4f0e-a58a-ba981e69b3c6\") 
" pod="openshift-marketplace/community-operators-bnjrm" Oct 14 11:22:01 crc kubenswrapper[4698]: I1014 11:22:01.423614 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jz77j\" (UniqueName: \"kubernetes.io/projected/88c0a579-b5b9-4f0e-a58a-ba981e69b3c6-kube-api-access-jz77j\") pod \"community-operators-bnjrm\" (UID: \"88c0a579-b5b9-4f0e-a58a-ba981e69b3c6\") " pod="openshift-marketplace/community-operators-bnjrm" Oct 14 11:22:01 crc kubenswrapper[4698]: I1014 11:22:01.442387 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-bnjrm" Oct 14 11:22:02 crc kubenswrapper[4698]: I1014 11:22:02.030259 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-bnjrm"] Oct 14 11:22:02 crc kubenswrapper[4698]: I1014 11:22:02.790337 4698 generic.go:334] "Generic (PLEG): container finished" podID="88c0a579-b5b9-4f0e-a58a-ba981e69b3c6" containerID="442a18e53b3cafa0020824361ab039fcd49ab1ba33e44d7116889023b3ca4732" exitCode=0 Oct 14 11:22:02 crc kubenswrapper[4698]: I1014 11:22:02.790441 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bnjrm" event={"ID":"88c0a579-b5b9-4f0e-a58a-ba981e69b3c6","Type":"ContainerDied","Data":"442a18e53b3cafa0020824361ab039fcd49ab1ba33e44d7116889023b3ca4732"} Oct 14 11:22:02 crc kubenswrapper[4698]: I1014 11:22:02.790836 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bnjrm" event={"ID":"88c0a579-b5b9-4f0e-a58a-ba981e69b3c6","Type":"ContainerStarted","Data":"3e4c1508495a5881f47223169ec7d8130c5cdbbe3ec84560a2afb2369236d0de"} Oct 14 11:22:02 crc kubenswrapper[4698]: I1014 11:22:02.792917 4698 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Oct 14 11:22:04 crc kubenswrapper[4698]: I1014 11:22:04.812879 4698 generic.go:334] "Generic (PLEG): container 
finished" podID="88c0a579-b5b9-4f0e-a58a-ba981e69b3c6" containerID="5f385e3e32373047aa8d98520afcc7449ecbdb6b6d54f716f6ca88ab4ac7b241" exitCode=0 Oct 14 11:22:04 crc kubenswrapper[4698]: I1014 11:22:04.812979 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bnjrm" event={"ID":"88c0a579-b5b9-4f0e-a58a-ba981e69b3c6","Type":"ContainerDied","Data":"5f385e3e32373047aa8d98520afcc7449ecbdb6b6d54f716f6ca88ab4ac7b241"} Oct 14 11:22:05 crc kubenswrapper[4698]: I1014 11:22:05.828463 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bnjrm" event={"ID":"88c0a579-b5b9-4f0e-a58a-ba981e69b3c6","Type":"ContainerStarted","Data":"80b3a46741bb6628f85225abfd25ff4a6304ad34466d71b7a14be9ad2cf1a76e"} Oct 14 11:22:05 crc kubenswrapper[4698]: I1014 11:22:05.860160 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-bnjrm" podStartSLOduration=2.359561072 podStartE2EDuration="4.8601302s" podCreationTimestamp="2025-10-14 11:22:01 +0000 UTC" firstStartedPulling="2025-10-14 11:22:02.792702036 +0000 UTC m=+5104.490001452" lastFinishedPulling="2025-10-14 11:22:05.293271134 +0000 UTC m=+5106.990570580" observedRunningTime="2025-10-14 11:22:05.851496015 +0000 UTC m=+5107.548795441" watchObservedRunningTime="2025-10-14 11:22:05.8601302 +0000 UTC m=+5107.557429616" Oct 14 11:22:09 crc kubenswrapper[4698]: I1014 11:22:09.636695 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-5k2p9_c1ec2959-9fc5-4b98-8f9c-c21fc57e14d7/control-plane-machine-set-operator/0.log" Oct 14 11:22:09 crc kubenswrapper[4698]: I1014 11:22:09.790980 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-wmxzf_77041b5d-f53d-425c-b824-a61833af677c/kube-rbac-proxy/0.log" Oct 14 11:22:09 crc kubenswrapper[4698]: I1014 
11:22:09.801393 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-wmxzf_77041b5d-f53d-425c-b824-a61833af677c/machine-api-operator/0.log" Oct 14 11:22:11 crc kubenswrapper[4698]: I1014 11:22:11.443379 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-bnjrm" Oct 14 11:22:11 crc kubenswrapper[4698]: I1014 11:22:11.443731 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-bnjrm" Oct 14 11:22:11 crc kubenswrapper[4698]: I1014 11:22:11.517348 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-bnjrm" Oct 14 11:22:11 crc kubenswrapper[4698]: I1014 11:22:11.973415 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-bnjrm" Oct 14 11:22:12 crc kubenswrapper[4698]: I1014 11:22:12.031940 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-bnjrm"] Oct 14 11:22:13 crc kubenswrapper[4698]: I1014 11:22:13.892177 4698 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-bnjrm" podUID="88c0a579-b5b9-4f0e-a58a-ba981e69b3c6" containerName="registry-server" containerID="cri-o://80b3a46741bb6628f85225abfd25ff4a6304ad34466d71b7a14be9ad2cf1a76e" gracePeriod=2 Oct 14 11:22:14 crc kubenswrapper[4698]: I1014 11:22:14.409902 4698 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-bnjrm" Oct 14 11:22:14 crc kubenswrapper[4698]: I1014 11:22:14.583915 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/88c0a579-b5b9-4f0e-a58a-ba981e69b3c6-catalog-content\") pod \"88c0a579-b5b9-4f0e-a58a-ba981e69b3c6\" (UID: \"88c0a579-b5b9-4f0e-a58a-ba981e69b3c6\") " Oct 14 11:22:14 crc kubenswrapper[4698]: I1014 11:22:14.583973 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jz77j\" (UniqueName: \"kubernetes.io/projected/88c0a579-b5b9-4f0e-a58a-ba981e69b3c6-kube-api-access-jz77j\") pod \"88c0a579-b5b9-4f0e-a58a-ba981e69b3c6\" (UID: \"88c0a579-b5b9-4f0e-a58a-ba981e69b3c6\") " Oct 14 11:22:14 crc kubenswrapper[4698]: I1014 11:22:14.584070 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/88c0a579-b5b9-4f0e-a58a-ba981e69b3c6-utilities\") pod \"88c0a579-b5b9-4f0e-a58a-ba981e69b3c6\" (UID: \"88c0a579-b5b9-4f0e-a58a-ba981e69b3c6\") " Oct 14 11:22:14 crc kubenswrapper[4698]: I1014 11:22:14.585130 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/88c0a579-b5b9-4f0e-a58a-ba981e69b3c6-utilities" (OuterVolumeSpecName: "utilities") pod "88c0a579-b5b9-4f0e-a58a-ba981e69b3c6" (UID: "88c0a579-b5b9-4f0e-a58a-ba981e69b3c6"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 11:22:14 crc kubenswrapper[4698]: I1014 11:22:14.611775 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/88c0a579-b5b9-4f0e-a58a-ba981e69b3c6-kube-api-access-jz77j" (OuterVolumeSpecName: "kube-api-access-jz77j") pod "88c0a579-b5b9-4f0e-a58a-ba981e69b3c6" (UID: "88c0a579-b5b9-4f0e-a58a-ba981e69b3c6"). InnerVolumeSpecName "kube-api-access-jz77j". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 11:22:14 crc kubenswrapper[4698]: I1014 11:22:14.690714 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jz77j\" (UniqueName: \"kubernetes.io/projected/88c0a579-b5b9-4f0e-a58a-ba981e69b3c6-kube-api-access-jz77j\") on node \"crc\" DevicePath \"\"" Oct 14 11:22:14 crc kubenswrapper[4698]: I1014 11:22:14.690787 4698 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/88c0a579-b5b9-4f0e-a58a-ba981e69b3c6-utilities\") on node \"crc\" DevicePath \"\"" Oct 14 11:22:14 crc kubenswrapper[4698]: I1014 11:22:14.891701 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/88c0a579-b5b9-4f0e-a58a-ba981e69b3c6-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "88c0a579-b5b9-4f0e-a58a-ba981e69b3c6" (UID: "88c0a579-b5b9-4f0e-a58a-ba981e69b3c6"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 11:22:14 crc kubenswrapper[4698]: I1014 11:22:14.897143 4698 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/88c0a579-b5b9-4f0e-a58a-ba981e69b3c6-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 14 11:22:14 crc kubenswrapper[4698]: I1014 11:22:14.903095 4698 generic.go:334] "Generic (PLEG): container finished" podID="88c0a579-b5b9-4f0e-a58a-ba981e69b3c6" containerID="80b3a46741bb6628f85225abfd25ff4a6304ad34466d71b7a14be9ad2cf1a76e" exitCode=0 Oct 14 11:22:14 crc kubenswrapper[4698]: I1014 11:22:14.903137 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bnjrm" event={"ID":"88c0a579-b5b9-4f0e-a58a-ba981e69b3c6","Type":"ContainerDied","Data":"80b3a46741bb6628f85225abfd25ff4a6304ad34466d71b7a14be9ad2cf1a76e"} Oct 14 11:22:14 crc kubenswrapper[4698]: I1014 11:22:14.903169 4698 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-marketplace/community-operators-bnjrm" event={"ID":"88c0a579-b5b9-4f0e-a58a-ba981e69b3c6","Type":"ContainerDied","Data":"3e4c1508495a5881f47223169ec7d8130c5cdbbe3ec84560a2afb2369236d0de"} Oct 14 11:22:14 crc kubenswrapper[4698]: I1014 11:22:14.903189 4698 scope.go:117] "RemoveContainer" containerID="80b3a46741bb6628f85225abfd25ff4a6304ad34466d71b7a14be9ad2cf1a76e" Oct 14 11:22:14 crc kubenswrapper[4698]: I1014 11:22:14.903335 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-bnjrm" Oct 14 11:22:14 crc kubenswrapper[4698]: I1014 11:22:14.935354 4698 scope.go:117] "RemoveContainer" containerID="5f385e3e32373047aa8d98520afcc7449ecbdb6b6d54f716f6ca88ab4ac7b241" Oct 14 11:22:14 crc kubenswrapper[4698]: I1014 11:22:14.938430 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-bnjrm"] Oct 14 11:22:14 crc kubenswrapper[4698]: I1014 11:22:14.946799 4698 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-bnjrm"] Oct 14 11:22:14 crc kubenswrapper[4698]: I1014 11:22:14.966160 4698 scope.go:117] "RemoveContainer" containerID="442a18e53b3cafa0020824361ab039fcd49ab1ba33e44d7116889023b3ca4732" Oct 14 11:22:15 crc kubenswrapper[4698]: I1014 11:22:15.002965 4698 scope.go:117] "RemoveContainer" containerID="80b3a46741bb6628f85225abfd25ff4a6304ad34466d71b7a14be9ad2cf1a76e" Oct 14 11:22:15 crc kubenswrapper[4698]: E1014 11:22:15.003407 4698 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"80b3a46741bb6628f85225abfd25ff4a6304ad34466d71b7a14be9ad2cf1a76e\": container with ID starting with 80b3a46741bb6628f85225abfd25ff4a6304ad34466d71b7a14be9ad2cf1a76e not found: ID does not exist" containerID="80b3a46741bb6628f85225abfd25ff4a6304ad34466d71b7a14be9ad2cf1a76e" Oct 14 11:22:15 crc kubenswrapper[4698]: I1014 
11:22:15.003466 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"80b3a46741bb6628f85225abfd25ff4a6304ad34466d71b7a14be9ad2cf1a76e"} err="failed to get container status \"80b3a46741bb6628f85225abfd25ff4a6304ad34466d71b7a14be9ad2cf1a76e\": rpc error: code = NotFound desc = could not find container \"80b3a46741bb6628f85225abfd25ff4a6304ad34466d71b7a14be9ad2cf1a76e\": container with ID starting with 80b3a46741bb6628f85225abfd25ff4a6304ad34466d71b7a14be9ad2cf1a76e not found: ID does not exist" Oct 14 11:22:15 crc kubenswrapper[4698]: I1014 11:22:15.003496 4698 scope.go:117] "RemoveContainer" containerID="5f385e3e32373047aa8d98520afcc7449ecbdb6b6d54f716f6ca88ab4ac7b241" Oct 14 11:22:15 crc kubenswrapper[4698]: E1014 11:22:15.003845 4698 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5f385e3e32373047aa8d98520afcc7449ecbdb6b6d54f716f6ca88ab4ac7b241\": container with ID starting with 5f385e3e32373047aa8d98520afcc7449ecbdb6b6d54f716f6ca88ab4ac7b241 not found: ID does not exist" containerID="5f385e3e32373047aa8d98520afcc7449ecbdb6b6d54f716f6ca88ab4ac7b241" Oct 14 11:22:15 crc kubenswrapper[4698]: I1014 11:22:15.003881 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5f385e3e32373047aa8d98520afcc7449ecbdb6b6d54f716f6ca88ab4ac7b241"} err="failed to get container status \"5f385e3e32373047aa8d98520afcc7449ecbdb6b6d54f716f6ca88ab4ac7b241\": rpc error: code = NotFound desc = could not find container \"5f385e3e32373047aa8d98520afcc7449ecbdb6b6d54f716f6ca88ab4ac7b241\": container with ID starting with 5f385e3e32373047aa8d98520afcc7449ecbdb6b6d54f716f6ca88ab4ac7b241 not found: ID does not exist" Oct 14 11:22:15 crc kubenswrapper[4698]: I1014 11:22:15.003910 4698 scope.go:117] "RemoveContainer" containerID="442a18e53b3cafa0020824361ab039fcd49ab1ba33e44d7116889023b3ca4732" Oct 14 11:22:15 crc 
kubenswrapper[4698]: E1014 11:22:15.004102 4698 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"442a18e53b3cafa0020824361ab039fcd49ab1ba33e44d7116889023b3ca4732\": container with ID starting with 442a18e53b3cafa0020824361ab039fcd49ab1ba33e44d7116889023b3ca4732 not found: ID does not exist" containerID="442a18e53b3cafa0020824361ab039fcd49ab1ba33e44d7116889023b3ca4732" Oct 14 11:22:15 crc kubenswrapper[4698]: I1014 11:22:15.004124 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"442a18e53b3cafa0020824361ab039fcd49ab1ba33e44d7116889023b3ca4732"} err="failed to get container status \"442a18e53b3cafa0020824361ab039fcd49ab1ba33e44d7116889023b3ca4732\": rpc error: code = NotFound desc = could not find container \"442a18e53b3cafa0020824361ab039fcd49ab1ba33e44d7116889023b3ca4732\": container with ID starting with 442a18e53b3cafa0020824361ab039fcd49ab1ba33e44d7116889023b3ca4732 not found: ID does not exist" Oct 14 11:22:15 crc kubenswrapper[4698]: I1014 11:22:15.028799 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="88c0a579-b5b9-4f0e-a58a-ba981e69b3c6" path="/var/lib/kubelet/pods/88c0a579-b5b9-4f0e-a58a-ba981e69b3c6/volumes" Oct 14 11:22:21 crc kubenswrapper[4698]: I1014 11:22:21.788705 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-5b446d88c5-6hh8m_4e2060ed-feb8-4937-a34d-58686e380b4b/cert-manager-controller/0.log" Oct 14 11:22:21 crc kubenswrapper[4698]: I1014 11:22:21.939440 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-7f985d654d-f44qv_c425683e-dad1-4ebb-8992-8a979383addb/cert-manager-cainjector/0.log" Oct 14 11:22:22 crc kubenswrapper[4698]: I1014 11:22:22.067776 4698 log.go:25] "Finished parsing log file" 
path="/var/log/pods/cert-manager_cert-manager-webhook-5655c58dd6-92ws8_1e6c98fe-c2ad-4723-ab60-1af5e7e3e58c/cert-manager-webhook/0.log" Oct 14 11:22:34 crc kubenswrapper[4698]: I1014 11:22:34.615191 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-6b874cbd85-5zqgv_afaa96d5-b448-47b4-ac36-b8d4d232441b/nmstate-console-plugin/0.log" Oct 14 11:22:34 crc kubenswrapper[4698]: I1014 11:22:34.835485 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-tb9f6_359df233-fb7a-4d84-888a-d6fa99ed8b55/nmstate-handler/0.log" Oct 14 11:22:34 crc kubenswrapper[4698]: I1014 11:22:34.863251 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-fdff9cb8d-nq56c_a1d21132-5dfd-4813-9c39-d4be39666a38/kube-rbac-proxy/0.log" Oct 14 11:22:34 crc kubenswrapper[4698]: I1014 11:22:34.891702 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-fdff9cb8d-nq56c_a1d21132-5dfd-4813-9c39-d4be39666a38/nmstate-metrics/0.log" Oct 14 11:22:35 crc kubenswrapper[4698]: I1014 11:22:35.054343 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-858ddd8f98-669hf_fd11f615-dce1-42f4-8470-d1117fe3305b/nmstate-operator/0.log" Oct 14 11:22:35 crc kubenswrapper[4698]: I1014 11:22:35.130390 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-6cdbc54649-8vhjp_3da8a241-b3ad-480d-aff7-f571b43fb673/nmstate-webhook/0.log" Oct 14 11:22:51 crc kubenswrapper[4698]: I1014 11:22:51.265517 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-68d546b9d8-4rhbc_e165bb03-3546-4ae5-8c3c-5605cae81371/kube-rbac-proxy/0.log" Oct 14 11:22:51 crc kubenswrapper[4698]: I1014 11:22:51.354203 4698 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_controller-68d546b9d8-4rhbc_e165bb03-3546-4ae5-8c3c-5605cae81371/controller/0.log" Oct 14 11:22:51 crc kubenswrapper[4698]: I1014 11:22:51.729587 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-64bf5d555-wqqdm_db7dd36b-e7d3-4eed-b55f-cc3316be8e85/frr-k8s-webhook-server/0.log" Oct 14 11:22:51 crc kubenswrapper[4698]: I1014 11:22:51.838432 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-wpbs6_371eed8f-9f1c-4114-98c6-33c8abf3fa23/cp-frr-files/0.log" Oct 14 11:22:51 crc kubenswrapper[4698]: I1014 11:22:51.997542 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-wpbs6_371eed8f-9f1c-4114-98c6-33c8abf3fa23/cp-reloader/0.log" Oct 14 11:22:52 crc kubenswrapper[4698]: I1014 11:22:52.000319 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-wpbs6_371eed8f-9f1c-4114-98c6-33c8abf3fa23/cp-frr-files/0.log" Oct 14 11:22:52 crc kubenswrapper[4698]: I1014 11:22:52.051057 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-wpbs6_371eed8f-9f1c-4114-98c6-33c8abf3fa23/cp-metrics/0.log" Oct 14 11:22:52 crc kubenswrapper[4698]: I1014 11:22:52.067141 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-wpbs6_371eed8f-9f1c-4114-98c6-33c8abf3fa23/cp-reloader/0.log" Oct 14 11:22:52 crc kubenswrapper[4698]: I1014 11:22:52.312342 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-wpbs6_371eed8f-9f1c-4114-98c6-33c8abf3fa23/cp-frr-files/0.log" Oct 14 11:22:52 crc kubenswrapper[4698]: I1014 11:22:52.318092 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-wpbs6_371eed8f-9f1c-4114-98c6-33c8abf3fa23/cp-metrics/0.log" Oct 14 11:22:52 crc kubenswrapper[4698]: I1014 11:22:52.352538 4698 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_frr-k8s-wpbs6_371eed8f-9f1c-4114-98c6-33c8abf3fa23/cp-metrics/0.log" Oct 14 11:22:52 crc kubenswrapper[4698]: I1014 11:22:52.396409 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-wpbs6_371eed8f-9f1c-4114-98c6-33c8abf3fa23/cp-reloader/0.log" Oct 14 11:22:52 crc kubenswrapper[4698]: I1014 11:22:52.522101 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-wpbs6_371eed8f-9f1c-4114-98c6-33c8abf3fa23/cp-metrics/0.log" Oct 14 11:22:52 crc kubenswrapper[4698]: I1014 11:22:52.532176 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-wpbs6_371eed8f-9f1c-4114-98c6-33c8abf3fa23/cp-frr-files/0.log" Oct 14 11:22:52 crc kubenswrapper[4698]: I1014 11:22:52.584706 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-wpbs6_371eed8f-9f1c-4114-98c6-33c8abf3fa23/controller/0.log" Oct 14 11:22:52 crc kubenswrapper[4698]: I1014 11:22:52.604556 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-wpbs6_371eed8f-9f1c-4114-98c6-33c8abf3fa23/cp-reloader/0.log" Oct 14 11:22:52 crc kubenswrapper[4698]: I1014 11:22:52.706504 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-wpbs6_371eed8f-9f1c-4114-98c6-33c8abf3fa23/frr-metrics/0.log" Oct 14 11:22:52 crc kubenswrapper[4698]: I1014 11:22:52.805589 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-wpbs6_371eed8f-9f1c-4114-98c6-33c8abf3fa23/kube-rbac-proxy-frr/0.log" Oct 14 11:22:52 crc kubenswrapper[4698]: I1014 11:22:52.872577 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-wpbs6_371eed8f-9f1c-4114-98c6-33c8abf3fa23/kube-rbac-proxy/0.log" Oct 14 11:22:53 crc kubenswrapper[4698]: I1014 11:22:53.057951 4698 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_frr-k8s-wpbs6_371eed8f-9f1c-4114-98c6-33c8abf3fa23/reloader/0.log" Oct 14 11:22:53 crc kubenswrapper[4698]: I1014 11:22:53.138087 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-7556747f48-jxr6w_7f8afa35-0e83-439b-80cb-31f3da9293de/manager/0.log" Oct 14 11:22:53 crc kubenswrapper[4698]: I1014 11:22:53.272985 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-cd79cbbb8-dcbnr_2cc70ba0-d097-4987-b877-fc209e27f275/webhook-server/0.log" Oct 14 11:22:53 crc kubenswrapper[4698]: I1014 11:22:53.605859 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-847mc_7ab10af0-2cb8-4ff4-bb4c-a186a319ce37/kube-rbac-proxy/0.log" Oct 14 11:22:54 crc kubenswrapper[4698]: I1014 11:22:54.040260 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-847mc_7ab10af0-2cb8-4ff4-bb4c-a186a319ce37/speaker/0.log" Oct 14 11:22:54 crc kubenswrapper[4698]: I1014 11:22:54.371279 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-wpbs6_371eed8f-9f1c-4114-98c6-33c8abf3fa23/frr/0.log" Oct 14 11:23:07 crc kubenswrapper[4698]: I1014 11:23:07.527304 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2ml8md_fec6695b-3ca9-4ae5-83f8-23cf2289cb14/util/0.log" Oct 14 11:23:07 crc kubenswrapper[4698]: I1014 11:23:07.662755 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2ml8md_fec6695b-3ca9-4ae5-83f8-23cf2289cb14/util/0.log" Oct 14 11:23:07 crc kubenswrapper[4698]: I1014 11:23:07.693091 4698 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2ml8md_fec6695b-3ca9-4ae5-83f8-23cf2289cb14/pull/0.log" Oct 14 11:23:07 crc kubenswrapper[4698]: I1014 11:23:07.742370 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2ml8md_fec6695b-3ca9-4ae5-83f8-23cf2289cb14/pull/0.log" Oct 14 11:23:07 crc kubenswrapper[4698]: I1014 11:23:07.903525 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2ml8md_fec6695b-3ca9-4ae5-83f8-23cf2289cb14/extract/0.log" Oct 14 11:23:07 crc kubenswrapper[4698]: I1014 11:23:07.916816 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2ml8md_fec6695b-3ca9-4ae5-83f8-23cf2289cb14/pull/0.log" Oct 14 11:23:07 crc kubenswrapper[4698]: I1014 11:23:07.935264 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2ml8md_fec6695b-3ca9-4ae5-83f8-23cf2289cb14/util/0.log" Oct 14 11:23:08 crc kubenswrapper[4698]: I1014 11:23:08.092845 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-wpc8b_1e06ea61-0f4c-4611-a4d8-dcf08a89c881/extract-utilities/0.log" Oct 14 11:23:08 crc kubenswrapper[4698]: I1014 11:23:08.290537 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-wpc8b_1e06ea61-0f4c-4611-a4d8-dcf08a89c881/extract-content/0.log" Oct 14 11:23:08 crc kubenswrapper[4698]: I1014 11:23:08.299406 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-wpc8b_1e06ea61-0f4c-4611-a4d8-dcf08a89c881/extract-utilities/0.log" Oct 14 11:23:08 crc kubenswrapper[4698]: I1014 11:23:08.308377 4698 log.go:25] 
"Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-wpc8b_1e06ea61-0f4c-4611-a4d8-dcf08a89c881/extract-content/0.log" Oct 14 11:23:08 crc kubenswrapper[4698]: I1014 11:23:08.520093 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-wpc8b_1e06ea61-0f4c-4611-a4d8-dcf08a89c881/extract-utilities/0.log" Oct 14 11:23:08 crc kubenswrapper[4698]: I1014 11:23:08.770278 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-wpc8b_1e06ea61-0f4c-4611-a4d8-dcf08a89c881/extract-content/0.log" Oct 14 11:23:08 crc kubenswrapper[4698]: I1014 11:23:08.961547 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-8qgcm_a05b8344-c3cc-41ea-88c6-e13f29ebedbb/extract-utilities/0.log" Oct 14 11:23:09 crc kubenswrapper[4698]: I1014 11:23:09.116514 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-8qgcm_a05b8344-c3cc-41ea-88c6-e13f29ebedbb/extract-utilities/0.log" Oct 14 11:23:09 crc kubenswrapper[4698]: I1014 11:23:09.124422 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-wpc8b_1e06ea61-0f4c-4611-a4d8-dcf08a89c881/registry-server/0.log" Oct 14 11:23:09 crc kubenswrapper[4698]: I1014 11:23:09.165416 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-8qgcm_a05b8344-c3cc-41ea-88c6-e13f29ebedbb/extract-content/0.log" Oct 14 11:23:09 crc kubenswrapper[4698]: I1014 11:23:09.190668 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-8qgcm_a05b8344-c3cc-41ea-88c6-e13f29ebedbb/extract-content/0.log" Oct 14 11:23:09 crc kubenswrapper[4698]: I1014 11:23:09.443197 4698 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_community-operators-8qgcm_a05b8344-c3cc-41ea-88c6-e13f29ebedbb/extract-utilities/0.log" Oct 14 11:23:09 crc kubenswrapper[4698]: I1014 11:23:09.545120 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-8qgcm_a05b8344-c3cc-41ea-88c6-e13f29ebedbb/extract-content/0.log" Oct 14 11:23:09 crc kubenswrapper[4698]: I1014 11:23:09.726038 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835crcqqd_51f6a7d5-1c06-4f2b-9f66-322882e6db29/util/0.log" Oct 14 11:23:09 crc kubenswrapper[4698]: I1014 11:23:09.849372 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-8qgcm_a05b8344-c3cc-41ea-88c6-e13f29ebedbb/registry-server/0.log" Oct 14 11:23:09 crc kubenswrapper[4698]: I1014 11:23:09.963235 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835crcqqd_51f6a7d5-1c06-4f2b-9f66-322882e6db29/pull/0.log" Oct 14 11:23:09 crc kubenswrapper[4698]: I1014 11:23:09.985749 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835crcqqd_51f6a7d5-1c06-4f2b-9f66-322882e6db29/pull/0.log" Oct 14 11:23:10 crc kubenswrapper[4698]: I1014 11:23:10.040737 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835crcqqd_51f6a7d5-1c06-4f2b-9f66-322882e6db29/util/0.log" Oct 14 11:23:10 crc kubenswrapper[4698]: I1014 11:23:10.776570 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835crcqqd_51f6a7d5-1c06-4f2b-9f66-322882e6db29/util/0.log" Oct 14 11:23:10 crc kubenswrapper[4698]: I1014 11:23:10.788981 4698 log.go:25] "Finished 
parsing log file" path="/var/log/pods/openshift-marketplace_fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835crcqqd_51f6a7d5-1c06-4f2b-9f66-322882e6db29/pull/0.log" Oct 14 11:23:10 crc kubenswrapper[4698]: I1014 11:23:10.822452 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835crcqqd_51f6a7d5-1c06-4f2b-9f66-322882e6db29/extract/0.log" Oct 14 11:23:10 crc kubenswrapper[4698]: I1014 11:23:10.913994 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-n6rkb_717ff5f8-f2f0-46ca-86e2-dba0533d1f69/marketplace-operator/0.log" Oct 14 11:23:10 crc kubenswrapper[4698]: I1014 11:23:10.974203 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-lqffv_027d093d-8507-4449-9248-3c1da8a30e2e/extract-utilities/0.log" Oct 14 11:23:11 crc kubenswrapper[4698]: I1014 11:23:11.186988 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-lqffv_027d093d-8507-4449-9248-3c1da8a30e2e/extract-content/0.log" Oct 14 11:23:11 crc kubenswrapper[4698]: I1014 11:23:11.241740 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-lqffv_027d093d-8507-4449-9248-3c1da8a30e2e/extract-content/0.log" Oct 14 11:23:11 crc kubenswrapper[4698]: I1014 11:23:11.271666 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-lqffv_027d093d-8507-4449-9248-3c1da8a30e2e/extract-utilities/0.log" Oct 14 11:23:11 crc kubenswrapper[4698]: I1014 11:23:11.441918 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-lqffv_027d093d-8507-4449-9248-3c1da8a30e2e/extract-utilities/0.log" Oct 14 11:23:11 crc kubenswrapper[4698]: I1014 11:23:11.504399 4698 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_redhat-marketplace-lqffv_027d093d-8507-4449-9248-3c1da8a30e2e/extract-content/0.log" Oct 14 11:23:11 crc kubenswrapper[4698]: I1014 11:23:11.542584 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-4r85p_48bfadf8-08ed-4688-917d-818b9f91abcf/extract-utilities/0.log" Oct 14 11:23:11 crc kubenswrapper[4698]: I1014 11:23:11.585943 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-lqffv_027d093d-8507-4449-9248-3c1da8a30e2e/registry-server/0.log" Oct 14 11:23:11 crc kubenswrapper[4698]: I1014 11:23:11.735939 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-4r85p_48bfadf8-08ed-4688-917d-818b9f91abcf/extract-content/0.log" Oct 14 11:23:11 crc kubenswrapper[4698]: I1014 11:23:11.748629 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-4r85p_48bfadf8-08ed-4688-917d-818b9f91abcf/extract-utilities/0.log" Oct 14 11:23:11 crc kubenswrapper[4698]: I1014 11:23:11.753416 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-4r85p_48bfadf8-08ed-4688-917d-818b9f91abcf/extract-content/0.log" Oct 14 11:23:11 crc kubenswrapper[4698]: I1014 11:23:11.931115 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-4r85p_48bfadf8-08ed-4688-917d-818b9f91abcf/extract-content/0.log" Oct 14 11:23:11 crc kubenswrapper[4698]: I1014 11:23:11.977015 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-4r85p_48bfadf8-08ed-4688-917d-818b9f91abcf/extract-utilities/0.log" Oct 14 11:23:12 crc kubenswrapper[4698]: I1014 11:23:12.422526 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-4r85p_48bfadf8-08ed-4688-917d-818b9f91abcf/registry-server/0.log" Oct 14 
11:23:53 crc kubenswrapper[4698]: I1014 11:23:53.907983 4698 patch_prober.go:28] interesting pod/machine-config-daemon-lp4sk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 14 11:23:53 crc kubenswrapper[4698]: I1014 11:23:53.908458 4698 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-lp4sk" podUID="c359a8fc-1e2f-49af-8da2-719d52bd969a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 14 11:24:23 crc kubenswrapper[4698]: I1014 11:24:23.908662 4698 patch_prober.go:28] interesting pod/machine-config-daemon-lp4sk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 14 11:24:23 crc kubenswrapper[4698]: I1014 11:24:23.909739 4698 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-lp4sk" podUID="c359a8fc-1e2f-49af-8da2-719d52bd969a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 14 11:24:53 crc kubenswrapper[4698]: I1014 11:24:53.908579 4698 patch_prober.go:28] interesting pod/machine-config-daemon-lp4sk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 14 11:24:53 crc kubenswrapper[4698]: I1014 11:24:53.909120 4698 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-machine-config-operator/machine-config-daemon-lp4sk" podUID="c359a8fc-1e2f-49af-8da2-719d52bd969a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 14 11:24:53 crc kubenswrapper[4698]: I1014 11:24:53.909165 4698 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-lp4sk" Oct 14 11:24:53 crc kubenswrapper[4698]: I1014 11:24:53.910044 4698 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"829b80e48d7614418a73fd61b6803927847ee048873cf9cd40e2478185b55345"} pod="openshift-machine-config-operator/machine-config-daemon-lp4sk" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Oct 14 11:24:53 crc kubenswrapper[4698]: I1014 11:24:53.910102 4698 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-lp4sk" podUID="c359a8fc-1e2f-49af-8da2-719d52bd969a" containerName="machine-config-daemon" containerID="cri-o://829b80e48d7614418a73fd61b6803927847ee048873cf9cd40e2478185b55345" gracePeriod=600 Oct 14 11:24:54 crc kubenswrapper[4698]: E1014 11:24:54.057335 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lp4sk_openshift-machine-config-operator(c359a8fc-1e2f-49af-8da2-719d52bd969a)\"" pod="openshift-machine-config-operator/machine-config-daemon-lp4sk" podUID="c359a8fc-1e2f-49af-8da2-719d52bd969a" Oct 14 11:24:54 crc kubenswrapper[4698]: I1014 11:24:54.459790 4698 generic.go:334] "Generic (PLEG): container finished" podID="c359a8fc-1e2f-49af-8da2-719d52bd969a" 
containerID="829b80e48d7614418a73fd61b6803927847ee048873cf9cd40e2478185b55345" exitCode=0 Oct 14 11:24:54 crc kubenswrapper[4698]: I1014 11:24:54.459839 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-lp4sk" event={"ID":"c359a8fc-1e2f-49af-8da2-719d52bd969a","Type":"ContainerDied","Data":"829b80e48d7614418a73fd61b6803927847ee048873cf9cd40e2478185b55345"} Oct 14 11:24:54 crc kubenswrapper[4698]: I1014 11:24:54.459875 4698 scope.go:117] "RemoveContainer" containerID="f4b03268cb180ac97d7c15f1e56610d8fe5c9eb93165b852cd481e9e707e50a5" Oct 14 11:24:54 crc kubenswrapper[4698]: I1014 11:24:54.460673 4698 scope.go:117] "RemoveContainer" containerID="829b80e48d7614418a73fd61b6803927847ee048873cf9cd40e2478185b55345" Oct 14 11:24:54 crc kubenswrapper[4698]: E1014 11:24:54.461024 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lp4sk_openshift-machine-config-operator(c359a8fc-1e2f-49af-8da2-719d52bd969a)\"" pod="openshift-machine-config-operator/machine-config-daemon-lp4sk" podUID="c359a8fc-1e2f-49af-8da2-719d52bd969a" Oct 14 11:25:07 crc kubenswrapper[4698]: I1014 11:25:07.018654 4698 scope.go:117] "RemoveContainer" containerID="829b80e48d7614418a73fd61b6803927847ee048873cf9cd40e2478185b55345" Oct 14 11:25:07 crc kubenswrapper[4698]: E1014 11:25:07.019881 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lp4sk_openshift-machine-config-operator(c359a8fc-1e2f-49af-8da2-719d52bd969a)\"" pod="openshift-machine-config-operator/machine-config-daemon-lp4sk" podUID="c359a8fc-1e2f-49af-8da2-719d52bd969a" Oct 14 11:25:08 crc kubenswrapper[4698]: I1014 
11:25:08.419812 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-xzztd"] Oct 14 11:25:08 crc kubenswrapper[4698]: E1014 11:25:08.420514 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="88c0a579-b5b9-4f0e-a58a-ba981e69b3c6" containerName="extract-content" Oct 14 11:25:08 crc kubenswrapper[4698]: I1014 11:25:08.420529 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="88c0a579-b5b9-4f0e-a58a-ba981e69b3c6" containerName="extract-content" Oct 14 11:25:08 crc kubenswrapper[4698]: E1014 11:25:08.420547 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="88c0a579-b5b9-4f0e-a58a-ba981e69b3c6" containerName="extract-utilities" Oct 14 11:25:08 crc kubenswrapper[4698]: I1014 11:25:08.420553 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="88c0a579-b5b9-4f0e-a58a-ba981e69b3c6" containerName="extract-utilities" Oct 14 11:25:08 crc kubenswrapper[4698]: E1014 11:25:08.420589 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="88c0a579-b5b9-4f0e-a58a-ba981e69b3c6" containerName="registry-server" Oct 14 11:25:08 crc kubenswrapper[4698]: I1014 11:25:08.420595 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="88c0a579-b5b9-4f0e-a58a-ba981e69b3c6" containerName="registry-server" Oct 14 11:25:08 crc kubenswrapper[4698]: I1014 11:25:08.420804 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="88c0a579-b5b9-4f0e-a58a-ba981e69b3c6" containerName="registry-server" Oct 14 11:25:08 crc kubenswrapper[4698]: I1014 11:25:08.422263 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-xzztd" Oct 14 11:25:08 crc kubenswrapper[4698]: I1014 11:25:08.433820 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-xzztd"] Oct 14 11:25:08 crc kubenswrapper[4698]: I1014 11:25:08.463735 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9ab0cc61-05c1-4d2a-a480-7c0d34aab3da-catalog-content\") pod \"redhat-operators-xzztd\" (UID: \"9ab0cc61-05c1-4d2a-a480-7c0d34aab3da\") " pod="openshift-marketplace/redhat-operators-xzztd" Oct 14 11:25:08 crc kubenswrapper[4698]: I1014 11:25:08.463854 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9ab0cc61-05c1-4d2a-a480-7c0d34aab3da-utilities\") pod \"redhat-operators-xzztd\" (UID: \"9ab0cc61-05c1-4d2a-a480-7c0d34aab3da\") " pod="openshift-marketplace/redhat-operators-xzztd" Oct 14 11:25:08 crc kubenswrapper[4698]: I1014 11:25:08.464014 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sdf5v\" (UniqueName: \"kubernetes.io/projected/9ab0cc61-05c1-4d2a-a480-7c0d34aab3da-kube-api-access-sdf5v\") pod \"redhat-operators-xzztd\" (UID: \"9ab0cc61-05c1-4d2a-a480-7c0d34aab3da\") " pod="openshift-marketplace/redhat-operators-xzztd" Oct 14 11:25:08 crc kubenswrapper[4698]: I1014 11:25:08.566132 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sdf5v\" (UniqueName: \"kubernetes.io/projected/9ab0cc61-05c1-4d2a-a480-7c0d34aab3da-kube-api-access-sdf5v\") pod \"redhat-operators-xzztd\" (UID: \"9ab0cc61-05c1-4d2a-a480-7c0d34aab3da\") " pod="openshift-marketplace/redhat-operators-xzztd" Oct 14 11:25:08 crc kubenswrapper[4698]: I1014 11:25:08.566206 4698 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9ab0cc61-05c1-4d2a-a480-7c0d34aab3da-catalog-content\") pod \"redhat-operators-xzztd\" (UID: \"9ab0cc61-05c1-4d2a-a480-7c0d34aab3da\") " pod="openshift-marketplace/redhat-operators-xzztd" Oct 14 11:25:08 crc kubenswrapper[4698]: I1014 11:25:08.566281 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9ab0cc61-05c1-4d2a-a480-7c0d34aab3da-utilities\") pod \"redhat-operators-xzztd\" (UID: \"9ab0cc61-05c1-4d2a-a480-7c0d34aab3da\") " pod="openshift-marketplace/redhat-operators-xzztd" Oct 14 11:25:08 crc kubenswrapper[4698]: I1014 11:25:08.566846 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9ab0cc61-05c1-4d2a-a480-7c0d34aab3da-catalog-content\") pod \"redhat-operators-xzztd\" (UID: \"9ab0cc61-05c1-4d2a-a480-7c0d34aab3da\") " pod="openshift-marketplace/redhat-operators-xzztd" Oct 14 11:25:08 crc kubenswrapper[4698]: I1014 11:25:08.566882 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9ab0cc61-05c1-4d2a-a480-7c0d34aab3da-utilities\") pod \"redhat-operators-xzztd\" (UID: \"9ab0cc61-05c1-4d2a-a480-7c0d34aab3da\") " pod="openshift-marketplace/redhat-operators-xzztd" Oct 14 11:25:08 crc kubenswrapper[4698]: I1014 11:25:08.589663 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sdf5v\" (UniqueName: \"kubernetes.io/projected/9ab0cc61-05c1-4d2a-a480-7c0d34aab3da-kube-api-access-sdf5v\") pod \"redhat-operators-xzztd\" (UID: \"9ab0cc61-05c1-4d2a-a480-7c0d34aab3da\") " pod="openshift-marketplace/redhat-operators-xzztd" Oct 14 11:25:08 crc kubenswrapper[4698]: I1014 11:25:08.786385 4698 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-xzztd" Oct 14 11:25:09 crc kubenswrapper[4698]: I1014 11:25:09.281803 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-xzztd"] Oct 14 11:25:09 crc kubenswrapper[4698]: I1014 11:25:09.638637 4698 generic.go:334] "Generic (PLEG): container finished" podID="9ab0cc61-05c1-4d2a-a480-7c0d34aab3da" containerID="54cd3a32f1716d90afda6a139782b63d7cb0400ae962e54fe50e04aa8fbb36e9" exitCode=0 Oct 14 11:25:09 crc kubenswrapper[4698]: I1014 11:25:09.638741 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xzztd" event={"ID":"9ab0cc61-05c1-4d2a-a480-7c0d34aab3da","Type":"ContainerDied","Data":"54cd3a32f1716d90afda6a139782b63d7cb0400ae962e54fe50e04aa8fbb36e9"} Oct 14 11:25:09 crc kubenswrapper[4698]: I1014 11:25:09.638932 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xzztd" event={"ID":"9ab0cc61-05c1-4d2a-a480-7c0d34aab3da","Type":"ContainerStarted","Data":"c293677e9c8422a53dba2e24b92db9b19fe3827cba393aeae86fcaccb07f9861"} Oct 14 11:25:11 crc kubenswrapper[4698]: I1014 11:25:11.678136 4698 generic.go:334] "Generic (PLEG): container finished" podID="9ab0cc61-05c1-4d2a-a480-7c0d34aab3da" containerID="1fa49d4b392ca30da023ab2098bf8bd82bafdf6de2239f8582eddcf61ca39c8f" exitCode=0 Oct 14 11:25:11 crc kubenswrapper[4698]: I1014 11:25:11.678221 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xzztd" event={"ID":"9ab0cc61-05c1-4d2a-a480-7c0d34aab3da","Type":"ContainerDied","Data":"1fa49d4b392ca30da023ab2098bf8bd82bafdf6de2239f8582eddcf61ca39c8f"} Oct 14 11:25:12 crc kubenswrapper[4698]: I1014 11:25:12.690595 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xzztd" 
event={"ID":"9ab0cc61-05c1-4d2a-a480-7c0d34aab3da","Type":"ContainerStarted","Data":"28e928c23bddd404a534e889953bdeb604a43648ef8754d873f7533ce9e26800"} Oct 14 11:25:12 crc kubenswrapper[4698]: I1014 11:25:12.712398 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-xzztd" podStartSLOduration=2.116634842 podStartE2EDuration="4.712368429s" podCreationTimestamp="2025-10-14 11:25:08 +0000 UTC" firstStartedPulling="2025-10-14 11:25:09.643215446 +0000 UTC m=+5291.340514852" lastFinishedPulling="2025-10-14 11:25:12.238949023 +0000 UTC m=+5293.936248439" observedRunningTime="2025-10-14 11:25:12.710486166 +0000 UTC m=+5294.407785582" watchObservedRunningTime="2025-10-14 11:25:12.712368429 +0000 UTC m=+5294.409667835" Oct 14 11:25:18 crc kubenswrapper[4698]: I1014 11:25:18.788653 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-xzztd" Oct 14 11:25:18 crc kubenswrapper[4698]: I1014 11:25:18.789118 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-xzztd" Oct 14 11:25:18 crc kubenswrapper[4698]: I1014 11:25:18.837868 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-xzztd" Oct 14 11:25:19 crc kubenswrapper[4698]: I1014 11:25:19.814309 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-xzztd" Oct 14 11:25:19 crc kubenswrapper[4698]: I1014 11:25:19.863755 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-xzztd"] Oct 14 11:25:21 crc kubenswrapper[4698]: I1014 11:25:21.016909 4698 scope.go:117] "RemoveContainer" containerID="829b80e48d7614418a73fd61b6803927847ee048873cf9cd40e2478185b55345" Oct 14 11:25:21 crc kubenswrapper[4698]: E1014 11:25:21.017440 4698 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lp4sk_openshift-machine-config-operator(c359a8fc-1e2f-49af-8da2-719d52bd969a)\"" pod="openshift-machine-config-operator/machine-config-daemon-lp4sk" podUID="c359a8fc-1e2f-49af-8da2-719d52bd969a" Oct 14 11:25:21 crc kubenswrapper[4698]: I1014 11:25:21.800434 4698 generic.go:334] "Generic (PLEG): container finished" podID="706c5c2d-72d9-4322-8c88-32220831a907" containerID="a568975c21e3cfaf4770ca57d2dd08587c04d57bedb618333ad8ed302df228e1" exitCode=0 Oct 14 11:25:21 crc kubenswrapper[4698]: I1014 11:25:21.800504 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-khv2b/must-gather-cmb56" event={"ID":"706c5c2d-72d9-4322-8c88-32220831a907","Type":"ContainerDied","Data":"a568975c21e3cfaf4770ca57d2dd08587c04d57bedb618333ad8ed302df228e1"} Oct 14 11:25:21 crc kubenswrapper[4698]: I1014 11:25:21.801031 4698 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-xzztd" podUID="9ab0cc61-05c1-4d2a-a480-7c0d34aab3da" containerName="registry-server" containerID="cri-o://28e928c23bddd404a534e889953bdeb604a43648ef8754d873f7533ce9e26800" gracePeriod=2 Oct 14 11:25:21 crc kubenswrapper[4698]: I1014 11:25:21.801610 4698 scope.go:117] "RemoveContainer" containerID="a568975c21e3cfaf4770ca57d2dd08587c04d57bedb618333ad8ed302df228e1" Oct 14 11:25:22 crc kubenswrapper[4698]: I1014 11:25:22.471097 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-khv2b_must-gather-cmb56_706c5c2d-72d9-4322-8c88-32220831a907/gather/0.log" Oct 14 11:25:22 crc kubenswrapper[4698]: I1014 11:25:22.757348 4698 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-xzztd" Oct 14 11:25:22 crc kubenswrapper[4698]: I1014 11:25:22.814360 4698 generic.go:334] "Generic (PLEG): container finished" podID="9ab0cc61-05c1-4d2a-a480-7c0d34aab3da" containerID="28e928c23bddd404a534e889953bdeb604a43648ef8754d873f7533ce9e26800" exitCode=0 Oct 14 11:25:22 crc kubenswrapper[4698]: I1014 11:25:22.814692 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xzztd" event={"ID":"9ab0cc61-05c1-4d2a-a480-7c0d34aab3da","Type":"ContainerDied","Data":"28e928c23bddd404a534e889953bdeb604a43648ef8754d873f7533ce9e26800"} Oct 14 11:25:22 crc kubenswrapper[4698]: I1014 11:25:22.814905 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xzztd" event={"ID":"9ab0cc61-05c1-4d2a-a480-7c0d34aab3da","Type":"ContainerDied","Data":"c293677e9c8422a53dba2e24b92db9b19fe3827cba393aeae86fcaccb07f9861"} Oct 14 11:25:22 crc kubenswrapper[4698]: I1014 11:25:22.815118 4698 scope.go:117] "RemoveContainer" containerID="28e928c23bddd404a534e889953bdeb604a43648ef8754d873f7533ce9e26800" Oct 14 11:25:22 crc kubenswrapper[4698]: I1014 11:25:22.815360 4698 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-xzztd" Oct 14 11:25:22 crc kubenswrapper[4698]: I1014 11:25:22.836632 4698 scope.go:117] "RemoveContainer" containerID="1fa49d4b392ca30da023ab2098bf8bd82bafdf6de2239f8582eddcf61ca39c8f" Oct 14 11:25:22 crc kubenswrapper[4698]: I1014 11:25:22.856497 4698 scope.go:117] "RemoveContainer" containerID="54cd3a32f1716d90afda6a139782b63d7cb0400ae962e54fe50e04aa8fbb36e9" Oct 14 11:25:22 crc kubenswrapper[4698]: I1014 11:25:22.869005 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sdf5v\" (UniqueName: \"kubernetes.io/projected/9ab0cc61-05c1-4d2a-a480-7c0d34aab3da-kube-api-access-sdf5v\") pod \"9ab0cc61-05c1-4d2a-a480-7c0d34aab3da\" (UID: \"9ab0cc61-05c1-4d2a-a480-7c0d34aab3da\") " Oct 14 11:25:22 crc kubenswrapper[4698]: I1014 11:25:22.869135 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9ab0cc61-05c1-4d2a-a480-7c0d34aab3da-catalog-content\") pod \"9ab0cc61-05c1-4d2a-a480-7c0d34aab3da\" (UID: \"9ab0cc61-05c1-4d2a-a480-7c0d34aab3da\") " Oct 14 11:25:22 crc kubenswrapper[4698]: I1014 11:25:22.869268 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9ab0cc61-05c1-4d2a-a480-7c0d34aab3da-utilities\") pod \"9ab0cc61-05c1-4d2a-a480-7c0d34aab3da\" (UID: \"9ab0cc61-05c1-4d2a-a480-7c0d34aab3da\") " Oct 14 11:25:22 crc kubenswrapper[4698]: I1014 11:25:22.870649 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9ab0cc61-05c1-4d2a-a480-7c0d34aab3da-utilities" (OuterVolumeSpecName: "utilities") pod "9ab0cc61-05c1-4d2a-a480-7c0d34aab3da" (UID: "9ab0cc61-05c1-4d2a-a480-7c0d34aab3da"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 11:25:22 crc kubenswrapper[4698]: I1014 11:25:22.874950 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9ab0cc61-05c1-4d2a-a480-7c0d34aab3da-kube-api-access-sdf5v" (OuterVolumeSpecName: "kube-api-access-sdf5v") pod "9ab0cc61-05c1-4d2a-a480-7c0d34aab3da" (UID: "9ab0cc61-05c1-4d2a-a480-7c0d34aab3da"). InnerVolumeSpecName "kube-api-access-sdf5v". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 11:25:22 crc kubenswrapper[4698]: I1014 11:25:22.965553 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9ab0cc61-05c1-4d2a-a480-7c0d34aab3da-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9ab0cc61-05c1-4d2a-a480-7c0d34aab3da" (UID: "9ab0cc61-05c1-4d2a-a480-7c0d34aab3da"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 11:25:22 crc kubenswrapper[4698]: I1014 11:25:22.969933 4698 scope.go:117] "RemoveContainer" containerID="28e928c23bddd404a534e889953bdeb604a43648ef8754d873f7533ce9e26800" Oct 14 11:25:22 crc kubenswrapper[4698]: E1014 11:25:22.970282 4698 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"28e928c23bddd404a534e889953bdeb604a43648ef8754d873f7533ce9e26800\": container with ID starting with 28e928c23bddd404a534e889953bdeb604a43648ef8754d873f7533ce9e26800 not found: ID does not exist" containerID="28e928c23bddd404a534e889953bdeb604a43648ef8754d873f7533ce9e26800" Oct 14 11:25:22 crc kubenswrapper[4698]: I1014 11:25:22.970320 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"28e928c23bddd404a534e889953bdeb604a43648ef8754d873f7533ce9e26800"} err="failed to get container status \"28e928c23bddd404a534e889953bdeb604a43648ef8754d873f7533ce9e26800\": rpc error: code = NotFound desc = could not find 
container \"28e928c23bddd404a534e889953bdeb604a43648ef8754d873f7533ce9e26800\": container with ID starting with 28e928c23bddd404a534e889953bdeb604a43648ef8754d873f7533ce9e26800 not found: ID does not exist" Oct 14 11:25:22 crc kubenswrapper[4698]: I1014 11:25:22.970341 4698 scope.go:117] "RemoveContainer" containerID="1fa49d4b392ca30da023ab2098bf8bd82bafdf6de2239f8582eddcf61ca39c8f" Oct 14 11:25:22 crc kubenswrapper[4698]: E1014 11:25:22.970758 4698 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1fa49d4b392ca30da023ab2098bf8bd82bafdf6de2239f8582eddcf61ca39c8f\": container with ID starting with 1fa49d4b392ca30da023ab2098bf8bd82bafdf6de2239f8582eddcf61ca39c8f not found: ID does not exist" containerID="1fa49d4b392ca30da023ab2098bf8bd82bafdf6de2239f8582eddcf61ca39c8f" Oct 14 11:25:22 crc kubenswrapper[4698]: I1014 11:25:22.970815 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1fa49d4b392ca30da023ab2098bf8bd82bafdf6de2239f8582eddcf61ca39c8f"} err="failed to get container status \"1fa49d4b392ca30da023ab2098bf8bd82bafdf6de2239f8582eddcf61ca39c8f\": rpc error: code = NotFound desc = could not find container \"1fa49d4b392ca30da023ab2098bf8bd82bafdf6de2239f8582eddcf61ca39c8f\": container with ID starting with 1fa49d4b392ca30da023ab2098bf8bd82bafdf6de2239f8582eddcf61ca39c8f not found: ID does not exist" Oct 14 11:25:22 crc kubenswrapper[4698]: I1014 11:25:22.970835 4698 scope.go:117] "RemoveContainer" containerID="54cd3a32f1716d90afda6a139782b63d7cb0400ae962e54fe50e04aa8fbb36e9" Oct 14 11:25:22 crc kubenswrapper[4698]: E1014 11:25:22.971039 4698 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"54cd3a32f1716d90afda6a139782b63d7cb0400ae962e54fe50e04aa8fbb36e9\": container with ID starting with 54cd3a32f1716d90afda6a139782b63d7cb0400ae962e54fe50e04aa8fbb36e9 not found: ID does 
not exist" containerID="54cd3a32f1716d90afda6a139782b63d7cb0400ae962e54fe50e04aa8fbb36e9" Oct 14 11:25:22 crc kubenswrapper[4698]: I1014 11:25:22.971062 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"54cd3a32f1716d90afda6a139782b63d7cb0400ae962e54fe50e04aa8fbb36e9"} err="failed to get container status \"54cd3a32f1716d90afda6a139782b63d7cb0400ae962e54fe50e04aa8fbb36e9\": rpc error: code = NotFound desc = could not find container \"54cd3a32f1716d90afda6a139782b63d7cb0400ae962e54fe50e04aa8fbb36e9\": container with ID starting with 54cd3a32f1716d90afda6a139782b63d7cb0400ae962e54fe50e04aa8fbb36e9 not found: ID does not exist" Oct 14 11:25:22 crc kubenswrapper[4698]: I1014 11:25:22.971390 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sdf5v\" (UniqueName: \"kubernetes.io/projected/9ab0cc61-05c1-4d2a-a480-7c0d34aab3da-kube-api-access-sdf5v\") on node \"crc\" DevicePath \"\"" Oct 14 11:25:22 crc kubenswrapper[4698]: I1014 11:25:22.971446 4698 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9ab0cc61-05c1-4d2a-a480-7c0d34aab3da-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 14 11:25:22 crc kubenswrapper[4698]: I1014 11:25:22.971460 4698 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9ab0cc61-05c1-4d2a-a480-7c0d34aab3da-utilities\") on node \"crc\" DevicePath \"\"" Oct 14 11:25:23 crc kubenswrapper[4698]: I1014 11:25:23.150693 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-xzztd"] Oct 14 11:25:23 crc kubenswrapper[4698]: I1014 11:25:23.166338 4698 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-xzztd"] Oct 14 11:25:25 crc kubenswrapper[4698]: I1014 11:25:25.027465 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="9ab0cc61-05c1-4d2a-a480-7c0d34aab3da" path="/var/lib/kubelet/pods/9ab0cc61-05c1-4d2a-a480-7c0d34aab3da/volumes" Oct 14 11:25:32 crc kubenswrapper[4698]: I1014 11:25:32.488505 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-khv2b/must-gather-cmb56"] Oct 14 11:25:32 crc kubenswrapper[4698]: I1014 11:25:32.489444 4698 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-khv2b/must-gather-cmb56" podUID="706c5c2d-72d9-4322-8c88-32220831a907" containerName="copy" containerID="cri-o://90199de37cfc4c2debca19a68ecfc9666289d48e1dd3d34b3bb5a629c9e5b041" gracePeriod=2 Oct 14 11:25:32 crc kubenswrapper[4698]: I1014 11:25:32.504681 4698 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-khv2b/must-gather-cmb56"] Oct 14 11:25:32 crc kubenswrapper[4698]: I1014 11:25:32.912975 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-khv2b_must-gather-cmb56_706c5c2d-72d9-4322-8c88-32220831a907/copy/0.log" Oct 14 11:25:32 crc kubenswrapper[4698]: I1014 11:25:32.913664 4698 generic.go:334] "Generic (PLEG): container finished" podID="706c5c2d-72d9-4322-8c88-32220831a907" containerID="90199de37cfc4c2debca19a68ecfc9666289d48e1dd3d34b3bb5a629c9e5b041" exitCode=143 Oct 14 11:25:33 crc kubenswrapper[4698]: I1014 11:25:33.067711 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-khv2b_must-gather-cmb56_706c5c2d-72d9-4322-8c88-32220831a907/copy/0.log" Oct 14 11:25:33 crc kubenswrapper[4698]: I1014 11:25:33.068291 4698 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-khv2b/must-gather-cmb56" Oct 14 11:25:33 crc kubenswrapper[4698]: I1014 11:25:33.205119 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7jhdh\" (UniqueName: \"kubernetes.io/projected/706c5c2d-72d9-4322-8c88-32220831a907-kube-api-access-7jhdh\") pod \"706c5c2d-72d9-4322-8c88-32220831a907\" (UID: \"706c5c2d-72d9-4322-8c88-32220831a907\") " Oct 14 11:25:33 crc kubenswrapper[4698]: I1014 11:25:33.205274 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/706c5c2d-72d9-4322-8c88-32220831a907-must-gather-output\") pod \"706c5c2d-72d9-4322-8c88-32220831a907\" (UID: \"706c5c2d-72d9-4322-8c88-32220831a907\") " Oct 14 11:25:33 crc kubenswrapper[4698]: I1014 11:25:33.212919 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/706c5c2d-72d9-4322-8c88-32220831a907-kube-api-access-7jhdh" (OuterVolumeSpecName: "kube-api-access-7jhdh") pod "706c5c2d-72d9-4322-8c88-32220831a907" (UID: "706c5c2d-72d9-4322-8c88-32220831a907"). InnerVolumeSpecName "kube-api-access-7jhdh". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 11:25:33 crc kubenswrapper[4698]: I1014 11:25:33.308017 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7jhdh\" (UniqueName: \"kubernetes.io/projected/706c5c2d-72d9-4322-8c88-32220831a907-kube-api-access-7jhdh\") on node \"crc\" DevicePath \"\"" Oct 14 11:25:33 crc kubenswrapper[4698]: I1014 11:25:33.385452 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/706c5c2d-72d9-4322-8c88-32220831a907-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "706c5c2d-72d9-4322-8c88-32220831a907" (UID: "706c5c2d-72d9-4322-8c88-32220831a907"). InnerVolumeSpecName "must-gather-output". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 11:25:33 crc kubenswrapper[4698]: I1014 11:25:33.410203 4698 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/706c5c2d-72d9-4322-8c88-32220831a907-must-gather-output\") on node \"crc\" DevicePath \"\"" Oct 14 11:25:33 crc kubenswrapper[4698]: I1014 11:25:33.924402 4698 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-khv2b_must-gather-cmb56_706c5c2d-72d9-4322-8c88-32220831a907/copy/0.log" Oct 14 11:25:33 crc kubenswrapper[4698]: I1014 11:25:33.925001 4698 scope.go:117] "RemoveContainer" containerID="90199de37cfc4c2debca19a68ecfc9666289d48e1dd3d34b3bb5a629c9e5b041" Oct 14 11:25:33 crc kubenswrapper[4698]: I1014 11:25:33.925063 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-khv2b/must-gather-cmb56" Oct 14 11:25:33 crc kubenswrapper[4698]: I1014 11:25:33.953358 4698 scope.go:117] "RemoveContainer" containerID="a568975c21e3cfaf4770ca57d2dd08587c04d57bedb618333ad8ed302df228e1" Oct 14 11:25:34 crc kubenswrapper[4698]: I1014 11:25:34.017216 4698 scope.go:117] "RemoveContainer" containerID="829b80e48d7614418a73fd61b6803927847ee048873cf9cd40e2478185b55345" Oct 14 11:25:34 crc kubenswrapper[4698]: E1014 11:25:34.017725 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lp4sk_openshift-machine-config-operator(c359a8fc-1e2f-49af-8da2-719d52bd969a)\"" pod="openshift-machine-config-operator/machine-config-daemon-lp4sk" podUID="c359a8fc-1e2f-49af-8da2-719d52bd969a" Oct 14 11:25:35 crc kubenswrapper[4698]: I1014 11:25:35.032248 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="706c5c2d-72d9-4322-8c88-32220831a907" 
path="/var/lib/kubelet/pods/706c5c2d-72d9-4322-8c88-32220831a907/volumes" Oct 14 11:25:48 crc kubenswrapper[4698]: I1014 11:25:48.017931 4698 scope.go:117] "RemoveContainer" containerID="829b80e48d7614418a73fd61b6803927847ee048873cf9cd40e2478185b55345" Oct 14 11:25:48 crc kubenswrapper[4698]: E1014 11:25:48.018631 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lp4sk_openshift-machine-config-operator(c359a8fc-1e2f-49af-8da2-719d52bd969a)\"" pod="openshift-machine-config-operator/machine-config-daemon-lp4sk" podUID="c359a8fc-1e2f-49af-8da2-719d52bd969a" Oct 14 11:25:59 crc kubenswrapper[4698]: I1014 11:25:59.025729 4698 scope.go:117] "RemoveContainer" containerID="829b80e48d7614418a73fd61b6803927847ee048873cf9cd40e2478185b55345" Oct 14 11:25:59 crc kubenswrapper[4698]: E1014 11:25:59.027279 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lp4sk_openshift-machine-config-operator(c359a8fc-1e2f-49af-8da2-719d52bd969a)\"" pod="openshift-machine-config-operator/machine-config-daemon-lp4sk" podUID="c359a8fc-1e2f-49af-8da2-719d52bd969a" Oct 14 11:26:12 crc kubenswrapper[4698]: I1014 11:26:12.017603 4698 scope.go:117] "RemoveContainer" containerID="829b80e48d7614418a73fd61b6803927847ee048873cf9cd40e2478185b55345" Oct 14 11:26:12 crc kubenswrapper[4698]: E1014 11:26:12.018562 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lp4sk_openshift-machine-config-operator(c359a8fc-1e2f-49af-8da2-719d52bd969a)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-lp4sk" podUID="c359a8fc-1e2f-49af-8da2-719d52bd969a" Oct 14 11:26:21 crc kubenswrapper[4698]: I1014 11:26:21.629063 4698 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-pvnvz"] Oct 14 11:26:21 crc kubenswrapper[4698]: E1014 11:26:21.630167 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9ab0cc61-05c1-4d2a-a480-7c0d34aab3da" containerName="extract-utilities" Oct 14 11:26:21 crc kubenswrapper[4698]: I1014 11:26:21.630289 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="9ab0cc61-05c1-4d2a-a480-7c0d34aab3da" containerName="extract-utilities" Oct 14 11:26:21 crc kubenswrapper[4698]: E1014 11:26:21.630309 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9ab0cc61-05c1-4d2a-a480-7c0d34aab3da" containerName="extract-content" Oct 14 11:26:21 crc kubenswrapper[4698]: I1014 11:26:21.630317 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="9ab0cc61-05c1-4d2a-a480-7c0d34aab3da" containerName="extract-content" Oct 14 11:26:21 crc kubenswrapper[4698]: E1014 11:26:21.630337 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="706c5c2d-72d9-4322-8c88-32220831a907" containerName="copy" Oct 14 11:26:21 crc kubenswrapper[4698]: I1014 11:26:21.630345 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="706c5c2d-72d9-4322-8c88-32220831a907" containerName="copy" Oct 14 11:26:21 crc kubenswrapper[4698]: E1014 11:26:21.630361 4698 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9ab0cc61-05c1-4d2a-a480-7c0d34aab3da" containerName="registry-server" Oct 14 11:26:21 crc kubenswrapper[4698]: I1014 11:26:21.630369 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="9ab0cc61-05c1-4d2a-a480-7c0d34aab3da" containerName="registry-server" Oct 14 11:26:21 crc kubenswrapper[4698]: E1014 11:26:21.630386 4698 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="706c5c2d-72d9-4322-8c88-32220831a907" containerName="gather" Oct 14 11:26:21 crc kubenswrapper[4698]: I1014 11:26:21.630392 4698 state_mem.go:107] "Deleted CPUSet assignment" podUID="706c5c2d-72d9-4322-8c88-32220831a907" containerName="gather" Oct 14 11:26:21 crc kubenswrapper[4698]: I1014 11:26:21.630632 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="706c5c2d-72d9-4322-8c88-32220831a907" containerName="gather" Oct 14 11:26:21 crc kubenswrapper[4698]: I1014 11:26:21.630650 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="706c5c2d-72d9-4322-8c88-32220831a907" containerName="copy" Oct 14 11:26:21 crc kubenswrapper[4698]: I1014 11:26:21.630660 4698 memory_manager.go:354] "RemoveStaleState removing state" podUID="9ab0cc61-05c1-4d2a-a480-7c0d34aab3da" containerName="registry-server" Oct 14 11:26:21 crc kubenswrapper[4698]: I1014 11:26:21.632410 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-pvnvz" Oct 14 11:26:21 crc kubenswrapper[4698]: I1014 11:26:21.654806 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-pvnvz"] Oct 14 11:26:21 crc kubenswrapper[4698]: I1014 11:26:21.692261 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d50e6143-7470-442c-844b-3f49e3d81855-utilities\") pod \"certified-operators-pvnvz\" (UID: \"d50e6143-7470-442c-844b-3f49e3d81855\") " pod="openshift-marketplace/certified-operators-pvnvz" Oct 14 11:26:21 crc kubenswrapper[4698]: I1014 11:26:21.693004 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d50e6143-7470-442c-844b-3f49e3d81855-catalog-content\") pod \"certified-operators-pvnvz\" (UID: \"d50e6143-7470-442c-844b-3f49e3d81855\") " 
pod="openshift-marketplace/certified-operators-pvnvz" Oct 14 11:26:21 crc kubenswrapper[4698]: I1014 11:26:21.794246 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d50e6143-7470-442c-844b-3f49e3d81855-utilities\") pod \"certified-operators-pvnvz\" (UID: \"d50e6143-7470-442c-844b-3f49e3d81855\") " pod="openshift-marketplace/certified-operators-pvnvz" Oct 14 11:26:21 crc kubenswrapper[4698]: I1014 11:26:21.794292 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d50e6143-7470-442c-844b-3f49e3d81855-catalog-content\") pod \"certified-operators-pvnvz\" (UID: \"d50e6143-7470-442c-844b-3f49e3d81855\") " pod="openshift-marketplace/certified-operators-pvnvz" Oct 14 11:26:21 crc kubenswrapper[4698]: I1014 11:26:21.794400 4698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vkq8m\" (UniqueName: \"kubernetes.io/projected/d50e6143-7470-442c-844b-3f49e3d81855-kube-api-access-vkq8m\") pod \"certified-operators-pvnvz\" (UID: \"d50e6143-7470-442c-844b-3f49e3d81855\") " pod="openshift-marketplace/certified-operators-pvnvz" Oct 14 11:26:21 crc kubenswrapper[4698]: I1014 11:26:21.794856 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d50e6143-7470-442c-844b-3f49e3d81855-utilities\") pod \"certified-operators-pvnvz\" (UID: \"d50e6143-7470-442c-844b-3f49e3d81855\") " pod="openshift-marketplace/certified-operators-pvnvz" Oct 14 11:26:21 crc kubenswrapper[4698]: I1014 11:26:21.794868 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d50e6143-7470-442c-844b-3f49e3d81855-catalog-content\") pod \"certified-operators-pvnvz\" (UID: \"d50e6143-7470-442c-844b-3f49e3d81855\") " 
pod="openshift-marketplace/certified-operators-pvnvz" Oct 14 11:26:21 crc kubenswrapper[4698]: I1014 11:26:21.896107 4698 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vkq8m\" (UniqueName: \"kubernetes.io/projected/d50e6143-7470-442c-844b-3f49e3d81855-kube-api-access-vkq8m\") pod \"certified-operators-pvnvz\" (UID: \"d50e6143-7470-442c-844b-3f49e3d81855\") " pod="openshift-marketplace/certified-operators-pvnvz" Oct 14 11:26:21 crc kubenswrapper[4698]: I1014 11:26:21.923377 4698 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vkq8m\" (UniqueName: \"kubernetes.io/projected/d50e6143-7470-442c-844b-3f49e3d81855-kube-api-access-vkq8m\") pod \"certified-operators-pvnvz\" (UID: \"d50e6143-7470-442c-844b-3f49e3d81855\") " pod="openshift-marketplace/certified-operators-pvnvz" Oct 14 11:26:21 crc kubenswrapper[4698]: I1014 11:26:21.973817 4698 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-pvnvz" Oct 14 11:26:22 crc kubenswrapper[4698]: I1014 11:26:22.544225 4698 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-pvnvz"] Oct 14 11:26:23 crc kubenswrapper[4698]: W1014 11:26:23.083238 4698 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd50e6143_7470_442c_844b_3f49e3d81855.slice/crio-82788e554ae85ee37839d1e72586681a0a164f9719f15d402abae76df5f625bb WatchSource:0}: Error finding container 82788e554ae85ee37839d1e72586681a0a164f9719f15d402abae76df5f625bb: Status 404 returned error can't find the container with id 82788e554ae85ee37839d1e72586681a0a164f9719f15d402abae76df5f625bb Oct 14 11:26:23 crc kubenswrapper[4698]: I1014 11:26:23.387407 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pvnvz" 
event={"ID":"d50e6143-7470-442c-844b-3f49e3d81855","Type":"ContainerStarted","Data":"7afd9bbef9073339dcca016da65df24a8902fd2e50edf40d91cfbdf3d8c54a9a"} Oct 14 11:26:23 crc kubenswrapper[4698]: I1014 11:26:23.387797 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pvnvz" event={"ID":"d50e6143-7470-442c-844b-3f49e3d81855","Type":"ContainerStarted","Data":"82788e554ae85ee37839d1e72586681a0a164f9719f15d402abae76df5f625bb"} Oct 14 11:26:24 crc kubenswrapper[4698]: I1014 11:26:24.399207 4698 generic.go:334] "Generic (PLEG): container finished" podID="d50e6143-7470-442c-844b-3f49e3d81855" containerID="7afd9bbef9073339dcca016da65df24a8902fd2e50edf40d91cfbdf3d8c54a9a" exitCode=0 Oct 14 11:26:24 crc kubenswrapper[4698]: I1014 11:26:24.399566 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pvnvz" event={"ID":"d50e6143-7470-442c-844b-3f49e3d81855","Type":"ContainerDied","Data":"7afd9bbef9073339dcca016da65df24a8902fd2e50edf40d91cfbdf3d8c54a9a"} Oct 14 11:26:25 crc kubenswrapper[4698]: I1014 11:26:25.017992 4698 scope.go:117] "RemoveContainer" containerID="829b80e48d7614418a73fd61b6803927847ee048873cf9cd40e2478185b55345" Oct 14 11:26:25 crc kubenswrapper[4698]: E1014 11:26:25.018502 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lp4sk_openshift-machine-config-operator(c359a8fc-1e2f-49af-8da2-719d52bd969a)\"" pod="openshift-machine-config-operator/machine-config-daemon-lp4sk" podUID="c359a8fc-1e2f-49af-8da2-719d52bd969a" Oct 14 11:26:26 crc kubenswrapper[4698]: I1014 11:26:26.418940 4698 generic.go:334] "Generic (PLEG): container finished" podID="d50e6143-7470-442c-844b-3f49e3d81855" containerID="47030031d6bcbefed633c3f13f2bb8af6a4a9e863f991579d603015a0dc48ce4" exitCode=0 Oct 14 11:26:26 
crc kubenswrapper[4698]: I1014 11:26:26.419324 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pvnvz" event={"ID":"d50e6143-7470-442c-844b-3f49e3d81855","Type":"ContainerDied","Data":"47030031d6bcbefed633c3f13f2bb8af6a4a9e863f991579d603015a0dc48ce4"} Oct 14 11:26:27 crc kubenswrapper[4698]: I1014 11:26:27.432717 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pvnvz" event={"ID":"d50e6143-7470-442c-844b-3f49e3d81855","Type":"ContainerStarted","Data":"89f3c53663b2133bf6e362b84dc0e38ab271820c0c52e7eba411c9efd45e499b"} Oct 14 11:26:27 crc kubenswrapper[4698]: I1014 11:26:27.459904 4698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-pvnvz" podStartSLOduration=3.780564255 podStartE2EDuration="6.459876193s" podCreationTimestamp="2025-10-14 11:26:21 +0000 UTC" firstStartedPulling="2025-10-14 11:26:24.40134485 +0000 UTC m=+5366.098644266" lastFinishedPulling="2025-10-14 11:26:27.080656788 +0000 UTC m=+5368.777956204" observedRunningTime="2025-10-14 11:26:27.452182295 +0000 UTC m=+5369.149481741" watchObservedRunningTime="2025-10-14 11:26:27.459876193 +0000 UTC m=+5369.157175609" Oct 14 11:26:31 crc kubenswrapper[4698]: I1014 11:26:31.975161 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-pvnvz" Oct 14 11:26:31 crc kubenswrapper[4698]: I1014 11:26:31.975795 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-pvnvz" Oct 14 11:26:32 crc kubenswrapper[4698]: I1014 11:26:32.519929 4698 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-pvnvz" Oct 14 11:26:32 crc kubenswrapper[4698]: I1014 11:26:32.592221 4698 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-marketplace/certified-operators-pvnvz" Oct 14 11:26:32 crc kubenswrapper[4698]: I1014 11:26:32.759722 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-pvnvz"] Oct 14 11:26:34 crc kubenswrapper[4698]: I1014 11:26:34.496463 4698 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-pvnvz" podUID="d50e6143-7470-442c-844b-3f49e3d81855" containerName="registry-server" containerID="cri-o://89f3c53663b2133bf6e362b84dc0e38ab271820c0c52e7eba411c9efd45e499b" gracePeriod=2 Oct 14 11:26:34 crc kubenswrapper[4698]: I1014 11:26:34.959044 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-pvnvz" Oct 14 11:26:35 crc kubenswrapper[4698]: I1014 11:26:35.082326 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d50e6143-7470-442c-844b-3f49e3d81855-utilities\") pod \"d50e6143-7470-442c-844b-3f49e3d81855\" (UID: \"d50e6143-7470-442c-844b-3f49e3d81855\") " Oct 14 11:26:35 crc kubenswrapper[4698]: I1014 11:26:35.082513 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vkq8m\" (UniqueName: \"kubernetes.io/projected/d50e6143-7470-442c-844b-3f49e3d81855-kube-api-access-vkq8m\") pod \"d50e6143-7470-442c-844b-3f49e3d81855\" (UID: \"d50e6143-7470-442c-844b-3f49e3d81855\") " Oct 14 11:26:35 crc kubenswrapper[4698]: I1014 11:26:35.082569 4698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d50e6143-7470-442c-844b-3f49e3d81855-catalog-content\") pod \"d50e6143-7470-442c-844b-3f49e3d81855\" (UID: \"d50e6143-7470-442c-844b-3f49e3d81855\") " Oct 14 11:26:35 crc kubenswrapper[4698]: I1014 11:26:35.083663 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for 
volume "kubernetes.io/empty-dir/d50e6143-7470-442c-844b-3f49e3d81855-utilities" (OuterVolumeSpecName: "utilities") pod "d50e6143-7470-442c-844b-3f49e3d81855" (UID: "d50e6143-7470-442c-844b-3f49e3d81855"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 11:26:35 crc kubenswrapper[4698]: I1014 11:26:35.105896 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d50e6143-7470-442c-844b-3f49e3d81855-kube-api-access-vkq8m" (OuterVolumeSpecName: "kube-api-access-vkq8m") pod "d50e6143-7470-442c-844b-3f49e3d81855" (UID: "d50e6143-7470-442c-844b-3f49e3d81855"). InnerVolumeSpecName "kube-api-access-vkq8m". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 11:26:35 crc kubenswrapper[4698]: I1014 11:26:35.157575 4698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d50e6143-7470-442c-844b-3f49e3d81855-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d50e6143-7470-442c-844b-3f49e3d81855" (UID: "d50e6143-7470-442c-844b-3f49e3d81855"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 11:26:35 crc kubenswrapper[4698]: I1014 11:26:35.185176 4698 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d50e6143-7470-442c-844b-3f49e3d81855-utilities\") on node \"crc\" DevicePath \"\"" Oct 14 11:26:35 crc kubenswrapper[4698]: I1014 11:26:35.185219 4698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vkq8m\" (UniqueName: \"kubernetes.io/projected/d50e6143-7470-442c-844b-3f49e3d81855-kube-api-access-vkq8m\") on node \"crc\" DevicePath \"\"" Oct 14 11:26:35 crc kubenswrapper[4698]: I1014 11:26:35.185230 4698 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d50e6143-7470-442c-844b-3f49e3d81855-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 14 11:26:35 crc kubenswrapper[4698]: I1014 11:26:35.507860 4698 generic.go:334] "Generic (PLEG): container finished" podID="d50e6143-7470-442c-844b-3f49e3d81855" containerID="89f3c53663b2133bf6e362b84dc0e38ab271820c0c52e7eba411c9efd45e499b" exitCode=0 Oct 14 11:26:35 crc kubenswrapper[4698]: I1014 11:26:35.507905 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pvnvz" event={"ID":"d50e6143-7470-442c-844b-3f49e3d81855","Type":"ContainerDied","Data":"89f3c53663b2133bf6e362b84dc0e38ab271820c0c52e7eba411c9efd45e499b"} Oct 14 11:26:35 crc kubenswrapper[4698]: I1014 11:26:35.507935 4698 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pvnvz" event={"ID":"d50e6143-7470-442c-844b-3f49e3d81855","Type":"ContainerDied","Data":"82788e554ae85ee37839d1e72586681a0a164f9719f15d402abae76df5f625bb"} Oct 14 11:26:35 crc kubenswrapper[4698]: I1014 11:26:35.507955 4698 scope.go:117] "RemoveContainer" containerID="89f3c53663b2133bf6e362b84dc0e38ab271820c0c52e7eba411c9efd45e499b" Oct 14 11:26:35 crc kubenswrapper[4698]: I1014 
11:26:35.507968 4698 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-pvnvz" Oct 14 11:26:35 crc kubenswrapper[4698]: I1014 11:26:35.534684 4698 scope.go:117] "RemoveContainer" containerID="47030031d6bcbefed633c3f13f2bb8af6a4a9e863f991579d603015a0dc48ce4" Oct 14 11:26:35 crc kubenswrapper[4698]: I1014 11:26:35.550034 4698 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-pvnvz"] Oct 14 11:26:35 crc kubenswrapper[4698]: I1014 11:26:35.560285 4698 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-pvnvz"] Oct 14 11:26:35 crc kubenswrapper[4698]: I1014 11:26:35.569728 4698 scope.go:117] "RemoveContainer" containerID="7afd9bbef9073339dcca016da65df24a8902fd2e50edf40d91cfbdf3d8c54a9a" Oct 14 11:26:35 crc kubenswrapper[4698]: I1014 11:26:35.625167 4698 scope.go:117] "RemoveContainer" containerID="89f3c53663b2133bf6e362b84dc0e38ab271820c0c52e7eba411c9efd45e499b" Oct 14 11:26:35 crc kubenswrapper[4698]: E1014 11:26:35.626232 4698 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"89f3c53663b2133bf6e362b84dc0e38ab271820c0c52e7eba411c9efd45e499b\": container with ID starting with 89f3c53663b2133bf6e362b84dc0e38ab271820c0c52e7eba411c9efd45e499b not found: ID does not exist" containerID="89f3c53663b2133bf6e362b84dc0e38ab271820c0c52e7eba411c9efd45e499b" Oct 14 11:26:35 crc kubenswrapper[4698]: I1014 11:26:35.626269 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"89f3c53663b2133bf6e362b84dc0e38ab271820c0c52e7eba411c9efd45e499b"} err="failed to get container status \"89f3c53663b2133bf6e362b84dc0e38ab271820c0c52e7eba411c9efd45e499b\": rpc error: code = NotFound desc = could not find container \"89f3c53663b2133bf6e362b84dc0e38ab271820c0c52e7eba411c9efd45e499b\": container with ID starting with 
89f3c53663b2133bf6e362b84dc0e38ab271820c0c52e7eba411c9efd45e499b not found: ID does not exist" Oct 14 11:26:35 crc kubenswrapper[4698]: I1014 11:26:35.626292 4698 scope.go:117] "RemoveContainer" containerID="47030031d6bcbefed633c3f13f2bb8af6a4a9e863f991579d603015a0dc48ce4" Oct 14 11:26:35 crc kubenswrapper[4698]: E1014 11:26:35.626586 4698 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"47030031d6bcbefed633c3f13f2bb8af6a4a9e863f991579d603015a0dc48ce4\": container with ID starting with 47030031d6bcbefed633c3f13f2bb8af6a4a9e863f991579d603015a0dc48ce4 not found: ID does not exist" containerID="47030031d6bcbefed633c3f13f2bb8af6a4a9e863f991579d603015a0dc48ce4" Oct 14 11:26:35 crc kubenswrapper[4698]: I1014 11:26:35.626608 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"47030031d6bcbefed633c3f13f2bb8af6a4a9e863f991579d603015a0dc48ce4"} err="failed to get container status \"47030031d6bcbefed633c3f13f2bb8af6a4a9e863f991579d603015a0dc48ce4\": rpc error: code = NotFound desc = could not find container \"47030031d6bcbefed633c3f13f2bb8af6a4a9e863f991579d603015a0dc48ce4\": container with ID starting with 47030031d6bcbefed633c3f13f2bb8af6a4a9e863f991579d603015a0dc48ce4 not found: ID does not exist" Oct 14 11:26:35 crc kubenswrapper[4698]: I1014 11:26:35.626620 4698 scope.go:117] "RemoveContainer" containerID="7afd9bbef9073339dcca016da65df24a8902fd2e50edf40d91cfbdf3d8c54a9a" Oct 14 11:26:35 crc kubenswrapper[4698]: E1014 11:26:35.626881 4698 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7afd9bbef9073339dcca016da65df24a8902fd2e50edf40d91cfbdf3d8c54a9a\": container with ID starting with 7afd9bbef9073339dcca016da65df24a8902fd2e50edf40d91cfbdf3d8c54a9a not found: ID does not exist" containerID="7afd9bbef9073339dcca016da65df24a8902fd2e50edf40d91cfbdf3d8c54a9a" Oct 14 11:26:35 crc 
kubenswrapper[4698]: I1014 11:26:35.626910 4698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7afd9bbef9073339dcca016da65df24a8902fd2e50edf40d91cfbdf3d8c54a9a"} err="failed to get container status \"7afd9bbef9073339dcca016da65df24a8902fd2e50edf40d91cfbdf3d8c54a9a\": rpc error: code = NotFound desc = could not find container \"7afd9bbef9073339dcca016da65df24a8902fd2e50edf40d91cfbdf3d8c54a9a\": container with ID starting with 7afd9bbef9073339dcca016da65df24a8902fd2e50edf40d91cfbdf3d8c54a9a not found: ID does not exist" Oct 14 11:26:37 crc kubenswrapper[4698]: I1014 11:26:37.017634 4698 scope.go:117] "RemoveContainer" containerID="829b80e48d7614418a73fd61b6803927847ee048873cf9cd40e2478185b55345" Oct 14 11:26:37 crc kubenswrapper[4698]: E1014 11:26:37.018254 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lp4sk_openshift-machine-config-operator(c359a8fc-1e2f-49af-8da2-719d52bd969a)\"" pod="openshift-machine-config-operator/machine-config-daemon-lp4sk" podUID="c359a8fc-1e2f-49af-8da2-719d52bd969a" Oct 14 11:26:37 crc kubenswrapper[4698]: I1014 11:26:37.029083 4698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d50e6143-7470-442c-844b-3f49e3d81855" path="/var/lib/kubelet/pods/d50e6143-7470-442c-844b-3f49e3d81855/volumes" Oct 14 11:26:50 crc kubenswrapper[4698]: I1014 11:26:50.017752 4698 scope.go:117] "RemoveContainer" containerID="829b80e48d7614418a73fd61b6803927847ee048873cf9cd40e2478185b55345" Oct 14 11:26:50 crc kubenswrapper[4698]: E1014 11:26:50.018449 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-lp4sk_openshift-machine-config-operator(c359a8fc-1e2f-49af-8da2-719d52bd969a)\"" pod="openshift-machine-config-operator/machine-config-daemon-lp4sk" podUID="c359a8fc-1e2f-49af-8da2-719d52bd969a" Oct 14 11:27:05 crc kubenswrapper[4698]: I1014 11:27:05.017490 4698 scope.go:117] "RemoveContainer" containerID="829b80e48d7614418a73fd61b6803927847ee048873cf9cd40e2478185b55345" Oct 14 11:27:05 crc kubenswrapper[4698]: E1014 11:27:05.019651 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lp4sk_openshift-machine-config-operator(c359a8fc-1e2f-49af-8da2-719d52bd969a)\"" pod="openshift-machine-config-operator/machine-config-daemon-lp4sk" podUID="c359a8fc-1e2f-49af-8da2-719d52bd969a" Oct 14 11:27:12 crc kubenswrapper[4698]: I1014 11:27:12.984222 4698 scope.go:117] "RemoveContainer" containerID="64d2f086ca01369b63a1d88394c431f07248f778847950cbda4778b82a1966df" Oct 14 11:27:13 crc kubenswrapper[4698]: I1014 11:27:13.017486 4698 scope.go:117] "RemoveContainer" containerID="beb07ba4d251af4a2d6dd76412c8d6e10cc7c16d4881f8fb4115c1512e9ba7f3" Oct 14 11:27:20 crc kubenswrapper[4698]: I1014 11:27:20.017656 4698 scope.go:117] "RemoveContainer" containerID="829b80e48d7614418a73fd61b6803927847ee048873cf9cd40e2478185b55345" Oct 14 11:27:20 crc kubenswrapper[4698]: E1014 11:27:20.018783 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lp4sk_openshift-machine-config-operator(c359a8fc-1e2f-49af-8da2-719d52bd969a)\"" pod="openshift-machine-config-operator/machine-config-daemon-lp4sk" podUID="c359a8fc-1e2f-49af-8da2-719d52bd969a" Oct 14 11:27:31 crc kubenswrapper[4698]: I1014 11:27:31.021536 4698 
scope.go:117] "RemoveContainer" containerID="829b80e48d7614418a73fd61b6803927847ee048873cf9cd40e2478185b55345" Oct 14 11:27:31 crc kubenswrapper[4698]: E1014 11:27:31.022878 4698 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lp4sk_openshift-machine-config-operator(c359a8fc-1e2f-49af-8da2-719d52bd969a)\"" pod="openshift-machine-config-operator/machine-config-daemon-lp4sk" podUID="c359a8fc-1e2f-49af-8da2-719d52bd969a"